Updates from: 01/22/2021 04:13:55
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/add-sign-in-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-sign-in-policy.md
@@ -26,7 +26,7 @@ The sign-in policy lets users:
* Users can sign in with an Azure AD B2C Local Account
* Sign-up or sign-in with a social account
* Password reset
-* Users cannot sign up for an Azure AD B2C Local Account - To create an account, an Administrator can use [MS Graph API](manage-user-accounts-graph-api.md).
+* Users cannot sign up for an Azure AD B2C Local Account - To create an account, an Administrator can use [MS Graph API](microsoft-graph-operations.md).
![Profile editing flow](./media/add-sign-in-policy/sign-in-user-flow.png)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/app-registrations-training-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/app-registrations-training-guide.md
@@ -62,9 +62,10 @@ In the legacy experience, apps were always created as customer-facing applicatio
You can also use this option to use Azure AD B2C as a SAML service provider. [Learn more](identity-provider-adfs.md).

## Applications for DevOps scenarios

You can use the other account types to create an app to manage your DevOps scenarios, like using Microsoft Graph to upload Identity Experience Framework policies or provision users. Learn [how to register a Microsoft Graph application to manage Azure AD B2C resources](microsoft-graph-get-started.md).
-You might not see all Microsoft Graph permissions, because many of these permissions don't apply to Azure B2C consumer users. [Read more about managing users using Microsoft Graph](manage-user-accounts-graph-api.md).
+You might not see all Microsoft Graph permissions, because many of these permissions don't apply to Azure B2C consumer users. [Read more about managing users using Microsoft Graph](microsoft-graph-operations.md).
## Admin consent and offline_access+openid scopes

<!-- Azure AD B2C doesn't support user consent. That is, when a user signs into an application, the user doesn't see a screen requesting consent for the application permissions. All permissions have to be granted through admin consent. -->
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/custom-policy-trust-frameworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-trust-frameworks.md
@@ -49,7 +49,7 @@ The [claims transformations](claimstransformations.md) are predefined functions
### Customize and localize your UI
-When you'd like to collect information from your users by presenting a page in their web browser, use the [self-asserted technical profile](self-asserted-technical-profile.md). You can edit your self-asserted technical profile to [add claims and customize user input](custom-policy-configure-user-input.md).
+When you'd like to collect information from your users by presenting a page in their web browser, use the [self-asserted technical profile](self-asserted-technical-profile.md). You can edit your self-asserted technical profile to [add claims and customize user input](./configure-user-input.md).
To [customize the user interface](customize-ui-with-html.md) for your self-asserted technical profile, you specify a URL in the [content definition](contentdefinitions.md) element with customized HTML content. In the self-asserted technical profile, you point to this content definition ID.
@@ -129,11 +129,11 @@ Within an Azure AD B2C custom policy, you can integrate your own business logic
- Create your logic within the **extension policy**, or **relying party policy**. You can add new elements, which will override the base policy by referencing the same ID. This will allow you to scale out your project while making it easier to upgrade the base policy later on if Microsoft releases new starter packs.
- Within the **base policy**, we highly recommend avoiding making any changes. When necessary, make comments where the changes are made.
-- When you're overriding an element, such as technical profile metadata, avoid copying the entire technical profile from the base policy. Instead, copy only the required section of the element. See [Disable email verification](custom-policy-disable-email-verification.md) for an example of how to make the change.
+- When you're overriding an element, such as technical profile metadata, avoid copying the entire technical profile from the base policy. Instead, copy only the required section of the element. See [Disable email verification](./disable-email-verification.md) for an example of how to make the change.
- To reduce duplication of technical profiles, where core functionality is shared, use [technical profile inclusion](technicalprofiles.md#include-technical-profile).
- Avoid writing to the Azure AD directory during sign-in, which may lead to throttling issues.
- If your policy has external dependencies, such as REST APIs, make sure they are highly available.
-- For a better user experience, make sure your custom HTML templates are globally deployed using [online content delivery](https://docs.microsoft.com/azure/cdn/). Azure Content Delivery Network (CDN) lets you reduce load times, save bandwidth, and speed responsiveness.
+- For a better user experience, make sure your custom HTML templates are globally deployed using [online content delivery](../cdn/index.yml). Azure Content Delivery Network (CDN) lets you reduce load times, save bandwidth, and speed responsiveness.
- If you want to make a change to a user journey, copy the entire user journey from the base policy to the extension policy. Provide a unique user journey ID to the user journey you have copied. Then, in the [relying party policy](relyingparty.md), change the [default user journey](relyingparty.md#defaultuserjourney) element to point to the new user journey.

## Troubleshooting
@@ -164,9 +164,9 @@ You get started with Azure AD B2C custom policy:
After you set up and test your Azure AD B2C policy, you can start customizing your policy. Go through the following articles to learn how to:

-- [Add claims and customize user input](custom-policy-configure-user-input.md) using custom policies. Learn how to define a claim, add a claim to the user interface by customizing some of the starter pack technical profiles.
+- [Add claims and customize user input](./configure-user-input.md) using custom policies. Learn how to define a claim, add a claim to the user interface by customizing some of the starter pack technical profiles.
- [Customize the user interface](customize-ui-with-html.md) of your application using a custom policy. Learn how to create your own HTML content, and customize the content definition.
-- [Localize the user interface](custom-policy-localization.md) of your application using a custom policy. Learn how to set up the list of supported languages, and provide language-specific labels, by adding the localized resources element.
-- During your policy developing and testing you can [disable email verification](custom-policy-disable-email-verification.md). Learn how to overwrite a technical profile metadata.
-- [Set up sign-in with a Google account](identity-provider-google-custom.md) using custom policies. Learn how to create a new claims provider with OAuth2 technical profile. Then customize the user journey to include the Google sign-in option.
+- [Localize the user interface](./language-customization.md) of your application using a custom policy. Learn how to set up the list of supported languages, and provide language-specific labels, by adding the localized resources element.
+- During your policy developing and testing you can [disable email verification](./disable-email-verification.md). Learn how to overwrite a technical profile metadata.
+- [Set up sign-in with a Google account](./identity-provider-google.md) using custom policies. Learn how to create a new claims provider with OAuth2 technical profile. Then customize the user journey to include the Google sign-in option.
- To diagnose problems with your custom policies, you can [Collect Azure Active Directory B2C logs with Application Insights](troubleshoot-with-application-insights.md). Learn how to add new technical profiles, and configure your relying party policy.
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/faq.md
@@ -92,7 +92,7 @@ For information about account lockouts and passwords, see [Manages threats to re
### Can I use Azure AD Connect to migrate consumer identities that are stored on my on-premises Active Directory to Azure AD B2C?
-No, Azure AD Connect is not designed to work with Azure AD B2C. Consider using the [Microsoft Graph API](manage-user-accounts-graph-api.md) for user migration. See the [User migration guide](user-migration.md) for details.
+No, Azure AD Connect is not designed to work with Azure AD B2C. Consider using the [Microsoft Graph API](microsoft-graph-operations.md) for user migration. See the [User migration guide](user-migration.md) for details.
### Can my app open up Azure AD B2C pages within an iFrame?
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-multi-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
@@ -36,7 +36,7 @@ This article shows you how to enable sign-in for users using the multi-tenant en
## Register an application
-To enable sign-in for users with an Azure AD account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app).
+To enable sign-in for users with an Azure AD account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
1. Sign in to the [Azure portal](https://portal.azure.com).
@@ -190,4 +190,4 @@ When working with custom policies, you might sometimes need additional informati
To help diagnose issues, you can temporarily put the policy into "developer mode" and collect logs with Azure Application Insights. Find out how in [Azure Active Directory B2C: Collecting Logs](troubleshoot-with-application-insights.md).
-::: zone-end
+::: zone-end
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
@@ -34,7 +34,7 @@ This article shows you how to enable sign-in for users from a specific Azure AD
## Register an Azure AD app
-To enable sign-in for users with an Azure AD account from a specific Azure AD organization, in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app).
+To enable sign-in for users with an Azure AD account from a specific Azure AD organization, in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Make sure you're using the directory that contains your organizational Azure AD tenant (for example, contoso.com). Select the **Directory + subscription** filter in the top menu, and then choose the directory that contains your Azure AD tenant.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-linkedin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-linkedin.md
@@ -32,7 +32,7 @@ zone_pivot_groups: b2c-policy-type
## Create a LinkedIn application
-To enable sign-in for users with a LinkedIn account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [LinkedIn Developers website](https://www.developer.linkedin.com/). For more information, see [Authorization Code Flow](https://docs.microsoft.com/linkedin/shared/authentication/authorization-code-flow). If you don't already have a LinkedIn account, you can sign up at [https://www.linkedin.com/](https://www.linkedin.com/).
+To enable sign-in for users with a LinkedIn account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [LinkedIn Developers website](https://www.developer.linkedin.com/). For more information, see [Authorization Code Flow](/linkedin/shared/authentication/authorization-code-flow). If you don't already have a LinkedIn account, you can sign up at [https://www.linkedin.com/](https://www.linkedin.com/).
1. Sign in to the [LinkedIn Developers website](https://www.developer.linkedin.com/) with your LinkedIn account credentials.
1. Select **My Apps**, and then click **Create app**.
@@ -377,4 +377,4 @@ For a full sample of a policy that uses the LinkedIn identity provider, see the
LinkedIn recently [updated their APIs from v1.0 to v2.0](https://engineering.linkedin.com/blog/2018/12/developer-program-updates). As part of the migration, Azure AD B2C is only able to obtain the full name of the LinkedIn user during the sign-up. If an email address is one of the attributes that is collected during sign-up, the user must manually enter the email address and validate it.
-::: zone-end
+::: zone-end
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-microsoft-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
@@ -32,7 +32,7 @@ zone_pivot_groups: b2c-policy-type
## Create a Microsoft account application
-To enable sign-in for users with a Microsoft account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app). If you don't already have a Microsoft account, you can get one at [https://www.live.com/](https://www.live.com/).
+To enable sign-in for users with a Microsoft account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). If you don't already have a Microsoft account, you can get one at [https://www.live.com/](https://www.live.com/).
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD tenant.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/manage-user-accounts-graph-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/manage-user-accounts-graph-api.md
deleted file mode 100644
@@ -1,131 +0,0 @@
-title: Manage users with the Microsoft Graph API
-titleSuffix: Azure AD B2C
-description: How to manage users in an Azure AD B2C tenant by calling the Microsoft Graph API and using an application identity to automate the process.
-services: active-directory-b2c
-author: msmimart
-manager: celestedg
-
-ms.service: active-directory
-ms.workload: identity
-ms.topic: how-to
-ms.date: 01/13/2021
-ms.custom: project-no-code
-ms.author: mimart
-ms.subservice: B2C
-# Manage Azure AD B2C user accounts with Microsoft Graph
-
-Microsoft Graph allows you to manage user accounts in your Azure AD B2C directory by providing create, read, update, and delete methods in the Microsoft Graph API. You can migrate an existing user store to an Azure AD B2C tenant and perform other user account management operations by calling the Microsoft Graph API.
-
-In the sections that follow, the key aspects of Azure AD B2C user management with the Microsoft Graph API are presented. The Microsoft Graph API operations, types, and properties presented here are a subset of that which appears in the Microsoft Graph API reference documentation.
-
-## Register a management application
-
-Before any user management application or script you write can interact with the resources in your Azure AD B2C tenant, you need an application registration that grants the permissions to do so.
-
-Follow the steps in this how-to article to create an application registration that your management application can use:
-
-[Manage Azure AD B2C with Microsoft Graph](microsoft-graph-get-started.md)
-
-## User management Microsoft Graph operations
-
-The following user management operations are available in the [Microsoft Graph API](/graph/api/resources/user):
-- [Get a list of users](/graph/api/user-list)
-- [Create a user](/graph/api/user-post-users)
-- [Get a user](/graph/api/user-get)
-- [Update a user](/graph/api/user-update)
-- [Delete a user](/graph/api/user-delete)
-
-
-## Code sample: How to programmatically manage user accounts
-
-This code sample is a .NET Core console application that uses the [Microsoft Graph SDK](/graph/sdks/sdks-overview) to interact with Microsoft Graph API. Its code demonstrates how to call the API to programmatically manage users in an Azure AD B2C tenant.
-You can [download the sample archive](https://github.com/Azure-Samples/ms-identity-dotnetcore-b2c-account-management/archive/master.zip) (*.zip), [browse the repository](https://github.com/Azure-Samples/ms-identity-dotnetcore-b2c-account-management) on GitHub, or clone the repository:
-
-```cmd
-git clone https://github.com/Azure-Samples/ms-identity-dotnetcore-b2c-account-management.git
-```
-
-After you've obtained the code sample, configure it for your environment and then build the project:
-
-1. Open the project in [Visual Studio](https://visualstudio.microsoft.com) or [Visual Studio Code](https://code.visualstudio.com).
-1. Open `src/appsettings.json`.
-1. In the `appSettings` section, replace `your-b2c-tenant` with the name of your tenant, and `Application (client) ID` and `Client secret` with the values for your management application registration (see the [Register a management application](#register-a-management-application) section of this article).
-1. Open a console window within your local clone of the repo, switch into the `src` directory, then build the project:
- ```console
- cd src
- dotnet build
- ```
-1. Run the application with the `dotnet` command:
-
- ```console
- dotnet bin/Debug/netcoreapp3.1/b2c-ms-graph.dll
- ```
-
-The application displays a list of commands you can execute. For example, get all users, get a single user, delete a user, update a user's password, and bulk import.
-
-### Code discussion
-
-The sample code uses the [Microsoft Graph SDK](/graph/sdks/sdks-overview), which is designed to simplify building high-quality, efficient, and resilient applications that access Microsoft Graph.
-
-Any request to the Microsoft Graph API requires an access token for authentication. The solution makes use of the [Microsoft.Graph.Auth](https://www.nuget.org/packages/Microsoft.Graph.Auth/) NuGet package that provides an authentication scenario-based wrapper of the Microsoft Authentication Library (MSAL) for use with the Microsoft Graph SDK.
-
-The `RunAsync` method in the _Program.cs_ file:
-
-1. Reads application settings from the _appsettings.json_ file
-1. Initializes the auth provider using [OAuth 2.0 client credentials grant](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) flow. With the client credentials grant flow, the app is able to get an access token to call the Microsoft Graph API.
-1. Sets up the Microsoft Graph service client with the auth provider:
-
- ```csharp
- // Read application settings from appsettings.json (tenant ID, app ID, client secret, etc.)
- AppSettings config = AppSettingsFile.ReadFromJsonFile();
-
- // Initialize the client credential auth provider
- IConfidentialClientApplication confidentialClientApplication = ConfidentialClientApplicationBuilder
- .Create(config.AppId)
- .WithTenantId(config.TenantId)
- .WithClientSecret(config.ClientSecret)
- .Build();
- ClientCredentialProvider authProvider = new ClientCredentialProvider(confidentialClientApplication);
-
- // Set up the Microsoft Graph service client with client credentials
- GraphServiceClient graphClient = new GraphServiceClient(authProvider);
- ```
-
-The initialized *GraphServiceClient* is then used in _UserService.cs_ to perform the user management operations. For example, getting a list of the user accounts in the tenant:
-
-```csharp
-public static async Task ListUsers(GraphServiceClient graphClient)
-{
- Console.WriteLine("Getting list of users...");
-
- // Get all users (one page)
- var result = await graphClient.Users
- .Request()
- .Select(e => new
- {
- e.DisplayName,
- e.Id,
- e.Identities
- })
- .GetAsync();
-
- foreach (var user in result.CurrentPage)
- {
- Console.WriteLine(JsonConvert.SerializeObject(user));
- }
-}
-```
-
-[Make API calls using the Microsoft Graph SDKs](/graph/sdks/create-requests) includes information on how to read and write information from Microsoft Graph, use `$select` to control the properties returned, provide custom query parameters, and use the `$filter` and `$orderBy` query parameters.
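As an illustration of those query parameters (this sketch is not part of the commit, and the filter and ordering values are hypothetical), a request with the Microsoft Graph SDK might look like the following, assuming a `graphClient` initialized as shown earlier:

```csharp
// Illustrative sketch: combine $filter, $orderBy, $select, and $top
// on the users collection. Values here are placeholders.
var filtered = await graphClient.Users
    .Request()
    .Filter("startswith(displayName,'A')") // $filter query parameter
    .OrderBy("displayName")                // $orderBy query parameter
    .Select("displayName,id")              // $select limits returned properties
    .Top(10)                               // $top controls the page size
    .GetAsync();
```

Note that some filter and ordering combinations on the users collection may require additional request headers at runtime; see the Graph documentation for the specific query you plan to run.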
-
-## Next steps
-
-For a full index of the Microsoft Graph API operations supported for Azure AD B2C resources, see [Microsoft Graph operations available for Azure AD B2C](microsoft-graph-operations.md).
-
-<!-- LINK -->
-
-[graph-objectIdentity]: /graph/api/resources/objectidentity
-[graph-user]: (https://docs.microsoft.com/graph/api/resources/user)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/microsoft-graph-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-get-started.md
@@ -1,5 +1,5 @@
---
-title: Manage resources with Microsoft Graph
+title: Register a Microsoft Graph application
titleSuffix: Azure AD B2C
description: Prepare for managing Azure AD B2C resources with Microsoft Graph by registering an application that's granted the required Graph API permissions.
services: B2C
@@ -9,12 +9,12 @@ manager: celestedg
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
-ms.date: 02/14/2020
+ms.date: 01/21/2021
ms.author: mimart
ms.subservice: B2C
---
-# Manage Azure AD B2C with Microsoft Graph
+# Register a Microsoft Graph application
[Microsoft Graph][ms-graph] allows you to manage many of the resources within your Azure AD B2C tenant, including customer user accounts and custom policies. By writing scripts or applications that call the [Microsoft Graph API][ms-graph-api], you can automate tenant management tasks like:
@@ -79,14 +79,15 @@ If your application or script needs to delete users or update their passwords, a
1. Select **Add**. It might take a few minutes for the permissions to fully propagate.

## Next steps

Now that you've registered your management application and have granted it the required permissions, your applications and services (for example, Azure Pipelines) can use its credentials and permissions to interact with the Microsoft Graph API.

* [Get an access token from Azure AD](/graph/auth-v2-service#4-get-an-access-token)
* [Use the access token to call Microsoft Graph](/graph/auth-v2-service#4-get-an-access-token)
* [B2C operations supported by Microsoft Graph](microsoft-graph-operations.md)
-* [Manage Azure AD B2C user accounts with Microsoft Graph](manage-user-accounts-graph-api.md)
+* [Manage Azure AD B2C user accounts with Microsoft Graph](microsoft-graph-operations.md)
* [Get audit logs with the Azure AD reporting API](view-audit-logs.md#get-audit-logs-with-the-azure-ad-reporting-api)

<!-- LINKS -->
[ms-graph]: /graph/
-[ms-graph-api]: https://docs.microsoft.com/graph/api/overview
\ No newline at end of file
+[ms-graph-api]: /graph/api/overview
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/microsoft-graph-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
@@ -1,24 +1,26 @@
---
-title: Supported Microsoft Graph operations
+title: Manage resources with Microsoft Graph
titleSuffix: Azure AD B2C
-description: An index of the Microsoft Graph operations supported for the management of Azure AD B2C resources, including users, user flows, identity providers, custom policies, policy keys, and more.
+description: How to manage resources in an Azure AD B2C tenant by calling the Microsoft Graph API and using an application identity to automate the process.
services: B2C
author: msmimart
manager: celestedg
ms.service: active-directory
ms.workload: identity
-ms.topic: reference
-ms.date: 10/15/2020
+ms.topic: how-to
+ms.date: 01/21/2021
+ms.custom: project-no-code
ms.author: mimart
ms.subservice: B2C
-ms.custom: fasttrack-edit
---
-# Microsoft Graph operations available for Azure AD B2C
+# Manage Azure AD B2C with Microsoft Graph
-The following Microsoft Graph API operations are supported for the management of Azure AD B2C resources, including users, identity providers, user flows, custom policies, and policy keys.
+Microsoft Graph allows you to manage resources in your Azure AD B2C directory. The following Microsoft Graph API operations are supported for the management of Azure AD B2C resources, including users, identity providers, user flows, custom policies, and policy keys. Each link in the following sections targets the corresponding page within the Microsoft Graph API reference for that operation.
-Each link in the following sections targets the corresponding page within the Microsoft Graph API reference for that operation.
+## Prerequisites
+
+To use the Microsoft Graph API and interact with resources in your Azure AD B2C tenant, you need an application registration that grants the permissions to do so. Follow the steps in the [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-get-started.md) article to create an application registration that your management application can use.
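To illustrate the prerequisite (this sketch is not part of the commit; the tenant name, application ID, and secret are placeholders), a registered management application can acquire an app-only token with the client credentials grant and call the Graph API directly:

```csharp
// Sketch: acquire an app-only token via MSAL.NET client credentials,
// then call Microsoft Graph with HttpClient. Placeholder values throughout.
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create("<application-id>")
    .WithTenantId("<your-b2c-tenant>.onmicrosoft.com")
    .WithClientSecret("<client-secret>")
    .Build();

AuthenticationResult auth = await app
    .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
    .ExecuteAsync();

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", auth.AccessToken);

// List users, returning only displayName and identities
string json = await http.GetStringAsync(
    "https://graph.microsoft.com/v1.0/users?$select=displayName,identities");
```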
## User management
@@ -28,8 +30,6 @@ Each link in the following sections targets the corresponding page within the Mi
- [Update a user](/graph/api/user-update)
- [Delete a user](/graph/api/user-delete)
-For more information about managing Azure AD B2C user accounts with the Microsoft Graph API, see [Manage Azure AD B2C user accounts with Microsoft Graph](manage-user-accounts-graph-api.md).
-
## User phone number management

- [Add](/graph/api/authentication-post-phonemethods)
@@ -37,7 +37,7 @@ For more information about managing Azure AD B2C user accounts with the Microsof
- [Update](/graph/api/b2cauthenticationmethodspolicy-update)
- [Delete](/graph/api/phoneauthenticationmethod-delete)
-For more information about managing user's sign-in phone number with the Microsoft Graph API, see [B2C Authentication Methods](/graph/api/resources/b2cauthenticationmethodspolicy).
+For more information about managing a user's sign-in phone number, see [B2C Authentication Methods](/graph/api/resources/b2cauthenticationmethodspolicy).
## Identity providers (user flow)
@@ -72,7 +72,7 @@ The following operations allow you to manage your Azure AD B2C Trust Framework p
The Identity Experience Framework stores the secrets referenced in a custom policy to establish trust between components. These secrets can be symmetric or asymmetric keys/values. In the Azure portal, these entities are shown as **Policy keys**.
-The top-level resource for policy keys in the Microsoft Graph API is the [Trusted Framework Keyset](/graph/api/resources/trustframeworkkeyset). Each **Keyset** contains at least one **Key**. To create a key, first create an empty keyset, and then generate a key in the keyset. You can create a manual secret, upload a certificate, or a PKCS12 key. The key can be a generated secret, a string you define (such as the Facebook application secret), or a certificate you upload. If a keyset has multiple keys, only one of the keys is active.
+The top-level resource for policy keys in the Microsoft Graph API is the [Trusted Framework Keyset](/graph/api/resources/trustframeworkkeyset). Each **Keyset** contains at least one **Key**. To create a key, first create an empty keyset, and then generate a key in the keyset. You can create a manual secret, upload a certificate, or upload a PKCS12 key. The key can be a generated secret, a string (such as the Facebook application secret), or a certificate you upload. If a keyset has multiple keys, only one of the keys is active.
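As a hedged illustration of that two-step keyset flow (not part of the commit; the keyset ID is hypothetical, and the trustFramework endpoints live under the Microsoft Graph beta endpoint at the time of writing), the calls might look like this, assuming an `HttpClient` whose `Authorization` header already carries an app-only Graph token:

```csharp
// Sketch: create an empty keyset, then generate a key inside it.
// "B2C_1A_SampleKeySet" is a placeholder name.
var keyset = new StringContent(
    "{\"id\": \"B2C_1A_SampleKeySet\"}",
    Encoding.UTF8, "application/json");
await http.PostAsync(
    "https://graph.microsoft.com/beta/trustFramework/keySets", keyset);

// Generate an RSA signing key in the new keyset
var key = new StringContent(
    "{\"use\": \"sig\", \"kty\": \"RSA\"}",
    Encoding.UTF8, "application/json");
await http.PostAsync(
    "https://graph.microsoft.com/beta/trustFramework/keySets/B2C_1A_SampleKeySet/generateKey",
    key);
```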
### Trust Framework policy keyset
@@ -109,4 +109,93 @@ Azure AD B2C provides a directory that can hold 100 custom attributes per user.
- [List audit logs](/graph/api/directoryaudit-list)
-For more information about accessing Azure AD B2C audit logs with the Microsoft Graph API, see [Accessing Azure AD B2C audit logs](view-audit-logs.md).
\ No newline at end of file
+For more information about accessing Azure AD B2C audit logs, see [Accessing Azure AD B2C audit logs](view-audit-logs.md).
+
+## Code sample: How to programmatically manage user accounts
+
+This code sample is a .NET Core console application that uses the [Microsoft Graph SDK](/graph/sdks/sdks-overview) to interact with Microsoft Graph API. Its code demonstrates how to call the API to programmatically manage users in an Azure AD B2C tenant.
+You can [download the sample archive](https://github.com/Azure-Samples/ms-identity-dotnetcore-b2c-account-management/archive/master.zip) (*.zip), [browse the repository](https://github.com/Azure-Samples/ms-identity-dotnetcore-b2c-account-management) on GitHub, or clone the repository:
+
+```cmd
+git clone https://github.com/Azure-Samples/ms-identity-dotnetcore-b2c-account-management.git
+```
+
+After you've obtained the code sample, configure it for your environment and then build the project:
+
+1. Open the project in [Visual Studio](https://visualstudio.microsoft.com) or [Visual Studio Code](https://code.visualstudio.com).
+1. Open `src/appsettings.json`.
+1. In the `appSettings` section, replace `your-b2c-tenant` with the name of your tenant, and `Application (client) ID` and `Client secret` with the values for your management application registration. For more information, see [Register a Microsoft Graph Application](microsoft-graph-get-started.md).
+1. Open a console window within your local clone of the repo, switch into the `src` directory, then build the project:
+
+ ```console
+ cd src
+ dotnet build
+ ```
+
+1. Run the application with the `dotnet` command:
+
+ ```console
+ dotnet bin/Debug/netcoreapp3.1/b2c-ms-graph.dll
+ ```
+
+The application displays a list of commands you can execute. For example, get all users, get a single user, delete a user, update a user's password, and bulk import.
+
+### Code discussion
+
+The sample code uses the [Microsoft Graph SDK](/graph/sdks/sdks-overview), which is designed to simplify building high-quality, efficient, and resilient applications that access Microsoft Graph.
+
+Any request to the Microsoft Graph API requires an access token for authentication. The solution makes use of the [Microsoft.Graph.Auth](https://www.nuget.org/packages/Microsoft.Graph.Auth/) NuGet package that provides an authentication scenario-based wrapper of the Microsoft Authentication Library (MSAL) for use with the Microsoft Graph SDK.
+
+The `RunAsync` method in the _Program.cs_ file:
+
+1. Reads application settings from the _appsettings.json_ file
+1. Initializes the auth provider using the [OAuth 2.0 client credentials grant](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) flow. With the client credentials grant flow, the app can get an access token to call the Microsoft Graph API.
+1. Sets up the Microsoft Graph service client with the auth provider:
+
+ ```csharp
+ // Read application settings from appsettings.json (tenant ID, app ID, client secret, etc.)
+ AppSettings config = AppSettingsFile.ReadFromJsonFile();
+
+ // Initialize the client credential auth provider
+ IConfidentialClientApplication confidentialClientApplication = ConfidentialClientApplicationBuilder
+ .Create(config.AppId)
+ .WithTenantId(config.TenantId)
+ .WithClientSecret(config.ClientSecret)
+ .Build();
+ ClientCredentialProvider authProvider = new ClientCredentialProvider(confidentialClientApplication);
+
+ // Set up the Microsoft Graph service client with client credentials
+ GraphServiceClient graphClient = new GraphServiceClient(authProvider);
+ ```
+
+The initialized *GraphServiceClient* is then used in _UserService.cs_ to perform the user management operations. For example, getting a list of the user accounts in the tenant:
+
+```csharp
+public static async Task ListUsers(GraphServiceClient graphClient)
+{
+ Console.WriteLine("Getting list of users...");
+
+ // Get all users (one page)
+ var result = await graphClient.Users
+ .Request()
+ .Select(e => new
+ {
+ e.DisplayName,
+ e.Id,
+ e.Identities
+ })
+ .GetAsync();
+
+ foreach (var user in result.CurrentPage)
+ {
+ Console.WriteLine(JsonConvert.SerializeObject(user));
+ }
+}
+```
+
+[Make API calls using the Microsoft Graph SDKs](/graph/sdks/create-requests) includes information on how to read and write information from Microsoft Graph, use `$select` to control the properties returned, provide custom query parameters, and use the `$filter` and `$orderBy` query parameters.
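+
+As an aside, the `$filter` pattern for finding a user by sign-in name can be expressed through this version of the SDK as well. The following is an illustrative sketch only, not part of the sample; `tenantDomain` and `signInName` are placeholder parameters you'd substitute with your own values:
+
+```csharp
+// Hypothetical helper: find a user by sign-in name (issuerAssignedId).
+// tenantDomain is the tenant's domain name, such as contoso.onmicrosoft.com.
+public static async Task GetUserBySignInName(GraphServiceClient graphClient, string tenantDomain, string signInName)
+{
+    var result = await graphClient.Users
+        .Request()
+        .Filter($"identities/any(c:c/issuerAssignedId eq '{signInName}' and c/issuer eq '{tenantDomain}')")
+        .Select(e => new
+        {
+            e.DisplayName,
+            e.Id,
+            e.Identities
+        })
+        .GetAsync();
+
+    foreach (var user in result.CurrentPage)
+    {
+        Console.WriteLine(JsonConvert.SerializeObject(user));
+    }
+}
+```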
+
+<!-- LINK -->
+
+[graph-objectIdentity]: /graph/api/resources/objectidentity
+[graph-user]: /graph/api/resources/user
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/openid-connect-technical-profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/openid-connect-technical-profile.md
@@ -90,7 +90,7 @@ The technical profile also returns claims that aren't returned by the identity p
| IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. | | token_endpoint_auth_method | No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). | | token_signing_algorithm | No | The signing algorithm used for client assertions when the **token_endpoint_auth_method** metadata is set to `private_key_jwt`. Possible values: `RS256` (default). |
-| SingleLogoutEnabled | No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-overview.md#sign-out). Possible values: `true` (default), or `false`. |
+| SingleLogoutEnabled | No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](./session-behavior.md#sign-out). Possible values: `true` (default), or `false`. |
```xml <Metadata>
@@ -132,4 +132,4 @@ Examples:
- [Add Microsoft Account (MSA) as an identity provider using custom policies](identity-provider-microsoft-account.md) - [Sign in by using Azure AD accounts](identity-provider-azure-ad-single-tenant.md)-- [Allow users to sign in to a multi-tenant Azure AD identity provider using custom policies](identity-provider-azure-ad-multi-tenant.md)
+- [Allow users to sign in to a multi-tenant Azure AD identity provider using custom policies](identity-provider-azure-ad-multi-tenant.md)
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/partner-gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-gallery.md
@@ -36,6 +36,7 @@ Microsoft partners with the following ISVs for identity verification and proofin
|![Screenshot of an Experian logo.](./media/partner-gallery/experian-logo.png) | [Experian](./partner-experian.md) is an identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. | |![Screenshot of an IDology logo.](./media/partner-gallery/idology-logo.png) | [IDology](./partner-idology.md) is an identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.| |![Screenshot of a Jumio logo.](./media/partner-gallery/jumio-logo.png) | [Jumio](./partner-jumio.md) is an ID verification service, which enables real-time automated ID verification, safeguarding customer data. |
+|![Screenshot of a Keyless logo.](./media/partner-gallery/keyless-logo.png) | [Keyless](./partner-keyless.md) is an ID verification service that provides authentication in the form of a facial biometric scan and eliminates fraud, phishing, and credential reuse. |
| ![Screenshot of a LexisNexis logo.](./media/partner-gallery/lexisnexis-logo.png) | [LexisNexis](./partner-lexisnexis.md) is a profiling and identity validation provider that verifies user identification and provides comprehensive risk assessment based on the user's device. | | ![Screenshot of an Onfido logo.](./media/partner-gallery/onfido-logo.png) | [Onfido](./partner-onfido.md) is a document ID and facial biometrics verification solution that allows companies to meet *Know Your Customer* and identity requirements in real time. |
@@ -74,10 +75,10 @@ Microsoft partners with the following ISVs for security.
## Additional information -- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
## Next steps
-Select a partner in the tables mentioned to learn how to integrate their solution with Azure AD B2C.
+Select a partner in the tables mentioned to learn how to integrate their solution with Azure AD B2C.
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/partner-keyless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-keyless.md new file mode 100644
@@ -0,0 +1,158 @@
+---
+title: Tutorial for configuring Keyless with Azure Active Directory B2C
+titleSuffix: Azure AD B2C
+description: Tutorial for configuring Keyless with Azure Active Directory B2C for passwordless authentication
+services: active-directory-b2c
+author: gargi-sinha
+manager: martinco
+
+ms.service: active-directory
+ms.workload: identity
+ms.topic: how-to
+ms.date: 1/17/2021
+ms.author: gasinh
+ms.subservice: B2C
+---
+
+# Tutorial for configuring Keyless with Azure Active Directory B2C
+
+In this sample tutorial, we provide guidance on how to configure Azure Active Directory (AD) B2C with [Keyless](https://keyless.io/). With Azure AD B2C as an identity provider, you can integrate Keyless with any of your customer applications to provide true passwordless authentication to your users.
+
+Keyless's solution, **Keyless Zero-Knowledge Biometric (ZKB™)**, provides passwordless multifactor authentication that eliminates fraud, phishing, and credential reuse, all while enhancing the customer experience and protecting user privacy.
+
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant). The tenant must be linked to your Azure subscription.
+
+- A Keyless cloud tenant. To get one, request a free [trial account](https://keyless.io/go).
+
+- The Keyless Authenticator app installed on your user's device.
+
+## Scenario description
+
+The Keyless integration includes the following components:
+
+- Azure AD B2C – The authorization server that verifies the user's credentials, also known as the identity provider.
+
+- Web and mobile applications – Your mobile or web applications that you choose to protect with Keyless and Azure AD B2C.
+
+- The Keyless mobile app – Used for authentication to the Azure AD B2C-enabled applications.
+
+The following architecture diagram shows the implementation.
+
+![Diagram of the Keyless architecture.](./media/partner-keyless/keyless-architecture-diagram.png)
+
+|Step | Description |
+|:-----| :-----------|
+| 1. | The user arrives at the sign-in page, selects sign-up or sign-in, and enters their username. |
+| 2. | The application sends the user attributes to Azure AD B2C for identity verification. |
+| 3. | Azure AD B2C collects the user attributes and sends them to Keyless to authenticate the user through the Keyless mobile app. |
+| 4. | Keyless sends a push notification to the registered user's mobile device for privacy-preserving authentication in the form of a facial biometric scan. |
+| 5. | After the user responds to the push notification, the user is granted or denied access to the customer application based on the verification results. |
+
+## Integrate with Azure AD B2C
+
+### Add a new Identity provider
+
+To add a new Identity provider, follow these steps:
+
+1. Sign in to the **[Azure portal](https://portal.azure.com/#home)** as the global administrator of your Azure AD B2C tenant.
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter on the top menu and choosing the directory that contains your tenant.
+
+3. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
+
+4. Navigate to **Dashboard** > **Azure Active Directory B2C** > **Identity providers**
+
+5. Select **Identity providers**.
+
+6. Select **Add**.
+
+### Configure an Identity provider
+
+To configure an identity provider, follow these steps:
+
+1. Select **Identity provider type** > **OpenID Connect (Preview)**
+2. Fill out the form to set up the Identity provider:
+
+ |Property | Value |
+ |:-----| :-----------|
+ | Name | Keyless |
+ | Metadata URL | Insert the URI of the hosted Keyless Authentication app, followed by the discovery document path, such as `https://keyless.auth/.well-known/openid-configuration` |
+ | Client Secret | The secret associated with the Keyless Authentication instance (not the same as the one configured before). Insert a complex string of your choice. This secret is used later in the Keyless Container configuration.|
+ | Client ID | The ID of the client. This ID is used later in the Keyless Container configuration.|
+ | Scope | openid |
+ | Response type | id_token |
+ | Response mode | form_post|
+
+3. Select **OK**.
+
+4. Select **Map this identity provider's claims**.
+
+5. Fill out the form to map the Identity provider:
+
+ |Property | Value |
+ |:-----| :-----------|
+ | UserID | From subscription |
+ | Display name | From subscription |
+ | Response mode | From subscription |
+
+6. Select **Save** to complete the setup for your new OpenID Connect (OIDC) identity provider.
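+
+For orientation, the discovery document behind the **Metadata URL** is a standard OpenID Connect configuration document. A minimal illustrative example follows; all values are hypothetical, and your Keyless instance provides the real ones:
+
+```json
+{
+  "issuer": "https://keyless.auth",
+  "authorization_endpoint": "https://keyless.auth/authorize",
+  "token_endpoint": "https://keyless.auth/token",
+  "jwks_uri": "https://keyless.auth/.well-known/jwks.json",
+  "response_types_supported": [ "id_token" ],
+  "scopes_supported": [ "openid" ],
+  "id_token_signing_alg_values_supported": [ "RS256" ]
+}
+```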
+
+### Create a user flow policy
+
+You should now see Keyless as a new OIDC Identity provider listed within your B2C identity providers.
+
+1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
+
+2. Select **New user flow**.
+
+3. Select **Sign up and sign in**, select a **version**, and then select **Create**.
+
+4. Enter a **Name** for your policy.
+
+5. In the Identity providers section, select your newly created Keyless Identity Provider.
+
+6. Set up the parameters of your user flow. Insert a name and select the identity provider you've created. You can also add email address as a sign-in option. In that case, Azure won't redirect the sign-in directly to Keyless; instead, it shows a screen where the user can choose the option they'd like to use.
+
+7. Leave the **Multi-factor Authentication** field as is.
+
+8. Select **Enforce conditional access policies**
+
+9. Under **User attributes and token claims**, select **Email Address** in the Collect attribute option. You can add all the attributes that Azure Active Directory can collect about the user alongside the claims that Azure AD B2C can return to the client application.
+
+10. Select **Create**.
+
+11. After a successful creation, select your new **User flow**.
+
+12. On the left panel, select **Application Claims**. Under options, tick the **email** checkbox and select **Save**.
+
+## Test the user flow
+
+1. Open your Azure AD B2C tenant, and under **Policies**, select **Identity Experience Framework**.
+
+2. Select your previously created **SignUpSignIn** user flow.
+
+3. Select **Run user flow** and select the settings:
+
+   a. **Application**: select the registered app (the sample is JWT).
+
+   b. **Reply URL**: select the redirect URL.
+
+   c. Select **Run user flow**.
+
+4. Go through the sign-up flow and create an account.
+
+5. Keyless is called during the flow, after the user attribute is created. If the flow is incomplete, check that the user isn't saved in the directory.
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+
+- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/partner-nevis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-nevis.md
@@ -25,9 +25,9 @@ To get started, you'll need:
- An Azure AD subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free/). -- An [Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant) that is linked to your Azure subscription.
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that is linked to your Azure subscription.
-- Configured Azure AD B2C environment for using [custom policies](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started), if you wish to integrate Nevis into your sign-up policy flow.
+- Configured Azure AD B2C environment for using [custom policies](./custom-policy-get-started.md), if you wish to integrate Nevis into your sign-up policy flow.
## Scenario description
@@ -129,9 +129,9 @@ You'll receive two emails:
4. **Save** the changes to the file.
-5. Follow the [instructions](https://docs.microsoft.com/azure/active-directory-b2c/customize-ui-with-html#2-create-an-azure-blob-storage-account) and upload the **nevis.html** file to your Azure blob storage.
+5. Follow the [instructions](./customize-ui-with-html.md#2-create-an-azure-blob-storage-account) and upload the **nevis.html** file to your Azure blob storage.
-6. Follow the [instructions](https://docs.microsoft.com/azure/active-directory-b2c/customize-ui-with-html#3-configure-cors) and enable Cross-Origin Resource Sharing (CORS) for this file.
+6. Follow the [instructions](./customize-ui-with-html.md#3-configure-cors) and enable Cross-Origin Resource Sharing (CORS) for this file.
7. Once the upload is complete and CORS is enabled, select the **nevis.html** file in the list.
@@ -263,6 +263,6 @@ You'll receive two emails:
For additional information, review the following articles -- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/partner-zscaler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-zscaler.md
@@ -22,7 +22,7 @@ In this tutorial, you'll learn how to integrate Azure Active Directory B2C (Azur
Before you begin, you'll need: - An Azure subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -- [An Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
+- [An Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
- [A ZPA subscription](https://azuremarketplace.microsoft.com/marketplace/apps/aad.zscalerprivateaccess?tab=Overview). ## Scenario description
@@ -91,15 +91,15 @@ After you've configured Azure AD B2C, the rest of the IdP configuration resumes.
>[!Note] >This step is required only if you havenΓÇÖt already configured custom policies. If you already have one or more custom policies, you can skip this step.
-To configure custom policies on your Azure AD B2C tenant, see [Get started with custom policies in Azure Active Directory B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started).
+To configure custom policies on your Azure AD B2C tenant, see [Get started with custom policies in Azure Active Directory B2C](./custom-policy-get-started.md).
### Step 3: Register ZPA as a SAML application in Azure AD B2C
-To configure a SAML application in Azure AD B2C, see [Register a SAML application in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/connect-with-saml-service-providers).
+To configure a SAML application in Azure AD B2C, see [Register a SAML application in Azure AD B2C](./connect-with-saml-service-providers.md).
-In step ["3.2 Upload and test your policy metadata"](https://docs.microsoft.com/azure/active-directory-b2c/connect-with-saml-service-providers#32-upload-and-test-your-policy-metadata), copy or note the IdP SAML metadata URL that's used by Azure AD B2C. You'll need it later.
+In step ["3.2 Upload and test your policy metadata"](./connect-with-saml-service-providers.md#32-upload-and-test-your-policy-metadata), copy or note the IdP SAML metadata URL that's used by Azure AD B2C. You'll need it later.
-Follow the instructions through step ["4.2 Update the app manifest"](https://docs.microsoft.com/azure/active-directory-b2c/connect-with-saml-service-providers#42-update-the-app-manifest). In step 4.2, update the app manifest properties as follows:
+Follow the instructions through step ["4.2 Update the app manifest"](./connect-with-saml-service-providers.md#42-update-the-app-manifest). In step 4.2, update the app manifest properties as follows:
- For **identifierUris**: Use the Service Provider Entity ID that you copied or noted earlier in "Step 1.6.b". - For **samlMetadataUrl**: Skip this property, because ZPA doesn't host a SAML metadata URL.
@@ -144,7 +144,7 @@ Go to a ZPA user portal or a browser-access application, and test the sign-up or
For more information, review the following articles: -- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started)-- [Register a SAML application in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/connect-with-saml-service-providers)
+- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md)
+- [Register a SAML application in Azure AD B2C](./connect-with-saml-service-providers.md)
- [Step-by-step configuration guide for ZPA](https://help.zscaler.com/zpa/step-step-configuration-guide-zpa)-- [Configure an IdP for single sign-on](https://help.zscaler.com/zpa/configuring-idp-single-sign)
+- [Configure an IdP for single sign-on](https://help.zscaler.com/zpa/configuring-idp-single-sign)
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/phone-authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/phone-authentication.md
@@ -35,12 +35,12 @@ With phone sign-up and sign-in, the user can sign up for the app using a phone n
> > *&lt;insert: a link to your Privacy Statement&gt;*<br/>*&lt;insert: a link to your Terms of Service&gt;*
-To add your own consent information, customize the following sample and include it in the LocalizedResources for the ContentDefinition used by the self-asserted page with the display control (the *Phone_Email_Base.xml* file in the [phone sign-up and sign-in starter pack][starter-pack-phone]):
+To add your own consent information, customize the following sample. Include it in the `LocalizedResources` for the ContentDefinition used by the self-asserted page with the display control (the *Phone_Email_Base.xml* file in the [phone sign-up and sign-in starter pack][starter-pack-phone]):
```xml <LocalizedResources Id="phoneSignUp.en"> <LocalizedStrings>
- <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_msg_intro">By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign into {insert your application name}. Standard messsage and data rates may apply.</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_msg_intro">By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign into {insert your application name}. Standard message and data rates may apply.</LocalizedString>
<LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_link_1_text">Privacy Statement</LocalizedString> <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_link_1_url">{insert your privacy statement URL}</LocalizedString> <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_link_2_text">Terms and Conditions</LocalizedString>
@@ -60,7 +60,7 @@ A one-time verification code is sent to the user's phone number. The user enters
![User verifies code during phone sign-up](media/phone-authentication/phone-signup-verify-code.png)
- The user enters any other information requested on the sign-up page, for example, **Display Name**, **Given Name**, and **Surname** (Country and phone number remain populated). If the user wants to use a different phone number, they can choose **Change number** to restart sign-up. When finished, the user selects **Continue**.
+The user enters any other information requested on the sign-up page. For example, **Display Name**, **Given Name**, and **Surname** (Country and phone number remain populated). If the user wants to use a different phone number, they can choose **Change number** to restart sign-up. When finished, the user selects **Continue**.
![User provides additional info](media/phone-authentication/phone-signup-additional-info.png)
@@ -96,8 +96,6 @@ You need the following resources in place before setting up OTP.
Start by updating the phone sign-up and sign-in custom policy files to work with your Azure AD B2C tenant.
-The following steps assume that you've completed the [prerequisites](#prerequisites) and have already cloned the [custom policy starter pack][starter-pack] repository to your local machine.
- 1. Find the [phone sign-up and sign-in custom policy files][starter-pack-phone] in your local clone of the starter pack repo, or download them directly. The XML policy files are located in the following directory: `active-directory-b2c-custom-policy-starterpack/scenarios/`**`phone-number-passwordless`**
@@ -132,9 +130,9 @@ As you upload each file, Azure adds the prefix `B2C_1A_`.
## Get user account by phone number
-A user that signs up with a phone number but does not provide a recovery email address is recorded in your Azure AD B2C directory with their phone number as their sign-in name. If the user then wishes to change their phone number, your help desk or support team must first find their account, and then update their phone number.
+A user who signs up with a phone number but doesn't provide a recovery email address is recorded in your Azure AD B2C directory with their phone number as their sign-in name. To change the phone number, your help desk or support team must first find the account, and then update the phone number.
-You can find a user by their phone number (sign-in name) by using [Microsoft Graph](manage-user-accounts-graph-api.md):
+You can find a user by their phone number (sign-in name) by using [Microsoft Graph](microsoft-graph-operations.md):
```http GET https://graph.microsoft.com/v1.0/users?$filter=identities/any(c:c/issuerAssignedId eq '+{phone number}' and c/issuer eq '{tenant name}.onmicrosoft.com')
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/saml-identity-provider-technical-profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
@@ -18,7 +18,7 @@ ms.subservice: B2C
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-Azure Active Directory B2C (Azure AD B2C) provides support for the SAML 2.0 identity provider. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With a SAML technical profile you can federate with a SAML-based identity provider, such as [ADFS](identity-provider-adfs2016-custom.md) and [Salesforce](identity-provider-salesforce-saml.md). This federation allows your users to sign in with their existing social or enterprise identities.
+Azure Active Directory B2C (Azure AD B2C) provides support for the SAML 2.0 identity provider. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With a SAML technical profile you can federate with a SAML-based identity provider, such as [ADFS](./identity-provider-adfs.md) and [Salesforce](identity-provider-salesforce-saml.md). This federation allows your users to sign in with their existing social or enterprise identities.
## Metadata exchange
@@ -213,4 +213,4 @@ Example:
See the following articles for examples of working with SAML identity providers in Azure AD B2C: - [Add ADFS as a SAML identity provider using custom policies](identity-provider-adfs.md)-- [Sign in by using Salesforce accounts via SAML](identity-provider-salesforce-saml.md)
+- [Sign in by using Salesforce accounts via SAML](identity-provider-salesforce-saml.md)
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/user-flow-custom-attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-custom-attributes.md
@@ -27,7 +27,7 @@ Your Azure AD B2C directory comes with a [built-in set of attributes](user-profi
* An identity provider has a unique user identifier, **uniqueUserGUID**, that must be persisted. * A custom user journey needs to persist the state of the user, **migrationStatus**, for other logic to operate on.
-Azure AD B2C allows you to extend the set of attributes stored on each user account. You can also read and write these attributes by using the [Microsoft Graph API](manage-user-accounts-graph-api.md).
+Azure AD B2C allows you to extend the set of attributes stored on each user account. You can also read and write these attributes by using the [Microsoft Graph API](microsoft-graph-operations.md).
## Prerequisites
@@ -56,7 +56,7 @@ The custom attribute is now available in the list of **User attributes** and for
1. Select **Application claims** and then select the custom attribute. 1. Click **Save**.
-Once you've created a new user using a user flow which uses the newly created custom attribute, the object can be queried in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). Alternatively you can use the [Run user flow](./tutorial-create-user-flows.md) feature on the user flow to verify the customer experience. You should now see **ShoeSize** in the list of attributes collected during the sign-up journey, and see it in the token sent back to your application.
+Once you've created a new user through a user flow that uses the newly created custom attribute, the object can be queried in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). Alternatively, you can use the [Run user flow](./tutorial-create-user-flows.md) feature on the user flow to verify the customer experience. You should now see **ShoeSize** in the list of attributes collected during the sign-up journey, and see it in the token sent back to your application.
::: zone-end
@@ -131,7 +131,7 @@ You can create these attributes by using the portal UI before or after you use t
|Name |Used in | |---------|---------| |`extension_loyaltyId` | Custom policy|
-|`extension_<b2c-extensions-app-guid>_loyaltyId` | [Microsoft Graph API](manage-user-accounts-graph-api.md)|
+|`extension_<b2c-extensions-app-guid>_loyaltyId` | [Microsoft Graph API](microsoft-graph-operations.md)|
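+
+As an illustrative sketch (assuming a `GraphServiceClient` initialized with client credentials, as in the account-management sample), the Graph-side name in the table above is what you pass when selecting the attribute. Replace `<b2c-extensions-app-guid>` with the application (client) ID of your tenant's `b2c-extensions-app`, with the dashes removed:
+
+```csharp
+// Illustrative only: read a custom attribute for all users via Microsoft Graph.
+var users = await graphClient.Users
+    .Request()
+    .Select("id,displayName,extension_<b2c-extensions-app-guid>_loyaltyId")
+    .GetAsync();
+```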
The following example demonstrates the use of custom attributes in an Azure AD B2C custom policy claim definition.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/user-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-migration.md
@@ -15,7 +15,7 @@ ms.subservice: B2C
--- # Migrate users to Azure AD B2C
-Migrating from another identity provider to Azure Active Directory B2C (Azure AD B2C) might also require migrating existing user accounts. Two migration methods are discussed here, *pre migration* and *seamless migration*. With either approach, you're required to write an application or script that uses the [Microsoft Graph API](manage-user-accounts-graph-api.md) to create user accounts in Azure AD B2C.
+Migrating from another identity provider to Azure Active Directory B2C (Azure AD B2C) might also require migrating existing user accounts. Two migration methods are discussed here, *pre migration* and *seamless migration*. With either approach, you're required to write an application or script that uses the [Microsoft Graph API](microsoft-graph-operations.md) to create user accounts in Azure AD B2C.
## Pre migration
@@ -29,7 +29,7 @@ Use the pre migration flow in either of these two situations:
- You have access to a user's plaintext credentials (their username and password). - The credentials are encrypted, but you can decrypt them.
-For information about programmatically creating user accounts, see [Manage Azure AD B2C user accounts with Microsoft Graph](manage-user-accounts-graph-api.md).
+For information about programmatically creating user accounts, see [Manage Azure AD B2C user accounts with Microsoft Graph](microsoft-graph-operations.md).
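As a sketch of what a pre-migration script assembles before calling Microsoft Graph, the following builds a create-user request body for a B2C local account. The tenant name, user details, and password are placeholders, and the exact field set your migration needs may differ:

```python
def build_b2c_user(tenant: str, display_name: str, email: str, password: str) -> dict:
    """Build a request body for creating a local-account consumer user
    in an Azure AD B2C tenant via Microsoft Graph (POST /users)."""
    return {
        "displayName": display_name,
        # A local account signs in with an email address issued by the tenant.
        "identities": [
            {
                "signInType": "emailAddress",
                "issuer": tenant,
                "issuerAssignedId": email,
            }
        ],
        "passwordProfile": {
            "password": password,
            "forceChangePasswordNextSignIn": False,
        },
        "passwordPolicies": "DisablePasswordExpiration",
    }

body = build_b2c_user(
    "contoso.onmicrosoft.com", "Casey Jensen", "casey@example.com", "S3cure!Pass"
)
```

A migration script would POST this body to `https://graph.microsoft.com/v1.0/users` using an app-only token with user write permissions; in the seamless flow, the script sets a random password and migrates the real one on first sign-in.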
## Seamless migration
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/user-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-overview.md
@@ -34,7 +34,7 @@ When you add a new work account, you need to consider the following configuratio
- **Name** and **User name** - The **Name** property contains the given and surname of the user. The **User name** is the identifier that the user enters to sign in. The user name includes the full domain. The domain name portion of the user name must either be the initial default domain name *your-domain.onmicrosoft.com*, or a verified, non-federated [custom domain](../active-directory/fundamentals/add-custom-domain.md) name such as *contoso.com*. - **Profile** - The account is set up with a profile of user data. You have the opportunity to enter a first name, last name, job title, and department name. You can edit the profile after the account is created.-- **Groups** - Use a group to perform management tasks such as assigning licenses or permissions to a number of users or devices at once. You can put the new account into an existing [group](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) in your tenant.
+- **Groups** - Use groups to perform management tasks such as assigning licenses or permissions to many users or devices at once. You can put the new account into an existing [group](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) in your tenant.
- **Directory role** - You need to specify the level of access that the user account has to resources in your tenant. The following permission levels are available: - **User** - Users can access assigned resources but cannot manage most tenant resources.
@@ -66,7 +66,7 @@ You can use the following information to reset the password of a user:
You can invite external users to your tenant as a guest user. A typical scenario for inviting a guest user to your Azure AD B2C tenant is to share administration responsibilities. For an example of using a guest account, see [Properties of an Azure Active Directory B2B collaboration user](../active-directory/external-identities/user-properties.md).
-When you invite a guest user to your tenant, you provide the email address of the recipient and a message describing the invitation. The invitation link takes the user to the consent page where the **Get Started** button is selected and the review of permissions is accepted. If an inbox isn't attached to the email address, the user can navigate to the consent page by going to a Microsoft page using the invited credentials. The user is then forced to redeem the invitation the same way as clicking on the link in the email. For example: `https://myapps.microsoft.com/B2CTENANTNAME`.
+When you invite a guest user to your tenant, you provide the email address of the recipient and a message describing the invitation. The invitation link takes the user to the consent page. If an inbox isn't attached to the email address, the user can navigate to the consent page by going to a Microsoft page using the invited credentials. The user is then forced to redeem the invitation the same way as clicking on the link in the email. For example: `https://myapps.microsoft.com/B2CTENANTNAME`.
You can also use the [Microsoft Graph API](/graph/api/invitation-post?view=graph-rest-beta) to invite a guest user.
@@ -74,7 +74,7 @@ You can also use the [Microsoft Graph API](/graph/api/invitation-post?view=graph
The consumer user can sign in to applications secured by Azure AD B2C, but cannot access Azure resources such as the Azure portal. The consumer user can use a local account or federated accounts, such as Facebook or Twitter. A consumer account is created by using a [sign-up or sign-in user flow](user-flow-overview.md), using the Microsoft Graph API, or by using the Azure portal.
-You can specify the data that is collected when a consumer user account is created by using custom user attributes. For more information, see [Define custom attributes in Azure Active Directory B2C](user-flow-custom-attributes.md).
+You can specify the data that is collected when a consumer user account is created. For more information, see [Add user attributes and customize user input](configure-user-input.md).
For more information about managing consumer accounts, see [Manage Azure AD B2C user accounts with Microsoft Graph](manage-user-accounts-graph-api.md).
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/user-profile-attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-profile-attributes.md
@@ -156,7 +156,7 @@ In user migration scenarios, if the accounts you want to migrate have weaker pas
## MFA phone number attribute
-When using a phone for multi-factor authentication (MFA), the mobile phone is used to verify the user identity. To [add](https://docs.microsoft.com/graph/api/authentication-post-phonemethods) a new phone number programatically, [update](https://docs.microsoft.com/graph/api/b2cauthenticationmethodspolicy-update), [get](https://docs.microsoft.com/graph/api/b2cauthenticationmethodspolicy-get), or [delete](https://docs.microsoft.com/graph/api/phoneauthenticationmethod-delete) the phone number, use MS Graph API [phone authentication method](https://docs.microsoft.com/graph/api/resources/phoneauthenticationmethod).
+When using a phone for multi-factor authentication (MFA), the mobile phone is used to verify the user identity. To [add](/graph/api/authentication-post-phonemethods) a new phone number programmatically, or to [update](/graph/api/b2cauthenticationmethodspolicy-update), [get](/graph/api/b2cauthenticationmethodspolicy-get), or [delete](/graph/api/phoneauthenticationmethod-delete) an existing one, use the Microsoft Graph API [phone authentication method](/graph/api/resources/phoneauthenticationmethod).
In Azure AD B2C [custom policies](custom-policy-overview.md), the phone number is available through the `strongAuthenticationPhoneNumber` claim type.
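A minimal sketch of the add request described above (nothing is sent here; the user ID and phone number are placeholders):

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def add_phone_method_request(user_id: str, phone_number: str) -> tuple[str, dict]:
    """Build the URL and JSON body for adding a mobile phone
    authentication method to a user via Microsoft Graph."""
    url = f"{GRAPH}/users/{user_id}/authentication/phoneMethods"
    body = {"phoneNumber": phone_number, "phoneType": "mobile"}
    return url, body

# Placeholder user object ID and number, for illustration only.
url, body = add_phone_method_request("aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb", "+1 4251234567")
```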
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/view-audit-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/view-audit-logs.md
@@ -219,4 +219,4 @@ Here's the JSON representation of the example activity event shown earlier in th
## Next steps
-You can automate other administration tasks, for example, [manage Azure AD B2C user accounts with Microsoft Graph](manage-user-accounts-graph-api.md).
\ No newline at end of file
+You can automate other administration tasks, for example, [manage Azure AD B2C user accounts with Microsoft Graph](microsoft-graph-operations.md).
\ No newline at end of file
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/overview.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: domain-services ms.workload: identity ms.topic: overview
-ms.date: 12/03/2020
+ms.date: 01/20/2021
ms.author: justinha ms.custom: contperf-fy21q1
@@ -19,22 +19,26 @@ ms.custom: contperf-fy21q1
# What is Azure Active Directory Domain Services?
-Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos / NTLM authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.
+Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.
An Azure AD DS managed domain lets you run legacy applications in the cloud that can't use modern authentication methods, or where you don't want directory lookups to always go back to an on-premises AD DS environment. You can lift and shift those legacy applications from your on-premises environment into a managed domain, without needing to manage the AD DS environment in the cloud.
-Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign in to service and applications connected to the managed domain using their existing credentials. You can also use existing groups and user accounts to secure access to resources. These features provide a smoother lift-and-shift of on-premises resources to Azure.
+Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign in to services and applications connected to the managed domain using their existing credentials. You can also use existing groups and user accounts to secure access to resources. These features provide a smoother lift-and-shift of on-premises resources to Azure.
> [!div class="nextstepaction"] > [To get started, create an Azure AD DS managed domain using the Azure portal][tutorial-create]
+Take a look at our short video to learn more about Azure AD DS.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4LblD]
+ ## How does Azure AD DS work? When you create an Azure AD DS managed domain, you define a unique namespace. This namespace is the domain name, such as *aaddscontoso.com*. Two Windows Server domain controllers (DCs) are then deployed into your selected Azure region. This deployment of DCs is known as a replica set. You don't need to manage, configure, or update these DCs. The Azure platform handles the DCs as part of the managed domain, including backups and encryption at rest using Azure Disk Encryption.
-A managed domain is configured to perform a one-way synchronization from Azure AD to provide access to a central set of users, groups, and credentials. You can create resources directly in the managed domain, but they aren't synchronized back to Azure AD. Applications, services, and VMs in Azure that connect to the managed domain can then use common AD DS features such as domain join, group policy, LDAP, and Kerberos / NTLM authentication.
+A managed domain is configured to perform a one-way synchronization from Azure AD to provide access to a central set of users, groups, and credentials. You can create resources directly in the managed domain, but they aren't synchronized back to Azure AD. Applications, services, and VMs in Azure that connect to the managed domain can then use common AD DS features such as domain join, group policy, LDAP, and Kerberos/NTLM authentication.
In a hybrid environment with an on-premises AD DS environment, [Azure AD Connect][azure-ad-connect] synchronizes identity information with Azure AD, which is then synchronized to the managed domain.
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/powershell-scoped-synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/powershell-scoped-synchronization.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: domain-services ms.workload: identity ms.topic: how-to
-ms.date: 07/24/2020
+ms.date: 01/20/2021
ms.author: justinha ---
@@ -37,15 +37,14 @@ To complete this article, you need the following resources and privileges:
By default, all users and groups from an Azure AD directory are synchronized to a managed domain. If only a few users need to access the managed domain, you can synchronize only those user accounts. This scoped synchronization is group-based. When you configure group-based scoped synchronization, only the user accounts that belong to the groups you specify are synchronized to the managed domain. Nested groups aren't synchronized, only the specific groups you select.
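The scoping rule above, that only direct members of the selected groups are synchronized and nested groups are ignored, can be sketched as a simple membership filter. This is an illustration of the rule only, not the service's implementation:

```python
def scoped_sync_users(groups: dict, selected: set) -> set:
    """Return the users that group-based scoped synchronization would sync.

    `groups` maps a group name to its direct members, each a
    ("user", name) or ("group", name) tuple. Members that are themselves
    groups are NOT expanded, matching the no-nesting rule.
    """
    synced = set()
    for group in selected:
        for kind, name in groups.get(group, []):
            if kind == "user":
                synced.add(name)
    return synced

groups = {
    "AppAdmins": [("user", "alice"), ("group", "NestedGroup")],
    "NestedGroup": [("user", "bob")],
}
# bob belongs only to a nested group, so he isn't synchronized.
print(sorted(scoped_sync_users(groups, {"AppAdmins"})))  # ['alice']
```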
-You can change the synchronization scope when you create the managed domain, or once it's deployed. You can also now change the scope of synchronization on an existing managed domain without needing to recreate it.
+You can change the synchronization scope before or after you create the managed domain. The scope of synchronization is defined by a service principal with the application identifier 2565bd9d-da50-47d4-8b85-4c97f669dc36. To prevent scope loss, don't delete or change the service principal. If it is accidentally deleted, the synchronization scope can't be recovered.
-To learn more about the synchronization process, see [Understand synchronization in Azure AD Domain Services][concepts-sync].
+Keep in mind the following caveats if you change the synchronization scope:
-> [!WARNING]
-> Changing the scope of synchronization causes the managed domain to resynchronize all data. The following considerations apply:
->
-> * When you change the synchronization scope for a managed domain, a full resynchronization occurs.
-> * Objects that are no longer required in the managed domain are deleted. New objects are created in the managed domain.
+- A full synchronization occurs.
+- Objects that are no longer required in the managed domain are deleted. New objects are created in the managed domain.
+
+To learn more about the synchronization process, see [Understand synchronization in Azure AD Domain Services][concepts-sync].
## PowerShell script for scoped synchronization
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/scoped-synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/scoped-synchronization.md
@@ -10,7 +10,7 @@ ms.service: active-directory
ms.subservice: domain-services ms.workload: identity ms.topic: how-to
-ms.date: 07/24/2020
+ms.date: 01/20/2021
ms.author: justinha ms.custom: devx-track-azurepowershell
@@ -39,15 +39,14 @@ To complete this article, you need the following resources and privileges:
By default, all users and groups from an Azure AD directory are synchronized to a managed domain. If only a few users need to access the managed domain, you can synchronize only those user accounts. This scoped synchronization is group-based. When you configure group-based scoped synchronization, only the user accounts that belong to the groups you specify are synchronized to the managed domain. Nested groups aren't synchronized, only the specific groups you select.
-You can change the synchronization scope when you create the managed domain, or once it's deployed. You can also now change the scope of synchronization on an existing managed domain without needing to recreate it.
+You can change the synchronization scope before or after you create the managed domain. The scope of synchronization is defined by a service principal with the application identifier 2565bd9d-da50-47d4-8b85-4c97f669dc36. To prevent scope loss, don't delete or change the service principal. If it is accidentally deleted, the synchronization scope can't be recovered.
-To learn more about the synchronization process, see [Understand synchronization in Azure AD Domain Services][concepts-sync].
+Keep in mind the following caveats if you change the synchronization scope:
+
+- A full synchronization occurs.
+- Objects that are no longer required in the managed domain are deleted. New objects are created in the managed domain.
-> [!WARNING]
-> Changing the scope of synchronization causes the managed domain to resynchronize all data. The following considerations apply:
->
-> * When you change the synchronization scope for a managed domain, a full resynchronization occurs.
-> * Objects that are no longer required in the managed domain are deleted. New objects are created in the managed domain.
+To learn more about the synchronization process, see [Understand synchronization in Azure AD Domain Services][concepts-sync].
## Enable scoped synchronization
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-forest-trust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-create-forest-trust.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: domain-services ms.workload: identity ms.topic: tutorial
-ms.date: 07/06/2020
+ms.date: 01/21/2021
ms.author: justinha #Customer intent: As an identity administrator, I want to create a one-way outbound forest from an Azure Active Directory Domain Services resource forest to an on-premises Active Directory Domain Services forest to provide authentication and resource access between forests.
@@ -17,7 +17,7 @@ ms.author: justinha
# Tutorial: Create an outbound forest trust to an on-premises domain in Azure Active Directory Domain Services
-In environments where you can't synchronize password hashes, or you have users that exclusively sign in using smart cards so they don't know their password, you can use a resource forest in Azure Active Directory Domain Services (Azure AD DS). A resource forest uses a one-way outbound trust from Azure AD DS to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Azure AD DS managed domain. In a resource forest, on-premises password hashes are never synchronized.
+In environments where you can't synchronize password hashes, or where users exclusively sign in using smart cards and don't know their password, you can use a resource forest in Azure Active Directory Domain Services (Azure AD DS). A resource forest uses a one-way outbound trust from Azure AD DS to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Azure AD DS managed domain. In a resource forest, on-premises password hashes are never synchronized.
![Diagram of forest trust from Azure AD DS to on-premises AD DS](./media/concepts-resource-forest/resource-forest-trust-relationship.png)
@@ -59,7 +59,7 @@ Before you configure a forest trust in Azure AD DS, make sure your networking be
* Use private IP addresses. Don't rely on DHCP with dynamic IP address assignment. * Avoid overlapping IP address spaces to allow virtual network peering and routing to successfully communicate between Azure and on-premises.
-* An Azure virtual network needs a gateway subnet to configure an [Azure site-to-site (S2S) VPN][vpn-gateway] or [ExpressRoute][expressroute] connection
+* An Azure virtual network needs a gateway subnet to configure an [Azure site-to-site (S2S) VPN][vpn-gateway] or [ExpressRoute][expressroute] connection.
* Create subnets with enough IP addresses to support your scenario. * Make sure Azure AD DS has its own subnet, don't share this virtual network subnet with application VMs and services. * Peered virtual networks are NOT transitive.
@@ -82,8 +82,8 @@ The on-premises AD DS domain needs an incoming forest trust for the managed doma
To configure inbound trust on the on-premises AD DS domain, complete the following steps from a management workstation for the on-premises AD DS domain:
-1. Select **Start | Administrative Tools | Active Directory Domains and Trusts**.
-1. Right-select domain, such as *onprem.contoso.com*, then select **Properties**.
+1. Select **Start** > **Administrative Tools** > **Active Directory Domains and Trusts**.
+1. Right-click the domain, such as *onprem.contoso.com*, then select **Properties**.
1. Choose the **Trusts** tab, then **New Trust**. 1. Enter the Azure AD DS domain name, such as *aaddscontoso.com*, then select **Next**. 1. Select the option to create a **Forest trust**, then to create a **One way: incoming** trust.
@@ -92,6 +92,14 @@ To configure inbound trust on the on-premises AD DS domain, complete the followi
1. Step through the next few windows with default options, then choose the option for **No, do not confirm the outgoing trust**. 1. Select **Finish**.
+If the forest trust is no longer needed for an environment, complete the following steps to remove it from the on-premises domain:
+
+1. Select **Start** > **Administrative Tools** > **Active Directory Domains and Trusts**.
+1. Right-click the domain, such as *onprem.contoso.com*, then select **Properties**.
+1. Choose the **Trusts** tab, then under **Domains that trust this domain (incoming trusts)**, click the trust to be removed, and then click **Remove**.
+1. On the **Trusts** tab, under **Domains trusted by this domain (outgoing trusts)**, click the trust to be removed, and then click **Remove**.
+1. Click **No, remove the trust from the local domain only**.
+ ## Create outbound forest trust in Azure AD DS With the on-premises AD DS domain configured to resolve the managed domain and an inbound forest trust created, now create the outbound forest trust. This outbound forest trust completes the trust relationship between the on-premises AD DS domain and the managed domain.
@@ -105,12 +113,18 @@ To create the outbound trust for the managed domain in the Azure portal, complet
> If you don't see the **Trusts** menu option, check under **Properties** for the *Forest type*. Only *resource* forests can create trusts. If the forest type is *User*, you can't create trusts. There's currently no way to change the forest type of a managed domain. You need to delete and recreate the managed domain as a resource forest. 1. Enter a display name that identifies your trust, then the on-premises trusted forest DNS name, such as *onprem.contoso.com*.
-1. Provide the same trust password that was used when configuring the inbound forest trust for the on-premises AD DS domain in the previous section.
+1. Provide the same trust password that was used to configure the inbound forest trust for the on-premises AD DS domain in the previous section.
1. Provide at least two DNS servers for the on-premises AD DS domain, such as *10.1.1.4* and *10.1.1.5*. 1. When ready, **Save** the outbound forest trust. ![Create outbound forest trust in the Azure portal](./media/tutorial-create-forest-trust/portal-create-outbound-trust.png)
+If the forest trust is no longer needed for an environment, complete the following steps to remove it from Azure AD DS:
+
+1. In the Azure portal, search for and select **Azure AD Domain Services**, then select your managed domain, such as *aaddscontoso.com*.
+1. From the menu on the left-hand side of the managed domain, select **Trusts**, choose the trust, and click **Remove**.
+1. Provide the same trust password that was used to configure the forest trust and click **OK**.
+ ## Validate resource authentication The following common scenarios let you validate that forest trust correctly authenticates users and access to resources:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/customize-application-attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
@@ -110,7 +110,7 @@ Applications and systems that support customization of the attribute list includ
> [!NOTE]
-> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true . You can then navigate to your application to view the attribute list as described [above](https://docs.microsoft.com/azure/active-directory/app-provisioning/customize-application-attributes#editing-the-list-of-supported-attributes).
+> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true. You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes).
When editing the list of supported attributes, the following properties are provided:
@@ -334,4 +334,4 @@ Selecting this option will effectively force a resynchronization of all users wh
- [Writing Expressions for Attribute-Mappings](functions-for-customizing-application-data.md) - [Scoping Filters for User Provisioning](define-conditional-rules-for-provisioning-user-accounts.md) - [Using SCIM to enable automatic provisioning of users and groups from Azure Active Directory to applications](use-scim-to-provision-users-and-groups.md)-- [List of Tutorials on How to Integrate SaaS Apps](../saas-apps/tutorial-list.md)
+- [List of Tutorials on How to Integrate SaaS Apps](../saas-apps/tutorial-list.md)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
@@ -51,7 +51,7 @@ Every application requires different attributes to create a user or group. Start
|loginName|userName|userPrincipalName| |firstName|name.givenName|givenName| |lastName|name.lastName|lastName|
-|workMail|Emails[type eq "work"].value|Mail|
+|workMail|emails[type eq "work"].value|Mail|
|manager|manager|manager| |tag|urn:ietf:params:scim:schemas:extension:2.0:CustomExtension:tag|extensionAttribute1| |status|active|isSoftDeleted (computed value not stored on user)|
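A sketch of the mapping in the table above, producing a SCIM user payload from Azure AD attributes. This is illustrative only; note that the table uses `name.lastName`, whereas the SCIM core schema calls this path `name.familyName`:

```python
def to_scim_user(aad_user: dict) -> dict:
    """Map Azure AD attributes to a SCIM 2.0 user, following the
    attribute table above (paths kept as the table spells them)."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": aad_user["userPrincipalName"],
        "name": {
            "givenName": aad_user["givenName"],
            "lastName": aad_user["lastName"],
        },
        # workMail maps to the email entry filtered by type eq "work".
        "emails": [{"type": "work", "value": aad_user["mail"]}],
        # status/active derives from the computed isSoftDeleted value.
        "active": not aad_user.get("isSoftDeleted", False),
    }

scim = to_scim_user({
    "userPrincipalName": "casey@contoso.com",
    "givenName": "Casey",
    "lastName": "Jensen",
    "mail": "casey@contoso.com",
})
```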
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-sms-signin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-sms-signin.md
@@ -1,12 +1,12 @@
--- title: SMS-based user sign in for Azure Active Directory
-description: Learn how to configure and enable users to sign in to Azure Active Directory using SMS (preview)
+description: Learn how to configure and enable users to sign in to Azure Active Directory using SMS
services: active-directory ms.service: active-directory ms.subservice: authentication ms.topic: conceptual
-ms.date: 10/05/2020
+ms.date: 01/21/2021
ms.author: justinha author: justinha
@@ -16,15 +16,12 @@ ms.reviewer: rateller
ms.collection: M365-identity-device-management ---
-# Configure and enable users for SMS-based authentication using Azure Active Directory (preview)
+# Configure and enable users for SMS-based authentication using Azure Active Directory
-To reduce the complexity and security risks for users to sign in to applications and services, Azure Active Directory (Azure AD) provides multiple authentication options. SMS-based authentication, currently in preview, lets users sign in without needing to provide, or even know, their username and password. After their account is created by an identity administrator, they can enter their phone number at the sign-in prompt, and provide an authentication code that's sent to them via text message. This authentication method simplifies access to applications and services, especially for front line workers.
+To simplify and secure sign-in to applications and services, Azure Active Directory (Azure AD) provides multiple authentication options. SMS-based authentication lets users sign in without providing, or even knowing, their user name and password. After their account is created by an identity administrator, they can enter their phone number at the sign-in prompt. They receive an authentication code via text message that they can provide to complete the sign-in. This authentication method simplifies access to applications and services, especially for front line workers.
This article shows you how to enable SMS-based authentication for select users or groups in Azure AD.
-> [!NOTE]
-> SMS-based authentication for users is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Before you begin To complete this article, you need the following resources and privileges:
@@ -41,10 +38,10 @@ To complete this article, you need the following resources and privileges:
## Limitations
-During the public preview of SMS-based authentication, the following limitations apply:
+The following limitations apply to SMS-based authentication:
* SMS-based authentication isn't currently compatible with Azure AD Multi-Factor Authentication.
-* With the exception of Teams, SMS-based authentication isn't currently compatible with native Office applications.
+* Except for Teams, SMS-based authentication isn't compatible with native Office applications.
* SMS-based authentication isn't recommended for B2B accounts. * Federated users won't authenticate in the home tenant. They only authenticate in the cloud.
@@ -55,15 +52,15 @@ There are three main steps to enable and use SMS-based authentication in your or
* Enable the authentication method policy. * Select users or groups that can use the SMS-based authentication method. * Assign a phone number for each user account.
- * This phone number can be assigned in the Azure portal (which is shown in this article), and in *My Staff* or *My Profile*.
+ * This phone number can be assigned in the Azure portal (which is shown in this article), and in *My Staff* or *My Account*.
First, let's enable SMS-based authentication for your Azure AD tenant. 1. Sign in to the [Azure portal][azure-portal] as a *global administrator*. 1. Search for and select **Azure Active Directory**.
-1. From the navigation menu on the left-hand side of the Azure Active Directory window, select **Security > Authentication methods > Authentication method policy (preview)**.
+1. From the navigation menu on the left-hand side of the Azure Active Directory window, select **Security > Authentication methods > Authentication method policy**.
- [![Browse to and select the Authentication method policy (preview) window in the Azure portal.](media/howto-authentication-sms-signin/authentication-method-policy-cropped.png)](media/howto-authentication-sms-signin/authentication-method-policy.png#lightbox)
+ [![Browse to and select the Authentication method policy window in the Azure portal.](media/howto-authentication-sms-signin/authentication-method-policy-cropped.png)](media/howto-authentication-sms-signin/authentication-method-policy.png#lightbox)
1. From the list of available authentication methods, select **Text message**. 1. Set **Enable** to *Yes*.
@@ -87,7 +84,7 @@ Each user that's enabled in the text message authentication method policy must b
## Set a phone number for user accounts
-Users are now enabled for SMS-based authentication, but their phone number must be associated with the user profile in Azure AD before they can sign in. The user can [set this phone number themselves](../user-help/sms-sign-in-explainer.md) in *My Profile*, or you can assign the phone number using the Azure portal. Phone numbers can be set by *global admins*, *authentication admins*, or *privileged authentication admins*.
+Users are now enabled for SMS-based authentication, but their phone number must be associated with the user profile in Azure AD before they can sign in. The user can [set this phone number themselves](../user-help/sms-sign-in-explainer.md) in *My Account*, or you can assign the phone number using the Azure portal. Phone numbers can be set by *global admins*, *authentication admins*, or *privileged authentication admins*.
When a phone number is set for SMS sign-in, it's then also available for use with [Azure AD Multi-Factor Authentication][tutorial-azure-mfa] and [self-service password reset][tutorial-sspr].
@@ -134,13 +131,13 @@ If a user has already registered for Azure AD Multi-Factor Authentication and /
A user who already has a phone number set for their account sees an *Enable for SMS sign-in* button on their **My Profile** page. Selecting this button enables the account for SMS-based sign-in alongside the previous Azure AD Multi-Factor Authentication or SSPR registration.
-For more information on the end-user experience, see [SMS sign-in user experience for phone number (preview)](../user-help/sms-sign-in-explainer.md).
+For more information on the end-user experience, see [SMS sign-in user experience for phone number](../user-help/sms-sign-in-explainer.md).
### Error when trying to set a phone number on a user's account

If you receive an error when you try to set a phone number for a user account in the Azure portal, review the following troubleshooting steps:
-1. Make sure that you're enabled for the SMS-based sign-in preview.
+1. Make sure that you're enabled for SMS-based sign-in.
1. Confirm that the user account is enabled in the *Text message* authentication method policy.
1. Make sure you set the phone number with the proper formatting, as validated in the Azure portal (such as *+1 4251234567*).
1. Make sure that the phone number isn't used elsewhere in your tenant.
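If you prefer to script the phone number assignment instead of using the portal, the same number can be set through the Microsoft Graph `phoneMethods` endpoint. The following is a minimal sketch, assuming the `+<country code> <number>` format the portal validates; the user ID and number are placeholders, and the request is only built here, not sent:

```javascript
// Sketch: building the Graph request that assigns a mobile number for
// SMS-based sign-in. phoneType "mobile" is the method usable for sign-in.
const GRAPH = "https://graph.microsoft.com/v1.0";

function buildPhoneMethodRequest(userId, phoneNumber) {
  // Enforce the formatting the Azure portal validates, e.g. "+1 4251234567".
  if (!/^\+\d{1,3} \d{4,14}$/.test(phoneNumber)) {
    throw new Error("Format the number as '+<country code> <number>', e.g. '+1 4251234567'");
  }
  return {
    method: "POST",
    url: `${GRAPH}/users/${encodeURIComponent(userId)}/authentication/phoneMethods`,
    body: { phoneNumber, phoneType: "mobile" },
  };
}

const req = buildPhoneMethodRequest("chris@contoso.com", "+1 4251234567");
console.log(req.url);
```

The caller would then issue the request with an app token that has the appropriate authentication method permissions; as the article notes, only *global admins*, *authentication admins*, or *privileged authentication admins* can set phone numbers.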
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/overview-authentication.md
@@ -6,7 +6,7 @@ services: active-directory
ms.service: active-directory
ms.subservice: authentication
ms.topic: overview
-ms.date: 07/13/2020
+ms.date: 01/20/2021
ms.author: justinha
author: justinha
@@ -27,6 +27,8 @@ One of the main features of an identity platform is to verify, or *authenticate*
* Hybrid integration to enforce password protection policies for an on-premises environment
* Passwordless authentication
+Take a look at our short video to learn more about these authentication components.
+
## Improve the end-user experience

Azure AD helps to protect a user's identity and simplify their sign-in experience. Features like self-service password reset let users update or change their passwords using a web browser from any device. This feature is especially useful when the user has forgotten their password or their account is locked. Without waiting for a helpdesk or administrator to provide support, a user can unblock themselves and continue to work.
@@ -79,7 +81,7 @@ The end-goal for many environments is to remove the use of passwords as part of
![Security versus convenience with the authentication process that leads to passwordless](./media/concept-authentication-passwordless/passwordless-convenience-security.png)
-When you sign in with a passwordless method, credentials are provided through the use of methods like biometrics with Windows Hello for Business, or a FIDO2 security key. These authentication methods can't be easily duplicated by an attacker.
+When you sign in with a passwordless method, credentials are provided by using methods like biometrics with Windows Hello for Business, or a FIDO2 security key. These authentication methods can't be easily duplicated by an attacker.
Azure AD provides ways to natively authenticate using passwordless methods to simplify the sign-in experience for users and reduce the risk of attacks.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/troubleshoot-sspr-writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
@@ -40,7 +40,7 @@ For Azure AD Connect version *1.1.443.0* and above, *outbound HTTPS* access is r
* *\*.passwordreset.microsoftonline.com*
* *\*.servicebus.windows.net*
-Azure [GOV endpoints](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#guidance-for-developers):
+Azure [GOV endpoints](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers):
* *\*.passwordreset.microsoftonline.us*
* *\*.servicebus.usgovcloudapi.net*
@@ -234,4 +234,4 @@ To properly assist you, we ask that you provide as much detail as possible when
## Next steps
-To learn more about SSPR, see [How it works: Azure AD self-service password reset](concept-sspr-howitworks.md) or [How does self-service password reset writeback work in Azure AD?](concept-sspr-writeback.md).
+To learn more about SSPR, see [How it works: Azure AD self-service password reset](concept-sspr-howitworks.md) or [How does self-service password reset writeback work in Azure AD?](concept-sspr-writeback.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/cloud-sync/how-to-attribute-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-attribute-mapping.md
@@ -7,7 +7,7 @@ manager: daveba
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
-ms.date: 09/22/2020
+ms.date: 01/21/2021
ms.subservice: hybrid
ms.author: billmath
ms.collection: M365-identity-device-management
active-directory https://docs.microsoft.com/en-us/azure/active-directory/cloud-sync/how-to-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-configure.md
@@ -7,7 +7,7 @@ manager: daveba
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
-ms.date: 02/26/2020
+ms.date: 01/21/2021
ms.subservice: hybrid
ms.author: billmath
ms.collection: M365-identity-device-management
active-directory https://docs.microsoft.com/en-us/azure/active-directory/cloud-sync/how-to-inbound-synch-ms-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-inbound-synch-ms-graph.md
@@ -25,7 +25,7 @@ The structure of how to do this consists of the following steps. They are:
- [Start sync job](#start-sync-job)
- [Review status](#review-status)
-Use these [Microsoft Azure Active Directory Module for Windows PowerShell](https://docs.microsoft.com/powershell/module/msonline/) commands to enable synchronization for a production tenant, a pre-requisite for being able to call the Administration Web Service for that tenant.
+Use these [Microsoft Azure Active Directory Module for Windows PowerShell](/powershell/module/msonline/) commands to enable synchronization for a production tenant, a pre-requisite for being able to call the Administration Web Service for that tenant.
## Basic setup
@@ -54,7 +54,7 @@ You need to use this application ID 1a4721b3-e57f-4451-ae87-ef078703ec94. The di
## Create sync job

The output of the above command will return the objectId of the service principal that was created. For this example, the objectId is 614ac0e9-a59b-481f-bd8f-79a73d167e1c. Use Microsoft Graph to add a synchronizationJob to that service principal.
-Documentation for creating a sync job can be found [here](https://docs.microsoft.com/graph/api/synchronization-synchronizationjob-post?view=graph-rest-beta&tabs=http).
+Documentation for creating a sync job can be found [here](/graph/api/synchronization-synchronizationjob-post?tabs=http&view=graph-rest-beta).
If you did not record the ID above, you can find the service principal by running the following MS Graph call. You'll need Directory.Read.All permissions to make that call:
@@ -215,11 +215,11 @@ The job can be retrieved again via the following command:
`GET https://graph.microsoft.com/beta/servicePrincipals/[SERVICE_PRINCIPAL_ID]/synchronization/jobs/ `
-Documentation for retrieving jobs can be found [here](https://docs.microsoft.com/graph/api/synchronization-synchronizationjob-list?view=graph-rest-beta&tabs=http).
+Documentation for retrieving jobs can be found [here](/graph/api/synchronization-synchronizationjob-list?tabs=http&view=graph-rest-beta).
To start the job, issue this request, using the objectId of the service principal created in the first step, and the job identifier returned from the request that created the job.
-Documentation for how to start a job can be found [here](https://docs.microsoft.com/graph/api/synchronization-synchronizationjob-start?view=graph-rest-beta&tabs=http).
+Documentation for how to start a job can be found [here](/graph/api/synchronization-synchronizationjob-start?tabs=http&view=graph-rest-beta).
``` POST https://graph.microsoft.com/beta/servicePrincipals/8895955e-2e6c-4d79-8943-4d72ca36878f/synchronization/jobs/AD2AADProvisioning.fc96887f36da47508c935c28a0c0b6da/start
@@ -228,7 +228,7 @@ Documentation for how to start a job can be found [here](https://docs.microsoft.
The expected response is … HTTP 204/No content.
-Other commands for controlling the job are documented [here](https://docs.microsoft.com/graph/api/resources/synchronization-synchronizationjob?view=graph-rest-beta).
+Other commands for controlling the job are documented [here](/graph/api/resources/synchronization-synchronizationjob?view=graph-rest-beta).
To restart a job, one would use …
@@ -254,4 +254,4 @@ Look under the 'status' section of the return object for relevant details
- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
- [Transformations](how-to-transformation.md)
-- [Azure AD Synchronization API](https://docs.microsoft.com/graph/api/resources/synchronization-overview?view=graph-rest-beta)
+- [Azure AD Synchronization API](/graph/api/resources/synchronization-overview?view=graph-rest-beta)
\ No newline at end of file
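The start call in the hunks above is easy to template. The following is a small sketch that composes the request from the service principal objectId and job identifier (the IDs are the article's own example values); a successful start returns HTTP 204 / No Content:

```javascript
// Sketch: composing the Graph call that starts a provisioning sync job,
// using the service principal objectId and the job identifier returned
// when the job was created.
function buildStartJobRequest(servicePrincipalId, jobId) {
  return {
    method: "POST",
    url: `https://graph.microsoft.com/beta/servicePrincipals/${servicePrincipalId}` +
      `/synchronization/jobs/${jobId}/start`,
  };
}

const start = buildStartJobRequest(
  "8895955e-2e6c-4d79-8943-4d72ca36878f",
  "AD2AADProvisioning.fc96887f36da47508c935c28a0c0b6da"
);
console.log(start.url);
```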
active-directory https://docs.microsoft.com/en-us/azure/active-directory/cloud-sync/how-to-manage-registry-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-manage-registry-options.md
@@ -29,7 +29,7 @@ When performing LDAP operations on configured Active Directory domain controller
System.DirectoryServices.Protocols.LdapException: The operation was aborted because the client side timeout limit was exceeded. `
-LDAP search operations can take longer if the search attribute is not indexed. As a first step, if you get the above error, first check if the search/lookup attribute is [indexed](https://docs.microsoft.com/windows/win32/ad/indexed-attributes). If the search attributes are indexed and the error persists, you can increase the LDAP connection timeout using the following steps:
+LDAP search operations can take longer if the search attribute is not indexed. As a first step, if you get the above error, first check if the search/lookup attribute is [indexed](/windows/win32/ad/indexed-attributes). If the search attributes are indexed and the error persists, you can increase the LDAP connection timeout using the following steps:
1. Log on as Administrator on the Windows server running the Azure AD Connect Provisioning Agent.
1. Use the *Run* menu item to open the registry editor (regedit.exe).
@@ -44,7 +44,7 @@ LDAP search operations can take longer if the search attribute is not indexed. A
1. If you have deployed multiple provisioning agents, apply this registry change to all agents for consistency.

## Configure referral chasing
-By default, the Azure AD Connect provisioning agent does not chase [referrals](https://docs.microsoft.com/windows/win32/ad/referrals).
+By default, the Azure AD Connect provisioning agent does not chase [referrals](/windows/win32/ad/referrals).
You may want to enable referral chasing, to support certain HR inbound provisioning scenarios such as:

* Checking uniqueness of UPN across multiple domains
* Resolving cross-domain manager references
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-continuous-access-evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
@@ -25,7 +25,7 @@ Timely response to policy violations or security issues really requires a “con
The initial implementation of continuous access evaluation focuses on Exchange, Teams, and SharePoint Online.
-To prepare your applications to use CAE, see [How to use Continuous Access Evaluation enabled APIs in your applications](/azure/active-directory/develop/app-resilience-continuous-access-evaluation).
+To prepare your applications to use CAE, see [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md).
### Key benefits
@@ -184,4 +184,4 @@ Sign-in Frequency will be honored with or without CAE.
## Next steps
-[Announcing continuous access evaluation](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/moving-towards-real-time-policy-and-security-enforcement/ba-p/1276933)
+[Announcing continuous access evaluation](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/moving-towards-real-time-policy-and-security-enforcement/ba-p/1276933)
\ No newline at end of file
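CAE-aware clients have to recognize the claims challenge a resource returns when a previously issued token is rejected mid-session. The following is a sketch, under the assumption of the standard `WWW-Authenticate` shape with a Base64-encoded `claims` parameter (the header value below is hypothetical, for illustration only); the decoded claims would then be attached to a fresh token request:

```javascript
// Sketch: detecting a CAE claims challenge on a 401 response. The claims
// value is Base64-encoded JSON describing what the new token must satisfy.
function extractClaimsChallenge(wwwAuthenticate) {
  const match = /claims="([^"]+)"/.exec(wwwAuthenticate || "");
  if (!match) return null; // not a claims challenge; handle the 401 normally
  return Buffer.from(match[1], "base64").toString("utf8");
}

// Hypothetical header for illustration only.
const header =
  'Bearer authorization_uri="https://login.microsoftonline.com/common/oauth2/authorize", ' +
  `claims="${Buffer.from('{"access_token":{"nbf":{"essential":true}}}').toString("base64")}"`;
console.log(extractClaimsChallenge(header));
```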
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/plan-conditional-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/plan-conditional-access.md
@@ -482,4 +482,4 @@ Once you have collected the information, See the following resources:
[Learn more about Identity Protection](../identity-protection/overview-identity-protection.md)
-[Manage Conditional Access policies with Microsoft Graph API](https://docs.microsoft.com/graph/api/resources/conditionalaccesspolicy)
+[Manage Conditional Access policies with Microsoft Graph API](/graph/api/resources/conditionalaccesspolicy)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-configurable-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
@@ -77,9 +77,11 @@ You can set token lifetime policies for refresh tokens and session tokens.
> [!IMPORTANT]
> As of May 2020, new tenants cannot configure refresh and session token lifetimes. Tenants with existing configuration can modify refresh and session token policies until January 30, 2021. Azure Active Directory will stop honoring existing refresh and session token configuration in policies after January 30, 2021. You can still configure access, SAML, and ID token lifetimes after the retirement.
>
-> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime).
+> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
> > If you do not want to use Conditional Access after the retirement date, your refresh and session tokens will be set to the [default configuration](#configurable-token-lifetime-properties-after-the-retirement) on that date and you will no longer be able to change their lifetimes.
+>
+> An existing token's lifetime will not be changed. After it expires, a new token will be issued based on the default value.
:::image type="content" source="./media/active-directory-configurable-token-lifetimes/roadmap.svg" alt-text="Retirement information":::
@@ -270,4 +272,4 @@ You can use the following cmdlets for service principal policies.
## Next steps
-To learn more, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).
\ No newline at end of file
+To learn more, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-vs-authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/authentication-vs-authorization.md
@@ -48,7 +48,7 @@ This video explains the Microsoft identity platform and the basics of modern aut
Here's a comparison of the protocols that the Microsoft identity platform uses:

* **OAuth versus OpenID Connect**: The platform uses OAuth for authorization and OpenID Connect (OIDC) for authentication. OpenID Connect is built on top of OAuth 2.0, so the terminology and flow are similar between the two. You can even both authenticate a user (through OpenID Connect) and get authorization to access a protected resource that the user owns (through OAuth 2.0) in one request. For more information, see [OAuth 2.0 and OpenID Connect protocols](active-directory-v2-protocols.md) and [OpenID Connect protocol](v2-protocols-oidc.md).
-* **OAuth versus SAML**: The platform uses OAuth 2.0 for authorization and SAML for authentication. For more information on how to use these protocols together to both authenticate a user and get authorization to access a protected resource, see [Microsoft identity platform and OAuth 2.0 SAML bearer assertion flow](v2-saml-bearer-assertion.md).
+* **OAuth versus SAML**: The platform uses OAuth 2.0 for authorization and SAML for authentication. For more information on how to use these protocols together to both authenticate a user and get authorization to access a protected resource, see [Microsoft identity platform and OAuth 2.0 SAML bearer assertion flow](./scenario-token-exchange-saml-oauth.md).
* **OpenID Connect versus SAML**: The platform uses both OpenID Connect and SAML to authenticate a user and enable single sign-on. SAML authentication is commonly used with identity providers such as Active Directory Federation Services (AD FS) federated to Azure AD, so it's often used in enterprise applications. OpenID Connect is commonly used for apps that are purely in the cloud, such as mobile apps, websites, and web APIs.

## Next steps
@@ -56,4 +56,4 @@ Here's a comparison of the protocols that the Microsoft identity platform uses:
For other topics that cover authentication and authorization basics:

* To learn how access tokens, refresh tokens, and ID tokens are used in authorization and authentication, see [Security tokens](security-tokens.md).
-* To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](application-model.md).
+* To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](application-model.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/configure-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/configure-token-lifetimes.md
@@ -82,7 +82,7 @@ In this example, you create a policy that requires users to authenticate more fr
> [!IMPORTANT]
> As of May 2020, new tenants cannot configure refresh and session token lifetimes. Tenants with existing configuration can modify refresh and session token policies until January 30, 2021. Azure Active Directory will stop honoring existing refresh and session token configuration in policies after January 30, 2021. You can still configure access, SAML, and ID token lifetimes after the retirement.
>
-> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime).
+> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
> > If you do not want to use Conditional Access after the retirement date, your refresh and session tokens will be set to the [default configuration](active-directory-configurable-token-lifetimes.md#configurable-token-lifetime-properties-after-the-retirement) on that date and you will no longer be able to change their lifetimes.
@@ -206,4 +206,4 @@ In this example, you create a few policies to learn how the priority system work
You now have the original policy linked to your service principal, and the new policy is set as your organization default policy. It's important to remember that policies applied to service principals have priority over organization default policies.

## Next steps
-Learn about [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access.
+Learn about [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access.
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-android-b2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-android-b2c.md
@@ -33,7 +33,7 @@ Given a B2C application that has two policies:
The configuration file for the app would declare two `authorities`, one for each policy. The `type` property of each authority is `B2C`.
->Note: The `account_mode` must be set to **MULTIPLE** for B2C applications. Refer to the documentation for more information about [multiple account public client apps](https://docs.microsoft.com/azure/active-directory/develop/single-multi-account#multiple-account-public-client-application).
+>Note: The `account_mode` must be set to **MULTIPLE** for B2C applications. Refer to the documentation for more information about [multiple account public client apps](./single-multi-account.md#multiple-account-public-client-application).
### `app/src/main/res/raw/msal_config.json` ```json
@@ -239,4 +239,4 @@ When you renew tokens for a policy with `acquireTokenSilent`, provide the same `
## Next steps
-Learn more about Azure Active Directory B2C (Azure AD B2C) at [What is Azure Active Directory B2C?](../../active-directory-b2c/overview.md)
+Learn more about Azure Active Directory B2C (Azure AD B2C) at [What is Azure Active Directory B2C?](../../active-directory-b2c/overview.md)
\ No newline at end of file
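Given the two policies described above, a sketch of what the two-authority declaration can look like — the tenant, package, and policy names here are placeholders, not values from the article:

```json
{
  "client_id": "<your-application-id>",
  "redirect_uri": "msauth://com.example.app/<base64-signature-hash>",
  "account_mode": "MULTIPLE",
  "broker_redirect_uri_registered": false,
  "authorities": [
    {
      "type": "B2C",
      "authority_url": "https://<tenant>.b2clogin.com/tfp/<tenant>.onmicrosoft.com/B2C_1_signup_signin/",
      "default": true
    },
    {
      "type": "B2C",
      "authority_url": "https://<tenant>.b2clogin.com/tfp/<tenant>.onmicrosoft.com/B2C_1_edit_profile/"
    }
  ]
}
```

Note `account_mode` is `MULTIPLE`, as the article requires for B2C applications, and the first authority is marked as the default used when no authority is passed explicitly.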
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-javascript-auth-code-angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md new file mode 100644
@@ -0,0 +1,194 @@
+---
+title: "Quickstart: Sign in users in JavaScript Angular single-page apps (SPA) with auth code and call Microsoft Graph | Azure"
+titleSuffix: Microsoft identity platform
+description: In this quickstart, learn how a JavaScript Angular single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow and call Microsoft Graph.
+services: active-directory
+author: j-mantu
+manager: CelesteDG
+
+ms.service: active-directory
+ms.subservice: develop
+ms.topic: quickstart
+ms.workload: identity
+ms.date: 01/14/2021
+ms.author: jamesmantu
+ms.custom: aaddev, scenarios:getting-started, languages:JavaScript, devx-track-js
+#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform endpoint so that my JavaScript Angular app can sign in users of personal accounts, work accounts, and school accounts.
+---
+
+# Quickstart: Sign in and get an access token in an Angular SPA using the auth code flow
+
+In this quickstart, you download and run a code sample that demonstrates how a JavaScript Angular single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+This quickstart uses MSAL Angular v2 with the authorization code flow. For a similar quickstart that uses MSAL Angular 1.x with the implicit flow, see [Quickstart: Sign in users in JavaScript single-page apps](./quickstart-v2-angular.md).
+
+## Prerequisites
+
+* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+
+> [!div renderon="docs"]
+> ## Register and download your quickstart application
+> To start your quickstart application, use either of the following options.
+>
+> ### Option 1 (Express): Register and auto configure your app and then download your code sample
+>
+> 1. Sign in to the [Azure portal](https://portal.azure.com).
+> 1. If your account gives you access to more than one tenant, select your account at the top right, and then set your portal session to the Azure AD tenant you want to use.
+> 1. Select [App registrations](https://aka.ms/AAatehv).
+> 1. Enter a name for your application.
+> 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+> 1. Select **Register**.
+> 1. Go to the quickstart pane and follow the instructions to download and automatically configure your new application.
+>
+> ### Option 2 (Manual): Register and manually configure your application and code sample
+>
+> #### Step 1: Register your application
+>
+> 1. Sign in to the [Azure portal](https://portal.azure.com).
+> 1. If your account gives you access to more than one tenant, select your account at the top right, and then set your portal session to the Azure Active Directory (Azure AD) tenant you want to use.
+> 1. Select [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908).
+> 1. Select **New registration**.
+> 1. When the **Register an application** page appears, enter a name for your application.
+> 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+> 1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+> 1. In the left pane of the registered application, select **Authentication**.
+> 1. Under **Platform configurations**, select **Add a platform**.
+> 1. In the resulting window, select **Single-page application**.
+> 1. Set the **Redirect URIs** value to `http://localhost:4200/`. This is the default port the app listens on locally when run with Node.js. We'll return the authentication response to this URI after the user is successfully authenticated.
+> 1. Click the **Configure** button to apply the changes.
+> 1. Under **Platform Configurations** expand **Single-page application**.
+> 1. Confirm that under **Grant types**, ![Already configured](media/quickstart-v2-javascript/green-check.png) your Redirect URI is eligible for the Authorization Code Flow with PKCE.
+
+> [!div class="sxs-lookup" renderon="portal"]
+> #### Step 1: Configure your application in the Azure portal
+> To make the code sample in this quickstart work, you need to add a `redirectUri` as `http://localhost:4200/`.
+> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
+
+ #### Step 2: Download the project
+
+> [!div renderon="docs"]
+> To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa/archive/main.zip).
+
+> [!div renderon="portal" class="sxs-lookup"]
+> Run the project with a web server by using Node.js
+
+> [!div renderon="portal" class="sxs-lookup" id="autoupdate" class="nextstepaction"]
+> [Download the code sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa/archive/main.zip)
+
+> [!div renderon="docs"]
+> #### Step 3: Configure your JavaScript app
+>
+> In the *src* folder, open the *app* folder then open the *app.module.ts* file and update the `clientID`, `authority`, and `redirectUri` values in the `auth` object.
+>
+> ```javascript
+> // MSAL instance to be passed to msal-angular
+> export function MSALInstanceFactory(): IPublicClientApplication {
+> return new PublicClientApplication({
+> auth: {
+> clientId: 'Enter_the_Application_Id_Here',
+> authority: 'Enter_the_Cloud_Instance_Id_HereEnter_the_Tenant_Info_Here',
+> redirectUri: 'Enter_the_Redirect_Uri_Here'
+> },
+> cache: {
+> cacheLocation: BrowserCacheLocation.LocalStorage,
+> storeAuthStateInCookie: isIE, // set to true for IE 11
+> },
+> });
+> }
+> ```
+
+> [!div renderon="portal" class="sxs-lookup"]
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+
+> [!div renderon="docs"]
+>
+> Modify the values in the `auth` section as described here:
+>
+> - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com/`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
+> - `Enter_the_Tenant_info_here` is set to one of the following:
+> - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+> - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
+> - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
+> - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+> - `Enter_the_Redirect_Uri_Here` is `http://localhost:4200/`.
+>
+> The `authority` value in your *app.module.ts* should be similar to the following if you're using the main (global) Azure cloud:
+>
+> ```javascript
+> authority: "https://login.microsoftonline.com/common",
+> ```
+>
+> > [!TIP]
+> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
+
+> [!div class="sxs-lookup" renderon="portal"]
+> #### Step 3: Your app is configured and ready to run
+> We have configured your project with values of your app's properties.
+
+> [!div renderon="docs"]
+>
+> Scroll down in the same file and update the `graphMeEndpoint`.
+> - Replace the string `Enter_the_Graph_Endpoint_Herev1.0/me` with `https://graph.microsoft.com/v1.0/me`.
+> - `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API service that API calls are made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward slash). For more information, see the [documentation](https://docs.microsoft.com/graph/deployments).
+>
+>
+> ```javascript
+> export function MSALInterceptorConfigFactory(): MsalInterceptorConfiguration {
+> const protectedResourceMap = new Map<string, Array<string>>();
+> protectedResourceMap.set('Enter_the_Graph_Endpoint_Herev1.0/me', ['user.read']);
+>
+> return {
+> interactionType: InteractionType.Redirect,
+> protectedResourceMap
+> };
+> }
+> ```
+>
+>
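The interceptor configuration above can be illustrated outside Angular with a plain JavaScript sketch: a map from protected endpoints to the scopes required to call them. The `scopesForRequest` helper is hypothetical, not part of the sample:

```javascript
// Map each protected endpoint to the scopes MSAL should request for it,
// mirroring the protectedResourceMap used by the MsalInterceptor.
const protectedResourceMap = new Map([
  ["https://graph.microsoft.com/v1.0/me", ["user.read"]],
]);

// Hypothetical lookup: which scopes apply to an outgoing request URL?
function scopesForRequest(url) {
  for (const [endpoint, scopes] of protectedResourceMap) {
    if (url.startsWith(endpoint)) return scopes;
  }
  return null; // unprotected endpoint; no token is attached
}

console.log(scopesForRequest("https://graph.microsoft.com/v1.0/me")); // -> [ 'user.read' ]
```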
+ #### Step 4: Run the project
+
+Run the project with a web server by using Node.js:
+
+1. To start the server, run the following commands from within the project directory:
+ ```console
+ npm install
+ npm start
+ ```
+1. Browse to `http://localhost:4200/`.
+
+1. Select **Login** to start the sign-in process and then call the Microsoft Graph API.
+
+    The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, select the **Profile** button to display your user information on the page.
+
+## More information
+
+### How the sample works
+
+![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+
+### msal.js
+
+The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+
+If you have Node.js installed, you can download the latest version of the libraries by using the Node.js package manager (npm):
+
+```console
+npm install @azure/msal-browser @azure/msal-angular@2
+```
+
+## Next steps
+
+For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
+
+> [!div class="nextstepaction"]
+> [Tutorial to sign in and call MS Graph](./tutorial-v2-javascript-auth-code.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-javascript-auth-code-react https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md new file mode 100644
@@ -0,0 +1,192 @@
+---
+title: "Quickstart: Sign in users in JavaScript React single-page apps (SPA) with auth code and call Microsoft Graph | Azure"
+titleSuffix: Microsoft identity platform
+description: In this quickstart, learn how a JavaScript React single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow and call Microsoft Graph.
+services: active-directory
+author: j-mantu
+manager: CelesteDG
+
+ms.service: active-directory
+ms.subservice: develop
+ms.topic: quickstart
+ms.workload: identity
+ms.date: 01/14/2021
+ms.author: jamesmantu
+ms.custom: aaddev, scenarios:getting-started, languages:JavaScript, devx-track-js
+#Customer intent: As an app developer, I want to learn how to login, logout, conditionally render components to authenticated users, and acquire an access token for a protected resource such as Microsoft Graph by using the Microsoft identity platform endpoint so that my JavaScript React app can sign in users of personal accounts, work accounts, and school accounts.
+---
+
+# Quickstart: Sign in and get an access token in a React SPA using the auth code flow
+
+In this quickstart, you download and run a code sample that demonstrates how a JavaScript React single-page application (SPA) can sign in users and call Microsoft Graph by using the authorization code flow. The code sample also shows how to get an access token to call the Microsoft Graph API or any web API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+This quickstart uses MSAL React with the authorization code flow. For a similar quickstart that uses MSAL.js with the implicit flow, see [Quickstart: Sign in users in JavaScript single-page apps](./quickstart-v2-javascript.md).
+
+## Prerequisites
+
+* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+
+> [!div renderon="docs"]
+> ## Register and download your quickstart application
+> To start your quickstart application, use either of the following options.
+>
+>
+> ### Option 1 (Express): Register and auto configure your app and then download your code sample
+>
+> 1. Sign in to the [Azure portal](https://portal.azure.com).
+> 1. If your account gives you access to more than one tenant, select your account at the top right, and then set your portal session to the Azure AD tenant you want to use.
+> 1. Select [App registrations](https://aka.ms/AAatrux).
+> 1. Enter a name for your application.
+> 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+> 1. Select **Register**.
+> 1. Go to the quickstart pane and follow the instructions to download and automatically configure your new application.
+>
+> ### Option 2 (Manual): Register and manually configure your application and code sample
+>
+> #### Step 1: Register your application
+>
+> 1. Sign in to the [Azure portal](https://portal.azure.com).
+> 1. If your account gives you access to more than one tenant, select your account at the top right, and then set your portal session to the Azure AD tenant you want to use.
+> 1. Select [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908).
+> 1. Select **New registration**.
+> 1. When the **Register an application** page appears, enter a name for your application.
+> 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+> 1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+> 1. In the left pane of the registered application, select **Authentication**.
+> 1. Under **Platform configurations**, select `Add a platform`.
+> 1. In the resulting window, select **Single-page application**.
+> 1. Set the **Redirect URIs** value to `http://localhost:3000/`. This is the default port on which the Node.js development server listens on your local machine. We'll return the authentication response to this URI after successfully authenticating the user.
+> 1. Click the **Configure** button to apply the changes.
+> 1. Under **Platform Configurations** expand **Single-page application**.
+> 1. Under **Grant types**, confirm that ![Already configured](media/quickstart-v2-javascript/green-check.png) **Your Redirect URI is eligible for the Authorization Code Flow with PKCE** appears.
+
+> [!div class="sxs-lookup" renderon="portal"]
+> #### Step 1: Configure your application in the Azure portal
+> To make the code sample in this quickstart work, you need to add a `redirectUri` as `http://localhost:3000/`.
+> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
+
+#### Step 2: Download the project
+
+> [!div renderon="docs"]
+> To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-react-spa/archive/main.zip).
+
+> [!div renderon="portal" class="sxs-lookup"]
+> Run the project with a web server by using Node.js
+
+> [!div renderon="portal" class="sxs-lookup" id="autoupdate" class="nextstepaction"]
+> [Download the code sample](https://github.com/Azure-Samples/ms-identity-javascript-react-spa/archive/main.zip)
+
+> [!div renderon="docs"]
+> #### Step 3: Configure your JavaScript app
+>
+> In the *src* folder, open the *authConfig.js* file and update the `clientId`, `authority`, and `redirectUri` values in the `msalConfig` object.
+>
+> ```javascript
+> /**
+> * Configuration object to be passed to MSAL instance on creation.
+> * For a full list of MSAL.js configuration parameters, visit:
+> * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
+> */
+> export const msalConfig = {
+> auth: {
+> clientId: "Enter_the_Application_Id_Here",
+> authority: "Enter_the_Cloud_Instance_Id_HereEnter_the_Tenant_Info_Here",
+> redirectUri: "Enter_the_Redirect_Uri_Here"
+> },
+> cache: {
+> cacheLocation: "sessionStorage", // This configures where your cache will be stored
+> storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
+> },
+> ```
+
+> [!div renderon="portal" class="sxs-lookup"]
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+
+> [!div renderon="docs"]
+>
+> Modify the values in the `msalConfig` section as described here:
+>
+> - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com/`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
+> - `Enter_the_Tenant_Info_Here` is set to one of the following:
+> - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+> - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
+> - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
+> - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+> - `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.
+>
+> The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
+>
+> ```javascript
+> authority: "https://login.microsoftonline.com/common",
+> ```
+>
+> > [!TIP]
+> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
+
+> [!div class="sxs-lookup" renderon="portal"]
+> #### Step 3: Your app is configured and ready to run
+> We have configured your project with values of your app's properties.
+
+> [!div renderon="docs"]
+>
+> Scroll down in the same file and update the `graphMeEndpoint`.
+> - Replace the string `Enter_the_Graph_Endpoint_Herev1.0/me` with `https://graph.microsoft.com/v1.0/me`.
+> - `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API service that API calls are made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward slash). For more information, see the [documentation](https://docs.microsoft.com/graph/deployments).
+>
+>
+>
+> ```javascript
+> // Add here the endpoints for MS Graph API services you would like to use.
+> export const graphConfig = {
+> graphMeEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me"
+> };
+> ```
+>
+>
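As a rough sketch of how the configured `graphMeEndpoint` is consumed, the access token acquired by MSAL is attached to the request as a bearer token. The helper below is hypothetical (the sample's own code may differ):

```javascript
// Hypothetical helper: build fetch options that carry the access token
// as a bearer token in the Authorization header.
function graphRequestOptions(accessToken) {
  return {
    method: "GET",
    headers: { Authorization: `Bearer ${accessToken}` },
  };
}

// With a real token, the Graph call would then look like:
// fetch(graphConfig.graphMeEndpoint, graphRequestOptions(accessToken))
//   .then(response => response.json());
```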
+#### Step 4: Run the project
+
+Run the project with a web server by using Node.js:
+
+1. To start the server, run the following commands from within the project directory:
+ ```console
+ npm install
+ npm start
+ ```
+1. Browse to `http://localhost:3000/`.
+
+1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
+
+    The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, select the **Request Profile Information** button to display your profile information on the page.
+
+## More information
+
+### How the sample works
+
+![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+
+### msal.js
+
+The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+
+If you have Node.js installed, you can download the latest version of the libraries by using the Node.js package manager (npm):
+
+```console
+npm install @azure/msal-browser @azure/msal-react
+```
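Before running the sample, it can help to confirm that no `Enter_the_..._Here` placeholders remain in *authConfig.js*. The check below is a minimal, hypothetical sketch (not part of the sample):

```javascript
// Hypothetical helper: list configuration keys whose values still contain
// an unreplaced quickstart placeholder.
function findUnreplacedPlaceholders(config) {
  return Object.entries(config)
    .filter(([, value]) => typeof value === "string" && value.includes("Enter_the_"))
    .map(([key]) => key);
}

// Example: clientId was never replaced, so it is flagged.
const auth = {
  clientId: "Enter_the_Application_Id_Here",
  authority: "https://login.microsoftonline.com/common",
  redirectUri: "http://localhost:3000/",
};
console.log(findUnreplacedPlaceholders(auth)); // -> [ 'clientId' ]
```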
+
+## Next steps
+
+For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
+
+> [!div class="nextstepaction"]
+> [Tutorial to sign in and call MS Graph](./tutorial-v2-javascript-auth-code.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-acquire-token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-acquire-token.md
@@ -1178,7 +1178,7 @@ The customization of token cache serialization to share the SSO state between AD
### Simple token cache serialization (MSAL only)
-The following example is a naive implementation of custom serialization of a token cache for desktop applications. Here, the user token cache is in a file in the same folder as the application or, in a per user per app folder in the case where the app is a [packaged desktop application](https://docs.microsoft.com/windows/msix/desktop/desktop-to-uwp-behind-the-scenes). For the full code, see the following sample: [active-directory-dotnet-desktop-msgraph-v2](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2).
+The following example is a naive implementation of custom serialization of a token cache for desktop applications. Here, the user token cache is in a file in the same folder as the application or, in a per user per app folder in the case where the app is a [packaged desktop application](/windows/msix/desktop/desktop-to-uwp-behind-the-scenes). For the full code, see the following sample: [active-directory-dotnet-desktop-msgraph-v2](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2).
After you build the application, you enable the serialization by calling ``TokenCacheHelper.EnableSerialization()`` and passing the application `UserTokenCache`.
@@ -1405,4 +1405,4 @@ namespace CommonCacheMsalV3
## Next steps Move on to the next article in this scenario,
-[Call a web API from the desktop app](scenario-desktop-call-api.md).
+[Call a web API from the desktop app](scenario-desktop-call-api.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-app-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
@@ -172,7 +172,7 @@ services.AddControllers();
> - `$"api://{ClientId}` in all other cases (for v1.0 [access tokens](access-tokens.md)). > For details, see Microsoft.Identity.Web [source code](https://github.com/AzureAD/microsoft-identity-web/blob/d2ad0f5f830391a34175d48621a2c56011a45082/src/Microsoft.Identity.Web/Resource/RegisterValidAudience.cs#L70-L83).
-The preceding code snippet is extracted from the [ASP.NET Core web API incremental tutorial](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/blob/63087e83326e6a332d05fee6e1586b66d840b08f/1.%20Desktop%20app%20calls%20Web%20API/TodoListService/Startup.cs#L23-L28). The detail of **AddMicrosoftIdentityWebApiAuthentication** is available in [Microsoft.Identity.Web](microsoft-identity-web.md). This method calls [AddMicrosoftIdentityWebAPI](https://docs.microsoft.com/dotnet/api/microsoft.identity.web.microsoftidentitywebapiauthenticationbuilderextensions.addmicrosoftidentitywebapi?view=azure-dotnet-preview&preserve-view=true), which itself instructs the middleware on how to validate the token.
+The preceding code snippet is extracted from the [ASP.NET Core web API incremental tutorial](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/blob/63087e83326e6a332d05fee6e1586b66d840b08f/1.%20Desktop%20app%20calls%20Web%20API/TodoListService/Startup.cs#L23-L28). The detail of **AddMicrosoftIdentityWebApiAuthentication** is available in [Microsoft.Identity.Web](microsoft-identity-web.md). This method calls [AddMicrosoftIdentityWebAPI](/dotnet/api/microsoft.identity.web.microsoftidentitywebapiauthenticationbuilderextensions.addmicrosoftidentitywebapi?preserve-view=true&view=azure-dotnet-preview), which itself instructs the middleware on how to validate the token.
## Token validation
@@ -240,4 +240,4 @@ You can also validate incoming access tokens in Azure Functions. You can find ex
## Next steps Move on to the next article in this scenario,
-[Verify scopes and app roles in your code](scenario-protected-web-api-verification-scope-app-roles.md).
+[Verify scopes and app roles in your code](scenario-protected-web-api-verification-scope-app-roles.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-blazor-webassembly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-webassembly.md
@@ -23,7 +23,7 @@ In this tutorial:
> * Create a new Blazor WebAssembly app configured to use Azure Active Directory (Azure AD) for [authentication and authorization](authentication-vs-authorization.md) using the Microsoft identity platform > * Retrieve data from a protected web API, in this case [Microsoft Graph](/graph/overview)
-This tutorial uses .NET Core 3.1. The .NET docs contain instructions on [how to secure a Blazor WebAssembly app](https://docs.microsoft.com/aspnet/core/blazor/security/webassembly/graph-api) using ASP.NET Core 5.0.
+This tutorial uses .NET Core 3.1. The .NET docs contain instructions on [how to secure a Blazor WebAssembly app](/aspnet/core/blazor/security/webassembly/graph-api) using ASP.NET Core 5.0.
We also have a [tutorial for Blazor Server](tutorial-blazor-server.md).
@@ -77,7 +77,7 @@ The components of this template that enable logins with Azure AD using the Micro
[Microsoft Graph](/graph/overview) contains APIs that provide access to Microsoft 365 data for your users, and it supports the tokens issued by the Microsoft identity platform, which makes it a good protected API to use as an example. In this section, you add code to call Microsoft Graph and display the user's emails on the application's "Fetch data" page.
-This section is written using a common approach to calling a protected API using a named client. The same method can be used for other protected APIs you want to call. However, if you do plan to call Microsoft Graph from your application you can use the Graph SDK to reduce boilerplate. The .NET docs contain instructions on [how to use the Graph SDK](https://docs.microsoft.com/aspnet/core/blazor/security/webassembly/graph-api?view=aspnetcore-5.0).
+This section is written using a common approach to calling a protected API using a named client. The same method can be used for other protected APIs you want to call. However, if you do plan to call Microsoft Graph from your application you can use the Graph SDK to reduce boilerplate. The .NET docs contain instructions on [how to use the Graph SDK](/aspnet/core/blazor/security/webassembly/graph-api?view=aspnetcore-5.0).
Before you start, log out of your app since you'll be making changes to the required permissions, and your current token won't work. If you haven't already, run your app again and select **Log out** before updating the code below.
@@ -243,4 +243,4 @@ After granting consent, navigate to the "Fetch data" page to read some email.
## Next steps > [!div class="nextstepaction"]
-> [Microsoft identity platform best practices and recommendations](./identity-platform-integration-checklist.md)
+> [Microsoft identity platform best practices and recommendations](./identity-platform-integration-checklist.md)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-howto-app-gallery-listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-howto-app-gallery-listing.md
@@ -170,7 +170,7 @@ Supporting [SCIM](https://aka.ms/scimoverview) provisioning is an optional, but
To learn more about the SCIM standards and benefits for your customers, see [provisioning with SCIM - getting started](https://aka.ms/scimoverview). ### Understand the Azure AD SCIM implementation
-To learn more about the Azure AD SCIM implementation, see [build a SCIM endpoint and configure user provisioning with Azure AD](https://docs.microsoft.com/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups).
+To learn more about the Azure AD SCIM implementation, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
### Implement SCIM Azure AD provides [reference code](https://aka.ms/scimoverview) to help you build a SCIM endpoint. There are also many third-party libraries and references that you can find on GitHub.
@@ -181,7 +181,7 @@ You will need an Azure AD tenant in order to test your app. To set up your devel
Alternatively, an Azure AD tenant comes with every Microsoft 365 subscription. To set up a free Microsoft 365 development environment, see [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
-Once you have a tenant, you need to test single-sign on and [provisioning](https://docs.microsoft.com/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#step-4-integrate-your-scim-endpoint-with-the-azure-ad-scim-client).
+Once you have a tenant, you need to test single-sign on and [provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md#step-4-integrate-your-scim-endpoint-with-the-azure-ad-scim-client).
**For OIDC or OAuth applications**, [register your application](quickstart-register-app.md) as a multi-tenant application. Select the **Accounts in any organizational directory and personal Microsoft accounts** option in **Supported account types**.
@@ -269,7 +269,7 @@ If you want to add your application to list in the gallery by using password SSO
![Listing a password SSO application in the gallery](./media/howto-app-gallery-listing/passwordsso.png)
-If you are implementing a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) 2.0 endpoint for user provisioning, select the option as shown. When providing the schema in the onboarding request, please follow the directions [here](https://docs.microsoft.com/azure/active-directory/app-provisioning/export-import-provisioning-configuration) to download your schema. We will use the schema you configured when testing the non-gallery application to build the gallery application.
+If you are implementing a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) 2.0 endpoint for user provisioning, select the option as shown. When providing the schema in the onboarding request, please follow the directions [here](../app-provisioning/export-import-provisioning-configuration.md) to download your schema. We will use the schema you configured when testing the non-gallery application to build the gallery application.
![Request for user provisioning](./media/howto-app-gallery-listing/user-provisioning.png)
@@ -314,4 +314,4 @@ The Microsoft Partner Network provides instant access to exclusive resources, pr
## Next steps * [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md)
-* [Authentication scenarios for Azure AD](authentication-flows-app-scenarios.md)
+* [Authentication scenarios for Azure AD](authentication-flows-app-scenarios.md)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/azuread-join-sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/azuread-join-sso.md
@@ -17,13 +17,13 @@ ms.collection: M365-identity-device-management
--- # How SSO to on-premises resources works on Azure AD joined devices
-It is probably not a surprise that an Azure Active Directory (Azure AD) joined device gives you a single sign-on (SSO) experience to your tenant's cloud apps. If your environment has an on-premises Active Directory (AD), you can extend the SSO experience on these devices to resources and applications that rely on on-premises AD as well.
+It is probably not a surprise that an Azure Active Directory (Azure AD) joined device gives you a single sign-on (SSO) experience to your tenant's cloud apps. If your environment has an on-premises Active Directory (AD), you can also get SSO experience on Azure AD joined devices to resources and applications that rely on on-premises AD.
This article explains how this works. ## Prerequisites
- If Azure AD joined machines are not connected to your organization's network, a VPN or other network infrastructure is required. On-premises SSO requires line-of-sight communication with your on-premises AD DS domain controllers.
+On-premises SSO requires line-of-sight communication with your on-premises AD DS domain controllers. If Azure AD joined devices are not connected to your organization's network, a VPN or other network infrastructure is required.
## How it works
@@ -31,11 +31,14 @@ With an Azure AD joined device, your users already have an SSO experience to the
Azure AD joined devices have no knowledge about your on-premises AD environment because they aren't joined to it. However, you can provide additional information about your on-premises AD to these devices with Azure AD Connect.
-An environment that has both, an Azure AD and an on-premises AD, is also known has hybrid environment. If you have a hybrid environment, it is likely that you already have Azure AD Connect deployed to synchronize your on-premises identity information to the cloud. As part of the synchronization process, Azure AD Connect synchronizes on-premises user information to Azure AD. When a user signs in to an Azure AD joined device in a hybrid environment:
+If you have a hybrid environment, with both Azure AD and on-premises AD, it is likely that you already have Azure AD Connect deployed to synchronize your on-premises identity information to the cloud. As part of the synchronization process, Azure AD Connect synchronizes on-premises user and domain information to Azure AD. When a user signs in to an Azure AD joined device in a hybrid environment:
1. Azure AD sends the details of the user's on-premises domain back to the device, along with the [Primary Refresh Token](concept-primary-refresh-token.md) 1. The local security authority (LSA) service enables Kerberos and NTLM authentication on the device.
+>[!NOTE]
+> Windows Hello for Business requires additional configuration to enable on-premises SSO from an Azure AD joined device. For more information, see [Configure Azure AD joined devices for On-premises Single-Sign On using Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-base).
+ During an access attempt to a resource requesting Kerberos or NTLM in the user's on-premises environment, the device: 1. Sends the on-premises domain information and user credentials to the located DC to get the user authenticated.
@@ -43,8 +46,6 @@ During an access attempt to a resource requesting Kerberos or NTLM in the user's
All apps that are configured for **Windows-Integrated authentication** seamlessly get SSO when a user tries to access them.
-Windows Hello for Business requires additional configuration to enable on-premises SSO from an Azure AD joined device. For more information, see [Configure Azure AD joined devices for On-premises Single-Sign On using Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-base).
- ## What you get With SSO, on an Azure AD joined device you can:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/device-management-azure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-management-azure-portal.md
@@ -166,7 +166,7 @@ This option is a premium edition capability available through products such as A
- **Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication** - You can choose whether users are required to provide an additional authentication factor to join or register their device to Azure AD. The default is **No**. We recommend requiring multi-factor authentication when registering or joining a device. Before you enable multi-factor authentication for this service, you must ensure that multi-factor authentication is configured for the users that register their devices. For more information on different Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). > [!NOTE]
-> **Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting does not apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-windows#enabling-azure-ad-login-in-for-windows-vm-in-azure) and Azure AD joined devices using [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
+> **Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting does not apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-in-for-windows-vm-in-azure) and Azure AD joined devices using [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
- **Maximum number of devices** - This setting enables you to select the maximum number of Azure AD joined or Azure AD registered devices that a user can have in Azure AD. If a user reaches this quota, they are not able to add additional devices until one or more of the existing devices are removed. The default value is **50**.
@@ -215,4 +215,4 @@ In addition to the filters, you can search for specific entries.
[How to manage stale devices in Azure AD](manage-stale-devices.md)
-[Enterprise State Roaming](enterprise-state-roaming-overview.md)
+[Enterprise State Roaming](enterprise-state-roaming-overview.md)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/howto-vm-sign-in-azure-ad-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
@@ -333,7 +333,7 @@ If you see the following error message when you initiate a remote desktop connec
Verify that you have [configured Azure RBAC policies](../../virtual-machines/linux/login-using-aad.md) for the VM that grants the user either the Virtual Machine Administrator Login or Virtual Machine User Login role: > [!NOTE]
-> If you are running into issues with Azure role assignments, see [Troubleshoot Azure RBAC](https://docs.microsoft.com/azure/role-based-access-control/troubleshooting#azure-role-assignments-limit).
+> If you are running into issues with Azure role assignments, see [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
#### Unauthorized client
@@ -371,4 +371,4 @@ Share your feedback about this preview feature or report issues using it on the
## Next steps
-For more information on Azure Active Directory, see [What is Azure Active Directory](../fundamentals/active-directory-whatis.md)
+For more information on Azure Active Directory, see [What is Azure Active Directory](../fundamentals/active-directory-whatis.md)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-import-export-config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-import-export-config.md
@@ -13,7 +13,7 @@ ms.author: billmath
ms.collection: M365-identity-device-management ---
-# Import and export Azure AD Connect configuration settings (public preview)
+# Import and export Azure AD Connect configuration settings
Azure Active Directory (Azure AD) Connect deployments vary from a single forest Express mode installation to complex deployments that synchronize across multiple forests by using custom synchronization rules. Because of the large number of configuration options and mechanisms, it's essential to understand what settings are in effect and be able to quickly deploy a server with an identical configuration. This feature introduces the ability to catalog the configuration of a given synchronization server and import the settings into a new deployment. Different synchronization settings snapshots can be compared to easily visualize the differences between two servers, or the same server over time.
@@ -86,7 +86,7 @@ To migrate the settings:
Comparing the originally imported settings file with the exported settings file of the newly deployed server is an essential step in understanding any differences between the intended versus the resulting deployment. Using your favorite side-by-side text comparison application yields an instant visualization that quickly highlights any desired or accidental changes.
-While many formerly manual configuration steps are now eliminated, you should still follow your organization's certification process to ensure no additional configuration is required. This configuration might occur if you use advanced settings, which aren't currently captured in the public preview release of settings management.
+While many formerly manual configuration steps are now eliminated, you should still follow your organization's certification process to ensure no additional configuration is required. This configuration might occur if you use advanced settings, which aren't currently captured in this release of settings management.
Here are known limitations: - **Synchronization rules**: The precedence for a custom rule must be in the reserved range of 0 to 99 to avoid conflicts with Microsoft's standard rules. Placing a custom rule outside the reserved range might result in your custom rule being shifted around as standard rules are added to the configuration. A similar issue will occur if your configuration contains modified standard rules. Modifying a standard rule is discouraged, and rule placement is likely to be incorrect.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-add-on-premises-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-add-on-premises-application.md
@@ -8,16 +8,18 @@ ms.service: active-directory
ms.subservice: app-mgmt ms.workload: identity ms.topic: tutorial
-ms.date: 12/10/2020
+ms.date: 01/20/2021
ms.author: kenwith ms.reviewer: japere
-ms.custom: contperf-fy21q2
+ms.custom: contperf-fy21q3
--- # Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. This tutorial prepares your environment for use with Application Proxy. Once your environment is ready, you'll use the Azure portal to add an on-premises application to your Azure AD tenant.
+:::image type="content" source="./media/application-proxy-add-on-premises-application/app-proxy-diagram.png" alt-text="Application Proxy Overview Diagram" lightbox="./media/application-proxy-add-on-premises-application/app-proxy-diagram.png":::
+ Connectors are a key part of Application Proxy. To learn more about connectors, see [Understand Azure AD Application Proxy connectors](application-proxy-connectors.md). This tutorial:
@@ -121,7 +123,11 @@ Allow access to the following URLs:
| login.windows.net<br>secure.aadcdn.microsoftonline-p.com<br>&ast;.microsoftonline.com<br>&ast;.microsoftonline-p.com<br>&ast;.msauth.net<br>&ast;.msauthimages.net<br>&ast;.msecnd.net<br>&ast;.msftauth.net<br>&ast;.msftauthimages.net<br>&ast;.phonefactor.net<br>enterpriseregistration.windows.net<br>management.azure.com<br>policykeyservice.dc.ad.msft.net<br>ctldl.windowsupdate.com<br>www.microsoft.com/pkiops | 443/HTTPS |The connector uses these URLs during the registration process. | | ctldl.windowsupdate.com | 80/HTTP |The connector uses this URL during the registration process. |
-You can allow connections to &ast;.msappproxy.net, &ast;.servicebus.windows.net, and other URLs above if your firewall or proxy lets you configure DNS allow lists. If not, you need to allow access to the [Azure IP ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
+You can allow connections to &ast;.msappproxy.net, &ast;.servicebus.windows.net, and other URLs above if your firewall or proxy lets you configure access rules based on domain suffixes. If not, you need to allow access to the [Azure IP ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
+
+### DNS name resolution for Azure AD Application Proxy endpoints
+
+Public DNS records for Azure AD Application Proxy endpoints are chained CNAME records pointing to an A record. This ensures fault tolerance and flexibility. It's guaranteed that the Azure AD Application Proxy connector always accesses hostnames with the domain suffixes _*.msappproxy.net_ or _*.servicebus.windows.net_. However, during name resolution, the CNAME records might contain DNS records with different hostnames and suffixes. Because of this, you must ensure that the device (depending on your setup: connector server, firewall, outbound proxy) can resolve all the records in the chain and allow connections to the resolved IP addresses. Because the DNS records in the chain might change from time to time, we cannot provide a list of the DNS records.
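The chained-CNAME behavior described above can be sketched with a small, self-contained walk over a hard-coded record map. The hostnames and IP below are hypothetical placeholders (the real records change over time and are not published as a list); the point is only that every hop in the chain, whatever its suffix, must be resolvable and allowed:

```python
def resolve_chain(name, records):
    """Follow CNAME records until an A record (IP address) is reached.

    `records` maps a hostname to either ("CNAME", target) or ("A", ip).
    Returns the full chain of hostnames visited plus the final IP.
    """
    chain = [name]
    while True:
        rtype, value = records[name]
        if rtype == "A":
            return chain, value
        name = value          # follow the CNAME to the next hostname
        chain.append(name)

# Hypothetical example chain: note the suffix changes at each hop, which is
# why allow-listing only the first hostname is not sufficient.
records = {
    "contoso.msappproxy.net": ("CNAME", "edge.example-cdn.net"),
    "edge.example-cdn.net": ("CNAME", "pod7.example-dc.net"),
    "pod7.example-dc.net": ("A", "203.0.113.10"),
}

chain, ip = resolve_chain("contoso.msappproxy.net", records)
```

In a real environment the same walk is performed by the resolver on the connector server, firewall, or outbound proxy, so each of those devices needs to handle every hostname in the chain.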
## Install and register a connector
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-single-sign-on-with-headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-single-sign-on-with-headers.md
@@ -83,6 +83,10 @@ When you've completed all these steps, your app should be running and available.
1. Open a new browser or private browser window to make sure previously cached headers are cleared. Then navigate to the **External URL** from the Application Proxy settings. 2. Sign in with the test account that you assigned to the app. If you can load and sign into the application using SSO, then you're good!
+## Considerations
+
+- Application Proxy is used to provide remote access to apps on-premises or in a private cloud. Application Proxy is not recommended for handling traffic that originates inside the corporate network.
+- Access to header-based authentication applications should be restricted to traffic from the connector or another permitted header-based authentication solution. This is commonly done by restricting network access to the application with a firewall or an IP restriction on the application server.
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-reporting-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-reporting-api.md
@@ -15,7 +15,7 @@ ms.topic: reference
ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor
-ms.date: 11/13/2018
+ms.date: 01/21/2021
ms.author: markvi ms.reviewer: dhanyahk
@@ -46,8 +46,10 @@ For detailed instructions, see the [prerequisites to access the Azure Active Dir
The Microsoft Graph API endpoint for audit logs is `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits` and the Microsoft Graph API endpoint for sign-ins is `https://graph.microsoft.com/v1.0/auditLogs/signIns`. For more information, see the [audit API reference](/graph/api/resources/directoryaudit) and [sign-in API reference](/graph/api/resources/signIn).
-In addition, you can use the [Identity Protection risk detections API](/graph/api/resources/identityriskevent?view=graph-rest-beta) to gain programmatic access to security detections using Microsoft Graph. For more information, see [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md).
+You can use the [Identity Protection risk detections API](/graph/api/resources/identityriskevent?view=graph-rest-beta) to gain programmatic access to security detections using Microsoft Graph. For more information, see [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md).
+You can also use the [provisioning logs API](https://docs.microsoft.com/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta) to get programmatic access to provisioning events in your tenant.
+ ## APIs with Microsoft Graph Explorer You can use the [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) to verify your sign-in and audit API data. Make sure to sign in to your account using both of the sign-in buttons in the Graph Explorer UI, and set **AuditLog.Read.All** and **Directory.Read.All** permissions for your tenant as shown.
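The audit endpoint named above can be queried with any HTTP client once you hold an access token. A minimal sketch using only the standard library — token acquisition (an OAuth 2.0 flow) is out of scope here, so the token value is a placeholder and only the request construction is shown:

```python
import json
import urllib.request

# Endpoint from the article; $top limits the number of returned events.
GRAPH_AUDIT_URL = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"

def build_request(token, top=10):
    """Build an authorized GET request for the most recent audit events."""
    return urllib.request.Request(
        f"{GRAPH_AUDIT_URL}?$top={top}",
        headers={"Authorization": f"Bearer {token}"},
    )

def fetch_audit_logs(token, top=10):
    """Send the request; returns the 'value' array from the JSON response."""
    with urllib.request.urlopen(build_request(token, top)) as resp:
        return json.load(resp)["value"]

req = build_request("<access-token>", top=5)
```

The sign-ins endpoint (`.../auditLogs/signIns`) is queried the same way; only the URL differs.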
active-directory https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-usage-insights-report https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
@@ -33,7 +33,7 @@ To access the data from the usage and insights report, you need:
* An Azure AD tenant * An Azure AD premium (P1/P2) license to view the sign-in data
-* A user in the global administrator, security administrator, security reader or report reader roles. In addition, any user (non-admins) can access their own sign-ins.
+* A user in the global administrator, security administrator, security reader, or report reader roles. In addition, any user (non-admins) can access their own sign-ins.
## Access the usage and insights report
@@ -46,16 +46,18 @@ To access the data from the usage and insights report, you need:
## Use the report
-The usage and insights report shows the list of applications with one or more sign in attempt, and allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate.
+The usage and insights report shows the list of applications with one or more sign-in attempts, and allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate.
-Clicking load more at the bottom of the list allows you to view additional applications on the page. You can select the date range to view all applications that have been used within the range.
+Clicking **Load more** at the bottom of the list allows you to view additional applications on the page. You can select the date range to view all applications that have been used within the range.
-You can also set the focus on a specific application. Select **view sign-in activity** to see the sign in activity over time for the application as well as the top errors.
+![Screenshot shows Usage & insights for Application activity where you can select a range and view sign-in activity for different apps.](./media/concept-usage-insights-report/usage-and-insights-report.png)
+
+You can also set the focus on a specific application. Select **view sign-in activity** to see the sign-in activity over time for the application as well as the top errors.
When you select a day in the application usage graph, you get a detailed list of the sign-in activities for the application.
-![Screenshot shows Usage & insights for Application activity where you can select a range and view sign-in activity for different apps.](./media/concept-usage-insights-report/usage-and-insights-report.png)
+:::image type="content" source="./media/concept-usage-insights-report/usage-and-insights-application-report.png" alt-text="Screenshot shows Usage & insights for a specific application where you can see a graph for the sign-in activity.":::
## Next steps
-* [Sign-ins report](concept-sign-ins.md)
\ No newline at end of file
+* [Sign-ins report](concept-sign-ins.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
@@ -14,9 +14,9 @@ ms.topic: how-to
ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor
-ms.date: 04/07/2020
+ms.date: 01/21/2021
ms.author: markvi
-ms.reviewer: dhanyahk
+ms.reviewer: besiler
ms.collection: M365-identity-device-management ---
@@ -81,7 +81,7 @@ Each interactive sign-in that was successful results in an update of the underly
To generate a lastSignInDateTime timestamp, you need a successful sign-in. Because the lastSignInDateTime property is a new feature, the value of the lastSignInDateTime property can be blank if: -- The last successful sign-in of a user took place before this feature was released (December 1st, 2019).
+- The last successful sign-in of a user took place before April 2020.
- The affected user account was never used for a successful sign-in. ## Next steps
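The lastSignInDateTime property described above is typically used to find inactive accounts by filtering on a cutoff date. A small sketch of building such a filter clause; the `signInActivity/lastSignInDateTime` property path and the beta users endpoint are assumptions based on the linked inactive-accounts guidance, so verify them against the current Graph reference before use:

```python
from datetime import datetime, timedelta, timezone

# Assumed endpoint/property for the inactive-users query (verify against
# the current Microsoft Graph reference).
GRAPH_USERS_URL = "https://graph.microsoft.com/beta/users"

def inactive_users_filter(days, now=None):
    """Return a Graph $filter clause matching users whose last successful
    sign-in is at least `days` days in the past."""
    now = now or datetime.now(timezone.utc)
    cutoff = (now - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"signInActivity/lastSignInDateTime le {cutoff}"

clause = inactive_users_filter(90, now=datetime(2021, 1, 21, tzinfo=timezone.utc))
```

Remember the caveat above: accounts whose last sign-in predates the feature, or that never signed in, have a blank lastSignInDateTime and are not matched by this filter.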
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/security-emergency-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/security-emergency-access.md
@@ -1,6 +1,5 @@
----
-title: Manage emergency access admin accounts - Azure AD | Microsoft Docs
+title: Manage emergency access admin accounts - Azure AD
description: This article describes how to use emergency access accounts to help prevent being inadvertently locked out of your Azure Active Directory (Azure AD) organization. services: active-directory author: markwahl-msft
@@ -56,7 +55,7 @@ During an emergency, you do not want a policy to potentially block your access t
## Federation guidance
-An additional option for organizations that use AD Domain Services and ADFS or similar identity provider to federate to Azure AD, is to configure an emergency access account whose MFA claim could be supplied by that identity provider. For example, the emergency access account could be backed by a certificate and key pair such as one stored on a smartcard. When that user is authenticated to AD, ADFS can supply a claim to Azure AD indicating that the user has met MFA requirements. Even with this approach, organizations must still have cloud-based emergency access accounts in case federation cannot be established.
+Some organizations use AD Domain Services and ADFS or a similar identity provider to federate to Azure AD. [There should be no on-premises accounts with administrative privileges](../fundamentals/protect-m365-from-on-premises-attacks.md). Mastering or sourcing authentication for accounts with administrative privileges outside Azure AD adds unnecessary risk in the event of an outage or compromise of those systems.
## Store account credentials safely
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/github-ae-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-ae-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 10/30/2020
+ms.date: 01/18/2021
ms.author: jeedes ---
@@ -86,6 +86,20 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE] > These values are not real. Update these values with the actual Sign on URL, Reply URL and Identifier. Contact [GitHub AE Client support team](mailto:support@github.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. +
+1. The GitHub AE application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the attributes above, the GitHub AE application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | ----------- | --------- |
+ | administrator | true |
+
+ > [!NOTE]
+ > For instructions on how to add a claim, see [Configuring authentication and provisioning for your enterprise using Azure AD](https://docs.github.com/en/github-ae@latest/admin/authentication/configuring-authentication-and-provisioning-for-your-enterprise-using-azure-ad).
+ 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificateBase64.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/google-apps-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/google-apps-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/27/2020
+ms.date: 01/11/2021
ms.author: jeedes ---
@@ -127,8 +127,8 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
c. In the **Reply URL** textbox, type a URL using the following pattern: ```http
- https://www.google.com
- https://www.google.com/a/<yourdomain.com>
+ https://www.google.com/acs
+ https://www.google.com/a/<yourdomain.com>/acs
``` 1. On the **Basic SAML Configuration** section, if you want to configure for the **Google Cloud Platform** perform the following steps:
@@ -147,8 +147,8 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
c. In the **Reply URL** textbox, type a URL using the following pattern: ```http
- https://www.google.com
- https://www.google.com/a/<yourdomain.com>
+ https://www.google.com/acs
+ https://www.google.com/a/<yourdomain.com>/acs
``` > [!NOTE]
@@ -255,4 +255,4 @@ Once you configure Google Cloud (G Suite) Connector you can enforce Session Cont
[10]: ./media/google-apps-tutorial/gapps-security.png [11]: ./media/google-apps-tutorial/security-gapps.png
-[12]: ./media/google-apps-tutorial/gapps-sso-config.png
\ No newline at end of file
+[12]: ./media/google-apps-tutorial/gapps-sso-config.png
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/mediusflow-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mediusflow-provisioning-tutorial.md
@@ -150,17 +150,25 @@ This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to MediusFlow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in MediusFlow for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the MediusFlow API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |---|---|
- |userName|String|
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |userName|String|&check;|
|emails[type eq "work"].value|String|
|name.displayName|String|
|active|Boolean|
|name.givenName|String|
|name.familyName|String|
|name.formatted|String|
- |externalID|String|
+ |externalId|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
+ |urn:ietf:params:scim:schemas:extension:medius:2.0:User:configurationFilter|String|
+ |urn:ietf:params:scim:schemas:extension:medius:2.0:User:identityProvider|String|
+ |urn:ietf:params:scim:schemas:extension:medius:2.0:User:nameIdentifier|String|
+ |urn:ietf:params:scim:schemas:extension:medius:2.0:User:customFieldText1|String|
+ |urn:ietf:params:scim:schemas:extension:medius:2.0:User:customFieldText2|String|
+ |urn:ietf:params:scim:schemas:extension:medius:2.0:User:customFieldText3|String|
+ |urn:ietf:params:scim:schemas:extension:medius:2.0:User:customFieldText4|String|
+ |urn:ietf:params:scim:schemas:extension:medius:2.0:User:customFieldText5|String|
10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to MediusFlow**.
@@ -196,6 +204,10 @@ Once you've configured provisioning, use the following resources to monitor your
2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion 3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change log
+
+* 01/21/2021 - Custom extension attributes **configurationFilter**, **identityProvider**, **nameIdentifier**, **customFieldText1**, **customFieldText2**, **customFieldText3**, **customFieldText4**, and **customFieldText5** have been added.
+ ## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sharefile-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sharefile-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/21/2020
+ms.date: 01/18/2021
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with Citrix ShareFile
@@ -25,8 +25,8 @@ Integrating Citrix ShareFile with Azure AD provides you with the following benef
To configure Azure AD integration with Citrix ShareFile, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Citrix ShareFile single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/).
+* Citrix ShareFile single sign-on enabled subscription.
## Scenario description
@@ -121,7 +121,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Citrix ShareFile SSO
@@ -136,7 +136,7 @@ If you are expecting a role to be assigned to the users, you can select it from
3. If you want to setup Citrix ShareFile manually, in a different web browser window, sign in to your Citrix ShareFile company site as an administrator.
-1. In the **Dashboard**, click on **Settings** and select **Admin Settings**
+1. In the **Dashboard**, click on **Settings** and select **Admin Settings**.
![Administration](./media/sharefile-tutorial/settings.png)
@@ -160,7 +160,9 @@ If you are expecting a role to be assigned to the users, you can select it from
f. In **Logout URL** textbox, paste the value of **Logout URL** which you have copied from Azure portal.
-5. Click **Save** on the Citrix ShareFile management portal.
+ g. Under **Optional Settings**, set **SP-Initiated Auth Context** to **User Name and Password** and select **Exact**.
+
+5. Click **Save**.
## Create Citrix ShareFile test user
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/splashtop-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/splashtop-provisioning-tutorial.md new file mode 100644
@@ -0,0 +1,152 @@
+---
+title: 'Tutorial: Configure Splashtop for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Splashtop.
+services: active-directory
+documentationcenter: ''
+author: Zhchia
+writer: Zhchia
+manager: beatrizd
+
+ms.assetid: 8d8c3745-aaa9-4dbd-9fbf-92da4ada2a9e
+ms.service: active-directory
+ms.subservice: saas-app-tutorial
+ms.workload: identity
+ms.tgt_pltfrm: na
+ms.devlang: na
+ms.topic: article
+ms.date: 01/19/2021
+ms.author: Zhchia
+---
+
+# Tutorial: Configure Splashtop for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Splashtop and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Splashtop](https://www.splashtop.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Splashtop
+> * Remove users in Splashtop when they no longer require access
+> * Keep user attributes synchronized between Azure AD and Splashtop
+> * Provision groups and group memberships in Splashtop
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/splashtop-tutorial) to Splashtop (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Splashtop team with SSO supported. Fill out this [contact form](https://marketing.splashtop.com/acton/fs/blocks/showLandingPage/a/3744/p/p-0095/t/page/fm/0) to trial or subscribe to the SSO feature.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Splashtop](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Splashtop to support provisioning with Azure AD
+
+1. Apply for a new [SSO method](https://support-splashtopbusiness.splashtop.com/hc/articles/360038280751-How-to-apply-for-a-new-SSO-method-) on Splashtop web portal.
+2. On the Splashtop web portal, generate the [API token](https://support-splashtopbusiness.splashtop.com/hc/articles/360046055352-How-to-generate-the-SCIM-provisioning-token-) to configure provisioning in Azure AD.
+
+## Step 3. Add Splashtop from the Azure AD application gallery
+
+Add Splashtop from the Azure AD application gallery to start managing provisioning to Splashtop. If you have previously set up Splashtop for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user and group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Splashtop, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add other roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+## Step 5. Configure automatic user provisioning to Splashtop
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Splashtop based on user and group assignments in Azure AD.
+
+### To configure automatic user provisioning for Splashtop in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Splashtop**.
+
+ ![The Splashtop link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Splashtop Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Splashtop. If the connection fails, ensure your Splashtop account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Splashtop**.
+
+9. Review the user attributes that are synchronized from Azure AD to Splashtop in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Splashtop for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Splashtop API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |active|Boolean|
+ |displayName|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |externalId|String|
+ |urn:ietf:params:scim:schemas:extension:Splashtop:2.0:User:ssoName|String|
+
+
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Splashtop**.
+
+11. Review the group attributes that are synchronized from Azure AD to Splashtop in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Splashtop for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ |---|---|---|
+ |displayName|String|&check;|
+ |externalId|String|
+ |members|Reference|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Splashtop, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and groups that you would like to provision to Splashtop by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
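The attribute mappings in steps 9 and 11 determine the shape of the SCIM payload Azure AD sends to Splashtop. As an illustration only (the helper function and sample values below are hypothetical and not part of the tutorial; the attribute names and the extension schema URN come from the mapping tables above), a provisioned user resource might be assembled like this:

```python
# Sketch of a SCIM 2.0 user resource built from the mapped Azure AD
# attributes. Attribute names and the Splashtop extension URN mirror the
# mapping table above; everything else here is illustrative.
SPLASHTOP_SSO_SCHEMA = "urn:ietf:params:scim:schemas:extension:Splashtop:2.0:User"

def build_scim_user(user_principal_name, given_name, family_name, active=True):
    """Assemble a SCIM user payload from the mapped Azure AD attributes."""
    return {
        "schemas": [
            "urn:ietf:params:scim:schemas:core:2.0:User",
            SPLASHTOP_SSO_SCHEMA,
        ],
        # userName is the matching attribute, so it must be unique per user.
        "userName": user_principal_name,
        "active": active,
        "displayName": f"{given_name} {family_name}",
        "name": {"givenName": given_name, "familyName": family_name},
        SPLASHTOP_SSO_SCHEMA: {"ssoName": user_principal_name},
    }

user = build_scim_user("alice@contoso.com", "Alice", "Smith")
```

Because `userName` is the matching attribute, the Splashtop API must support filtering on it for update operations to resolve the right account.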
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/webroot-security-awareness-training-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/webroot-security-awareness-training-provisioning-tutorial.md
@@ -116,7 +116,8 @@ This section guides you through the steps to configure the Azure AD provisioning
|Attribute|Type|Supported for filtering| |---|---|---|
- |externalId|String|&check;|
+ |userName|String|&check;|
+ |externalId|String|
|name.givenName|String| |name.familyName|String| |emails[type eq "work"].value|String|
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/sms-sign-in-explainer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/sms-sign-in-explainer.md
@@ -1,5 +1,5 @@
---
-title: SMS sign-in user experience for phone number (preview) - Azure AD
+title: SMS sign-in user experience for phone number - Azure AD
description: Learn more about SMS sign-in user experience for new or existing phone numbers services: active-directory author: curtand
@@ -8,13 +8,13 @@ ms.service: active-directory
ms.subservice: user-help ms.workload: identity ms.topic: end-user-help
-ms.date: 04/14/2020
+ms.date: 01/21/2021
ms.author: curtand ms.reviewer: kasimpso ms.custom: "user-help, seo-update-azuread-jan" ---
-# Use your phone number as a user name (preview)
+# Use your phone number as a user name
Registering a device gives your phone access to your organization's services and doesn't allow your organization access to your phone. If you're an administrator, you can find more information in [Configure and enable users for SMS-based authentication](../authentication/howto-authentication-sms-signin.md).
@@ -31,7 +31,7 @@ If you get a new phone or new number and you register it with an organization fo
1. You will see a prompt that says "SMS verified. Your phone was registered successfully." > [!Important]
-> Due to a known issue in the preview, for a short time adding phone number will not register the number for SMS sign-in. You'll have to sign in with the added number and then follow the prompts to register the number for SMS sign-in.
+> Due to a known issue, for a short time adding a phone number will not register the number for SMS sign-in. You'll have to sign in with the added number and then follow the prompts to register the number for SMS sign-in.
### When the phone number is in use
aks https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-clusters-workloads.md
@@ -74,7 +74,6 @@ Node resources are utilized by AKS to make the node function as part of your clu
To find a node's allocatable resources, run: ```kubectl kubectl describe node [NODE_NAME]- ``` To maintain node performance and functionality, resources are reserved on each node by AKS. As a node grows larger in resources, the resource reservation grows due to a higher amount of user deployed pods needing management.
@@ -82,22 +81,24 @@ To maintain node performance and functionality, resources are reserved on each n
>[!NOTE] > Using AKS add-ons such as Container Insights (OMS) will consume additional node resources. -- **CPU** - reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features
+Two types of resources are reserved:
+
+- **CPU** - Reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features
-| CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32|64|
-|---|---|---|---|---|---|---|---|
-|Kube-reserved (millicores)|60|100|140|180|260|420|740|
+ | CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32|64|
+ |---|---|---|---|---|---|---|---|
+ |Kube-reserved (millicores)|60|100|140|180|260|420|740|
-- **Memory** - memory utilized by AKS includes the sum of two values.
+- **Memory** - Memory utilized by AKS includes the sum of two values.
-1. The kubelet daemon is installed on all Kubernetes agent nodes to manage container creation and termination. By default on AKS, this daemon has the following eviction rule: *memory.available<750Mi*, which means a node must always have at least 750 Mi allocatable at all times. When a host is below that threshold of available memory, the kubelet will terminate one of the running pods to free memory on the host machine and protect it. This action is triggered once available memory decreases beyond the 750Mi threshold.
+ 1. The kubelet daemon is installed on all Kubernetes agent nodes to manage container creation and termination. By default on AKS, this daemon has the following eviction rule: *memory.available<750Mi*, which means a node must always have at least 750 Mi allocatable at all times. When a host is below that threshold of available memory, the kubelet will terminate one of the running pods to free memory on the host machine and protect it. This action is triggered once available memory decreases beyond the 750Mi threshold.
-2. The second value is a regressive rate of memory reservations for the kubelet daemon to properly function (kube-reserved).
- - 25% of the first 4 GB of memory
- - 20% of the next 4 GB of memory (up to 8 GB)
- - 10% of the next 8 GB of memory (up to 16 GB)
- - 6% of the next 112 GB of memory (up to 128 GB)
- - 2% of any memory above 128 GB
+ 2. The second value is a regressive rate of memory reservations for the kubelet daemon to properly function (kube-reserved).
+ - 25% of the first 4 GB of memory
+ - 20% of the next 4 GB of memory (up to 8 GB)
+ - 10% of the next 8 GB of memory (up to 16 GB)
+ - 6% of the next 112 GB of memory (up to 128 GB)
+ - 2% of any memory above 128 GB
The above rules for memory and CPU allocation are used to keep agent nodes healthy, including some hosting system pods that are critical to cluster health. These allocation rules also cause the node to report less allocatable memory and CPU than it normally would if it were not part of a Kubernetes cluster. The above resource reservations can't be changed.
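The regressive kube-reserved memory rates listed above can be checked with a short calculation. The following is an illustrative sketch (GB units; the separate 750Mi kubelet eviction threshold is not included) that applies each tier in order:

```python
# Sketch of the regressive kube-reserved memory rule quoted above.
# Input and output are in GB; the 750Mi eviction threshold is separate.
def kube_reserved_memory_gb(node_memory_gb):
    tiers = [          # (tier size in GB, fraction reserved)
        (4, 0.25),     # 25% of the first 4 GB
        (4, 0.20),     # 20% of the next 4 GB (up to 8 GB)
        (8, 0.10),     # 10% of the next 8 GB (up to 16 GB)
        (112, 0.06),   # 6% of the next 112 GB (up to 128 GB)
    ]
    reserved, remaining = 0.0, node_memory_gb
    for size, rate in tiers:
        portion = min(remaining, size)
        reserved += portion * rate
        remaining -= portion
    reserved += max(remaining, 0.0) * 0.02   # 2% of any memory above 128 GB
    return reserved
```

For example, a 7-GB node reserves 4 × 25% + 3 × 20% = 1.6 GB for kube-reserved, which is why such a node reports noticeably less allocatable memory than its nominal size.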
aks https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/limit-egress-traffic.md
@@ -743,7 +743,7 @@ voting-storage ClusterIP 10.41.221.201 <none> 3306/TCP 9
Get the service IP by running: ```bash
-SERVICE_IP=$(k get svc voting-app -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
+SERVICE_IP=$(kubectl get svc voting-app -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
``` Add the NAT rule by running:
aks https://docs.microsoft.com/en-us/azure/aks/private-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
@@ -62,23 +62,23 @@ Where `--enable-private-cluster` is a mandatory flag for a private cluster.
> [!NOTE] > If the Docker bridge address CIDR (172.17.0.1/16) clashes with the subnet CIDR, change the Docker bridge address appropriately.
-### Configure Private DNS Zone
+## Configure Private DNS Zone
The following parameters can be leveraged to configure Private DNS Zone. 1. "System" is the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group. 2. "None" means AKS will not create a Private DNS Zone. This requires you to Bring Your Own DNS Server and configure the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment.
-3. "Custom private dns zone name" should be in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. The user assigned identity or service principal must be granted at least `private dns zone contributor` role to the custom private dns zone.
+3. "Custom private dns zone name" should be in this format for the Azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the resource ID of that Private DNS Zone. Additionally, you will need a user-assigned identity or service principal with at least the `private dns zone contributor` role on the custom private dns zone.
-## No Private DNS Zone Prerequisites
+### Prerequisites
-* The Azure CLI version 0.4.71 or later
+* The AKS Preview version 0.4.71 or later
* The api version 2020-11-01 or later
-## Create a private AKS cluster with Private DNS Zone
+### Create a private AKS cluster with Private DNS Zone
```azurecli-interactive
-az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --private-dns-zone [none|system|custom private dns zone]
+az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone [none|system|custom private dns zone ResourceId]
``` ## Options for connecting to the private cluster
aks https://docs.microsoft.com/en-us/azure/aks/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample ms.service: container-service ms.custom: subject-policy-compliancecontrols
aks https://docs.microsoft.com/en-us/azure/aks/ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ssh.md
@@ -33,7 +33,7 @@ Use the [az aks show][az-aks-show] command to get the resource group name of you
```azurecli-interactive CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv)
-SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
+SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query '[0].name' -o tsv)
``` The above example assigns the name of the cluster resource group for the *myAKSCluster* in *myResourceGroup* to *CLUSTER_RESOURCE_GROUP*. The example then uses *CLUSTER_RESOURCE_GROUP* to list the scale set name and assign it to *SCALE_SET_NAME*.
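The quotes added around `'[0].name'` in the diff matter because, unquoted, the shell applies filename globbing to the JMESPath query before `az` ever sees it: `[0]` is a character class matching `0`, so the word can expand to the name of a file such as `0.name`. Python's `glob` module follows the same pattern rules, so the effect can be sketched without an actual `az` call (the temp directory and file name are illustrative):

```python
# Demonstrate why the unquoted JMESPath query '[0].name' is fragile:
# [0] is a glob character class matching the single character "0", so the
# pattern matches a file literally named "0.name". Bash expands unquoted
# words this way; Python's glob module applies the same rules.
import glob
import os
import tempfile

workdir = tempfile.mkdtemp()
os.chdir(workdir)
open("0.name", "w").close()       # a file the pattern happens to match

matches = glob.glob("[0].name")   # what an unquoted shell word would become
```

Quoting the query (`'[0].name'`) suppresses the expansion so `az` receives the literal JMESPath string.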
aks https://docs.microsoft.com/en-us/azure/aks/start-stop-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/start-stop-cluster.md
@@ -25,7 +25,6 @@ This article assumes that you have an existing AKS cluster. If you need an AKS c
When using the cluster start/stop feature, the following restrictions apply: - This feature is only supported for Virtual Machine Scale Sets backed clusters.-- During preview, this feature is not supported for Private clusters. - The cluster state of a stopped AKS cluster is preserved for up to 12 months. If your cluster is stopped for more than 12 months, the cluster state cannot be recovered. For more information, see the [AKS Support Policies](support-policies.md). - During preview, you need to stop the cluster autoscaler (CA) before attempting to stop the cluster. - You can only start or delete a stopped AKS cluster. To perform any operation like scale or upgrade, start your cluster first.
aks https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
@@ -141,7 +141,7 @@ For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | 1.21 GA | | 1.19 | Aug-04-20 | Sep 2020 | Nov 2020 | 1.22 GA | | 1.20 | Dec-08-20 | Jan 2021 | Mar 2021 | 1.23 GA |
-| 1.21 | Apr-08-21* | May 2021 | Jul 2021 | 1.24 GA |
+| 1.21 | Apr-08-21* | May 2021 | Jun 2021 | 1.24 GA |
\* The Kubernetes 1.21 Upstream release is subject to change as the upstream calendar has yet to be finalized.
aks https://docs.microsoft.com/en-us/azure/aks/uptime-sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/uptime-sla.md
@@ -85,7 +85,7 @@ Create a new cluster, and don't use Uptime SLA:
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 ```
-Use the [`az aks update`][az-aks-nodepool-update] command to update the existing cluster:
+Use the [`az aks update`][az-aks-update] command to update the existing cluster:
```azurecli-interactive # Update an existing cluster to use Uptime SLA
@@ -130,6 +130,6 @@ Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
[limit-egress-traffic]: ./limit-egress-traffic.md [az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update
-[az-aks-nodepool-update]: /cli/azure/aks/nodepool?#az-aks-nodepool-update
+[az-aks-update]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_update
[az-group-delete]: /cli/azure/group#az-group-delete [private-clusters]: private-clusters.md
aks https://docs.microsoft.com/en-us/azure/aks/use-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
@@ -20,7 +20,6 @@ You must have the following resource installed:
## Limitations
-* During cluster **upgrade** operations, the managed identity is temporarily unavailable.
* Tenants move / migrate of managed identity enabled clusters isn't supported. * If the cluster has `aad-pod-identity` enabled, Node-Managed Identity (NMI) pods modify the nodes' iptables to intercept calls to the Azure Instance Metadata endpoint. This configuration means any
aks https://docs.microsoft.com/en-us/azure/aks/view-master-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/view-master-logs.md
@@ -49,6 +49,8 @@ kind: Pod
metadata: name: nginx spec:
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
containers: - name: mypod image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
aks https://docs.microsoft.com/en-us/azure/aks/virtual-nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/virtual-nodes.md
@@ -43,6 +43,7 @@ Virtual Nodes functionality is heavily dependent on ACI's feature set. In additi
* Virtual nodes with Private clusters. * Using api server authorized ip ranges for AKS. * Volume mounting Azure Files share support [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-volume.md)
+* Using IPv6 is not supported.
## Next steps
analysis-services https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-datasource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-datasource.md
@@ -4,7 +4,7 @@ description: Describes data sources and connectors supported for tabular 1200 an
author: minewiskan ms.service: azure-analysis-services ms.topic: conceptual
-ms.date: 08/21/2020
+ms.date: 01/21/2021
ms.author: owend ms.reviewer: minewiskan
@@ -113,6 +113,14 @@ For cloud data sources:
* If using SQL authentication, impersonation should be Service Account.
+## Service Principal authentication
+
+When specified as a *provider* data source, Azure Analysis Services supports [MSOLEDBSQL](/sql/connect/oledb/release-notes-for-oledb-driver-for-sql-server) Azure Active Directory service principal authentication for Azure SQL Database and Azure Synapse data sources.
+
+`
+Provider=MSOLEDBSQL;Data Source=[server];Initial Catalog=[database];Authentication=ActiveDirectoryServicePrincipal;User ID=[Application (client) ID];Password=[Application (client) secret];Use Encryption for Data=true
+`
+ ## OAuth credentials For tabular models at the 1400 and higher compatibility level using in-memory mode, Azure SQL Database, Azure Synapse, Dynamics 365, and SharePoint List support OAuth credentials. Azure Analysis Services manages token refresh for OAuth data sources to avoid timeouts for long-running refresh operations. To generate valid tokens, set credentials by using Power Query.
analysis-services https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-overview.md
@@ -4,7 +4,7 @@ description: Learn about Azure Analysis Services, a fully managed platform as a
author: minewiskan ms.service: azure-analysis-services ms.topic: overview
-ms.date: 01/07/2021
+ms.date: 01/20/2021
ms.author: owend ms.reviewer: minewiskan #Customer intent: As a BI developer, I want to determine if Azure Analysis Services is the best data modeling platform for our organization.
@@ -78,6 +78,7 @@ Azure Analysis Services is supported in regions throughout the world. Supported
|---------|---------|:---------:| |Brazil South | B1, B2, S0, S1, S2, S4, D1 | 1 | |Canada Central | B1, B2, S0, S1, S2, S4, D1 | 1 |
+|Canada Central | S8v2, S9v2 | 1 |
|East US | B1, B2, S0, S1, S2, S4, D1 | 1 | |East US 2 | B1, B2, S0, S1, S2, S4, D1 | 7 | |East US 2 | S8v2, S9v2 | 1 |
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-authentication-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-authentication-how-to.md
@@ -275,6 +275,150 @@ The identity provider may provide certain turn-key authorization. For example:
If either of the other levels don't provide the authorization you need, or if your platform or identity provider isn't supported, you must write custom code to authorize users based on the [user claims](#access-user-claims).
+## Updating the configuration version (preview)
+
+There are two versions of the management API for the Authentication / Authorization feature. The preview V2 version is required for the "Authentication (preview)" experience in the Azure portal. An app already using the V1 API can upgrade to the V2 version once a few changes have been made. Specifically, secret configuration must be moved to slot-sticky application settings. Configuration of the Microsoft Account provider is also not supported in V2 presently.
+
+> [!WARNING]
+> Migration to the V2 preview will disable management of the App Service Authentication / Authorization feature for your application through some clients, such as its existing experience in the Azure portal, Azure CLI, and Azure PowerShell. This cannot be reversed. During the preview, migration of production workloads is not encouraged or supported. You should only follow the steps in this section for test applications.
+
+### Moving secrets to application settings
+
+1. Get your existing configuration by using the V1 API:
+
+ ```azurecli
+ # For Web Apps
+ az webapp auth show -g <group_name> -n <site_name>
+
+ # For Azure Functions
+ az functionapp auth show -g <group_name> -n <site_name>
+ ```
+
+ In the resulting JSON payload, make note of the secret value used for each provider you have configured:
+
+ * AAD: `clientSecret`
+ * Google: `googleClientSecret`
+ * Facebook: `facebookAppSecret`
+ * Twitter: `twitterConsumerSecret`
+ * Microsoft Account: `microsoftAccountClientSecret`
+
+ > [!IMPORTANT]
+ > The secret values are important security credentials and should be handled carefully. Do not share these values or persist them on a local machine.
+
+1. Create slot-sticky application settings for each secret value. You may choose the name of each application setting. Its value should match what you obtained in the previous step or [reference a Key Vault secret](./app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json) that you have created with that value.
+
+ To create the setting, you can use the Azure portal or run a variation of the following for each provider:
+
+ ```azurecli
+ # For Web Apps, Google example
+ az webapp config appsettings set -g <group_name> -n <site_name> --slot-settings GOOGLE_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step>
+
+ # For Azure Functions, Twitter example
+ az functionapp config appsettings set -g <group_name> -n <site_name> --slot-settings TWITTER_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step>
+ ```
+
+ > [!NOTE]
+ > The application settings for this configuration should be marked as slot-sticky, meaning that they will not move between environments during a [slot swap operation](./deploy-staging-slots.md). This is because your authentication configuration itself is tied to the environment.
+
+1. Create a new JSON file named `authsettings.json`. Take the output that you received previously and remove each secret value from it. Write the remaining output to the file, making sure that no secret is included. In some cases, the configuration may have arrays containing empty strings. Make sure that `microsoftAccountOAuthScopes` does not, and if it does, switch that value to `null`.
+
+1. Add a property to `authsettings.json` which points to the application setting name you created earlier for each provider:
+
+ * AAD: `clientSecretSettingName`
+ * Google: `googleClientSecretSettingName`
+ * Facebook: `facebookAppSecretSettingName`
+ * Twitter: `twitterConsumerSecretSettingName`
+ * Microsoft Account: `microsoftAccountClientSecretSettingName`
+
+ An example file after this operation might look similar to the following, in this case only configured for AAD:
+
+ ```json
+ {
+ "id": "/subscriptions/00d563f8-5b89-4c6a-bcec-c1b9f6d607e0/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/mywebapp/config/authsettings",
+ "name": "authsettings",
+ "type": "Microsoft.Web/sites/config",
+ "location": "Central US",
+ "properties": {
+ "enabled": true,
+ "runtimeVersion": "~1",
+ "unauthenticatedClientAction": "AllowAnonymous",
+ "tokenStoreEnabled": true,
+ "allowedExternalRedirectUrls": null,
+ "defaultProvider": "AzureActiveDirectory",
+ "clientId": "3197c8ed-2470-480a-8fae-58c25558ac9b",
+ "clientSecret": null,
+ "clientSecretSettingName": "MICROSOFT_IDENTITY_AUTHENTICATION_SECRET",
+ "clientSecretCertificateThumbprint": null,
+ "issuer": "https://sts.windows.net/0b2ef922-672a-4707-9643-9a5726eec524/",
+ "allowedAudiences": [
+ "https://mywebapp.azurewebsites.net"
+ ],
+ "additionalLoginParams": null,
+ "isAadAutoProvisioned": true,
+ "aadClaimsAuthorization": null,
+ "googleClientId": null,
+ "googleClientSecret": null,
+ "googleClientSecretSettingName": null,
+ "googleOAuthScopes": null,
+ "facebookAppId": null,
+ "facebookAppSecret": null,
+ "facebookAppSecretSettingName": null,
+ "facebookOAuthScopes": null,
+ "gitHubClientId": null,
+ "gitHubClientSecret": null,
+ "gitHubClientSecretSettingName": null,
+ "gitHubOAuthScopes": null,
+ "twitterConsumerKey": null,
+ "twitterConsumerSecret": null,
+ "twitterConsumerSecretSettingName": null,
+ "microsoftAccountClientId": null,
+ "microsoftAccountClientSecret": null,
+ "microsoftAccountClientSecretSettingName": null,
+ "microsoftAccountOAuthScopes": null,
+ "isAuthFromFile": "false"
+ }
+ }
+ ```
+
+1. Submit this file as the new Authentication/Authorization configuration for your app:
+
+ ```azurecli
+ az rest --method PUT --url "/subscriptions/<subscription_id>/resourceGroups/<group_name>/providers/Microsoft.Web/sites/<site_name>/config/authsettings?api-version=2020-06-01" --body @./authsettings.json
+ ```
+
+1. Validate that your app is still operating as expected after this change.
+
+1. Delete the file used in the previous steps.
+
+You have now migrated the app to store identity provider secrets as application settings.
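The secret-to-application-setting migration in the steps above is mechanical enough to script. A hedged sketch (the property and setting names mirror the ones used in this article; the helper is illustrative, and the Facebook and Microsoft Account entries are omitted for brevity):

```python
# Sketch: strip each secret value from the V1 authsettings properties and
# point the matching *SettingName property at the slot-sticky application
# setting instead. Property names are from this article; the setting names
# are the example names used above.
SECRET_MAP = {
    "clientSecret": ("clientSecretSettingName",
                     "MICROSOFT_IDENTITY_AUTHENTICATION_SECRET"),
    "googleClientSecret": ("googleClientSecretSettingName",
                           "GOOGLE_PROVIDER_AUTHENTICATION_SECRET"),
    "twitterConsumerSecret": ("twitterConsumerSecretSettingName",
                              "TWITTER_PROVIDER_AUTHENTICATION_SECRET"),
}

def scrub_secrets(properties):
    """Return a copy of the properties with secrets moved to setting names."""
    scrubbed = dict(properties)
    for secret_key, (setting_key, setting_name) in SECRET_MAP.items():
        if scrubbed.get(secret_key):      # only rewrite configured providers
            scrubbed[secret_key] = None
            scrubbed[setting_key] = setting_name
    return scrubbed

props = scrub_secrets({"enabled": True, "clientSecret": "s3cret"})
```

The scrubbed properties can then be written to `authsettings.json` and submitted with `az rest` as shown below; the secret values themselves live only in the slot-sticky application settings.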
+
+### Support for Microsoft account registrations
+
+The V2 API does not currently support Microsoft Account as a distinct provider. Rather, it leverages the converged [Microsoft Identity Platform](../active-directory/develop/v2-overview.md) to sign in users with personal Microsoft accounts. When switching to the V2 API, the V1 Azure Active Directory configuration is used to configure the Microsoft Identity Platform provider.
+
+If your existing configuration contains a Microsoft Account provider and does not contain an Azure Active Directory provider, you can switch the configuration over to the Azure Active Directory provider and then perform the migration. To do this:
+
+1. Go to [**App registrations**](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) in the Azure portal and find the registration associated with your Microsoft Account provider. It may be under the "Applications from personal account" heading.
+1. Navigate to the "Authentication" page for the registration. Under "Redirect URIs" you should see an entry ending in `/.auth/login/microsoftaccount/callback`. Copy this URI.
+1. Add a new URI that matches the one you just copied, except instead have it end in `/.auth/login/aad/callback`. This will allow the registration to be used by the App Service Authentication / Authorization configuration.
+1. Navigate to the App Service Authentication / Authorization configuration for your app.
+1. Collect the configuration for the Microsoft Account provider.
+1. Configure the Azure Active Directory provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
+1. Once you have saved the configuration, test the login flow by navigating in your browser to the `/.auth/login/aad` endpoint on your site and complete the sign-in flow.
+1. At this point, you have successfully copied the configuration over, but the existing Microsoft Account provider configuration remains. Before you remove it, make sure that all parts of your app reference the Azure Active Directory provider through login links, etc. Verify that all parts of your app work as expected.
+1. Once you have validated that things work against the Azure Active Directory provider, you may remove the Microsoft Account provider configuration.
+
+Some apps may already have separate registrations for Azure Active Directory and Microsoft Account. Those apps cannot be migrated at this time.
+
+> [!WARNING]
+> It is possible to converge the two registrations by modifying the [supported account types](../active-directory/develop/supported-accounts-validation.md) for the AAD app registration. However, this would force a new consent prompt for Microsoft Account users, and those users' identity claims may be different in structure, `sub` notably changing values since a new App ID is being used. This approach is not recommended unless thoroughly understood. You should instead wait for support for the two registrations in the V2 API surface.
+
+### Switching to V2
+
+Once the above steps have been performed, navigate to the app in the Azure portal. Select the "Authentication (preview)" section.
+
+Alternatively, you may make a PUT request against the `config/authsettingsv2` resource under the site resource. The schema for the payload is the same as captured in the [Configure using a file](#config-file) section.
+ ## <a name="config-file"> </a>Configure using a file (preview) Your auth settings can optionally be configured via a file that is provided by your deployment. This may be required by certain preview capabilities of App Service Authentication / Authorization.
app-service https://docs.microsoft.com/en-us/azure/app-service/configure-authentication-provider-aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-provider-aad.md
@@ -73,7 +73,7 @@ Perform the following steps:
1. Select **Azure Active Directory** > **App registrations** > **New registration**. 1. In the **Register an application** page, enter a **Name** for your app registration. 1. In **Redirect URI**, select **Web** and type `<app-url>/.auth/login/aad/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/aad/callback`.
-1. Select **Create**.
+1. Select **Register**.
1. After the app registration is created, copy the **Application (client) ID** and the **Directory (tenant) ID** for later. 1. Select **Authentication**. Under **Implicit grant**, enable **ID tokens** to allow OpenID Connect user sign-ins from App Service. 1. (Optional) Select **Branding**. In **Home page URL**, enter the URL of your App Service app and select **Save**.
app-service https://docs.microsoft.com/en-us/azure/app-service/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample ms.service: app-service ms.custom: subject-policy-compliancecontrols
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/quick-create-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/quick-create-template.md
@@ -6,7 +6,7 @@ services: application-gateway
author: vhorne ms.service: application-gateway ms.topic: quickstart
-ms.date: 08/27/2020
+ms.date: 01/20/2021
ms.author: victorh ms.custom: mvc, subject-armqs ---
automation https://docs.microsoft.com/en-us/azure/automation/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: mgoedtel ms.author: magoedte
automation https://docs.microsoft.com/en-us/azure/automation/update-management/deploy-updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/deploy-updates.md
@@ -141,4 +141,5 @@ Select **Errors** to see detailed information about any errors from the deployme
## Next steps
-To learn how to create alerts to notify you about update deployment results, see [create alerts for Update Management](configure-alerts.md).
\ No newline at end of file
+* To learn how to create alerts to notify you about update deployment results, see [create alerts for Update Management](configure-alerts.md).
+* To troubleshoot general Update Management errors, see [Troubleshoot Update Management issues](../troubleshoot/update-management.md).
automation https://docs.microsoft.com/en-us/azure/automation/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md new file mode 100644
@@ -0,0 +1,224 @@
+---
+title: What's new in Azure Automation
+description: Significant updates to Azure Automation updated each month.
+ms.subservice:
+ms.topic: overview
+author: mgoedtel
+ms.author: magoedte
+ms.date: 01/21/2021
+ms.custom: references_regions
+---
+
+# What's new in Azure Automation?
+
+Azure Automation receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
+
+- The latest releases
+- Known issues
+- Bug fixes
+
+This page is updated monthly, so revisit it regularly.
+
+## January 2021
+
+### Azure Automation runbooks moved from TechNet Script Center to GitHub
+
+**Type:** Plan for change
+
+The TechNet Script Center is retiring and all runbooks hosted in the Runbook gallery have been moved to our [Automation GitHub organization](https://github.com/azureautomation). For more information, read [Azure Automation Runbooks moving to GitHub](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-automation-runbooks-moving-to-github/ba-p/2039337).
+
+## December 2020
+
+### Azure Automation and Update Management Private Link GA
+
+**Type:** New feature
+
+Azure Automation and Update Management support for Private Link is now generally available in the Azure global and Government clouds. Private Link support secures execution of a runbook on a Hybrid Runbook Worker, using Update Management to patch machines, invoking a runbook through a webhook, and using the State Configuration service to keep your machines compliant. For more information, read [Azure Automation Private Link support](https://azure.microsoft.com/updates/azure-automation-private-link).
+
+### Azure Automation classified as Grade-C certified on Accessibility
+
+**Type:** New feature
+
+Accessibility features of Microsoft products help agencies address global accessibility requirements. On the [blog announcement](https://cloudblogs.microsoft.com/industry-blog/government/2018/09/11/accessibility-conformance-reports/) page, search for **Azure Automation** to read the Accessibility conformance report for the Automation service.
+
+### Support for Automation and State Configuration GA in UAE North
+
+**Type:** New feature
+
+Automation account and State Configuration availability in the UAE North region. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-uae-north-region/).
+
+### Support for Automation and State Configuration GA in Germany West Central
+
+**Type:** New feature
+
+Automation account and State Configuration availability in the Germany West Central region. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-germany-west-central-region/).
+
+### DSC support for Oracle 6 and 7
+
+**Type:** New feature
+
+Manage Oracle Linux 6 and 7 machines with Automation State Configuration. See [Supported Linux distros](https://github.com/Azure/azure-linux-extensions/tree/master/DSC#4-supported-linux-distributions) for updates to the documentation to reflect these changes.
+
+### Public Preview for Python3 runbooks in Automation
+
+**Type:** New feature
+
+Azure Automation now supports Python 3 cloud and hybrid runbook execution in public preview in all regions of the Azure global cloud. See the [announcement](https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/) for more details.
+
+## November 2020
+
+### DSC support for Ubuntu 18.04
+
+**Type:** New feature
+
+See [Supported Linux Distros](https://github.com/Azure/azure-linux-extensions/tree/master/DSC#4-supported-linux-distributions) for updates to the documentation reflecting these changes.
+
+## October 2020
+
+### Support for Automation and State Configuration GA in Switzerland North
+
+**Type:** New feature
+
+Automation account and State Configuration availability in Switzerland North. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-switzerland-north-region/).
+
+### Support for Automation and State Configuration GA in Brazil South East
+
+**Type:** New feature
+
+Automation account and State Configuration availability in Brazil South East. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-brazil-southeast-region/).
+
+### Update Management availability in South Central US
+
+**Type:** New feature
+
+Azure Automation region mapping updated to support Update Management feature in South Central US region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings) for updates to the documentation to reflect this change.
+
+## September 2020
+
+### Start/Stop VMs during off-hours runbooks updated to use Azure Az modules
+
+**Type:** New feature
+
+Start/Stop VM runbooks have been updated to use Az modules in place of Azure Resource Manager modules. See [Start/Stop VMs during off-hours](automation-solution-vm-management.md) overview for updates to the documentation to reflect these changes.
+
+## August 2020
+
+### Published the DSC extension to support Azure Arc
+
+**Type:** New feature
+
+Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc enabled servers DSC VM extension. For more information, read [Arc enabled servers VM extensions overview](../azure-arc/servers/manage-vm-extensions.md).
+
+## July 2020
+
+### Introduced Public Preview of Private Link support in Automation
+
+**Type:** New feature
+
+Use Azure Private Link to securely connect virtual networks to Azure Automation using private endpoints. For more information, read the [announcement](https://azure.microsoft.com/updates/public-preview-private-link-azure-automation-is-now-available/).
+
+### Hybrid Runbook Worker support for Windows Server 2008 R2
+
+**Type:** New feature
+
+Automation Hybrid Runbook Worker supports the Windows Server 2008 R2 operating system. See [Supported operating systems](automation-windows-hrw-install.md#supported-windows-operating-system) for updates to the documentation to reflect these changes.
+
+### Update Management support for Windows Server 2008 R2
+
+**Type:** New feature
+
+Update Management supports assessing and patching the Windows Server 2008 R2 operating system. See [Supported operating systems](update-management/overview.md#clients) for updates to the documentation to reflect these changes.
+
+### Automation diagnostic logs schema update
+
+**Type:** New feature
+
+Changed the schema of Azure Automation log data in the Log Analytics service. To learn more, see [Forward Azure Automation job data to Azure Monitor logs](automation-manage-send-joblogs-log-analytics.md#filter-job-status-output-converted-into-a-json-object).
+
+### Azure Lighthouse supports Automation Update Management
+
+**Type:** New feature
+
+Azure Lighthouse enables delegated resource management with Update Management for service providers and customers. Read more [here](https://azure.microsoft.com/blog/how-azure-lighthouse-enables-management-at-scale-for-service-providers/).
+
+## June 2020
+
+### Automation and Update Management availability in the US Gov Arizona region
+
+**Type:** New feature
+
+Automation account and Update Management are available in US Gov Arizona. For more information, see [announcement](https://azure.microsoft.com/updates/azure-automation-generally-available-in-usgov-arizona-region/).
+
+### Hybrid Runbook Worker onboarding script updated to use Az modules
+
+**Type:** New feature
+
+The New-OnPremiseHybridWorker runbook has been updated to support Az modules. For more information, see the package in the [PowerShell Gallery](https://www.powershellgallery.com/packages/New-OnPremiseHybridWorker/1.7).
+
+### Update Management availability in China East 2
+
+**Type:** New feature
+
+Azure Automation region mapping updated to support Update Management feature in China East 2 region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings) for updates to the documentation to reflect this change.
+
+## May 2020
+
+### Updated Automation service DNS records from region-specific to Automation account-specific URLs
+
+**Type:** New feature
+
+Azure Automation DNS records have been updated to support Private Links. For more information, read the [announcement](https://azure.microsoft.com/updates/azure-automation-updateddns-records/).
+
+### Added capability to keep Automation runbooks & DSC scripts encrypted by default
+
+**Type:** New feature
+
+In addition to improved security of assets, runbooks and DSC scripts are now also encrypted to enhance Azure Automation security.
+
+## April 2020
+
+### Retirement of the Automation watcher task
+
+**Type:** Plan for change
+
+Azure Logic Apps is now the recommended and supported way to monitor for events, schedule recurring tasks, and trigger actions. There will be no further investments in Watcher task functionality. To learn more, see [Schedule and run recurring automated tasks with Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
+
+## March 2020
+
+### Support for Impact Level 5 (IL5) compute isolation in Azure commercial and Government cloud
+
+**Type:**
+
+Azure Automation Hybrid Runbook Worker can be used in Azure Government to support Impact Level 5 workloads. To learn more, see our [documentation](automation-hybrid-runbook-worker.md#support-for-impact-level-5-il5).
+
+## February 2020
+
+### Introduced support for Azure virtual network service tags
+
+**Type:** New feature
+
+Automation support for service tags lets you allow or deny traffic for the Automation service in a subset of scenarios. To learn more, see the [documentation](automation-hybrid-runbook-worker.md#service-tags).
+
+### Enable TLS 1.2 support for Azure Automation service
+
+**Type:** Plan for change
+
+Azure Automation fully supports TLS 1.2 for all client calls (through webhooks, DSC nodes, and Hybrid Runbook Workers). TLS 1.1 and TLS 1.0 are still supported for backward compatibility with older clients until customers standardize on and fully migrate to TLS 1.2.
+
+## January 2020
+
+### Introduced Public Preview of customer-managed keys for Azure Automation
+
+**Type:** New feature
+
+Customers can manage and secure encryption of Azure Automation assets using their own managed keys. For more information, see [Use of customer-managed keys](automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account).
+
+### Retirement of Azure Service Management (ASM) REST APIs for Azure Automation
+
+**Type:** Retire
+
+Azure Service Management (ASM) REST APIs for Azure Automation are retired and no longer supported after January 30, 2020. To learn more, see the [announcement](https://azure.microsoft.com/updates/azure-automation-service-management-rest-apis-are-being-retired-april-30-2019/).
+
+## Next steps
+
+If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
availability-zones https://docs.microsoft.com/en-us/azure/availability-zones/az-region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
@@ -119,7 +119,7 @@ To achieve comprehensive business continuity on Azure, build your application ar
| [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-high-availability.md) | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/overview.md) | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure SQL Database (General Purpose Tier)](../azure-sql/database/high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | :x: | :heavy_check_mark:(Preview) | :x: | :heavy_check_mark:(Preview) |
+| [Azure SQL Database (General Purpose Tier)](../azure-sql/database/high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | :heavy_check_mark:(Preview) | :heavy_check_mark:(Preview) | :x: | :heavy_check_mark:(Preview) |
| [Azure SQL Database (Premium & Business Critical Tiers)](../azure-sql/database/high-availability-sla.md#premium-and-business-critical-service-tier-zone-redundant-availability) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | **Analytics** | | | | | | [Event Hubs](../event-hubs/index.yml) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
@@ -164,7 +164,7 @@ To achieve comprehensive business continuity on Azure, build your application ar
| [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-high-availability.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure SQL Database (General Purpose Tier)](../azure-sql/database/high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | :x: | :heavy_check_mark:(Preview) | :heavy_check_mark:(Preview) |
+| [Azure SQL Database (General Purpose Tier)](../azure-sql/database/high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | :heavy_check_mark:(Preview) | :heavy_check_mark:(Preview) | :heavy_check_mark:(Preview) |
| [Azure SQL Database (Premium & Business Critical Tiers)](../azure-sql/database/high-availability-sla.md#premium-and-business-critical-service-tier-zone-redundant-availability) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | **Analytics** | | | | | [Event Hubs](../event-hubs/index.yml) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-aspnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
@@ -65,28 +65,27 @@ A *sentinel key* is a special key used to signal when configuration has changed.
1. Open *Program.cs*, and update the `CreateWebHostBuilder` method to add the `config.AddAzureAppConfiguration()` method.
- #### [.NET Core 2.x](#tab/core2x)
+ #### [.NET 5.x](#tab/core5x)
```csharp
- public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
- WebHost.CreateDefaultBuilder(args)
- .ConfigureAppConfiguration((hostingContext, config) =>
- {
- var settings = config.Build();
-
- config.AddAzureAppConfiguration(options =>
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureWebHostDefaults(webBuilder =>
+ webBuilder.ConfigureAppConfiguration((hostingContext, config) =>
{
- options.Connect(settings["ConnectionStrings:AppConfig"])
- .ConfigureRefresh(refresh =>
- {
- refresh.Register("TestApp:Settings:Sentinel", refreshAll: true)
- .SetCacheExpiration(new TimeSpan(0, 5, 0));
- });
- });
- })
- .UseStartup<Startup>();
- ```
-
+ var settings = config.Build();
+ config.AddAzureAppConfiguration(options =>
+ {
+ options.Connect(settings["ConnectionStrings:AppConfig"])
+ .ConfigureRefresh(refresh =>
+ {
+ refresh.Register("TestApp:Settings:Sentinel", refreshAll: true)
+ .SetCacheExpiration(new TimeSpan(0, 5, 0));
+ });
+ });
+ })
+ .UseStartup<Startup>());
+ ```
#### [.NET Core 3.x](#tab/core3x) ```csharp
@@ -108,6 +107,27 @@ A *sentinel key* is a special key used to signal when configuration has changed.
}) .UseStartup<Startup>()); ```
+ #### [.NET Core 2.x](#tab/core2x)
+
+ ```csharp
+ public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
+ WebHost.CreateDefaultBuilder(args)
+ .ConfigureAppConfiguration((hostingContext, config) =>
+ {
+ var settings = config.Build();
+
+ config.AddAzureAppConfiguration(options =>
+ {
+ options.Connect(settings["ConnectionStrings:AppConfig"])
+ .ConfigureRefresh(refresh =>
+ {
+ refresh.Register("TestApp:Settings:Sentinel", refreshAll: true)
+ .SetCacheExpiration(new TimeSpan(0, 5, 0));
+ });
+ });
+ })
+ .UseStartup<Startup>();
+ ```
--- The `ConfigureRefresh` method is used to specify the settings used to update the configuration data with the App Configuration store when a refresh operation is triggered. The `refreshAll` parameter to the `Register` method indicates that all configuration values should be refreshed if the sentinel key changes.
@@ -119,7 +139,7 @@ A *sentinel key* is a special key used to signal when configuration has changed.
To actually trigger a refresh operation, you'll need to configure a refresh middleware for the application to refresh the configuration data when any change occurs. You'll see how to do this in a later step.
-2. Add a *Settings.cs* file that defines and implements a new `Settings` class.
+2. Add a *Settings.cs* file in the Controllers directory that defines and implements a new `Settings` class. Replace the namespace with the name of your project.
```csharp namespace TestAppConfig
@@ -136,16 +156,16 @@ A *sentinel key* is a special key used to signal when configuration has changed.
3. Open *Startup.cs*, and use `IServiceCollection.Configure<T>` in the `ConfigureServices` method to bind configuration data to the `Settings` class.
- #### [.NET Core 2.x](#tab/core2x)
+ #### [.NET 5.x](#tab/core5x)
```csharp public void ConfigureServices(IServiceCollection services) { services.Configure<Settings>(Configuration.GetSection("TestApp:Settings"));
- services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
+ services.AddControllersWithViews();
+ services.AddAzureAppConfiguration();
} ```- #### [.NET Core 3.x](#tab/core3x) ```csharp
@@ -153,6 +173,16 @@ A *sentinel key* is a special key used to signal when configuration has changed.
{ services.Configure<Settings>(Configuration.GetSection("TestApp:Settings")); services.AddControllersWithViews();
+ services.AddAzureAppConfiguration();
+ }
+ ```
+ #### [.NET Core 2.x](#tab/core2x)
+
+ ```csharp
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.Configure<Settings>(Configuration.GetSection("TestApp:Settings"));
+ services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
} ``` ---
@@ -162,23 +192,41 @@ A *sentinel key* is a special key used to signal when configuration has changed.
4. Update the `Configure` method, adding the `UseAzureAppConfiguration` middleware to allow the configuration settings registered for refresh to be updated while the ASP.NET Core web app continues to receive requests.
- #### [.NET Core 2.x](#tab/core2x)
+ #### [.NET 5.x](#tab/core5x)
```csharp
- public void Configure(IApplicationBuilder app, IHostingEnvironment env)
+ public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
- app.UseAzureAppConfiguration();
+ if (env.IsDevelopment())
+ {
+ app.UseDeveloperExceptionPage();
+ }
+ else
+ {
+ app.UseExceptionHandler("/Home/Error");
+ // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
+ app.UseHsts();
+ }
- services.Configure<CookiePolicyOptions>(options =>
- {
- options.CheckConsentNeeded = context => true;
- options.MinimumSameSitePolicy = SameSiteMode.None;
- });
+ // Add the following line:
+ app.UseAzureAppConfiguration();
- app.UseMvc();
+ app.UseHttpsRedirection();
+
+ app.UseStaticFiles();
+
+ app.UseRouting();
+
+ app.UseAuthorization();
+
+ app.UseEndpoints(endpoints =>
+ {
+ endpoints.MapControllerRoute(
+ name: "default",
+ pattern: "{controller=Home}/{action=Index}/{id?}");
+ });
} ```- #### [.NET Core 3.x](#tab/core3x) ```csharp
@@ -214,6 +262,22 @@ A *sentinel key* is a special key used to signal when configuration has changed.
}); } ```
+ #### [.NET Core 2.x](#tab/core2x)
+
+ ```csharp
+ public void Configure(IApplicationBuilder app, IHostingEnvironment env)
+ {
+ app.UseAzureAppConfiguration();
+
+ services.Configure<CookiePolicyOptions>(options =>
+ {
+ options.CheckConsentNeeded = context => true;
+ options.MinimumSameSitePolicy = SameSiteMode.None;
+ });
+
+ app.UseMvc();
+ }
+ ```
--- The middleware uses the refresh configuration specified in the `AddAzureAppConfiguration` method in `Program.cs` to trigger a refresh for each request received by the ASP.NET Core web app. For each request, a refresh operation is triggered and the client library checks if the cached value for the registered configuration setting has expired. If it's expired, it's refreshed.
@@ -231,14 +295,17 @@ A *sentinel key* is a special key used to signal when configuration has changed.
2. Update the `HomeController` class to receive `Settings` through dependency injection, and make use of its values.
- #### [.NET Core 2.x](#tab/core2x)
+ #### [.NET 5.x](#tab/core5x)
- ```csharp
+```csharp
public class HomeController : Controller { private readonly Settings _settings;
- public HomeController(IOptionsSnapshot<Settings> settings)
+ private readonly ILogger<HomeController> _logger;
+
+ public HomeController(ILogger<HomeController> logger, IOptionsSnapshot<Settings> settings)
{
+ _logger = logger;
_settings = settings.Value; }
@@ -251,12 +318,13 @@ A *sentinel key* is a special key used to signal when configuration has changed.
return View(); }
- }
- ```
- #### [.NET Core 3.x](#tab/core3x)
+ // ...
+ }
+```
+#### [.NET Core 3.x](#tab/core3x)
- ```csharp
+```csharp
public class HomeController : Controller { private readonly Settings _settings;
@@ -280,8 +348,30 @@ A *sentinel key* is a special key used to signal when configuration has changed.
// ... }
- ```
- ---
+```
+#### [.NET Core 2.x](#tab/core2x)
+
+```csharp
+ public class HomeController : Controller
+ {
+ private readonly Settings _settings;
+ public HomeController(IOptionsSnapshot<Settings> settings)
+ {
+ _settings = settings.Value;
+ }
+
+ public IActionResult Index()
+ {
+ ViewData["BackgroundColor"] = _settings.BackgroundColor;
+ ViewData["FontSize"] = _settings.FontSize;
+ ViewData["FontColor"] = _settings.FontColor;
+ ViewData["Message"] = _settings.Message;
+
+ return View();
+ }
+ }
+```
+---
@@ -337,7 +427,7 @@ A *sentinel key* is a special key used to signal when configuration has changed.
| TestApp:Settings:Message | Data from Azure App Configuration - now with live updates! | | TestApp:Settings:Sentinel | 2 |
-1. Refresh the browser page to see the new configuration settings. You may need to refresh more than once for the changes to be reflected.
+1. Refresh the browser page to see the new configuration settings. You may need to refresh more than once for the changes to be reflected, or change your automatic refresh rate to less than 5 minutes.
![Launching updated quickstart app locally](./media/quickstarts/aspnet-core-app-launch-local-after.png)
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/quickstart-aspnet-core-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-aspnet-core-app.md
@@ -84,6 +84,19 @@ dotnet new mvc --no-https --output TestAppConfig
> [!IMPORTANT] > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.x. Select the correct syntax based on your environment.
+ #### [.NET 5.x](#tab/core5x)
+
+ ```csharp
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureWebHostDefaults(webBuilder =>
+ webBuilder.ConfigureAppConfiguration(config =>
+ {
+ var settings = config.Build();
+ var connection = settings.GetConnectionString("AppConfig");
+ config.AddAzureAppConfiguration(connection);
+ }).UseStartup<Startup>());
+ ```
#### [.NET Core 3.x](#tab/core3x) ```csharp
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/quickstart-dotnet-core-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-dotnet-core-app.md
@@ -100,7 +100,7 @@ You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to cre
export ConnectionString='connection-string-of-your-app-configuration-store' ```
- Restart the command prompt to allow the change to take effect. Print out the value of the environment variable to validate that it is set properly.
+ Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly.
2. Run the following command to build the console app:
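The export-and-verify step above can be sketched in a shell session. The connection string below is a placeholder (the endpoint, ID, and secret are hypothetical), and the build step assumes the standard .NET Core CLI run from the project directory:

```shell
# Placeholder connection string; use the value from your App Configuration store.
export ConnectionString='Endpoint=https://example.azconfig.io;Id=myId;Secret=mySecret'

# Print the variable to confirm it is set in the current session.
echo "$ConnectionString"

# Build the console app with the .NET Core CLI; skipped if dotnet is absent.
if command -v dotnet >/dev/null; then
  dotnet build || true
fi
```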
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/quickstart-feature-flag-aspnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
@@ -73,6 +73,21 @@ dotnet new mvc --no-https --output TestFeatureFlags
> [!IMPORTANT] > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.x. Select the correct syntax based on your environment.
+ #### [.NET 5.x](#tab/core5x)
+
+ ```csharp
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureWebHostDefaults(webBuilder =>
+ webBuilder.ConfigureAppConfiguration(config =>
+ {
+ var settings = config.Build();
+ var connection = settings.GetConnectionString("AppConfig");
+ config.AddAzureAppConfiguration(options =>
+ options.Connect(connection).UseFeatureFlags());
+ }).UseStartup<Startup>());
+ ```
+ #### [.NET Core 3.x](#tab/core3x) ```csharp
@@ -114,6 +129,15 @@ dotnet new mvc --no-https --output TestFeatureFlags
1. Update the `Startup.ConfigureServices` method to add feature flag support by calling the `AddFeatureManagement` method. Optionally, you can include any filter to be used with feature flags by calling `AddFeatureFilter<FilterType>()`:
+ #### [.NET 5.x](#tab/core5x)
+
+ ```csharp
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddControllersWithViews();
+ services.AddFeatureManagement();
+ }
+ ```
#### [.NET Core 3.x](#tab/core3x) ```csharp
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/use-feature-flags-dotnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
@@ -35,7 +35,6 @@ In this tutorial, you will learn how to:
## Set up feature management Add a reference to the `Microsoft.FeatureManagement.AspNetCore` and `Microsoft.FeatureManagement` NuGet packages to utilize the .NET Core feature manager.
-
The .NET Core feature manager `IFeatureManager` gets feature flags from the framework's native configuration system. As a result, you can define your application's feature flags by using any configuration source that .NET Core supports, including the local *appsettings.json* file or environment variables. `IFeatureManager` relies on .NET Core dependency injection. You can register the feature management services by using standard conventions: ```csharp
@@ -107,7 +106,7 @@ The easiest way to connect your ASP.NET Core application to App Configuration is
2. Open *Startup.cs* and update the `Configure` method to add the built-in middleware called `UseAzureAppConfiguration`. This middleware allows the feature flag values to be refreshed at a recurring interval while the ASP.NET Core web app continues to receive requests. ```csharp
- public void Configure(IApplicationBuilder app, IHostingEnvironment env)
+ public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{ app.UseAzureAppConfiguration(); app.UseMvc();
@@ -187,6 +186,8 @@ if (await featureManager.IsEnabledAsync(nameof(MyFeatureFlags.FeatureA)))
In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection: ```csharp
+using Microsoft.FeatureManagement;
+ public class HomeController : Controller { private readonly IFeatureManager _featureManager;
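    // A sketch completing the truncated controller above; constructor injection
    // follows the standard ASP.NET Core pattern, and "Beta" is an illustrative
    // flag name (define your own flags in configuration).
    public HomeController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    public async Task<IActionResult> Index()
    {
        // Query the flag at request time; IFeatureManager reads it from configuration.
        ViewData["BetaEnabled"] = await _featureManager.IsEnabledAsync("Beta");
        return View();
    }
}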
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/troubleshooting.md
@@ -20,7 +20,7 @@ This document provides some common troubleshooting scenarios with connectivity,
### Azure CLI set up Before using az connectedk8s or az k8sconfiguration CLI commands, ensure that az is set to work against the correct Azure subscription.
-```console
+```azurecli
az account set --subscription 'subscriptionId' az account show ```
@@ -75,7 +75,7 @@ Connecting clusters to Azure requires access to both an Azure subscription and `
If the provided kubeconfig file does not have sufficient permissions to install the Azure Arc agents, the Azure CLI command will return an error attempting to call the Kubernetes API.
-```console
+```azurecli
$ az connectedk8s connect --resource-group AzureArc --name AzureArcCluster Command group 'connectedk8s' is in preview. It may be changed/removed in a future release. Ensure that you have the latest helm version installed before proceeding to avoid unexpected errors.
@@ -90,7 +90,7 @@ Cluster owner should use a Kubernetes user with cluster administrator permission
Azure Arc agent installation requires running a set of containers on the target cluster. If the cluster is running over a slow internet connection, the container image pull may take longer than the Azure CLI timeouts.
-```console
+```azurecli
$ az connectedk8s connect --resource-group AzureArc --name AzureArcCluster Command group 'connectedk8s' is in preview. It may be changed/removed in a future release. Ensure that you have the latest helm version installed before proceeding to avoid unexpected errors.
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/use-gitops-connected-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-gitops-connected-cluster.md
@@ -44,7 +44,7 @@ If you are associating a private repository with the `sourceControlConfiguration
Use the Azure CLI extension for `k8sconfiguration` to link a connected cluster to the [example Git repository](https://github.com/Azure/arc-k8s-demo). We will give this configuration a name `cluster-config`, instruct the agent to deploy the operator in the `cluster-config` namespace, and give the operator `cluster-admin` permissions.
-```console
+```azurecli
az k8sconfiguration create --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --operator-instance-name cluster-config --operator-namespace cluster-config --repository-url https://github.com/Azure/arc-k8s-demo --scope cluster --cluster-type connectedClusters ```
@@ -175,7 +175,7 @@ For more information, see [Flux documentation](https://aka.ms/FluxcdReadme).
Using the Azure CLI validate that the `sourceControlConfiguration` was successfully created.
-```console
+```azurecli
az k8sconfiguration show --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters ```
@@ -347,7 +347,7 @@ Delete a `sourceControlConfiguration` using the Azure CLI or Azure portal. Afte
> After a sourceControlConfiguration with namespace scope is created, it's possible for users with `edit` role binding on the namespace to deploy workloads on this namespace. When this `sourceControlConfiguration` with namespace scope gets deleted, the namespace is left intact and will not be deleted to avoid breaking these other workloads. If needed you can delete this namespace manually with kubectl. > Any changes to the cluster that were the result of deployments from the tracked Git repo are not deleted when the `sourceControlConfiguration` is deleted.
-```console
+```azurecli
az k8sconfiguration delete --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters ```
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/manage-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-agent.md
@@ -1,7 +1,7 @@
--- title: Managing the Azure Arc enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Arc enabled servers Connected Machine agent.
-ms.date: 12/21/2020
+ms.date: 01/21/2021
ms.topic: conceptual ---
@@ -29,7 +29,74 @@ For servers or machines you no longer want to manage with Azure Arc enabled serv
* Using the [Azure CLI](../../azure-resource-manager/management/delete-resource-group.md?tabs=azure-cli#delete-resource) or [Azure PowerShell](../../azure-resource-manager/management/delete-resource-group.md?tabs=azure-powershell#delete-resource). For the`ResourceType` parameter use `Microsoft.HybridCompute/machines`.
-3. Uninstall the agent from the machine or server. Follow the steps below.
+3. [Uninstall the agent](#remove-the-agent) from the machine or server following the steps below.
+
+## Renaming a machine
+
+When you change the name of the Linux or Windows machine connected to Azure Arc enabled servers, the new name is not recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you have to delete the resource and re-create it in order to use the new name.
+
+For Arc enabled servers, you must remove the VM extensions before renaming the machine.
+
+> [!NOTE]
+> While installed extensions continue to run and perform their normal operation after this procedure is complete, you won't be able to manage them. If you attempt to redeploy the extensions on the machine, you may experience unpredictable behavior.
+
+> [!WARNING]
+> We recommend you avoid renaming the machine's computer name and only perform this procedure if absolutely necessary.
+
+The steps below summarize the computer rename procedure.
+
+1. Audit the VM extensions installed on the machine and note their configuration, using the [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed) or using [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed).
+
+2. Remove the VM extensions using PowerShell, the Azure CLI, or from the Azure portal.
+
+ > [!NOTE]
+ > If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy Guest Configuration policy, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers) and after the renamed machine is registered with Arc enabled servers.
+
+3. Disconnect the machine from Arc enabled servers using PowerShell, the Azure CLI, or from the portal.
+
+4. Rename the computer.
+
+5. Connect the machine with Arc enabled servers using the `Azcmagent` tool to register and create a new resource in Azure.
+
+6. Deploy VM extensions previously installed on the target machine.
+
+Use the following steps to complete this task.
+
+1. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#uninstall-extension), using the [Azure CLI](manage-vm-extensions-cli.md#remove-an-installed-extension), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-an-installed-extension).
+
+2. Use one of the following methods to disconnect the machine from Azure Arc. Disconnecting the machine from Arc enabled servers does not remove the Connected Machine agent, and you do not need to remove the agent as part of this process. Any VM extensions that are deployed to the machine continue to work during this process.
+
+ # [Azure portal](#tab/azure-portal)
+
+ 1. From your browser, go to the [Azure portal](https://portal.azure.com).
+ 1. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
+ 1. From the selected registered Arc enabled server, select **Delete** from the top bar to delete the resource in Azure.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az resource delete \
+ --resource-group ExampleResourceGroup \
+ --name ExampleArcMachine \
+ --resource-type "Microsoft.HybridCompute/machines"
+ ```
+
+ # [Azure PowerShell](#tab/azure-powershell)
+
+ ```powershell
+ Remove-AzResource `
+ -ResourceGroupName ExampleResourceGroup `
+ -ResourceName ExampleArcMachine `
+ -ResourceType Microsoft.HybridCompute/machines
+ ```
+
+3. Rename the machine's computer name.
+
+### After renaming operation
+
+After a machine has been renamed, the Connected Machine agent needs to be re-registered with Arc enabled servers. Run the `azcmagent` tool with the [Connect](#connect) parameter to complete this step.
+
+Redeploy the VM extensions that were originally deployed to the machine from Arc enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy Guest Configuration policy, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
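The re-registration step can be sketched as follows, run on the renamed machine itself (the resource group, location, and GUID values are illustrative placeholders):

```console
# Re-register the renamed machine with Arc enabled servers.
# All values below are placeholders; substitute your own.
azcmagent connect \
  --resource-group "ExampleResourceGroup" \
  --tenant-id "00000000-0000-0000-0000-000000000000" \
  --location "eastus" \
  --subscription-id "00000000-0000-0000-0000-000000000000"
```

This creates a new `Microsoft.HybridCompute/machines` resource in Azure under the machine's current computer name.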
## Upgrading agent
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample ms.custom: subject-policy-compliancecontrols ---
azure-australia https://docs.microsoft.com/en-us/azure/azure-australia/vpn-gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-australia/vpn-gateway.md
@@ -200,4 +200,4 @@ This article covered the specific configuration of VPN Gateway to meet the requi
- [Azure virtual network gateway overview](../vpn-gateway/index.yml) - [What is VPN Gateway?](../vpn-gateway/vpn-gateway-about-vpngateways.md) - [Create a virtual network with a site-to-site VPN connection by using PowerShell](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md) -- [Create and manage a VPN gateway](../vpn-gateway/vpn-gateway-tutorial-create-gateway-powershell.md)\ No newline at end of file
+- [Create and manage a VPN gateway](../vpn-gateway/tutorial-create-gateway-portal.md)
\ No newline at end of file
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: yegu-ms ms.author: yegu
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/configure-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/configure-monitoring.md
@@ -229,7 +229,7 @@ az functionapp config appsettings delete --name <FUNCTION_APP_NAME> \
For a function app to send data to Application Insights, it needs to know the instrumentation key of an Application Insights resource. The key must be in an app setting named **APPINSIGHTS_INSTRUMENTATIONKEY**.
-When you create your function app [in the Azure portal](functions-create-first-azure-function.md), from the command line by using [Azure Functions Core Tools](./create-first-function-cli-csharp.md), or by using [Visual Studio Code](./create-first-function-vs-code-csharp.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.
+When you create your function app [in the Azure portal](./functions-get-started.md), from the command line by using [Azure Functions Core Tools](./create-first-function-cli-csharp.md), or by using [Visual Studio Code](./create-first-function-vs-code-csharp.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.
### New function app in the portal
@@ -283,4 +283,4 @@ To learn more about monitoring, see:
+ [Application Insights](/azure/application-insights/)
-[host.json]: functions-host-json.md
+[host.json]: functions-host-json.md
\ No newline at end of file
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/consumption-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/consumption-plan.md
@@ -29,10 +29,10 @@ When you create a function app in the Azure portal, the Consumption plan is the
Use the following links to learn how to create a serverless function app in a Consumption plan, either programmatically or in the Azure portal: + [Azure CLI](./scripts/functions-cli-create-serverless.md)
-+ [Azure portal](functions-create-first-azure-function.md)
++ [Azure portal](./functions-get-started.md) + [Azure Resource Manager template](functions-create-first-function-resource-manager.md)
-You can also create function apps in a Consumption plan when you publish a Functions project from [Visual Studio Code](functions-create-first-function-vs-code.md#publish-the-project-to-azure) or [Visual Studio](functions-create-your-first-function-visual-studio.md#publish-the-project-to-azure).
+You can also create function apps in a Consumption plan when you publish a Functions project from [Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) or [Visual Studio](functions-create-your-first-function-visual-studio.md#publish-the-project-to-azure).
## Multiple apps in the same plan
@@ -41,4 +41,4 @@ Function apps in the same region can be assigned to the same Consumption plan. T
## Next steps + [Azure Functions hosting options](functions-scale.md)
-+ [Event-driven scaling in Azure Functions](event-driven-scaling.md)
++ [Event-driven scaling in Azure Functions](event-driven-scaling.md)\ No newline at end of file
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-premium-plan-function-app-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-premium-plan-function-app-portal.md
@@ -30,4 +30,4 @@ At this point, you can create functions in the new function app. These functions
## Next steps > [!div class="nextstepaction"]
-> [Add an HTTP triggered function](functions-create-first-azure-function.md#create-function)
+> [Add an HTTP triggered function](./functions-get-started.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-compare-logic-apps-ms-flow-webjobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
@@ -74,7 +74,7 @@ Azure Functions is built on the WebJobs SDK, so it shares many of the same event
| | Functions | WebJobs with WebJobs SDK | | --- | --- | --- | |**[Serverless app model](https://azure.microsoft.com/solutions/serverless/) with [automatic scaling](event-driven-scaling.md)**|✔||
-|**[Develop and test in browser](functions-create-first-azure-function.md)** |✔||
+|**[Develop and test in browser](./functions-get-started.md)** |✔||
|**[Pay-per-use pricing](consumption-plan.md)**|✔|| |**[Integration with Logic Apps](functions-twitter-email.md)**|✔|| | **Trigger events** |[Timer](functions-bindings-timer.md)<br>[Azure Storage queues and blobs](functions-bindings-storage-blob.md)<br>[Azure Service Bus queues and topics](functions-bindings-service-bus.md)<br>[Azure Cosmos DB](functions-bindings-cosmosdb.md)<br>[Azure Event Hubs](functions-bindings-event-hubs.md)<br>[HTTP/WebHook (GitHub, Slack)](functions-bindings-http-webhook.md)<br>[Azure Event Grid](functions-bindings-event-grid.md)|[Timer](functions-bindings-timer.md)<br>[Azure Storage queues and blobs](functions-bindings-storage-blob.md)<br>[Azure Service Bus queues and topics](functions-bindings-service-bus.md)<br>[Azure Cosmos DB](functions-bindings-cosmosdb.md)<br>[Azure Event Hubs](functions-bindings-event-hubs.md)<br>[File system](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Files/FileTriggerAttribute.cs)|
@@ -119,4 +119,4 @@ Get started by creating your first flow, logic app, or function app. Select any
* [Get started with Power Automate](/power-automate/getting-started) * [Create a logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-* [Create your first Azure function](functions-create-first-azure-function.md)
\ No newline at end of file
+* [Create your first Azure function](./functions-get-started.md)
\ No newline at end of file
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-serverless-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-serverless-api.md
@@ -81,7 +81,7 @@ In this section, you create a new proxy, which serves as a frontend to your over
### Setting up the frontend environment
-Repeat the steps to [Create a function app](./functions-create-first-azure-function.md#create-a-function-app) to create a new function app in which you will create your proxy. This new app's URL serves as the frontend for our API, and the function app you were previously editing serves as a backend.
+Repeat the steps to [Create a function app](./functions-get-started.md) to create a new function app in which you will create your proxy. This new app's URL serves as the frontend for our API, and the function app you were previously editing serves as a backend.
1. Navigate to your new frontend function app in the portal. 1. Select **Platform Features** and choose **Application Settings**.
@@ -192,5 +192,5 @@ The following references may be helpful as you develop your API further:
- [Documenting an Azure Functions API (preview)](./functions-openapi-definition.md)
-[Create your first function]: ./functions-create-first-azure-function.md
-[Working with Azure Functions Proxies]: ./functions-proxies.md
\ No newline at end of file
+[Create your first function]: ./functions-get-started.md
+[Working with Azure Functions Proxies]: ./functions-proxies.md
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-deployment-technologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-deployment-technologies.md
@@ -177,7 +177,7 @@ You can use FTP to directly transfer files to Azure Functions.
In the portal-based editor, you can directly edit the files that are in your function app (essentially deploying every time you save your changes).
->__How to use it:__ To be able to edit your functions in the Azure portal, you must have [created your functions in the portal](functions-create-first-azure-function.md). To preserve a single source of truth, using any other deployment method makes your function read-only and prevents continued portal editing. To return to a state in which you can edit your files in the Azure portal, you can manually turn the edit mode back to `Read/Write` and remove any deployment-related application settings (like `WEBSITE_RUN_FROM_PACKAGE`).
+>__How to use it:__ To be able to edit your functions in the Azure portal, you must have [created your functions in the portal](./functions-get-started.md). To preserve a single source of truth, using any other deployment method makes your function read-only and prevents continued portal editing. To return to a state in which you can edit your files in the Azure portal, you can manually turn the edit mode back to `Read/Write` and remove any deployment-related application settings (like `WEBSITE_RUN_FROM_PACKAGE`).
>__When to use it:__ The portal is a good way to get started with Azure Functions. For more intense development work, we recommend that you use one of the following client tools: >
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-infrastructure-as-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
@@ -688,7 +688,7 @@ Learn more about how to develop and configure Azure Functions.
* [Azure Functions developer reference](functions-reference.md) * [How to configure Azure function app settings](functions-how-to-use-azure-function-app-settings.md)
-* [Create your first Azure function](functions-create-first-azure-function.md)
+* [Create your first Azure function](./functions-get-started.md)
<!-- LINKS -->
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-integrate-storage-queue-output-binding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-integrate-storage-queue-output-binding.md
@@ -18,13 +18,13 @@ To complete this quickstart:
- An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- Follow the directions in [Create your first function from the Azure portal](functions-create-first-azure-function.md) and don't do the **Clean up resources** step. That quickstart creates the function app and function that you use here.
+- Follow the directions in [Create your first function from the Azure portal](./functions-get-started.md) and don't do the **Clean up resources** step. That quickstart creates the function app and function that you use here.
## <a name="add-binding"></a>Add an output binding In this section, you use the portal UI to add a queue storage output binding to the function you created earlier. This binding makes it possible to write minimal code to create a message in a queue. You don't have to write code for tasks such as opening a storage connection, creating a queue, or getting a reference to a queue. The Azure Functions runtime and queue output binding take care of those tasks for you.
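The portal writes this binding configuration for you; as a sketch, the resulting *function.json* entry for an HTTP-triggered function with a queue output binding might look like the following (the queue name and binding names here are illustrative, not from the source):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get", "post" ],
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "msg",
      "queueName": "outqueue",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```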
-1. In the Azure portal, open the function app page for the function app that you created in [Create your first function from the Azure portal](functions-create-first-azure-function.md). To open the page, search for and select **Function App**. Then, select your function app.
+1. In the Azure portal, open the function app page for the function app that you created in [Create your first function from the Azure portal](./functions-get-started.md). To open the page, search for and select **Function App**. Then, select your function app.
1. Select your function app, and then select the function that you created in that earlier quickstart.
@@ -94,7 +94,7 @@ In this section, you add code that writes a message to the output queue. The mes
Notice that the **Request body** contains the `name` value *Azure*. This value appears in the queue message that is created when the function is invoked.
- As an alternative to selecting **Run** here, you can call the function by entering a URL in a browser and specifying the `name` value in the query string. The browser method is shown in the [previous quickstart](functions-create-first-azure-function.md#test-the-function).
+ As an alternative to selecting **Run** here, you can call the function by entering a URL in a browser and specifying the `name` value in the query string. The browser method is shown in the [previous quickstart](./functions-get-started.md).
1. Check the logs to make sure that the function succeeded.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-node.md
@@ -562,7 +562,7 @@ Add your own environment variables to a function app, in both your local and clo
### In local development environment
-When running locally, your functions project includes a [`local.settings.json` file](/azure/azure-functions/functions-run-local), where you store your environment variables in the `Values` object.
+When running locally, your functions project includes a [`local.settings.json` file](./functions-run-local.md), where you store your environment variables in the `Values` object.
```json {
@@ -808,4 +808,4 @@ For more information, see the following resources:
+ [Azure Functions developer reference](functions-reference.md) + [Azure Functions triggers and bindings](functions-triggers-bindings.md)
-[`func azure functionapp publish`]: functions-run-local.md#project-file-deployment
+[`func azure functionapp publish`]: functions-run-local.md#project-file-deployment
\ No newline at end of file
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-test-a-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-test-a-function.md
@@ -32,8 +32,8 @@ The following example describes how to create a C# Function app in Visual Studio
To set up your environment, create a Function and test app. The following steps help you create the apps and functions required to support the tests:
-1. [Create a new Functions app](./functions-create-first-azure-function.md) and name it **Functions**
-2. [Create an HTTP function from the template](./functions-create-first-azure-function.md) and name it **MyHttpTrigger**.
+1. [Create a new Functions app](./functions-get-started.md) and name it **Functions**
+2. [Create an HTTP function from the template](./functions-get-started.md) and name it **MyHttpTrigger**.
3. [Create a timer function from the template](./functions-create-scheduled-function.md) and name it **MyTimerTrigger**. 4. [Create an xUnit Test app](https://xunit.net/docs/getting-started/netcore/cmdline) in the solution and name it **Functions.Tests**. 5. Use NuGet to add a reference from the test app to [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/)
@@ -383,4 +383,4 @@ Now that you've learned how to write automated tests for your functions, continu
- [Manually run a non HTTP-triggered function](./functions-manually-run-non-http.md) - [Azure Functions error handling](./functions-bindings-error-pages.md)-- [Azure Function Event Grid Trigger Local Debugging](./functions-debug-event-grid-trigger-local.md)
+- [Azure Function Event Grid Trigger Local Debugging](./functions-debug-event-grid-trigger-local.md)
\ No newline at end of file
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-twitter-email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-twitter-email.md
@@ -39,7 +39,7 @@ In this tutorial, you learn how to:
> or you can [create a Google client app to use for authentication in your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application). > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
-+ This article uses as its starting point the resources created in [Create your first function from the Azure portal](functions-create-first-azure-function.md).
++ This article uses as its starting point the resources created in [Create your first function from the Azure portal](./functions-get-started.md). If you haven't already done so, complete these steps now to create your function app. ## Create a Cognitive Services resource
@@ -302,4 +302,4 @@ Advance to the next tutorial to learn how to create a serverless API for your fu
> [!div class="nextstepaction"] > [Create a serverless API using Azure Functions](functions-create-serverless-api.md)
-To learn more about Logic Apps, see [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
+To learn more about Logic Apps, see [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
\ No newline at end of file
azure-government https://docs.microsoft.com/en-us/azure/azure-government/azure-secure-isolation-guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/azure-secure-isolation-guidance.md
@@ -478,7 +478,7 @@ To route traffic between servers, which use PA addresses, on an underlying netwo
:::image type="content" source="./media/secure-isolation-fig10.png" alt-text="Separation of tenant network traffic using VNets"::: **Figure 10.** Separation of tenant network traffic using VNets
-**External traffic (orange line)** – For external traffic, Azure provides multiple layers of assurance to enforce isolation depending on traffic patterns. When a customer places a public IP on their VNet gateway, traffic from the public Internet or customer on-premises network that is destined for that IP address will be routed to an Internet Edge Router. Alternatively, when a customer establishes private peering over an ExpressRoute connection, it is connected with an Azure VNet via VNet Gateway. This set-up aligns connectivity from the physical circuit and makes the private IP address space from the on-premises location addressable. Azure then uses Border Gateway Protocol (BGP) to share routing details with the on-premises network to establish end-to-end connectivity. When communication begins with a resource within the VNet, the network traffic traverses as normal until it reaches a Microsoft ExpressRoute Edge (MSEE) Router. In both cases, VNets provide the means for Azure VMs to act as part of customer’s on-premises network. A cryptographically protected [IPsec/IKE tunnel](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) is established between Azure and customer’s internal network (e.g., via [Azure VPN Gateway](../vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md) or [Azure ExpressRoute Private Peering](../virtual-wan/vpn-over-expressroute.md)), enabling the VM to connect securely to customer’s on-premises resources as though it was directly on that network.
+**External traffic (orange line)** – For external traffic, Azure provides multiple layers of assurance to enforce isolation depending on traffic patterns. When a customer places a public IP on their VNet gateway, traffic from the public Internet or customer on-premises network that is destined for that IP address will be routed to an Internet Edge Router. Alternatively, when a customer establishes private peering over an ExpressRoute connection, it is connected with an Azure VNet via VNet Gateway. This set-up aligns connectivity from the physical circuit and makes the private IP address space from the on-premises location addressable. Azure then uses Border Gateway Protocol (BGP) to share routing details with the on-premises network to establish end-to-end connectivity. When communication begins with a resource within the VNet, the network traffic traverses as normal until it reaches a Microsoft ExpressRoute Edge (MSEE) Router. In both cases, VNets provide the means for Azure VMs to act as part of customer’s on-premises network. A cryptographically protected [IPsec/IKE tunnel](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) is established between Azure and customer’s internal network (e.g., via [Azure VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md) or [Azure ExpressRoute Private Peering](../virtual-wan/vpn-over-expressroute.md)), enabling the VM to connect securely to customer’s on-premises resources as though it was directly on that network.
At the Internet Edge Router or the MSEE Router, the packet is encapsulated using Generic Routing Encapsulation (GRE). This encapsulation uses a unique identifier specific to the VNet destination and the destination address, which is used to appropriately route the traffic to the identified VNet. Upon reaching the VNet Gateway, which is a special VNet used only to accept traffic from outside of an Azure VNet, the encapsulation is verified by the Azure network fabric to ensure: a) the endpoint receiving the packet is a match to the unique VNet ID used to route the data, and b) the destination address requested exists in this VNet. Once verified, the packet is routed as internal traffic from the VNet Gateway to the final requested destination address within the VNet. This approach ensures that traffic from external networks travels only to Azure VNet for which it is destined, enforcing isolation.
@@ -547,7 +547,7 @@ TLS provides strong authentication, message privacy, and integrity. [Perfect Fo
**In-transit encryption for VMs:** Remote sessions to Windows and Linux VMs deployed in Azure can be conducted over protocols that ensure data encryption in transit. For example, the [Remote Desktop Protocol](/windows/win32/termserv/remote-desktop-protocol) (RDP) initiated from a client computer to Windows and Linux VMs enables TLS protection for data in transit. Customers can also use [Secure Shell](../virtual-machines/linux/ssh-from-windows.md) (SSH) to connect to Linux VMs running in Azure. SSH is an encrypted connection protocol available by default for remote management of Linux VMs hosted in Azure. > [!IMPORTANT]
-> Customers should review best practices for network security, including guidance for **[disabling RDP/SSH access to Virtual Machines](../security/fundamentals/network-best-practices.md#disable-rdpssh-access-to-virtual-machines)** from the Internet to mitigate brute force attacks to gain access to Azure Virtual Machines. Accessing VMs for remote management can then be accomplished via **[point-to-site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)**, **[site-to-site VPN](../vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md)**, or **[ExpressRoute](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)**.
+> Customers should review best practices for network security, including guidance for **[disabling RDP/SSH access to Virtual Machines](../security/fundamentals/network-best-practices.md#disable-rdpssh-access-to-virtual-machines)** from the Internet to mitigate brute force attacks to gain access to Azure Virtual Machines. Accessing VMs for remote management can then be accomplished via **[point-to-site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)**, **[site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md)**, or **[ExpressRoute](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)**.
**Azure Storage transactions:** When interacting with Azure Storage through the Azure portal, all transactions take place over HTTPS. Moreover, customers can configure their storage accounts to accept requests only from secure connections by setting the &#8220;[secure transfer required](../storage/common/storage-require-secure-transfer.md)&#8221; property for the storage account. The &#8220;secure transfer required&#8221; option is enabled by default when creating a Storage account in the Azure portal.
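The &#8220;secure transfer required&#8221; property described above can also be set from the command line. A minimal sketch using the Azure CLI; the account and resource group names are hypothetical, and the cloud call itself is left commented out because it needs an authenticated session:

```shell
# Hypothetical resource names; substitute your own.
STORAGE_ACCOUNT="mystorageaccount"
RESOURCE_GROUP="myresourcegroup"

# Compose the Azure CLI call that enforces HTTPS-only access
# ("secure transfer required") on the storage account.
CMD="az storage account update --name $STORAGE_ACCOUNT --resource-group $RESOURCE_GROUP --https-only true"
echo "$CMD"

# Run it against a real subscription (requires `az login`):
# eval "$CMD"
# Afterwards, `az storage account show --query enableHttpsTrafficOnly` should return true.
```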
@@ -556,7 +556,7 @@ TLS provides strong authentication, message privacy, and integrity. [Perfect Fo
#### Customer's datacenter connection to Azure region **VPN encryption:** [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure Virtual Machines (VMs) to act as part of a customer's internal (on-premises) network. With VNet, customers choose the address ranges of non-globally-routable IP addresses to be assigned to the VMs so that they will not collide with addresses the customer is using elsewhere. Customers have options to securely connect to a VNet from their on-premises infrastructure or remote locations. -- **Site-to-Site** (IPsec/IKE VPN tunnel) – A cryptographically protected &#8220;tunnel&#8221; is established between Azure and the customer's internal network, allowing an Azure VM to connect to the customer's back-end resources as though it was directly on that network. This type of connection requires a [VPN device](../vpn-gateway/vpn-gateway-vpn-faq.md#s2s) located on-premises that has an externally facing public IP address assigned to it. Customers can use [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) to send encrypted traffic between their VNet and their on-premises infrastructure across the public Internet, e.g., a [site-to-site VPN](../vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md) relies on IPsec for transport encryption. Azure VPN Gateway supports a wide range of encryption algorithms that are FIPS 140-2 validated. Moreover, customers can configure Azure VPN Gateway to use [custom IPsec/IKE policy](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) with specific cryptographic algorithms and key strengths instead of relying on the default Azure policies. IPsec encrypts data at the IP level (Network Layer 3).
+- **Site-to-Site** (IPsec/IKE VPN tunnel) – A cryptographically protected &#8220;tunnel&#8221; is established between Azure and the customer's internal network, allowing an Azure VM to connect to the customer's back-end resources as though it was directly on that network. This type of connection requires a [VPN device](../vpn-gateway/vpn-gateway-vpn-faq.md#s2s) located on-premises that has an externally facing public IP address assigned to it. Customers can use [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) to send encrypted traffic between their VNet and their on-premises infrastructure across the public Internet, e.g., a [site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md) relies on IPsec for transport encryption. Azure VPN Gateway supports a wide range of encryption algorithms that are FIPS 140-2 validated. Moreover, customers can configure Azure VPN Gateway to use [custom IPsec/IKE policy](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) with specific cryptographic algorithms and key strengths instead of relying on the default Azure policies. IPsec encrypts data at the IP level (Network Layer 3).
- **Point-to-Site** (VPN over SSTP, OpenVPN, and IPsec) – A secure connection is established from an individual client computer to the customer's VNet using Secure Socket Tunneling Protocol (SSTP), OpenVPN, or IPsec. As part of the [Point-to-Site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md) configuration, customers need to install a certificate and a VPN client configuration package, which allow the client computer to connect to any VM within the VNet. [Point-to-Site VPN](../vpn-gateway/point-to-site-about.md) connections do not require a VPN device or a public facing IP address. In addition to controlling the type of algorithm that is supported for VPN connections, Azure provides customers with the ability to enforce that all traffic leaving a VNet may only be routed through a VNet Gateway (e.g., Azure VPN Gateway). This enforcement allows customers to ensure that traffic may not leave a VNet without being encrypted. A VPN Gateway can be used for [VNet-to-VNet](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses [Pre-Shared Key (PSK) authentication](../vpn-gateway/vpn-gateway-vpn-faq.md#how-does-my-vpn-tunnel-get-authenticated) whereby Microsoft generates a PSK when the VPN tunnel is created. Customers can change the autogenerated PSK to their own.
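The custom IPsec/IKE policy mentioned above is applied per connection. A hedged Azure CLI sketch follows; the connection and resource group names are hypothetical, and the algorithm choices are illustrative examples rather than recommended defaults:

```shell
# Hypothetical names; the site-to-site connection must already exist.
CONNECTION="VNet1toSite1"
RESOURCE_GROUP="myresourcegroup"

# Compose a custom IPsec/IKE policy for the connection.
# SA lifetime is in seconds; SA max size is in KB.
CMD="az network vpn-connection ipsec-policy add \
  --connection-name $CONNECTION --resource-group $RESOURCE_GROUP \
  --ike-encryption AES256 --ike-integrity SHA384 --dh-group DHGroup24 \
  --ipsec-encryption GCMAES256 --ipsec-integrity GCMAES256 \
  --pfs-group PFS24 --sa-lifetime 3600 --sa-max-size 102400000"
echo "$CMD"

# Apply it against a real subscription (requires `az login`):
# eval "$CMD"
```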
@@ -847,4 +847,4 @@ In line with the shared responsibility model in cloud computing, this article pr
Learn more about: - [Azure Security](../security/fundamentals/overview.md) - [Azure Compliance](../compliance/index.yml)-- [Azure Government developer guidance](./documentation-government-developer-guide.md)
+- [Azure Government developer guidance](./documentation-government-developer-guide.md)
\ No newline at end of file
azure-government https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-impact-level-5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
@@ -73,7 +73,7 @@ Azure Cognitive Search supports Impact Level 5 workloads in Azure Government wit
Azure Machine Learning supports Impact Level 5 workloads in Azure Government with this configuration: -- Configure encryption at rest of content in Azure Machine Learning by using customer-managed keys in Azure Key Vault. Azure Machine Learning stores snapshots, output, and logs in the Azure Blob Storage account that's associated with the Azure Machine Learning workspace and customer subscription. All the data stored in Azure Blob Storage is [encrypted at rest with Microsoft-managed keys](../machine-learning/concept-enterprise-security.md#data-encryption). Customers can use their own keys for data stored in Azure Blob Storage. See [Configure encryption with customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md).
+- Configure encryption at rest of content in Azure Machine Learning by using customer-managed keys in Azure Key Vault. Azure Machine Learning stores snapshots, output, and logs in the Azure Blob Storage account that's associated with the Azure Machine Learning workspace and customer subscription. All the data stored in Azure Blob Storage is [encrypted at rest with Microsoft-managed keys](../machine-learning/concept-enterprise-security.md). Customers can use their own keys for data stored in Azure Blob Storage. See [Configure encryption with customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md).
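Pointing the workspace-associated Blob Storage account at a customer-managed key can be sketched with the Azure CLI. All names below are hypothetical, the Key Vault key must already exist, and the storage account's identity needs access to it; the actual call is commented out because it requires an authenticated session:

```shell
# Hypothetical resource names; substitute your own.
STORAGE_ACCOUNT="mymlstorageaccount"
RESOURCE_GROUP="myresourcegroup"
VAULT_URI="https://myvault.vault.azure.net"

# Compose the call that switches encryption from Microsoft-managed keys
# to a customer-managed key held in Azure Key Vault.
CMD="az storage account update \
  --name $STORAGE_ACCOUNT --resource-group $RESOURCE_GROUP \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault $VAULT_URI \
  --encryption-key-name mykey --encryption-key-version <key-version>"
echo "$CMD"

# Run it against a real subscription (requires `az login`):
# eval "$CMD"
```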
### [Cognitive Services: Computer Vision](https://azure.microsoft.com/services/cognitive-services/computer-vision/)
@@ -125,7 +125,7 @@ Azure Analysis Services supports Impact Level 5 workloads in Azure Government wi
Azure Data Explorer supports Impact Level 5 workloads in Azure Government with this configuration: -- Data in Azure Data Explorer clusters in Azure is secured and encrypted with Microsoft-managed keys by default. For additional control over encryption keys, you can supply customer-managed keys to use for data encryption and manage [encryption of your data](https://docs.microsoft.com/azure/data-explorer/security#data-encryption) at the storage level with your own keys.
+- Data in Azure Data Explorer clusters in Azure is secured and encrypted with Microsoft-managed keys by default. For additional control over encryption keys, you can supply customer-managed keys to use for data encryption and manage [encryption of your data](/azure/data-explorer/security#data-encryption) at the storage level with your own keys.
### [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/)
@@ -349,7 +349,7 @@ Azure Logic Apps supports Impact Level 5 workloads in Azure Government. To meet
### [Event Grid](https://azure.microsoft.com/services/event-grid/)
-Azure Event Grid can persist customer content for no more than 24 hours. For more information, see [Authenticate event delivery to event handlers](https://docs.microsoft.com/azure/event-grid/security-authentication#encryption-at-rest). All data written to disk is encrypted with Microsoft-managed keys.
+Azure Event Grid can persist customer content for no more than 24 hours. For more information, see [Authenticate event delivery to event handlers](../event-grid/security-authentication.md). All data written to disk is encrypted with Microsoft-managed keys.
Azure Event Grid supports Impact Level 5 workloads in Azure Government with no additional configuration required.
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/create-data-source-android-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/create-data-source-android-sdk.md
@@ -350,10 +350,10 @@ A vector tile source describes how to access a vector tile layer. Use the `Vecto
Azure Maps adheres to the [Mapbox Vector Tile Specification](https://github.com/mapbox/vector-tile-spec), an open standard. Azure Maps provides the following vector tiles services as part of the platform: -- Road tiles [documentation](https://docs.microsoft.com/rest/api/maps/renderv2/getmaptilepreview) | [data format details](https://developer.tomtom.com/maps-api/maps-api-documentation-vector/tile)-- Traffic incidents [documentation](https://docs.microsoft.com/rest/api/maps/traffic/gettrafficincidenttile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-incidents/vector-incident-tiles)-- Traffic flow [documentation](https://docs.microsoft.com/rest/api/maps/traffic/gettrafficflowtile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-flow/vector-flow-tiles)-- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Get Tile Render V2](https://docs.microsoft.com/rest/api/maps/renderv2/getmaptilepreview)
+- Road tiles [documentation](/rest/api/maps/renderv2/getmaptilepreview) | [data format details](https://developer.tomtom.com/maps-api/maps-api-documentation-vector/tile)
+- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-incidents/vector-incident-tiles)
+- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-flow/vector-flow-tiles)
+- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Get Tile Render V2](/rest/api/maps/renderv2/getmaptilepreview)
To display data from a vector tile source on the map, connect the source to one of the data rendering layers. All layers that use a vector source must specify a `sourceLayer` value in the options. The following code loads the Azure Maps traffic flow vector tile service as a vector tile source, then displays it on a map using a line layer. This vector tile source has a single set of data in the source layer called "Traffic flow". The line data in this data set has a property called `traffic_level` that is used in this code to select the color and scale the size of lines.
@@ -518,4 +518,4 @@ See the following articles for more code samples to add to your maps:
> [Add a heat map](map-add-heat-map-layer-android.md) > [!div class="nextstepaction"]
-> [Web SDK Code samples](https://docs.microsoft.com/samples/browse/?products=azure-maps)
+> [Web SDK Code samples](/samples/browse/?products=azure-maps)
\ No newline at end of file
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/how-to-request-elevation-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-elevation-data.md
@@ -18,7 +18,7 @@ ms.custom: mvc
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-The Azure Maps [Elevation service](https://docs.microsoft.com/rest/api/maps/elevation) provides APIs to query elevation data anywhere on the Earth's surface. You can request sampled elevation data along paths, within a defined bounding box, or at specific coordinates. Also, you can use the [Render V2 - Get Map Tile API](https://docs.microsoft.com/rest/api/maps/renderv2) to retrieve elevation data in tile format. The tiles are delivered in GeoTIFF raster format. This article shows you how to use Azure Maps Elevation service and the Get Map Tile API to request elevation data. The elevation data can be requested in both GeoJSON and GeoTiff formats.
+The Azure Maps [Elevation service](/rest/api/maps/elevation) provides APIs to query elevation data anywhere on the Earth's surface. You can request sampled elevation data along paths, within a defined bounding box, or at specific coordinates. Also, you can use the [Render V2 - Get Map Tile API](/rest/api/maps/renderv2) to retrieve elevation data in tile format. The tiles are delivered in GeoTIFF raster format. This article shows you how to use Azure Maps Elevation service and the Get Map Tile API to request elevation data. The elevation data can be requested in both GeoJSON and GeoTiff formats.
## Prerequisites
@@ -31,7 +31,7 @@ This article uses the [Postman](https://www.postman.com/) application, but you m
## Request elevation data in raster tiled format
-To request elevation data in raster tile format, use the [Render V2 - Get Map Tile API](https://docs.microsoft.com/rest/api/maps/renderv2). If the tile can be found, the API returns the tile as a GeoTIFF. Otherwise, the API returns 0. All raster DEM tiles are using the geoid (sea level) Earth mode. In this example, we'll request elevation data for Mt. Everest.
+To request elevation data in raster tile format, use the [Render V2 - Get Map Tile API](/rest/api/maps/renderv2). If the tile can be found, the API returns the tile as a GeoTIFF. Otherwise, the API returns 0. All raster DEM tiles are using the geoid (sea level) Earth mode. In this example, we'll request elevation data for Mt. Everest.
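As a sketch, the raster tile request described above can be composed as follows. The `microsoft.dem` tileset ID and the zoom/x/y tile indices are assumptions for illustration; pick real tile indices using the zoom-levels article, and replace the key placeholder before running:

```shell
KEY="{Azure-Maps-Primary-Subscription-key}"   # placeholder from the article

# Tile indices below are illustrative, not the actual Mt. Everest tile.
URL="https://atlas.microsoft.com/map/tile?api-version=2.0&tilesetId=microsoft.dem&zoom=13&x=6074&y=3432&subscription-key=${KEY}"
echo "$URL"

# Fetch the GeoTIFF tile (requires a valid subscription key):
# curl -s -o dem_tile.tiff "$URL"
```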
>[!TIP] >To retrieve a tile at a specific area on the world map, you'll need to find the correct tile at the appropriate zoom level. Note also that WorldDEM covers the entire global landmass but does not cover oceans. For more information, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
@@ -54,9 +54,9 @@ Use the Elevation service (Preview) APIs to request elevation data in GeoJSON fo
* [Get Data for Points](/rest/api/maps/elevation/getdataforpoints) * [Post Data for Points](/rest/api/maps/elevation/postdataforpoints)
-* [Get Data for Polyline](https://docs.microsoft.com/rest/api/maps/elevation/getdataforpolyline)
-* [Post Data for Polyline](https://docs.microsoft.com/rest/api/maps/elevation/postdataforpolyline)
-* [Get Data for Bounding Box](https://docs.microsoft.com/rest/api/maps/elevation/getdataforboundingbox)
+* [Get Data for Polyline](/rest/api/maps/elevation/getdataforpolyline)
+* [Post Data for Polyline](/rest/api/maps/elevation/postdataforpolyline)
+* [Get Data for Bounding Box](/rest/api/maps/elevation/getdataforboundingbox)
>[!IMPORTANT] > When no data can be returned, all APIs return `0`.
@@ -122,11 +122,11 @@ In this example, we'll use the [Get Data for Points API](/rest/api/maps/elevatio
### Request elevation data samples along a Polyline
-In this example, we'll use the [Get Data for Polyline](https://docs.microsoft.com/rest/api/maps/elevation/getdataforpolyline) to request five equally spaced samples of elevation data along a straight line between coordinates at Mt. Everest and Chamlang mountains. Both coordinates must be defined in Long/Lat format. If you don't specify a value for the `samples` parameter, the number of samples defaults to 10. The maximum number of samples is 2,000.
+In this example, we'll use the [Get Data for Polyline](/rest/api/maps/elevation/getdataforpolyline) to request five equally spaced samples of elevation data along a straight line between coordinates at Mt. Everest and Chamlang mountains. Both coordinates must be defined in Long/Lat format. If you don't specify a value for the `samples` parameter, the number of samples defaults to 10. The maximum number of samples is 2,000.
Then, we'll use the Get Data for Polyline to request three equally spaced samples of elevation data along a path. We'll define the precise location for the samples by passing in three Long/Lat coordinate pairs.
-Finally, we'll use the [Post Data For Polyline API](https://docs.microsoft.com/rest/api/maps/elevation/postdataforpolyline) to request elevation data at the same three equally spaced samples.
+Finally, we'll use the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to request elevation data at the same three equally spaced samples.
Latitudes and longitudes in the URL are expected to be in WGS84 (World Geodetic System) decimal degree.
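The Post Data For Polyline request described above can be sketched as a curl call. The URL shape follows the article's own example; the coordinate values in the body are illustrative placeholders (lon/lat order, WGS84), and the call itself is commented out because it needs a valid key:

```shell
KEY="{Azure-Maps-Primary-Subscription-key}"   # placeholder from the article
URL="https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key=${KEY}&samples=5"

# Coordinates are in longitude/latitude order (WGS84); values are illustrative.
BODY='[{"lon": 86.922, "lat": 27.988}, {"lon": 86.987, "lat": 27.775}]'
echo "$URL"

# POST the polyline (requires a valid subscription key):
# curl -s -X POST "$URL" -H "Content-Type: application/json" -d "$BODY"
```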
@@ -225,7 +225,7 @@ Latitudes and longitudes in the URL are expected to be in WGS84 (World Geodetic
} ```
-7. Now, we'll call the [Post Data For Polyline API](https://docs.microsoft.com/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. Select the **POST** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+7. Now, we'll call the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. Select the **POST** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
```http https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Azure-Maps-Primary-Subscription-key}&samples=5
@@ -252,7 +252,7 @@ Latitudes and longitudes in the URL are expected to be in WGS84 (World Geodetic
### Request elevation data by Bounding Box
-Now we'll use the [Get Data for Bounding Box](https://docs.microsoft.com/rest/api/maps/elevation/getdataforboundingbox) to request elevation data near Mt. Rainier, WA. The elevation data will be returned at equally spaced locations within a bounding box. The bounding area defined by (2) sets of lat/long coordinates (south latitude, west longitude | north latitude, east longitude) is divided into rows and columns. The edges of the bounding box account for two (2) of the rows and two (2) of the columns. Elevations are returned for the grid vertices created at row and column intersections. Up to 2000 elevations can be returned in a single request.
+Now we'll use the [Get Data for Bounding Box](/rest/api/maps/elevation/getdataforboundingbox) to request elevation data near Mt. Rainier, WA. The elevation data will be returned at equally spaced locations within a bounding box. The bounding area defined by (2) sets of lat/long coordinates (south latitude, west longitude | north latitude, east longitude) is divided into rows and columns. The edges of the bounding box account for two (2) of the rows and two (2) of the columns. Elevations are returned for the grid vertices created at row and column intersections. Up to 2000 elevations can be returned in a single request.
In this example, we'll specify rows=3 and columns=6. 18 elevation values are returned in the response. In the following diagram, the elevation values are ordered starting with the southwest corner, and then continue west to east and south to north. The elevation points are numbered in the order that they're returned.
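The bounding-box request above (rows=3, columns=6, 18 samples) can be composed like this. The `elevation/lattice/json` path and the `bounds` ordering (SW longitude, SW latitude, NE longitude, NE latitude) are assumptions based on the API names in this section, and the coordinates are illustrative values near Mt. Rainier:

```shell
KEY="{Azure-Maps-Primary-Subscription-key}"   # placeholder from the article

# rows=3 and columns=6 yield 3 * 6 = 18 elevation samples.
URL="https://atlas.microsoft.com/elevation/lattice/json?api-version=1.0&subscription-key=${KEY}&bounds=-121.66,46.75,-121.65,46.76&rows=3&columns=6"
echo "$URL"

# Fetch the grid (requires a valid subscription key):
# curl -s "$URL"
```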
@@ -487,15 +487,15 @@ To further explore the Azure Maps Elevation (Preview) APIs, see:
> [Elevation (Preview) - Get Data for Lat Long Coordinates](/rest/api/maps/elevation/getdataforpoints) > [!div class="nextstepaction"]
-> [Elevation (Preview) - Get Data for Bounding Box](https://docs.microsoft.com/rest/api/maps/elevation/getdataforboundingbox)
+> [Elevation (Preview) - Get Data for Bounding Box](/rest/api/maps/elevation/getdataforboundingbox)
> [!div class="nextstepaction"]
-> [Elevation (Preview) - Get Data for Polyline](https://docs.microsoft.com/rest/api/maps/elevation/getdataforpolyline)
+> [Elevation (Preview) - Get Data for Polyline](/rest/api/maps/elevation/getdataforpolyline)
> [!div class="nextstepaction"]
-> [Render V2 – Get Map Tile](https://docs.microsoft.com/rest/api/maps/renderv2)
+> [Render V2 – Get Map Tile](/rest/api/maps/renderv2)
For a complete list of Azure Maps REST APIs, see: > [!div class="nextstepaction"]
-> [Azure Maps REST APIs](https://docs.microsoft.com/rest/api/maps/)
+> [Azure Maps REST APIs](/rest/api/maps/)
\ No newline at end of file
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/how-to-secure-spa-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-spa-app.md
@@ -25,7 +25,7 @@ The following guide pertains to an application using Azure Active Directory (Azu
Create a secured web service application which is responsible for authentication to Azure AD.
-1. Create a function in the Azure portal. For more information, see [Create Azure Function](../azure-functions/functions-create-first-azure-function.md).
+1. Create a function in the Azure portal. For more information, see [Create Azure Function](../azure-functions/functions-get-started.md).
2. Configure CORS policy on the Azure function to be accessible by the single page web application. This will secure browser clients to the allowed origins of your web application. See [Add CORS functionality](../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality).
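The CORS step above can also be done from the command line. A minimal sketch; the function app, resource group, and origin are hypothetical names, and the cloud call is commented out because it requires an authenticated session:

```shell
# Hypothetical names; substitute your own.
FUNCTION_APP="myfunctionapp"
RESOURCE_GROUP="myresourcegroup"
ORIGIN="https://www.mysinglepageapp.com"

# Compose the call that allows the SPA's origin to reach the function.
CMD="az functionapp cors add --name $FUNCTION_APP --resource-group $RESOURCE_GROUP --allowed-origins $ORIGIN"
echo "$CMD"

# Run it against a real subscription (requires `az login`):
# eval "$CMD"
```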
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/migrate-from-bing-maps-web-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-bing-maps-web-app.md
@@ -80,7 +80,7 @@ Azure Maps also has many additional [open-source modules for the web SDK](open-s
The following are some of the key differences between the Bing Maps and Azure Maps Web SDKs to be aware of:
-* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an NPM package is also available for embedding the Web SDK into apps if preferred. For more information, see this [documentation](https://docs.microsoft.com/azure/azure-maps/how-to-use-map-control) for more information. This package also includes TypeScript definitions.
+* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an NPM package is also available for embedding the Web SDK into apps if preferred. For more information, see the [documentation](./how-to-use-map-control.md). This package also includes TypeScript definitions.
* Bing Maps provides two hosted branches of their SDK; Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch; however, experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the NPM module and point to any previous minor version release. > [!TIP]
@@ -90,7 +90,7 @@ The following are some of the key differences between the Bing Maps and Azure Ma
* Both platforms use a similar tiling system for the base maps, however the tiles in Bing Maps are 256 pixels in dimension while the tiles in Azure Maps are 512 pixels in dimension. As such, to get the same map view in Azure Maps as Bing Maps, a zoom level used in Bing Maps needs to be subtracted by one in Azure Maps. * Coordinates in Bing Maps are referred to as `latitude, longitude` while Azure Maps uses `longitude, latitude`. This format aligns with the standard `[x, y]` that is followed by most GIS platforms.
-* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [atlas.data namespace](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.data). There is also the [atlas.Shape](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.shape) class that can be used to wrap GeoJSON objects and make them easy to update and maintain in a data bindable way.
+* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [atlas.data namespace](/javascript/api/azure-maps-control/atlas.data). There is also the [atlas.Shape](/javascript/api/azure-maps-control/atlas.shape) class that can be used to wrap GeoJSON objects and make them easy to update and maintain in a data bindable way.
* Coordinates in Azure Maps are defined as Position objects that can be specified as a simple number array in the format `[longitude, latitude]` or `new atlas.data.Position(longitude, latitude)`. > [!TIP]
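The zoom-level offset noted in the list above (256-pixel Bing tiles vs. 512-pixel Azure Maps tiles) is a simple subtraction, sketched here:

```shell
# Same map view: Azure Maps tiles are twice the pixel size,
# so one zoom level lower shows the same area.
BING_ZOOM=15
AZURE_ZOOM=$((BING_ZOOM - 1))
echo "$AZURE_ZOOM"   # prints 14
```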
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/quick-android-map https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/quick-android-map.md
@@ -54,7 +54,7 @@ Once your Maps account is successfully created, retrieve the primary key that en
3. Copy the **Primary Key** to your clipboard. Save it locally to use later in this tutorial. >[!NOTE]
-> If you use the Azure subscription key instead of the Azure Maps primary key, your map won't render properly. Also, for security purposes, it is recommended that you rotate between your primary and secondary keys. To rotate keys, update your app to use the secondary key, deploy, then press the cycle/refresh button beside the primary key to generate a new primary key. The old primary key will be disabled. For more information on key rotation, see [Set up Azure Key Vault with key rotation and auditing](https://docs.microsoft.com/azure/key-vault/secrets/key-rotation-log-monitoring)
+> If you use the Azure subscription key instead of the Azure Maps primary key, your map won't render properly. Also, for security purposes, it is recommended that you rotate between your primary and secondary keys. To rotate keys, update your app to use the secondary key, deploy, then press the cycle/refresh button beside the primary key to generate a new primary key. The old primary key will be disabled. For more information on key rotation, see [Set up Azure Key Vault with key rotation and auditing](../key-vault/secrets/tutorial-rotation-dual.md)
![Get Primary Key Azure Maps key in Azure portal](media/quick-android-map/get-key.png)
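The key-rotation note above (move clients to the secondary key, then regenerate the primary) can be sketched with the Azure CLI. The account and group names are hypothetical, and the exact `az maps account keys renew` parameters are an assumption to verify against the current CLI reference:

```shell
# Hypothetical names; substitute your own.
MAPS_ACCOUNT="mymapsaccount"
RESOURCE_GROUP="myresourcegroup"

# After clients have been switched to the secondary key,
# regenerate the primary (the old primary is then disabled).
CMD="az maps account keys renew --name $MAPS_ACCOUNT --resource-group $RESOURCE_GROUP --key-type primary"
echo "$CMD"

# Run it against a real subscription (requires `az login`):
# eval "$CMD"
```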
@@ -276,4 +276,4 @@ For more code examples, see these guides:
In this quickstart, you created your Azure Maps account and created a demo application. Take a look at the following tutorials to learn more about Azure Maps: > [!div class="nextstepaction"]
-> [Load GeoJSON data into Azure Maps](tutorial-load-geojson-file-android.md)
+> [Load GeoJSON data into Azure Maps](tutorial-load-geojson-file-android.md)
\ No newline at end of file
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/tutorial-ev-routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-ev-routing.md
@@ -22,7 +22,7 @@ In this tutorial, you help a driver whose electric vehicle battery is low. In this tutorial, you will:
In this tutorial, you will: > [!div class="checklist"]
-> * Create and run a Jupyter Notebook file on [Azure Notebooks](../notebooks/index.yml) in the cloud.
+> * Create and run a Jupyter Notebook file on [Azure Notebooks](https://notebooks.azure.com) in the cloud.
> * Call Azure Maps REST APIs in Python. > * Search for a reachable range based on the electric vehicle's consumption model. > * Search for electric vehicle charging stations within the reachable range, or isochrone.
@@ -44,7 +44,7 @@ For more information on authentication in Azure Maps, see [manage authentication
To follow along with this tutorial, you need to create an Azure Notebooks project and download and run the Jupyter Notebook file. The Jupyter Notebook file contains Python code, which implements the scenario in this tutorial. To create an Azure Notebooks project and upload the Jupyter Notebook document to it, do the following steps:
-1. Go to [Azure Notebooks](https://notebooks.azure.com) and sign in. For more information, see [Quickstart: Sign in and set a user ID](../notebooks/quickstart-sign-in-azure-notebooks.md).
+1. Go to [Azure Notebooks](https://notebooks.azure.com) and sign in. For more information, see [Quickstart: Sign in and set a user ID](https://notebooks.azure.com).
1. At the top of your public profile page, select **My Projects**. ![The My Projects button](./media/tutorial-ev-routing/myproject.png)
@@ -403,4 +403,4 @@ There are no resources that require cleanup.
To learn more about Azure Notebooks, see > [!div class="nextstepaction"]
-> [Azure Notebooks](../notebooks/index.yml)
\ No newline at end of file
+> [Azure Notebooks](https://notebooks.azure.com)
\ No newline at end of file
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/tutorial-iot-hub-maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-iot-hub-maps.md
@@ -158,15 +158,15 @@ IoT Hub enables secure and reliable bi-directional communication between an IoT
> [!NOTE] > The ability to publish device telemetry events on Event Grid is currently in preview. This feature is available in all regions except the following: East US, West US, West Europe, Azure Government, Azure China 21Vianet, and Azure Germany.
-To create an IoT hub in the *ContosoRental* resource group, follow the steps in [create an IoT hub](https://docs.microsoft.com/azure/iot-hub/quickstart-send-telemetry-dotnet#create-an-iot-hub).
+To create an IoT hub in the *ContosoRental* resource group, follow the steps in [create an IoT hub](../iot-hub/quickstart-send-telemetry-dotnet.md#create-an-iot-hub).
## Register a device in your IoT hub
-Devices can't connect to the IoT hub unless they're registered in the IoT hub identity registry. Here, you'll create a single device with the name, *InVehicleDevice*. To create and register the device within your IoT hub, follow the steps in [register a new device in the IoT hub](https://docs.microsoft.com/azure/iot-hub/iot-hub-create-through-portal#register-a-new-device-in-the-iot-hub). Make sure to copy the primary connection string of your device. You'll need it later.
+Devices can't connect to the IoT hub unless they're registered in the IoT hub identity registry. Here, you'll create a single device named *InVehicleDevice*. To create and register the device within your IoT hub, follow the steps in [register a new device in the IoT hub](../iot-hub/iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub). Make sure to copy the primary connection string of your device. You'll need it later.
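The primary connection string you copy follows the well-known `HostName=...;DeviceId=...;SharedAccessKey=...` shape; a minimal sketch of pulling out its fields (the values here are made up):

```python
def parse_connection_string(cs):
    """Split an IoT Hub device connection string into its key/value parts.

    split("=", 1) keeps base64 '=' padding in the SharedAccessKey intact.
    """
    return dict(part.split("=", 1) for part in cs.split(";"))

cs = ("HostName=contoso-hub.azure-devices.net;"
      "DeviceId=InVehicleDevice;"
      "SharedAccessKey=c2FtcGxla2V5==")
fields = parse_connection_string(cs)
print(fields["DeviceId"])  # InVehicleDevice
```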
## Create a function and add an Event Grid subscription
-Azure Functions is a serverless compute service that allows you to run small pieces of code ("functions"), without the need to explicitly provision or manage compute infrastructure. To learn more, see [Azure Functions](https://docs.microsoft.com/azure/azure-functions/functions-overview).
+Azure Functions is a serverless compute service that allows you to run small pieces of code ("functions"), without the need to explicitly provision or manage compute infrastructure. To learn more, see [Azure Functions](../azure-functions/functions-overview.md).
A function is triggered by a certain event. Here, you'll create a function that is triggered by an Event Grid trigger. Create the relationship between trigger and function by creating an event subscription for IoT Hub device telemetry events. When a device telemetry event occurs, your function is called as an endpoint, and receives the relevant data for the device you previously registered in IoT Hub.
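When the function is called with an IoT Hub `DeviceTelemetry` event from Event Grid, the device's telemetry body typically arrives base64-encoded inside the event's `data`. A hedged sketch of decoding it — the sample event and field layout are illustrative, based on the Event Grid device telemetry schema:

```python
import base64
import json

def extract_telemetry(event):
    """Decode the base64-encoded telemetry body of a DeviceTelemetry event."""
    raw = base64.b64decode(event["data"]["body"])
    return json.loads(raw)

# Illustrative event, abbreviated to the fields used above.
sample_event = {
    "eventType": "Microsoft.Devices.DeviceTelemetry",
    "data": {
        "body": base64.b64encode(
            json.dumps({"location": {"lat": 47.6, "lon": -122.1}}).encode()
        ).decode()
    },
}
telemetry = extract_telemetry(sample_event)
print(telemetry["location"])
```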
@@ -220,7 +220,7 @@ Now, set up your Azure function.
## Filter events by using IoT Hub message routing
-When you add an Event Grid subscription to the Azure function, a messaging route is automatically created in the specified IoT hub. Message routing allows you to route different data types to various endpoints. For example, you can route device telemetry messages, device life-cycle events, and device twin change events. For more information, see [Use IoT Hub message routing](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-d2c).
+When you add an Event Grid subscription to the Azure function, a messaging route is automatically created in the specified IoT hub. Message routing allows you to route different data types to various endpoints. For example, you can route device telemetry messages, device life-cycle events, and device twin change events. For more information, see [Use IoT Hub message routing](../iot-hub/iot-hub-devguide-messages-d2c.md).
:::image type="content" source="./media/tutorial-iot-hub-maps/hub-route.png" alt-text="Screenshot of message routing in IoT hub.":::
@@ -229,7 +229,7 @@ In your example scenario, you only want to receive messages when the rental car
:::image type="content" source="./media/tutorial-iot-hub-maps/hub-filter.png" alt-text="Screenshot of filter routing messages."::: >[!TIP]
->There are various ways to query IoT device-to-cloud messages. To learn more about message routing syntax, see [IoT Hub message routing](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-routing-query-syntax).
+>There are various ways to query IoT device-to-cloud messages. To learn more about message routing syntax, see [IoT Hub message routing](../iot-hub/iot-hub-devguide-routing-query-syntax.md).
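To make the tip concrete: routing queries are SQL-like expressions over message application properties or, when the message has a JSON content type, over the body via `$body`. The property names below are hypothetical, not from this tutorial:

```sql
-- application-property filter (property name hypothetical)
status = 'InUse'
```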
## Send telemetry data to IoT Hub
@@ -291,4 +291,4 @@ To learn more about how to send device-to-cloud telemetry, and the other way aro
> [!div class="nextstepaction"]
-> [Send telemetry from a device](../iot-hub/quickstart-send-telemetry-dotnet.md)
+> [Send telemetry from a device](../iot-hub/quickstart-send-telemetry-dotnet.md)
\ No newline at end of file
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/weather-service-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-service-tutorial.md
@@ -23,7 +23,7 @@ Wind power is one alternative energy source for fossil fuels to combat against c
In this tutorial, you will: > [!div class="checklist"]
-> * Work with data files in [Azure Notebooks](../notebooks/index.yml) in the cloud.
+> * Work with data files in [Azure Notebooks](https://notebooks.azure.com) in the cloud.
> * Load demo data from file. > * Call Azure Maps REST APIs in Python. > * Render location data on the map.
@@ -202,4 +202,4 @@ There are no resources that require cleanup.
To learn more about Azure Notebooks, see > [!div class="nextstepaction"]
-> [Azure Notebooks](../notebooks/index.yml)
+> [Azure Notebooks](https://notebooks.azure.com)
\ No newline at end of file
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/weather-services-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-services-faq.md
@@ -17,7 +17,7 @@ manager: philmea
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article answers to common questions about Azure Maps [Weather services](https://docs.microsoft.com/rest/api/maps/weather) data and features. The following topics are covered:
+This article answers common questions about Azure Maps [Weather services](/rest/api/maps/weather) data and features. The following topics are covered:
* Data sources and data models * Weather services coverage and availability
@@ -55,7 +55,7 @@ Numerous weather forecast guidance systems are utilized to formulate global fore
**What kind of coverage can I expect for different countries/regions?**
-Weather service coverage varies by country/region. All features are not available in every country/region. For more information, see [coverage documentation](https://docs.microsoft.com/azure/azure-maps/weather-coverage).
+Weather service coverage varies by country/region. Not all features are available in every country/region. For more information, see [coverage documentation](./weather-coverage.md).
## Data update frequency
@@ -75,7 +75,7 @@ Azure Maps Forecast APIs are cached for up to 30 mins. To see when the cached re
**Does Azure Maps Web SDK natively support Weather services (Preview) integration?**
-The Azure Maps Web SDK provides a services module. The services module is a helper library that makes it easy to use the Azure Maps REST services in web or Node.js applications. by using JavaScript or TypeScript. To get started, see our [documentation](https://docs.microsoft.com/azure/azure-maps/how-to-use-services-module).
+The Azure Maps Web SDK provides a services module, a helper library that makes it easy to use the Azure Maps REST services in web or Node.js applications by using JavaScript or TypeScript. To get started, see our [documentation](./how-to-use-services-module.md).
**Does Azure Maps Android SDK natively support Weather services (Preview) integration?**
@@ -87,26 +87,26 @@ We plan to create a services module for Java/Android similar to the web SDK modu
**Does Azure Maps Power BI Visual support Azure Maps weather tiles?**
-Yes. To learn how to migrate radar and infrared satellite tiles to the Microsoft Power BI visual, see [Add a tile layer to Power BI visual](https://docs.microsoft.com/azure/azure-maps/power-bi-visual-add-tile-layer).
+Yes. To learn how to migrate radar and infrared satellite tiles to the Microsoft Power BI visual, see [Add a tile layer to Power BI visual](./power-bi-visual-add-tile-layer.md).
**How do I interpret colors used for radar and satellite tiles?**
-The Azure Maps [Weather concept article](https://docs.microsoft.com/azure/azure-maps/weather-services-concepts#radar-and-satellite-imagery-color-scale) includes a guide to help interpret colors used for radar and satellite tiles. The article covers color samples and HEX color codes.
+The Azure Maps [Weather concept article](./weather-services-concepts.md#radar-and-satellite-imagery-color-scale) includes a guide to help interpret colors used for radar and satellite tiles. The article covers color samples and HEX color codes.
**Can I create radar and satellite tile animations?**
-Yes. In addition to real-time radar and satellite tiles, Azure Maps customers can request past and future tiles to enhance data visualizations with map overlays. This can be done by directly calling [Get Map Tile v2 API](https://aka.ms/AzureMapsWeatherTiles ) or by requesting tiles via Azure Maps web SDK. Radar tiles are provided for up to 1.5 hours in the past, and for up to 2 hours in the future. The tiles and are available in 5-minute intervals. Infrared tiles are provided for up to 3 hours in the past, and are available in 10-minute intervals. For more information, see the open-source Weather Tile Animation [code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Animated%20tile%20layer).
+Yes. In addition to real-time radar and satellite tiles, Azure Maps customers can request past and future tiles to enhance data visualizations with map overlays. This can be done by directly calling [Get Map Tile v2 API](/rest/api/maps/renderv2/getmaptilepreview) or by requesting tiles via the Azure Maps web SDK. Radar tiles are provided for up to 1.5 hours in the past, and for up to 2 hours in the future. The tiles are available in 5-minute intervals. Infrared tiles are provided for up to 3 hours in the past, and are available in 10-minute intervals. For more information, see the open-source Weather Tile Animation [code sample](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Animated%20tile%20layer).
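The radar availability window described above (up to 90 minutes back and 120 minutes ahead, in 5-minute steps) can be enumerated like so — a small sketch, independent of any tile API specifics:

```python
from datetime import datetime, timedelta

def radar_tile_timestamps(now):
    """Enumerate radar tile timestamps available around `now`:
    up to 1.5 hours in the past and 2 hours in the future, in 5-minute steps."""
    return [now + timedelta(minutes=m) for m in range(-90, 121, 5)]

frames = radar_tile_timestamps(datetime(2021, 1, 21, 12, 0))
print(len(frames))  # 43 frames, from 10:30 through 14:00
```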
**Do you offer icons for different weather conditions?**
-Yes. You can find icons and their respective codes [here](https://docs.microsoft.com/azure/azure-maps/weather-services-concepts#weather-icons). Notice that only some of the Weather service (Preview) APIs, such as [Get Current Conditions API](https://aka.ms/azuremapsweathercurrentconditions), return the *iconCode* in the response. For more information, see the Current WeatherConditions open-source [code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Get%20current%20weather%20at%20a%20location).
+Yes. You can find icons and their respective codes [here](./weather-services-concepts.md#weather-icons). Notice that only some of the Weather service (Preview) APIs, such as [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditionspreview), return the *iconCode* in the response. For more information, see the Current Weather Conditions open-source [code sample](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Get%20current%20weather%20at%20a%20location).
## Next steps If this FAQ doesn't answer your question, you can contact us through the following channels (in escalating order): * The comments section of this article.
-* [MSFT Q&A page for Azure Maps](https://docs.microsoft.com/answers/topics/azure-maps.html).
+* [MSFT Q&A page for Azure Maps](/answers/topics/azure-maps.html).
* Microsoft Support. To create a new support request, in the [Azure portal](https://portal.azure.com/), on the Help tab, select the **Help +** support button, and then select **New support request**. * [Azure Maps UserVoice](https://feedback.azure.com/forums/909172-azure-maps) to submit feature requests.
@@ -121,4 +121,4 @@ Azure Maps Weather services (Preview) concepts article:
Explore the Azure Maps Weather services (Preview) API documentation: > [!div class="nextstepaction"]
-> [Azure Maps Weather services](/rest/api/maps/weather)
+> [Azure Maps Weather services](/rest/api/maps/weather)
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/api-custom-events-metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-custom-events-metrics.md
@@ -531,7 +531,7 @@ If [sampling](./sampling.md) is in operation, the itemCount property shows a val
Use the TrackDependency call to track the response times and success rates of calls to an external piece of code. The results appear in the dependency charts in the portal. The code snippet below needs to be added wherever a dependency call is made. > [!NOTE]
-> For .NET and .NET Core you can alternatively use the `TelemetryClient.StartOperation` (extension) method that fills the `DependencyTelemetry` properties that are needed for correlation and some other properties like the start time and duration so you don't need to create a custom timer as with the examples below. For more information consult this article's [section on outgoing dependency tracking](https://docs.microsoft.com/azure/azure-monitor/app/custom-operations-tracking#outgoing-dependencies-tracking).
+> For .NET and .NET Core you can alternatively use the `TelemetryClient.StartOperation` (extension) method that fills the `DependencyTelemetry` properties that are needed for correlation and some other properties like the start time and duration so you don't need to create a custom timer as with the examples below. For more information consult this article's [section on outgoing dependency tracking](./custom-operations-tracking.md#outgoing-dependencies-tracking).
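The "custom timer" pattern that `StartOperation` saves you from can be sketched language-neutrally (a Python sketch; `track_dependency` is a hypothetical stand-in for `TelemetryClient.TrackDependency`):

```python
import time

def track_dependency(name, duration_s, success):
    """Hypothetical stand-in for TelemetryClient.TrackDependency."""
    print(f"{name}: {duration_s:.3f}s, success={success}")

def call_with_tracking(name, fn):
    """Time an external call and report its duration and outcome."""
    start = time.monotonic()
    success = False
    try:
        result = fn()
        success = True
        return result
    finally:
        # Report even when fn() raises, mirroring dependency failure tracking.
        track_dependency(name, time.monotonic() - start, success)

call_with_tracking("external-api", lambda: "ok")
```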
*C#*
@@ -1120,4 +1120,4 @@ To determine how long data is kept, see [Data retention and privacy](./data-rete
## <a name="next"></a>Next steps * [Search events and logs](./diagnostic-search.md)
-* [Troubleshooting](../faq.md)
+* [Troubleshooting](../faq.md)
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/automate-custom-reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/automate-custom-reports.md
@@ -29,7 +29,7 @@ You can [programmatically query Application Insights](https://dev.applicationins
* [Automate reports with Power Automate](../platform/logicapp-flow-connector.md) * [Automate reports with Logic Apps](automate-with-logic-apps.md)
-* Use the "Application Insights scheduled digest" [Azure function](../../azure-functions/functions-create-first-azure-function.md) template in the Monitoring scenario. This function uses SendGrid to deliver the email.
+* Use the "Application Insights scheduled digest" [Azure function](../../azure-functions/functions-get-started.md) template in the Monitoring scenario. This function uses SendGrid to deliver the email.
![Azure function template](./media/automate-custom-reports/azure-function-template.png)
@@ -68,7 +68,7 @@ availabilityResults
1. Create an Azure Function App. (Application Insights _On_ is required only if you want to monitor your new Function App with Application Insights)
- Visit the Azure Functions documentation to learn how to [create a function app](../../azure-functions/functions-create-first-azure-function.md#create-a-function-app)
+ Visit the Azure Functions documentation to learn how to [create a function app](../../azure-functions/functions-get-started.md)
2. Once your new Function App has completed deployment, select **Go to resource**.
@@ -150,4 +150,3 @@ These steps only apply if you don't already have a SendGrid account configured.
* Learn more about [programmatically querying Application Insights data](https://dev.applicationinsights.io/) * Learn more about [Logic Apps](../../logic-apps/logic-apps-overview.md). * Learn more about [Microsoft Power Automate](https://ms.flow.microsoft.com).-
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/azure-vm-vmss-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-vm-vmss-apps.md
@@ -13,7 +13,7 @@ Enabling monitoring on your .NET based web applications running on [Azure virtua
This article walks you through enabling Application Insights monitoring using the Application Insights Agent and provides preliminary guidance for automating the process for large-scale deployments. > [!IMPORTANT]
-> Azure Application Insights Agent for ASP.NET applications running on **Azure VMs and VMSS** is currently in public preview. For monitoring your ASP.Net applications running **on-premises**, use the [Azure Application Insights Agent for on-premises servers](https://docs.microsoft.com/azure/azure-monitor/app/status-monitor-v2-overview), which is generally available and fully supported.
+> Azure Application Insights Agent for ASP.NET applications running on **Azure VMs and VMSS** is currently in public preview. For monitoring your ASP.NET applications running **on-premises**, use the [Azure Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
> The preview version for Azure VMs and VMSS is provided without a service-level agreement, and we don't recommend it for production workloads. Some features might not be supported, and some might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
@@ -173,4 +173,4 @@ C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWi
## Next steps * Learn how to [deploy an application to an Azure virtual machine scale set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
-* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
+* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
@@ -300,7 +300,9 @@ If your application is behind a firewall and cannot connect directly to Applicat
} ```
-[//]: # "NOTE not advertising OpenTelemetry support until we support 0.10.0, which has massive breaking changes from 0.9.0"
+Application Insights Java 3.0 also respects the global `-Dhttps.proxyHost` and `-Dhttps.proxyPort` if those are set.
+
+[//]: # "NOTE OpenTelemetry support is in private preview until OpenTelemetry API reaches 1.0"
[//]: # "## Support for OpenTelemetry API pre-1.0 releases"
@@ -350,6 +352,8 @@ and the console, corresponding to this configuration:
`maxHistory` is the number of rolled over log files that are retained (in addition to the current log file).
+Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`.
+ ## An example This is just an example to show what a configuration file looks like with multiple components.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-telemetry-processors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors.md
@@ -238,7 +238,7 @@ For the `hash` action, following are required
### `extract` > [!NOTE]
-> This feature is only in 3.0.1 and later
+> This feature is only in 3.0.2 and later
Extracts values using a regular expression rule from the input key to target keys specified in the rule. If a target key already exists, it will be overridden. It behaves similar to the [Span Processor](#extract-attributes-from-span-name) `toAttributes` setting with the existing attribute as the source.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/javascript-click-analytics-plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-click-analytics-plugin.md
@@ -65,7 +65,7 @@ appInsights.loadAppInsights();
2. To improve efficiency, the plugin uses this tag as a flag; when encountered, it stops processing the DOM (Document Object Model) any further upwards. > [!CAUTION]
- > Once `parentDataTag` is used, it has a persistent effect across your whole application and not just the HTML element you used it in.
+ > Once `parentDataTag` is used, the SDK will begin looking for parent tags across your entire application and not just the HTML element where you used it.
4. `customDataPrefix` provided by the user should always start with `data-`, for example `data-sample-`. In HTML, the `data-*` global attributes form a class of attributes called custom data attributes that allow proprietary information to be exchanged between the HTML and its DOM representation by scripts. Older browsers (Internet Explorer, Safari) will drop attributes that they don't understand, unless they start with `data-`. The `*` in `data-*` may be replaced by any name following the [production rule of XML names](https://www.w3.org/TR/REC-xml/#NT-Name) with the following restrictions:
@@ -310,5 +310,5 @@ appInsights.loadAppInsights();
- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [NPM Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto-Collection Plugin. - Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.-- Find click data under content field within customDimensions attribute in CustomEvents table in [Log Analytics](../log-query/log-analytics-tutorial.md#write-a-query).
+- Find click data under the content field within the customDimensions attribute in the CustomEvents table in [Log Analytics](../log-query/log-analytics-tutorial.md#write-a-query). See [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871) for additional guidance.
- Build a [Workbook](../platform/workbooks-overview.md) to create custom visualizations of click data.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/live-stream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/live-stream.md
@@ -32,7 +32,7 @@ Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
* [ASP.NET Core](./asp-net-core.md)- Live Metrics is enabled by default. * [.NET/.NET Core Console/Worker](./worker-service.md)- Live Metrics is enabled by default. * [.NET Applications - Enable using code](#enable-livemetrics-using-code-for-any-net-application).
- * [Java](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent) - Live Metrics is enabled by default.
+ * [Java](./java-in-process-agent.md) - Live Metrics is enabled by default.
* [Node.js](./nodejs.md#live-metrics) 2. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app, and then open Live Stream.
@@ -262,4 +262,4 @@ Live Metrics Stream uses different IP addresses than other Application Insights
* [Monitoring usage with Application Insights](./usage-overview.md) * [Using Diagnostic Search](./diagnostic-search.md) * [Profiler](./profiler.md)
-* [Snapshot debugger](./snapshot-debugger.md)
+* [Snapshot debugger](./snapshot-debugger.md)
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/profiler-troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler-troubleshooting.md
@@ -212,9 +212,9 @@ To check the settings that were used to configure Azure Diagnostics:
If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Profiler service.
-The IPs used by Application Insights Profiler are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](https://docs.microsoft.com/azure/virtual-network/service-tags-overview).
+The IPs used by Application Insights Profiler are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
[profiler-search-telemetry]:./media/profiler-troubleshooting/Profiler-Search-Telemetry.png [profiler-webjob]:./media/profiler-troubleshooting/Profiler-webjob.png
-[profiler-webjob-log]:./media/profiler-troubleshooting/Profiler-webjob-log.png
+[profiler-webjob-log]:./media/profiler-troubleshooting/Profiler-webjob-log.png
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/snapshot-debugger-function-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger-function-app.md
@@ -17,7 +17,7 @@ For most applications, the Free and Shared service tiers don't have enough memor
## Prerequisites
-* [Enable Application Insights monitoring in your Function App](https://docs.microsoft.com/azure/azure-functions/configure-monitoring#add-to-an-existing-function-app)
+* [Enable Application Insights monitoring in your Function App](../../azure-functions/configure-monitoring.md#add-to-an-existing-function-app)
## Enable Snapshot Debugger
@@ -142,5 +142,5 @@ We recommend you have Snapshot Debugger enabled on all your apps to ease diagnos
- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance. - [View snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.-- Customize Snapshot Debugger configuration based on your use-case on your Function app. For more info, see [snapshot configuration in host.json](https://docs.microsoft.com/azure/azure-functions/functions-host-json#applicationinsightssnapshotconfiguration).-- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+- Customize Snapshot Debugger configuration based on your use-case on your Function app. For more info, see [snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/monitor-vm-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/monitor-vm-azure.md
@@ -109,7 +109,7 @@ Collect platform metrics with a diagnostic setting for the virtual machine. Unli
Set-AzDiagnosticSetting -Name vm-diagnostics -ResourceId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm" -Enabled $true -MetricCategory AllMetrics -workspaceId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/my-resource-group/providers/microsoft.operationalinsights/workspaces/my-workspace" ```
-```CLI
+```azurecli
az monitor diagnostic-settings create \ --name VM-Diagnostics --resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm \
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/vminsights-health-enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/vminsights-health-enable.md
@@ -123,9 +123,9 @@ Deploy the template using any [deployment method for Resource Manager templates]
New-AzResourceGroupDeployment -Name GuestHealthDataCollectionRule -ResourceGroupName my-resource-group -TemplateFile Health.DataCollectionRule.template.json -TemplateParameterFile Health.DataCollectionRule.template.parameters.json ```
-# [CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
-```cli
+```azurecli
az deployment group create --name GuestHealthDataCollectionRule --resource-group my-resource-group --template-file Health.DataCollectionRule.template.json --parameters Health.DataCollectionRule.template.parameters.json ```
@@ -263,9 +263,9 @@ For example, use the following commands to deploy the template and parameters fi
New-AzResourceGroupDeployment -Name GuestHealthDeployment -ResourceGroupName my-resource-group -TemplateFile azure-monitor-deploy.json -TemplateParameterFile azure-monitor-deploy.parameters.json ```
-# [CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
-```cli
+```azurecli
az deployment group create --name GuestHealthDeployment --resource-group my-resource-group --template-file Health.VirtualMachine.template.json --parameters Health.VirtualMachine.template.parameters.json ```
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/action-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/action-groups.md
@@ -158,6 +158,7 @@ You may have a limited number of Logic App actions in an Action Group.
> [!NOTE] > Using the webhook action requires that the target webhook endpoint either doesn't require details of the alert to function successfully or it's capable of parsing the alert context information that's provided as part of the POST operation. If the webhook endpoint can't handle the alert context information on its own, you can use a solution like a [Logic App action](./action-groups-logic-app.md) for a custom manipulation of the alert context information to match the webhook's expected data format.
+> The user should be the **owner** of the webhook's service principal to ensure security is not violated. Because any Azure customer can access all object IDs through the portal, without this owner check anyone could add the secure webhook to their own action group for Azure Monitor alert notifications, which would violate security.
The Action Groups Webhook action enables you to take advantage of Azure Active Directory to secure the connection between your action group and your protected web API (webhook endpoint). The overall workflow for taking advantage of this functionality is described below. For an overview of Azure AD Applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md).
@@ -313,9 +314,6 @@ Pricing for supported countries/regions is listed in the [Azure Monitor pricing
> [!NOTE] > Using the webhook action requires that the target webhook endpoint either doesn't require details of the alert to function successfully or it's capable of parsing the alert context information that's provided as part of the POST operation. -
-> User should be the **owner** of webhook service principal in order to make sure security is not violated. As any azure customer can access all object Ids through portal, without checking the owner, anyone can add the secure webhook to their own action group for azure monitor alert notification which violate security.
- > If the webhook endpoint can't handle the alert context information on its own, you can use a solution like a [Logic App action](./action-groups-logic-app.md) for a custom manipulation of the alert context information to match the webhook's expected data format. Webhooks are processed using the following rules
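For context on the ownership requirement above: the protected endpoint, not Azure Monitor, is what ultimately validates the Azure AD token presented with each POST. A minimal Python sketch of the claim checks such an endpoint might perform (the claim values, identifiers, and helper name are illustrative assumptions; a real endpoint must also verify the token signature):

```python
# Minimal sketch of the claim checks a protected webhook endpoint might
# perform on an already-decoded Azure AD token payload. The audience and
# issuer values below are illustrative assumptions, not real identifiers;
# production endpoints must also verify the token signature.

EXPECTED_AUDIENCE = "api://my-webhook-app-id"        # assumed app ID URI
EXPECTED_ISSUER_PREFIX = "https://sts.windows.net/"  # AAD v1 issuer prefix

def is_token_acceptable(claims: dict) -> bool:
    """Return True if the decoded token claims match this endpoint."""
    return (
        claims.get("aud") == EXPECTED_AUDIENCE
        and str(claims.get("iss", "")).startswith(EXPECTED_ISSUER_PREFIX)
    )

good = {"aud": "api://my-webhook-app-id",
        "iss": "https://sts.windows.net/contoso-tenant-id/"}
bad = {"aud": "api://someone-else", "iss": "https://evil.example"}

print(is_token_acceptable(good))  # True
print(is_token_acceptable(bad))   # False
```

This is only the claims-matching half of the check; use a validated JWT library for signature verification in any real endpoint.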
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-troubleshoot-metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-troubleshoot-metric.md
@@ -4,7 +4,7 @@ description: Common issues with Azure Monitor metric alerts and possible solutio
author: harelbr ms.author: harelbr ms.topic: troubleshooting
-ms.date: 01/11/2021
+ms.date: 01/21/2021
ms.subservice: alerts --- # Troubleshooting problems in Azure Monitor metric alerts
@@ -18,8 +18,9 @@ Azure Monitor alerts proactively notify you when important conditions are found
If you believe a metric alert should have fired but it didn't fire and isn't found in the Azure portal, try the following steps: 1. **Configuration** - Review the metric alert rule configuration to make sure it's properly configured:
- - Check that the **Aggregation type**, **Aggregation granularity (period)**, and **Threshold value** or **Sensitivity** are configured as expected
- - For an alert rule that uses Dynamic Thresholds, check if advanced settings are configured, as **Number of violations** may filter alerts and **Ignore data before** can impact how the thresholds are calculated
+ - Check that the **Aggregation type** and **Aggregation granularity (period)** are configured as expected. **Aggregation type** determines how metric values are aggregated (see [aggregation types](./metrics-aggregation-explained.md#aggregation-types)), and **Aggregation granularity (period)** controls how far back the evaluation aggregates the metric values each time the alert rule runs.
+ - Check that the **Threshold value** or **Sensitivity** are configured as expected.
+ - For an alert rule that uses Dynamic Thresholds, check if advanced settings are configured, as **Number of violations** may filter alerts and **Ignore data before** can impact how the thresholds are calculated.
> [!NOTE] > Dynamic Thresholds require at least 3 days and 30 metric samples before becoming active.
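The **Number of violations** advanced setting mentioned above can be pictured as a sliding-window filter: the rule fires only when at least N of the last M evaluation periods breach the dynamic threshold. A simplified Python sketch of that idea (an illustration of the concept, not the service's exact evaluation algorithm):

```python
from collections import deque

def should_fire(breaches, n, m):
    """Fire when at least n of the last m evaluation results are breaches.

    breaches: iterable of booleans, oldest first, one per evaluation period.
    Simplified model of the Dynamic Thresholds "number of violations"
    filter; the real service's evaluation logic may differ.
    """
    window = deque(maxlen=m)
    for breached in breaches:
        window.append(breached)
        if len(window) == m and sum(window) >= n:
            return True
    return False

# A single spike does not fire a "3 of 4" rule...
print(should_fire([False, False, True, False, False], 3, 4))  # False
# ...but three breaches within four consecutive periods does.
print(should_fire([False, True, True, False, True], 3, 4))    # True
```

This is why an alert that "should have fired" on one bad data point may be behaving as configured when **Number of violations** is greater than 1.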
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/logs-data-export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/logs-data-export.md
@@ -29,7 +29,7 @@ Log Analytics workspace data export continuously exports data from a Log Analyti
- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
-## Current limitations
+## Limitations
- Configuration can currently be performed using CLI or REST requests. The Azure portal and PowerShell are not supported yet. - The ```--export-all-tables``` option in CLI and REST isn't supported and will be removed. You should provide the list of tables in export rules explicitly.
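Because configuration is limited to CLI and REST, an export rule's request body must name each table explicitly. A hedged Python sketch of assembling such a body (the property names follow the Log Analytics dataExports REST shape as understood here; verify against the current REST reference before relying on them):

```python
import json

def build_export_rule_body(tables, destination_resource_id, enabled=True):
    """Assemble a data export rule request body with an explicit table list.

    Property names ("tableNames", "destination.resourceId", "enable") are
    assumptions based on the Log Analytics dataExports REST API. There is
    no wildcard equivalent of --export-all-tables, so every table must be
    listed by name.
    """
    if not tables:
        raise ValueError("export rules require an explicit, non-empty table list")
    return {
        "properties": {
            "destination": {"resourceId": destination_resource_id},
            "tableNames": list(tables),
            "enable": enabled,
        }
    }

body = build_export_rule_body(
    ["Heartbeat", "SecurityEvent"],
    "/subscriptions/0000/resourceGroups/rg/providers/Microsoft.Storage/storageAccounts/sa",
)
print(json.dumps(body, indent=2))
```

The body would be sent as a PUT against the workspace's dataExports resource; the resource IDs above are placeholders.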
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: rboucher ms.author: robb
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/cross-region-replication-requirements-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: conceptual
-ms.date: 09/16/2020
+ms.date: 01/20/2021
ms.author: b-juche ---
@@ -27,6 +27,7 @@ Note the following requirements and considerations about [using the volume cross
* Azure NetApp Files replication is only available in certain fixed region pairs. See [Supported region pairs](cross-region-replication-introduction.md#supported-region-pairs). * SMB volumes are supported along with NFS volumes. Replication of SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or ADDS Domain Controllers that are reachable from the delegated subnet in the destination region. For more information, see [Requirements for Active Directory connections](azure-netapp-files-create-volumes-smb.md#requirements-for-active-directory-connections). * The destination account must be in a different region from the source volume region. You can also select an existing NetApp account in a different region.
+* The replication destination volume is read-only until you [fail over to the destination region](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume) to enable the destination volume for read and write.
* Azure NetApp Files replication does not currently support multiple subscriptions; all replications must be performed under a single subscription. * You can set up a maximum of five volumes for replication within a single subscription per region. You can open a support ticket to request for an increase in the default quota of five replication destination volumes (per subscription in a region). * There can be a delay up to five minutes for the interface to reflect a newly added snapshot on the source volume.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: tfitzmac ms.author: tomfitz
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/howto-private-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/howto-private-endpoints.md
@@ -122,55 +122,55 @@ For more information on configuring your own DNS server to support private endpo
### Create a private endpoint using Azure CLI 1. Log in to the Azure CLI
- ```console
+ ```azurecli
az login ``` 1. Select your Azure Subscription
- ```console
+ ```azurecli
az account set --subscription {AZURE SUBSCRIPTION ID} ``` 1. Create a new Resource Group
- ```console
+ ```azurecli
az group create -n {RG} -l {AZURE REGION} ``` 1. Register Microsoft.SignalRService as a provider
- ```console
+ ```azurecli
az provider register -n Microsoft.SignalRService ``` 1. Create a new Azure SignalR Service
- ```console
+ ```azurecli
az signalr create --name {NAME} --resource-group {RG} --location {AZURE REGION} --sku Standard_S1 ``` 1. Create a Virtual Network
- ```console
+ ```azurecli
az network vnet create --resource-group {RG} --name {vNet NAME} --location {AZURE REGION} ``` 1. Add a subnet
- ```console
+ ```azurecli
az network vnet subnet create --resource-group {RG} --vnet-name {vNet NAME} --name {subnet NAME} --address-prefixes {addressPrefix} ``` 1. Disable Virtual Network Policies
- ```console
+ ```azurecli
az network vnet subnet update --name {subnet NAME} --resource-group {RG} --vnet-name {vNet NAME} --disable-private-endpoint-network-policies true ``` 1. Add a Private DNS Zone
- ```console
+ ```azurecli
az network private-dns zone create --resource-group {RG} --name privatelink.service.signalr.net ``` 1. Link Private DNS Zone to Virtual Network
- ```console
+ ```azurecli
az network private-dns link vnet create --resource-group {RG} --virtual-network {vNet NAME} --zone-name privatelink.service.signalr.net --name {dnsZoneLinkName} --registration-enabled true ``` 1. Create a Private Endpoint (Automatically Approve)
- ```console
+ ```azurecli
az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME} --subnet {subnet NAME} --name {Private Endpoint Name} --private-connection-resource-id "/subscriptions/{AZURE SUBSCRIPTION ID}/resourceGroups/{RG}/providers/Microsoft.SignalRService/SignalR/{NAME}" --group-ids signalr --connection-name {Private Link Connection Name} --location {AZURE REGION} ``` 1. Create a Private Endpoint (Manually Request Approval)
- ```console
+ ```azurecli
az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME} --subnet {subnet NAME} --name {Private Endpoint Name} --private-connection-resource-id "/subscriptions/{AZURE SUBSCRIPTION ID}/resourceGroups/{RG}/providers/Microsoft.SignalRService/SignalR/{NAME}" --group-ids signalr --connection-name {Private Link Connection Name} --location {AZURE REGION} --manual-request ``` 1. Show Connection Status
- ```console
+ ```azurecli
az network private-endpoint show --resource-group {RG} --name {Private Endpoint Name} ```
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-jobs-tsql-create-manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-jobs-tsql-create-manage.md
@@ -57,8 +57,8 @@ EXEC jobs.sp_add_target_group 'ServerGroup1'
EXEC jobs.sp_add_target_group_member 'ServerGroup1', @target_type = 'SqlServer',
-@refresh_credential_name='mymastercred', --credential required to refresh the databases in a server
-@server_name='server1.database.windows.net'
+@refresh_credential_name = 'mymastercred', --credential required to refresh the databases in a server
+@server_name = 'server1.database.windows.net'
--View the recently created target group and target group members SELECT * FROM jobs.target_groups WHERE target_group_name='ServerGroup1';
@@ -81,16 +81,16 @@ GO
EXEC [jobs].sp_add_target_group_member @target_group_name = N'ServerGroup', @target_type = N'SqlServer',
-@refresh_credential_name=N'mymastercred', --credential required to refresh the databases in a server
-@server_name=N'London.database.windows.net'
+@refresh_credential_name = N'mymastercred', --credential required to refresh the databases in a server
+@server_name = N'London.database.windows.net'
GO -- Add a server target member EXEC [jobs].sp_add_target_group_member @target_group_name = N'ServerGroup', @target_type = N'SqlServer',
-@refresh_credential_name=N'mymastercred', --credential required to refresh the databases in a server
-@server_name='server2.database.windows.net'
+@refresh_credential_name = N'mymastercred', --credential required to refresh the databases in a server
+@server_name = 'server2.database.windows.net'
GO --Exclude a database target member from the server target group
@@ -99,7 +99,7 @@ EXEC [jobs].sp_add_target_group_member
@membership_type = N'Exclude', @target_type = N'SqlDatabase', @server_name = N'server1.database.windows.net',
-@database_name =N'MappingDB'
+@database_name = N'MappingDB'
GO --View the recently created target group and target group members
@@ -122,9 +122,9 @@ EXEC jobs.sp_add_target_group 'PoolGroup'
EXEC jobs.sp_add_target_group_member 'PoolGroup', @target_type = 'SqlElasticPool',
-@refresh_credential_name='mymastercred', --credential required to refresh the databases in a server
-@server_name='server1.database.windows.net',
-@elastic_pool_name='ElasticPool-1'
+@refresh_credential_name = 'mymastercred', --credential required to refresh the databases in a server
+@server_name = 'server1.database.windows.net',
+@elastic_pool_name = 'ElasticPool-1'
-- View the recently created target group and target group members SELECT * FROM jobs.target_groups WHERE target_group_name = N'PoolGroup';
@@ -140,14 +140,14 @@ Connect to the [*job database*](job-automation-overview.md#job-database) and run
--Connect to the job database specified when creating the job agent --Add job for create table
-EXEC jobs.sp_add_job @job_name='CreateTableTest', @description='Create Table Test'
+EXEC jobs.sp_add_job @job_name = 'CreateTableTest', @description = 'Create Table Test'
-- Add job step for create table
-EXEC jobs.sp_add_jobstep @job_name='CreateTableTest',
-@command=N'IF NOT EXISTS (SELECT * FROM sys.tables WHERE object_id = object_id(''Test''))
+EXEC jobs.sp_add_jobstep @job_name = 'CreateTableTest',
+@command = N'IF NOT EXISTS (SELECT * FROM sys.tables WHERE object_id = object_id(''Test''))
CREATE TABLE [dbo].[Test]([TestId] [int] NOT NULL);',
-@credential_name='myjobcred',
-@target_group_name='PoolGroup'
+@credential_name = 'myjobcred',
+@target_group_name = 'PoolGroup'
``` ## Data collection using built-in parameters
@@ -192,15 +192,15 @@ EXEC jobs.sp_add_job @job_name ='ResultsJob', @description='Collection Performan
-- Add a job step w/ schedule to collect results EXEC jobs.sp_add_jobstep
-@job_name='ResultsJob',
-@command= N' SELECT DB_NAME() DatabaseName, $(job_execution_id) AS job_execution_id, * FROM sys.dm_db_resource_stats WHERE end_time > DATEADD(mi, -20, GETDATE());',
-@credential_name='myjobcred',
-@target_group_name='PoolGroup',
-@output_type='SqlDatabase',
-@output_credential_name='myjobcred',
-@output_server_name='server1.database.windows.net',
-@output_database_name='<resultsdb>',
-@output_table_name='<resutlstable>'
+@job_name = 'ResultsJob',
+@command = N' SELECT DB_NAME() DatabaseName, $(job_execution_id) AS job_execution_id, * FROM sys.dm_db_resource_stats WHERE end_time > DATEADD(mi, -20, GETDATE());',
+@credential_name = 'myjobcred',
+@target_group_name = 'PoolGroup',
+@output_type = 'SqlDatabase',
+@output_credential_name = 'myjobcred',
+@output_server_name = 'server1.database.windows.net',
+@output_database_name = '<resultsdb>',
+@output_table_name = '<resultstable>'
Create a job to monitor pool performance --Connect to the job database specified when creating the job agent
@@ -209,17 +209,17 @@ EXEC jobs.sp_add_target_group 'MasterGroup'
-- Add a server target member EXEC jobs.sp_add_target_group_member
-@target_group_name='MasterGroup',
-@target_type='SqlDatabase',
-@server_name='server1.database.windows.net',
-@database_name='master'
+@target_group_name = 'MasterGroup',
+@target_type = 'SqlDatabase',
+@server_name = 'server1.database.windows.net',
+@database_name = 'master'
-- Add a job to collect perf results EXEC jobs.sp_add_job
-@job_name='ResultsPoolsJob',
-@description='Demo: Collection Performance data from all pools',
-@schedule_interval_type='Minutes',
-@schedule_interval_count=15
+@job_name = 'ResultsPoolsJob',
+@description = 'Demo: Collection Performance data from all pools',
+@schedule_interval_type = 'Minutes',
+@schedule_interval_count = 15
-- Add a job step w/ schedule to collect results EXEC jobs.sp_add_jobstep
@@ -240,13 +240,13 @@ SELECT elastic_pool_name , end_time, elastic_pool_dtu_limit, avg_cpu_percent, av
avg_storage_percent, elastic_pool_storage_limit_mb FROM sys.elastic_pool_resource_stats WHERE end_time > @poolStartTime and end_time <= @poolEndTime; '),
-@credential_name='myjobcred',
-@target_group_name='MasterGroup',
-@output_type='SqlDatabase',
-@output_credential_name='myjobcred',
-@output_server_name='server1.database.windows.net',
-@output_database_name='resultsdb',
-@output_table_name='resutlstable'
+@credential_name = 'myjobcred',
+@target_group_name = 'MasterGroup',
+@output_type = 'SqlDatabase',
+@output_credential_name = 'myjobcred',
+@output_server_name = 'server1.database.windows.net',
+@output_database_name = 'resultsdb',
+@output_table_name = 'resultstable'
``` ## View job definitions
@@ -300,10 +300,10 @@ Connect to the [*job database*](job-automation-overview.md#job-database) and run
--Connect to the job database specified when creating the job agent EXEC jobs.sp_update_job
-@job_name='ResultsJob',
+@job_name = 'ResultsJob',
@enabled=1,
-@schedule_interval_type='Minutes',
-@schedule_interval_count=15
+@schedule_interval_type = 'Minutes',
+@schedule_interval_count = 15
``` ## Monitor job execution status
@@ -1345,4 +1345,4 @@ Shows all members of all target groups.
## Next steps - [Create and manage Elastic Jobs using PowerShell](elastic-jobs-powershell-create.md)-- [Authorization and Permissions](/dotnet/framework/data/adonet/sql/authorization-and-permissions-in-sql-server)\ No newline at end of file
+- [Authorization and Permissions](/dotnet/framework/data/adonet/sql/authorization-and-permissions-in-sql-server)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/firewall-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/firewall-configure.md
@@ -95,7 +95,9 @@ When a computer tries to connect to your server from the internet, the firewall
### Connections from inside Azure
-To allow applications hosted inside Azure to connect to your SQL server, Azure connections must be enabled. When an application from Azure tries to connect to your server, the firewall verifies that Azure connections are allowed. This can be turned on directly from the Azure portal blade by setting Firewall rules, as well as switching the **Allow Azure Services and resources to access this server** to **ON** in the **Firewalls and virtual networks** settings. If the connection isn't allowed, the request doesn't reach the server.
+To allow applications hosted inside Azure to connect to your SQL server, Azure connections must be enabled. To enable Azure connections, there must be a firewall rule with starting and ending IP addresses set to 0.0.0.0.
+
+When an application from Azure tries to connect to the server, the firewall checks that Azure connections are allowed by verifying this firewall rule exists. This can be turned on directly from the Azure portal blade by switching the **Allow Azure Services and resources to access this server** toggle to **ON** in the **Firewalls and virtual networks** settings. Setting it to **ON** creates an inbound firewall rule for IP 0.0.0.0 - 0.0.0.0 named **AllowAllWindowsAzureIps**. If you're not using the portal, use PowerShell or the Azure CLI to create a firewall rule with start and end IP addresses set to 0.0.0.0.
> [!IMPORTANT] > This option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. If you select this option, make sure that your login and user permissions limit access to authorized users only.
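The sentinel rule described above can be modeled as a simple check: a connection originating inside Azure is allowed only if a rule with start and end addresses of 0.0.0.0 exists, while an internet client must fall inside some rule's IP range. A simplified Python illustration (a conceptual model, not the service's actual firewall implementation):

```python
import ipaddress

def connection_allowed(rules, client_ip, from_azure):
    """Simplified model of the server-level firewall check.

    rules: list of (start_ip, end_ip) strings. The 0.0.0.0 - 0.0.0.0 rule
    is the sentinel created by "Allow Azure services and resources to
    access this server"; this sketch only illustrates the documented
    behavior and is not the real firewall logic.
    """
    azure_rule = ("0.0.0.0", "0.0.0.0")
    if from_azure:
        # Connections from inside Azure need only the sentinel rule.
        return azure_rule in rules
    # Internet clients must fall within an explicit IP range rule.
    ip = ipaddress.ip_address(client_ip)
    return any(
        ipaddress.ip_address(start) <= ip <= ipaddress.ip_address(end)
        for (start, end) in rules
        if (start, end) != azure_rule
    )

rules = [("0.0.0.0", "0.0.0.0"), ("203.0.113.0", "203.0.113.255")]
print(connection_allowed(rules, "10.0.0.4", from_azure=True))       # True
print(connection_allowed(rules, "203.0.113.10", from_azure=False))  # True
print(connection_allowed(rules, "198.51.100.7", from_azure=False))  # False
```

As the important note warns, the sentinel rule admits traffic from any Azure subscription, so authorization must happen at the login/user-permission layer.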
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/read-scale-out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/read-scale-out.md
@@ -10,7 +10,7 @@ ms.topic: conceptual
author: anosov1960 ms.author: sashan ms.reviewer: sstein
-ms.date: 09/03/2020
+ms.date: 01/20/2021
--- # Use read-only replicas to offload read-only query workloads [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
@@ -109,12 +109,12 @@ In rare cases, if a snapshot isolation transaction accesses object metadata that
### Long-running queries on read-only replicas
-Queries running on read-only replicas need to access metadata for the objects referenced in the query (tables, indexes, statistics, etc.) In rare cases, if a metadata object is modified on the primary replica while a query holds a lock on the same object on the read-only replica, the query can [block](/sql/database-engine/availability-groups/windows/troubleshoot-primary-changes-not-reflected-on-secondary#BKMK_REDOBLOCK) the process that applies changes from the primary replica to the read-only replica. If such a query were to run for a long time, it would cause the read-only replica to be significantly out of sync with the primary replica.
+Queries running on read-only replicas need to access metadata for the objects referenced in the query (tables, indexes, statistics, and so on). In rare cases, if a metadata object is modified on the primary replica while a query holds a lock on the same object on the read-only replica, the query can [block](/sql/database-engine/availability-groups/windows/troubleshoot-primary-changes-not-reflected-on-secondary#BKMK_REDOBLOCK) the process that applies changes from the primary replica to the read-only replica. If such a query were to run for a long time, it would cause the read-only replica to be significantly out of sync with the primary replica.
-If a long-running query on a read-only replica causes this kind of blocking, it will be automatically terminated, and the session will receive error 1219, "Your session has been disconnected because of a high priority DDL operation".
+If a long-running query on a read-only replica causes this kind of blocking, it will be automatically terminated. The session will receive error 1219, "Your session has been disconnected because of a high priority DDL operation", or error 3947, "The transaction was aborted because the secondary compute failed to catch up redo. Retry the transaction."
> [!NOTE]
-> If you receive error 3961 or error 1219 when running queries against a read-only replica, retry the query.
+> If you receive error 3961, 1219, or 3947 when running queries against a read-only replica, retry the query.
> [!TIP] > In Premium and Business Critical service tiers, when connected to a read-only replica, the `redo_queue_size` and `redo_rate` columns in the [sys.dm_database_replica_states](/sql/relational-databases/system-dynamic-management-views/sys-dm-database-replica-states-azure-sql-database) DMV may be used to monitor data synchronization process, serving as indicators of data latency on the read-only replica.
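The retry guidance in the note above can be wrapped in a small helper that retries only on the documented transient error codes. A hedged Python sketch using a generic callable in place of a real database driver (how the SQL error number is surfaced differs per client library, so the extraction below is an assumption to adapt):

```python
import time

# Error codes the documentation says are safe to retry on read-only replicas.
TRANSIENT_ERRORS = {1219, 3947, 3961}

def run_with_retry(query_fn, max_attempts=3, backoff_seconds=0.0):
    """Run query_fn(), retrying on the documented transient replica errors.

    query_fn is assumed to raise RuntimeError carrying the SQL error number
    in .args[0]; real drivers expose the number differently, so adapt the
    extraction for your client library.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return query_fn()
        except RuntimeError as exc:
            code = exc.args[0] if exc.args else None
            if code not in TRANSIENT_ERRORS or attempt == max_attempts:
                raise
            time.sleep(backoff_seconds * attempt)  # linear backoff between tries

# Simulate a query that hits error 1219 once, then succeeds on retry.
attempts = []
def flaky_query():
    attempts.append(1)
    if len(attempts) == 1:
        raise RuntimeError(1219, "high priority DDL operation")
    return "rows"

print(run_with_retry(flaky_query))  # rows
```

Keep the retry set narrow: errors outside the documented transient codes usually indicate a real problem and should surface immediately.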
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: stevestein ms.author: sstein
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
@@ -272,6 +272,8 @@ The following options can't be modified:
- `SINGLE_USER` - `WITNESS`
+Some `ALTER DATABASE` statements (such as [SET CONTAINMENT](https://docs.microsoft.com/sql/relational-databases/databases/migrate-to-a-partially-contained-database?#converting-a-database-to-partially-contained-using-transact-sql)) might transiently fail, for example during an automated database backup or right after a database is created. In this case, retry the `ALTER DATABASE` statement. For more details and information on related error messages, see the [Remarks section](https://docs.microsoft.com/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-mi-current&preserve-view=true&tabs=sqlpool#remarks-2).
+ For more information, see [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql-file-and-filegroup-options). ### SQL Server Agent
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
@@ -28,7 +28,7 @@ This article provides an overview of the extension. To install the SQL Server Ia
## Overview
-The SQL Server IaaS Agent extension provides a number of benefits for SQL Server on Azure VMs:
+The SQL Server IaaS Agent extension allows for integration with the Azure portal, and depending on the management mode, unlocks a number of feature benefits for SQL Server on Azure VMs:
- **Feature benefits**: The extension unlocks a number of automation feature benefits, such as portal management, license flexibility, automated backup, automated patching and more. See [Feature benefits](#feature-benefits) later in this article for details.
@@ -68,12 +68,13 @@ The following table details these benefits:
| Feature | Description | | --- | --- |
-| **Portal management** | Unlocks [management in the portal](manage-sql-vm-portal.md), so that you can view all of your SQL Server VMs in one place, and so that you can enable and disable SQL specific features directly from the portal.
-| **Automated backup** |Automates the scheduling of backups for all databases for either the default instance or a [properly installed](frequently-asked-questions-faq.md#administration) named instance of SQL Server on the VM. For more information, see [Automated backup for SQL Server in Azure virtual machines (Resource Manager)](automated-backup-sql-2014.md). |
-| **Automated patching** |Configures a maintenance window during which important Windows and SQL Server security updates to your VM can take place, so you can avoid updates during peak times for your workload. For more information, see [Automated patching for SQL Server in Azure virtual machines (Resource Manager)](automated-patching.md). |
-| **Azure Key Vault integration** |Enables you to automatically install and configure Azure Key Vault on your SQL Server VM. For more information, see [Configure Azure Key Vault integration for SQL Server on Azure Virtual Machines (Resource Manager)](azure-key-vault-integration-configure.md). |
-| **Flexible licensing** | Save on cost by [seamlessly transitioning](licensing-model-azure-hybrid-benefit-ahb-change.md) from the bring-your-own-license (also known as the Azure Hybrid Benefit) to the pay-as-you-go licensing model and back again. |
-| **Flexible version / edition** | If you decide to change the [version](change-sql-server-version.md) or [edition](change-sql-server-edition.md) of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM. |
+| **Portal management** | Unlocks [management in the portal](manage-sql-vm-portal.md), so that you can view all of your SQL Server VMs in one place, and so that you can enable and disable SQL specific features directly from the portal. <br/> Management mode: Lightweight & full|
+| **Automated backup** |Automates the scheduling of backups for all databases for either the default instance or a [properly installed](frequently-asked-questions-faq.md#administration) named instance of SQL Server on the VM. For more information, see [Automated backup for SQL Server in Azure virtual machines (Resource Manager)](automated-backup-sql-2014.md). <br/> Management mode: Full|
+| **Automated patching** |Configures a maintenance window during which important Windows and SQL Server security updates to your VM can take place, so you can avoid updates during peak times for your workload. For more information, see [Automated patching for SQL Server in Azure virtual machines (Resource Manager)](automated-patching.md). <br/> Management mode: Full|
+| **Azure Key Vault integration** |Enables you to automatically install and configure Azure Key Vault on your SQL Server VM. For more information, see [Configure Azure Key Vault integration for SQL Server on Azure Virtual Machines (Resource Manager)](azure-key-vault-integration-configure.md). <br/> Management mode: Full|
+| **View disk utilization in portal** | Allows you to view a graphical representation of the disk utilization of your SQL data files in the Azure portal. <br/> Management mode: Full |
+| **Flexible licensing** | Save on cost by [seamlessly transitioning](licensing-model-azure-hybrid-benefit-ahb-change.md) from the bring-your-own-license (also known as the Azure Hybrid Benefit) to the pay-as-you-go licensing model and back again. <br/> Management mode: Lightweight & full|
+| **Flexible version / edition** | If you decide to change the [version](change-sql-server-version.md) or [edition](change-sql-server-edition.md) of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM. <br/> Management mode: Lightweight & full|
## Management modes
@@ -109,7 +110,7 @@ There are three ways to register with the extension:
### Named instance support
-The SQL Server IaaS Agent extension works with a named instance of SQL Server if is the only SQL Server instance available on the virtual machine. The extension fails to install on VMs that have multiple SQL Server instances.
+The SQL Server IaaS Agent extension works with a named instance of SQL Server if it is the only SQL Server instance available on the virtual machine. The extension fails to install on VMs that have multiple named SQL Server instances if there is no default instance on the VM.
To use a named instance of SQL Server, deploy an Azure virtual machine, install a single named SQL Server instance to it, and then register it with the [SQL IaaS Extension](sql-agent-extension-manually-register-single-vm.md).
@@ -222,7 +223,7 @@ No. A VM must have at least one SQL Server (Database Engine) instance to success
**Can I register a VM with the SQL IaaS Agent extension if there are multiple SQL Server instances?**
-Yes. The SQL IaaS Agent extension will register only one SQL Server (Database Engine) instance. The SQL IaaS Agent extension will register the default SQL Server instance in the case of multiple instances. If there is no default instance, then only registering in lightweight mode is supported. To upgrade from lightweight to full manageability mode, either the default SQL Server instance should exist or the VM should have only one named SQL Server instance.
+Yes, provided there is a default instance on the VM. The SQL IaaS Agent extension will register only one SQL Server (Database Engine) instance. The SQL IaaS Agent extension will register the default SQL Server instance in the case of multiple instances.
**Can I register a SQL Server failover cluster instance with the SQL IaaS Agent extension?**
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/azure-vmware-solution-horizon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-horizon.md
@@ -126,15 +126,35 @@ Horizon's sizing methodology on a host running in Azure VMware Solution is simpl
### Sizing tables
-The tables show the common workloads for Login VSI Knowledge Worker workloads and Power Worker workloads.
+Specific vCPU/vRAM requirements for Horizon virtual desktops depend on the customer's workload profile. Work with your Microsoft and VMware sales teams to determine the vCPU/vRAM requirements for your virtual desktops.
+
+| vCPU per VM | vRAM per VM (GB) | Instance | 100 VMs | 200 VMs | 300 VMs | 400 VMs | 500 VMs | 600 VMs | 700 VMs | 800 VMs | 900 VMs | 1000 VMs | 2000 VMs | 3000 VMs | 4000 VMs | 5000 VMs | 6000 VMs | 6400 VMs |
+|:-----------:|:----------------:|:--------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
+| 2 | 3.5 | AVS | 3 | 3 | 4 | 4 | 5 | 6 | 6 | 7 | 8 | 9 | 17 | 25 | 33 | 41 | 49 | 53 |
+| 2 | 4 | AVS | 3 | 3 | 4 | 5 | 6 | 6 | 7 | 8 | 9 | 9 | 18 | 26 | 34 | 42 | 51 | 54 |
+| 2 | 6 | AVS | 3 | 4 | 5 | 6 | 7 | 9 | 10 | 11 | 12 | 13 | 26 | 38 | 51 | 62 | 75 | 79 |
+| 2 | 8 | AVS | 3 | 5 | 6 | 8 | 9 | 11 | 12 | 14 | 16 | 18 | 34 | 51 | 67 | 84 | 100 | 106 |
+| 2 | 12 | AVS | 4 | 6 | 9 | 11 | 13 | 16 | 19 | 21 | 23 | 26 | 51 | 75 | 100 | 124 | 149 | 158 |
+| 2 | 16 | AVS | 5 | 8 | 11 | 14 | 18 | 21 | 24 | 27 | 30 | 34 | 67 | 100 | 133 | 165 | 198 | 211 |
+| 4 | 3.5 | AVS | 3 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 22 | 33 | 44 | 55 | 66 | 70 |
+| 4 | 4 | AVS | 3 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 22 | 33 | 44 | 55 | 66 | 70 |
+| 4 | 6 | AVS | 3 | 4 | 5 | 6 | 7 | 9 | 10 | 11 | 12 | 13 | 26 | 38 | 51 | 62 | 75 | 79 |
+| 4 | 8 | AVS | 3 | 5 | 6 | 8 | 9 | 11 | 12 | 14 | 16 | 18 | 34 | 51 | 67 | 84 | 100 | 106 |
+| 4 | 12 | AVS | 4 | 6 | 9 | 11 | 13 | 16 | 19 | 21 | 23 | 26 | 51 | 75 | 100 | 124 | 149 | 158 |
+| 4 | 16 | AVS | 5 | 8 | 11 | 14 | 18 | 21 | 24 | 27 | 30 | 34 | 67 | 100 | 133 | 165 | 198 | 211 |
+| 6 | 3.5 | AVS | 3 | 4 | 5 | 6 | 7 | 9 | 10 | 11 | 13 | 14 | 27 | 41 | 54 | 68 | 81 | 86 |
+| 6 | 4 | AVS | 3 | 4 | 5 | 6 | 7 | 9 | 10 | 11 | 13 | 14 | 27 | 41 | 54 | 68 | 81 | 86 |
+| 6 | 6 | AVS | 3 | 4 | 5 | 6 | 7 | 9 | 10 | 11 | 13 | 14 | 27 | 41 | 54 | 68 | 81 | 86 |
+| 6 | 8 | AVS | 3 | 5 | 6 | 8 | 9 | 11 | 12 | 14 | 16 | 18 | 34 | 51 | 67 | 84 | 100 | 106 |
+| 6 | 12 | AVS | 4 | 6 | 9 | 11 | 13 | 16 | 19 | 21 | 23 | 26 | 51 | 75 | 100 | 124 | 149 | 158 |
+| 6 | 16 | AVS | 5 | 8 | 11 | 14 | 18 | 21 | 24 | 27 | 30 | 34 | 67 | 100 | 133 | 165 | 198 | 211 |
+| 8 | 3.5 | AVS | 3 | 4 | 6 | 7 | 9 | 10 | 12 | 14 | 15 | 17 | 33 | 49 | 66 | 82 | 98 | 105 |
+| 8 | 4 | AVS | 3 | 4 | 6 | 7 | 9 | 10 | 12 | 14 | 15 | 17 | 33 | 49 | 66 | 82 | 98 | 105 |
+| 8 | 6 | AVS | 3 | 4 | 6 | 7 | 9 | 10 | 12 | 14 | 15 | 17 | 33 | 49 | 66 | 82 | 98 | 105 |
+| 8 | 8 | AVS | 3 | 5 | 6 | 8 | 9 | 11 | 12 | 14 | 16 | 18 | 34 | 51 | 67 | 84 | 100 | 106 |
+| 8 | 12 | AVS | 4 | 6 | 9 | 11 | 13 | 16 | 19 | 21 | 23 | 26 | 51 | 75 | 100 | 124 | 149 | 158 |
+| 8 | 16 | AVS | 5 | 8 | 11 | 14 | 18 | 21 | 24 | 27 | 30 | 34 | 67 | 100 | 133 | 165 | 198 | 211 |
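The sizing table above lends itself to a simple programmatic lookup. The sketch below is purely illustrative: only a few rows of the table are transcribed, and the `hosts_required` helper is a hypothetical name, not part of any Azure or VMware tooling.

```python
# Hypothetical lookup over a few rows of the AVS sizing table above.
# Keys: (vCPU per VM, vRAM GB per VM) -> {tabulated VM count: AVS hosts}
SIZING_TABLE = {
    (2, 3.5): {100: 3, 500: 5, 1000: 9, 6400: 53},
    (2, 8):   {100: 3, 500: 9, 1000: 18, 6400: 106},
    (4, 16):  {100: 5, 500: 18, 1000: 34, 6400: 211},
}

def hosts_required(vcpu: int, vram_gb: float, vm_count: int) -> int:
    """Return the host count for the smallest tabulated VM count
    that covers the requested number of desktops."""
    row = SIZING_TABLE[(vcpu, vram_gb)]
    for tabulated in sorted(row):
        if vm_count <= tabulated:
            return row[tabulated]
    raise ValueError(f"{vm_count} VMs exceeds the tabulated maximum")

print(hosts_required(2, 3.5, 450))  # prints 5 (covered by the 500-VM column)
```

Real deployments should size against the full table and the workload profile agreed with your sales team, not a partial transcription like this.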
-#### Knowledge worker workloads
-
-:::image type="content" source="media/horizon/common-vdi-profiles-vsi-workloads-knowledge.png" alt-text="Table of common VDI profiles for VMware Horizon for login VSI Knowledge worker workloads" lightbox="media/horizon/common-vdi-profiles-vsi-workloads-knowledge.png" border="false":::
-
-#### Power worker workloads
-
-:::image type="content" source="media/horizon/common-vdi-profiles-vsi-workloads-power.png" alt-text="Table of common VDI profiles for VMware Horizon for login VSI Power worker workloads" lightbox="media/horizon/common-vdi-profiles-vsi-workloads-power.png" border="false":::
### Horizon sizing inputs
@@ -185,24 +205,9 @@ If deployed on Azure VMware Solution and on-premises, as with a disaster recover
Work with your VMware EUC sales team to determine the Horizon licensing cost based on your needs.
-### Cost of the Horizon infrastructure VMs on Azure Virtual Network
+### Azure Instance Types
-Based on the standard deployment architecture, Horizon infrastructure VMs are made up of Connection Servers, UAGs, App Volume Managers. They're deployed in the customer's Azure Virtual Network. Additional Azure native instances are required to support High Availability (HA), Microsoft SQL, or Microsoft Active Directory (AD) services on Azure. The table lists the Azure instances based on a 2,000-desktop deployment example.
-
->[!NOTE]
->To be able to handle failure, deploy one more server than is required for the number of connections (n+1). The minimum recommended number of instances of the Connection Server, UAG and App Volumes Manager is 2, and the number of required will grow based on the amount of users the environment will support. A single Connection Server supports a maximum of 4,000 sessions, although 2,000 is recommended as a best practice. Up to seven Connection Servers are supported per pod with a recommendation of 12,000 active sessions in total per pod. For the most current numbers, see the [VMware Knowledge Base article VMware Horizon 7 Sizing Limits and Recommendations](https://kb.vmware.com/s/article/2150348).
-
-| Horizon Infrastructure Component | Azure Instance | Number of Instances Needed (for 2,000-desktops) | Comment |
-|----------------------------------|----------------|----------------------------------------------------|----------|
-| Connection Server | D4sv3 | 2 | *See Note Above* |
-| UAG | F2sv2 | 2 | *See Note Above* |
-| App Volumes Manager | D4sv3 | 2 | *See Note Above* |
-| Cloud Connector | D4sv3 | 1 | |
-| AD Controller | D4sv3 | 2 | *Option to use MSFT AD service on Azure* |
-| MS-SQL Database | D4sv3 | 2 | *Option to use SQL service on Azure* |
-| Windows file share | D4sv3 | | *Optional* |
-
-The infrastructure VM cost amounts to \$0.36 per user per month for the 2,000-desktop deployment in the example above. This example uses US East Azure instance June 2020 pricing. Your pricing may vary depending on region, options selected, and timing.
+To understand the Azure virtual machine sizes required for the Horizon infrastructure, refer to [VMware's guidelines](https://techzone.vmware.com/resource/horizon-on-azure-vmware-solution-configuration#horizon-installation-on-azure-vmware-solution).
## Next steps

To learn more about VMware Horizon on Azure VMware Solution, read the [VMware Horizon FAQ](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/horizon/vmw-horizon-on-microsoft-azure-vmware-solution-faq.pdf).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/net-app-files-with-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/net-app-files-with-azure-vmware-solution.md new file mode 100644
@@ -0,0 +1,103 @@
+---
+title: Azure NetApp Files with Azure VMware Solution
+description: Use Azure NetApp Files with Azure VMware Solution VMs to migrate and sync data across on-premises servers, Azure VMware Solution VMs, and cloud infrastructures.
+ms.topic: how-to
+ms.date: 01/20/2021
+---
+
+# Azure NetApp Files with Azure VMware Solution
+
+In this article, we'll walk through the steps of integrating Azure NetApp Files with Azure VMware Solution-based workloads. The guest operating system will run inside virtual machines (VMs) accessing Azure NetApp Files volumes.
+
+## Azure NetApp Files overview
+
+[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure first-party service for migrating and running the most demanding enterprise file workloads in the cloud, including databases, SAP, and high-performance computing applications, with no code changes.
+
+### Features
+(Services and capabilities used with Azure NetApp Files.)
+
+- **Active Directory connections**: Azure NetApp Files supports [Active Directory Domain Services and Azure Active Directory Domain Services](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md#decide-which-domain-services-to-use).
+
+- **Share Protocol**: Azure NetApp Files supports Server Message Block (SMB) and Network File System (NFS) protocols. This support means that volumes can be mounted on a Linux client and mapped on a Windows client.
+
+- **Azure VMware Solution**: Azure NetApp Files shares can be mounted from VMs that are created in the Azure VMware Solution environment.
+
+Azure NetApp Files is available in many Azure regions and supports cross-region replication. For information on Azure NetApp Files configuration methods, see [Storage hierarchy of Azure NetApp Files](../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md).
+
+## Reference architecture
+
+The following diagram illustrates a connection via Azure ExpressRoute to an Azure VMware Solution private cloud. It shows the usage of an Azure NetApp Files share, mounted on Azure VMware Solution VMs, being accessed by the Azure VMware Solution environment.
+
+![Diagram showing NetApp Files for Azure VMware Solution architecture.](media/net-app-files/net-app-files-topology.png)
+
+This article covers instructions to set up, test, and verify the Azure NetApp Files volume as a file share for Azure VMware Solution VMs. In this scenario, we have used the NFS protocol. Azure NetApp Files and Azure VMware Solution are created in the same Azure region.
+
+## Prerequisites
+
+> [!div class="checklist"]
+> * Azure subscription with Azure NetApp Files enabled
+> * Subnet for Azure NetApp Files
+> * Linux VM on Azure VMware Solution
+> * Windows VMs on Azure VMware Solution
+
+## Regions supported
+
+A list of supported regions can be found at [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/?products=netapp,azure-vmware&regions=all).
+
+## Verify pre-configured Azure NetApp Files
+
+Follow the step-by-step instructions in the following articles to create and mount Azure NetApp Files volumes onto Azure VMware Solution VMs.
+
+- [Create a NetApp account](../azure-netapp-files/azure-netapp-files-create-netapp-account.md)
+- [Set up a capacity pool](../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md)
+- [Create an SMB volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md)
+- [Create an NFS volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md)
+- [Delegate a subnet to Azure NetApp Files](../azure-netapp-files/azure-netapp-files-delegate-subnet.md)
+
+The following steps include verification of the pre-configured Azure NetApp Files created in Azure on Azure NetApp Files Premium service level.
+
+1. In the Azure portal, under **STORAGE**, select **Azure NetApp Files**. A list of your configured Azure NetApp Files accounts appears.
+
+ :::image type="content" source="media/net-app-files/azure-net-app-files-list.png" alt-text="Screenshot showing list of pre-configured Azure NetApp Files.":::
+
+2. Select a configured NetApp Files account to view its settings. For example, select **Contoso-anf2**.
+
+3. Select **Capacity pools** to verify the configured pool.
+
+ :::image type="content" source="media/net-app-files/net-app-settings.png" alt-text="Screenshot showing options to view capacity pools and volumes of a configured NetApp Files account.":::
+
+ The Capacity pools page opens showing the capacity and service level. In this example, the storage pool is configured as 4 TiB with a Premium service level.
+
+4. Select **Volumes** to view volumes created under the capacity pool. (See preceding screenshot.)
+
+5. Select a volume to view its configuration.
+
+ :::image type="content" source="media/net-app-files/azure-net-app-volumes.png" alt-text="Screenshot showing volumes created under the capacity pool.":::
+
+ A window opens showing the configuration details of the volume.
+
+ :::image type="content" source="media/net-app-files/configuration-of-volume.png" alt-text="Screenshot showing configuration details of a volume.":::
+
+ You can see that the volume anfvolume, with a size of 200 GiB, was created in capacity pool anfpool1 and exported as an NFS file share via 10.22.3.4:/ANFVOLUME. One private IP from the Azure Virtual Network (VNet) was created for Azure NetApp Files and the NFS path to mount on the VM. For information on Azure NetApp Files volume performance relative to size ("Quota"), see [Performance considerations for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-performance-considerations.md).
+
+## Verify pre-configured Azure VMware Solution VM share mapping
+
+Before showcasing the accessibility of an Azure NetApp Files share from an Azure VMware Solution VM, it's important to understand SMB and NFS share mapping. Only after configuring the SMB or NFS volumes can they be mounted as documented here.
+
+- SMB share: Create an Active Directory connection before deploying an SMB volume. The specified domain controllers must be accessible by the delegated subnet of Azure NetApp Files for a successful connection. Once the Active Directory is configured within the Azure NetApp Files account, it will appear as a selectable item while creating SMB volumes.
+
+- NFS share: Azure NetApp Files supports creating volumes using NFS or dual protocol (NFS and SMB). A volume's capacity consumption counts against its pool's provisioned capacity. NFS volumes can be mounted on a Linux server by using command lines or /etc/fstab entries.
+
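As a sketch of the NFS mounting mentioned above: the `10.22.3.4:/ANFVOLUME` export is the example used earlier in this article, while the mount point and mount options here are illustrative assumptions, not values prescribed by the service.

```shell
# Create a mount point and mount the Azure NetApp Files NFS export
# (export path from the article's example; options are illustrative).
sudo mkdir -p /mnt/anfvolume
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp \
    10.22.3.4:/ANFVOLUME /mnt/anfvolume

# Or persist the mount across reboots with an /etc/fstab entry:
# 10.22.3.4:/ANFVOLUME /mnt/anfvolume nfs rw,hard,rsize=65536,wsize=65536,vers=3,tcp 0 0
```

Check the mount instructions shown in the Azure portal for your volume, since the recommended options can differ by protocol version and workload.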
+## Use cases of Azure NetApp Files with Azure VMware Solution
+
+The following are just a few compelling Azure NetApp Files use cases.
+- Horizon profile management
+- Citrix profile management
+- Remote Desktop Services profile management
+- File shares on Azure VMware Solution
+
+## Next steps
+- Learn about [resource limits for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-resource-limits.md#resource-limits).
+- See [Guidelines for Azure NetApp Files network planning](../azure-netapp-files/azure-netapp-files-network-topologies.md).
+- Learn about [Cross-region replication of Azure NetApp Files volumes](../azure-netapp-files/cross-region-replication-introduction.md).
+- See [FAQs about Azure NetApp Files](../azure-netapp-files/azure-netapp-files-faqs.md).
backup https://docs.microsoft.com/en-us/azure/backup/azure-backup-glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/azure-backup-glossary.md
@@ -51,7 +51,7 @@ Azure Backup offers three types of replication to keep your storage and data hig
[Geo-redundant storage (GRS)](https://docs.microsoft.com/azure/storage/common/storage-redundancy#geo-redundant-storage) is the default and recommended replication option. GRS replicates your backup data to a secondary region, hundreds of miles away from the primary location of the source data. GRS costs more than LRS, but GRS provides a higher level of durability for your backup data, even if there's a regional outage.

>[!NOTE]
->For GRS vaults that have teh cross-region restore feature enabled, backup storage is upgraded from GRS to RA-GRS (Read-Access Geo-Redundant Storage).
+>For GRS vaults that have the cross-region restore feature enabled, backup storage is upgraded from GRS to RA-GRS (Read-Access Geo-Redundant Storage).
### ZRS
backup https://docs.microsoft.com/en-us/azure/backup/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
@@ -1,7 +1,7 @@
---
title: Azure Policy Regulatory Compliance controls for Azure Backup
description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample
author: dcurwin
ms.author: dacurwin
batch https://docs.microsoft.com/en-us/azure/batch/batch-customer-managed-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-customer-managed-key.md
@@ -35,7 +35,7 @@ After the account is created, you can find a unique GUID in the **Identity princ
When you create a new Batch account, specify `SystemAssigned` for the `--identity` parameter.
-```powershell
+```azurecli
resourceGroupName='myResourceGroup'
accountName='mybatchaccount'
@@ -48,7 +48,7 @@ az batch account create \
After the account is created, you can verify that system-assigned managed identity has been enabled on this account. Be sure to note the `PrincipalId`, as this value will be needed to grant this batch account access to the Key Vault.
-```powershell
+```azurecli
az batch account show \
    -n $accountName \
    -g $resourceGroupName \
@@ -96,7 +96,7 @@ In the [Azure portal](https://portal.azure.com/), go to the Batch account page.
After the Batch account is created with system-assigned managed identity and the access to Key Vault is granted, update the Batch account with the `{Key Identifier}` URL under `keyVaultProperties` parameter. Also set **encryption_key_source** as `Microsoft.KeyVault`.
-```powershell
+```azurecli
az batch account set \
    -n $accountName \
    -g $resourceGroupName \
@@ -114,7 +114,7 @@ When you create a new version of a key, update the Batch account to use the new
You can also use Azure CLI to update the version.
-```powershell
+```azurecli
az batch account set \
    -n $accountName \
    -g $resourceGroupName \
@@ -130,7 +130,7 @@ To change the key used for Batch encryption, follow these steps:
You can also use Azure CLI to use a different key.
-```powershell
+```azurecli
az batch account set \
    -n $accountName \
    -g $resourceGroupName \
batch https://docs.microsoft.com/en-us/azure/batch/batch-linux-nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-linux-nodes.md
@@ -2,7 +2,7 @@
title: Run Linux on virtual machine compute nodes
description: Learn how to process parallel compute workloads on pools of Linux virtual machines in Azure Batch.
ms.topic: how-to
-ms.date: 11/10/2020
+ms.date: 01/21/2021
ms.custom: "H1Hack27Feb2017, devx-track-python, devx-track-csharp"
---

# Provision Linux compute nodes in Batch pools
@@ -11,9 +11,7 @@ You can use Azure Batch to run parallel compute workloads on both Linux and Wind
## Virtual Machine Configuration
-When you create a pool of compute nodes in Batch, you have two options from which to select the node size and operating system: Cloud Services Configuration and Virtual Machine Configuration. Most pools of Windows compute nodes use [Cloud Services Configuration](nodes-and-pools.md#cloud-services-configuration), which specifies that the pool is composed of Azure Cloud Services nodes.These pools provide only Windows compute nodes.
-
-In contrast, [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration) specifies that the pool is composed of Azure VMs, which may be created from either Linux or Windows images. When you create a pool with Virtual Machine Configuration, you must specify an [available compute node size](../virtual-machines/sizes.md), the virtual machine image reference,and the Batch node agent SKU (a program that runs on each node and provides an interface between the node and the Batch service), and the virtual machine image reference that will be installed on the nodes.
+When you create a pool of compute nodes in Batch, you have two options from which to select the node size and operating system: Cloud Services Configuration and Virtual Machine Configuration. [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration) pools are composed of Azure VMs, which may be created from either Linux or Windows images. When you create a pool with Virtual Machine Configuration, you specify an [available compute node size](../virtual-machines/sizes.md), the virtual machine image reference to be installed on the nodes, and the Batch node agent SKU (a program that runs on each node and provides an interface between the node and the Batch service).
### Virtual machine image reference
@@ -29,7 +27,11 @@ When you create a virtual machine image reference, you must specify the followin
| Version | latest |

> [!TIP]
-> You can learn more about these properties and how to specify Marketplace images in [Find Linux VM images in the Azure Marketplace with the Azure CLI](../virtual-machines/linux/cli-ps-findimage.md). Note that not all Marketplace images are currently compatible with Batch.
+> You can learn more about these properties and how to specify Marketplace images in [Find Linux VM images in the Azure Marketplace with the Azure CLI](../virtual-machines/linux/cli-ps-findimage.md). Note that some Marketplace images are not currently compatible with Batch.
+
+### List of virtual machine images
+
+Not all Marketplace images are compatible with the currently available Batch node agents. To list all supported Marketplace virtual machine images for the Batch service and their corresponding node agent SKUs, use [list_supported_images](/python/api/azure-batch/azure.batch.operations.AccountOperations#list-supported-images-account-list-supported-images-options-none--custom-headers-none--raw-false----operation-config-) (Python), [ListSupportedImages](/dotnet/api/microsoft.azure.batch.pooloperations.listsupportedimages) (Batch .NET), or the corresponding API in another language SDK.
### Node agent SKU
@@ -39,10 +41,6 @@ The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nod
- batch.node.centos 7
- batch.node.windows amd64
-### List of virtual machine images
-
-Not all Marketplace images are compatible with the currently available Batch node agents. To list all supported Marketplace virtual machine images for the Batch service and their corresponding node agent SKUs, use [list_supported_images](/python/api/azure-batch/azure.batch.operations.AccountOperations#list-supported-images-account-list-supported-images-options-none--custom-headers-none--raw-false----operation-config-) (Python), [ListSupportedImages](/dotnet/api/microsoft.azure.batch.pooloperations.listsupportedimages) (Batch .NET), or the corresponding API in another language SDK.
-
## Create a Linux pool: Batch Python

The following code snippet shows an example of how to use the [Microsoft Azure Batch Client Library for Python](https://pypi.python.org/pypi/azure-batch) to create a pool of Ubuntu Server compute nodes. For more details about the Batch Python module, view the [reference documentation](/python/api/overview/azure/batch).
batch https://docs.microsoft.com/en-us/azure/batch/batch-powershell-cmdlets-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-powershell-cmdlets-get-started.md
@@ -2,29 +2,29 @@
title: Get started with PowerShell
description: A quick introduction to the Azure PowerShell cmdlets you can use to manage Batch resources.
ms.topic: how-to
-ms.date: 01/15/2019
+ms.date: 01/21/2021
ms.custom: seodec18, devx-track-azurepowershell
---

# Manage Batch resources with PowerShell cmdlets
-With the Azure Batch PowerShell cmdlets, you can perform and script many of the tasks you carry out with the Batch APIs, the Azure portal, and the Azure Command-Line Interface (CLI). This is a quick introduction to the cmdlets you can use to manage your Batch accounts and work with your Batch resources such as pools, jobs, and tasks.
+With the Azure Batch PowerShell cmdlets, you can perform and script many common Batch tasks. This is a quick introduction to the cmdlets you can use to manage your Batch accounts and work with your Batch resources such as pools, jobs, and tasks.
For a complete list of Batch cmdlets and detailed cmdlet syntax, see the [Azure Batch cmdlet reference](/powershell/module/az.batch).
-This article is based on cmdlets in Az Batch module 1.0.0. We recommend that you update your Azure PowerShell modules frequently to take advantage of service updates and enhancements.
+We recommend that you update your Azure PowerShell modules frequently to take advantage of service updates and enhancements.
## Prerequisites
-* [Install and configure the Azure PowerShell module](/powershell/azure/). To install a specific Azure Batch module, such as a pre-release module, see the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.Batch/1.0.0).
+- [Install and configure the Azure PowerShell module](/powershell/azure/). To install a specific Azure Batch module, such as a pre-release module, see the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.Batch/).
-* Run the **Connect-AzAccount** cmdlet to connect to your subscription (the Azure Batch cmdlets ship in the Azure Resource Manager module):
+- Run the **Connect-AzAccount** cmdlet to connect to your subscription (the Azure Batch cmdlets ship in the Azure Resource Manager module):
```powershell
Connect-AzAccount
```
-* **Register with the Batch provider namespace**. You only need to perform this operation **once per subscription**.
+- **Register with the Batch provider namespace**. You only need to perform this operation **once per subscription**.
```powershell
Register-AzResourceProvider -ProviderNamespace Microsoft.Batch
```
@@ -109,9 +109,9 @@ When using many of these cmdlets, in addition to passing a BatchContext object,
### Create a Batch pool
-When creating or updating a Batch pool, you select either the cloud services configuration or the virtual machine configuration for the operating system on the compute nodes (see [Nodes and pools](nodes-and-pools.md#configurations)). If you specify the cloud services configuration, your compute nodes are imaged with one of the [Azure Guest OS releases](../cloud-services/cloud-services-guestos-update-matrix.md#releases). If you specify the virtual machine configuration, you can either specify one of the supported Linux or Windows VM images listed in the [Azure Virtual Machines Marketplace][vm_marketplace], or provide a custom image that you have prepared.
+When creating or updating a Batch pool, you specify a [configuration](nodes-and-pools.md#configurations). Pools should generally be configured with Virtual Machine Configuration, which lets you either specify one of the supported Linux or Windows VM images listed in the [Azure Virtual Machines Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/compute?filters=virtual-machine-images&page=1), or provide a custom image that you have prepared. Cloud Services Configuration pools provide only Windows compute nodes and do not support all Batch features.
-When you run **New-AzBatchPool**, pass the operating system settings in a PSCloudServiceConfiguration or PSVirtualMachineConfiguration object. For example, the following snippet creates a Batch pool with size Standard_A1 compute nodes in the virtual machine configuration, imaged with Ubuntu Server 18.04-LTS. Here, the **VirtualMachineConfiguration** parameter specifies the *$configuration* variable as the PSVirtualMachineConfiguration object. The **BatchContext** parameter specifies a previously defined variable *$context* as the BatchAccountContext object.
+When you run **New-AzBatchPool**, pass the operating system settings in a PSVirtualMachineConfiguration or PSCloudServiceConfiguration object. For example, the following snippet creates a Batch pool with size Standard_A1 compute nodes in the virtual machine configuration, imaged with Ubuntu Server 18.04-LTS. Here, the **VirtualMachineConfiguration** parameter specifies the *$configuration* variable as the PSVirtualMachineConfiguration object. The **BatchContext** parameter specifies a previously defined variable *$context* as the BatchAccountContext object.
```powershell
$imageRef = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("UbuntuServer","Canonical","18.04-LTS")
```
@@ -185,7 +185,10 @@ Get-AzBatchComputeNode -PoolId "myPool" -BatchContext $context | Restart-AzBatch
## Application package management
-Application packages provide a simplified way to deploy applications to the compute nodes in your pools. With the Batch PowerShell cmdlets, you can upload and manage application packages in your Batch account, and deploy package versions to compute nodes.
+[Application packages](batch-application-packages.md) provide a simplified way to deploy applications to the compute nodes in your pools. With the Batch PowerShell cmdlets, you can upload and manage application packages in your Batch account, and deploy package versions to compute nodes.
+
+> [!IMPORTANT]
+> You must link an Azure Storage account to your Batch account to use application packages.
**Create** an application:
@@ -242,18 +245,14 @@ $appPackageReference.ApplicationId = "MyBatchApplication"
$appPackageReference.Version = "1.0"
```
-Now create the configuration and pool. This example uses the **CloudServiceConfiguration** parameter with a `PSCloudServiceConfiguration` type object initialized in `$configuration`, which sets the **OSFamily** to `6` for 'Windows Server 2019' and **OSVersion** to `*`. Specify the package reference object as the argument to the `ApplicationPackageReferences` option:
+Now create the pool, and specify the package reference object as the argument to the `ApplicationPackageReferences` option:
```powershell
-$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSCloudServiceConfiguration" -ArgumentList @(6,"*") # 6 = OSFamily 'Windows Server 2019'
-New-AzBatchPool -Id "PoolWithAppPackage" -VirtualMachineSize "Small" -CloudServiceConfiguration $configuration -BatchContext $context -ApplicationPackageReferences $appPackageReference
+New-AzBatchPool -Id "PoolWithAppPackage" -VirtualMachineSize "Small" -VirtualMachineConfiguration $configuration -BatchContext $context -ApplicationPackageReferences $appPackageReference
```

You can find more information on application packages in [Deploy applications to compute nodes with Batch application packages](batch-application-packages.md).
-> [!IMPORTANT]
-> You must link an Azure Storage account to your Batch account to use application packages.
- ### Update a pool's application packages To update the applications assigned to an existing pool, first create a PSApplicationPackageReference object with the desired properties (application ID and package version):
@@ -267,7 +266,7 @@ $appPackageReference.Version = "2.0"
```
-Next, get the pool from Batch, clear out any existing packages, add our new package reference, and update the Batch service with the new pool settings:
+Next, get the pool from Batch, clear out any existing packages, add the new package reference, and update the Batch service with the new pool settings:
```powershell
$pool = Get-AzBatchPool -BatchContext $context -Id "PoolWithAppPackage"
```
@@ -286,11 +285,9 @@ Get-AzBatchComputeNode -PoolId "PoolWithAppPackage" -BatchContext $context | Res
```

> [!TIP]
-> You can deploy multiple application packages to the compute nodes in a pool. If you'd like to *add* an application package instead of replacing the currently deployed packages, omit the `$pool.ApplicationPackageReferences.Clear()` line above.
+> You can deploy multiple application packages to the compute nodes in a pool. If you'd like to add an application package instead of replacing the currently deployed packages, omit the `$pool.ApplicationPackageReferences.Clear()` line above.
## Next steps
-* For detailed cmdlet syntax and examples, see [Azure Batch cmdlet reference](/powershell/module/az.batch).
-* For more information about applications and application packages in Batch, see [Deploy applications to compute nodes with Batch application packages](batch-application-packages.md).
-
-[vm_marketplace]: https://azuremarketplace.microsoft.com/marketplace/apps/category/compute?filters=virtual-machine-images&page=1
+- Review the [Azure Batch cmdlet reference](/powershell/module/az.batch) for detailed cmdlet syntax and examples.
+- Learn how to [deploy applications to compute nodes with Batch application packages](batch-application-packages.md).
batch https://docs.microsoft.com/en-us/azure/batch/budget https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/budget.md
@@ -1,21 +1,21 @@
---
-title: Cost analysis and budget
-description: Learn how to get a cost analysis and set a budget for the underlying compute resources and software licenses used to run your Batch workloads.
+title: Cost analysis and budgets
+description: Learn how to get a cost analysis, set a budget, and reduce costs for the underlying compute resources and software licenses used to run your Batch workloads.
ms.topic: how-to
-ms.date: 07/19/2019
+ms.date: 01/21/2021
---

# Cost analysis and budgets for Azure Batch
-There's no charge for Azure Batch itself, only the underlying compute resources and software licenses used to run Batch workloads. On a high level, costs are incurred from virtual machines (VMs) in a pool, data transfer from the VM, or any input or output data stored in the cloud. Let's take a look at some key components of Batch to understand where costs come from, how to set a budget for a pool or account, and some techniques for making your Batch workloads more cost efficient.
+There are no costs for using Azure Batch itself, although there can be charges for the underlying compute resources and software licenses used to run Batch workloads. Costs may be incurred from virtual machines (VMs) in a pool, data transfer from the VM, or any input or output data stored in the cloud. This topic will help you understand where costs come from, how to set a budget for a Batch pool or account, and ways to reduce the costs for Batch workloads.
## Batch resources
-Virtual machines are the most significant resource used for Batch processing. The cost of using VMs for Batch is calculated based on the type, quantity, and the duration of use. VM billing options include [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/) or [reservation](../cost-management-billing/reservations/save-compute-costs-reservations.md) (pay in advance). Both payment options have different benefits depending on your compute workload, and both payment models will affect your bill differently.
+Virtual machines are the most significant resource used for Batch processing. The cost of using VMs for Batch is calculated based on the type, quantity, and the duration of use. VM billing options include [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/) or [reservation](../cost-management-billing/reservations/save-compute-costs-reservations.md) (pay in advance). Both payment options have different benefits depending on your compute workload and will affect your bill differently.
-When applications are deployed to Batch nodes (VMs) using [application packages](batch-application-packages.md), you are billed for the Azure Storage resources that your application packages consume. You are also billed for the storage of any input or output files, such as resource files and other log data. In general, the cost of storage data associated with Batch is much lower than the cost of compute resources. Each VM in a pool created with **VirtualMachineConfiguration** has an associated OS disk that uses Azure-managed disks. Azure-managed disks have an additional cost, and other disk performance tiers have different costs as well.
+When applications are deployed to Batch nodes (VMs) using [application packages](batch-application-packages.md), you are billed for the Azure Storage resources that your application packages consume. You are also billed for the storage of any input or output files, such as resource files and other log data. In general, the cost of storage data associated with Batch is much lower than the cost of compute resources. Each VM in a pool created with [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration) has an associated OS disk that uses Azure-managed disks. Azure-managed disks have an additional cost, and other disk performance tiers have different costs as well.
-Batch pools use networking resources. In particular, for **VirtualMachineConfiguration** pools standard load balancers are used, which require static IP addresses. The load balancers used by Batch are visible for **User Subscription** accounts, but are not visible for **Batch Service** accounts. Standard load balancers incur charges for all data passed to and from Batch pool VMs; select Batch APIs that retrieve data from pool nodes (such as Get Task/Node File), task application packages, resource/output files, and container images will incur charges.
+Batch pools use networking resources. In particular, for **VirtualMachineConfiguration** pools, standard load balancers are used, which require static IP addresses. The load balancers used by Batch are visible for **User Subscription** accounts, but are not visible for **Batch Service** accounts. Standard load balancers incur charges for all data passed to and from Batch pool VMs; select Batch APIs that retrieve data from pool nodes (such as Get Task/Node File), task application packages, resource/output files, and container images will incur charges.
### Additional services
@@ -33,53 +33,47 @@ Depending on which services you use with your Batch solution, you may incur addi
## Cost analysis and budget for a pool
-Through the Azure portal, you can create budgets and spending alerts for your Batch pool(s) or Batch account. Budgets and alerts are useful for notifying stakeholders of any risks of overspending. It's possible for there to be a delay in spending alerts and to slightly exceed a budget. In this example, we'll view cost analysis of an individual Batch pool.
+In the Azure portal, you can create budgets and spending alerts for your Batch pools or Batch accounts. Budgets and alerts are useful for notifying stakeholders of any risks of overspending, although spending alerts can be delayed, and a budget can be slightly exceeded before an alert is triggered.
-1. In the Azure portal, select **Cost Management + Billing** from the left navigation bar.
-1. Select your subscription from the **My subscriptions** section
-1. Go to **Cost analysis** under the **Cost Management** section of the left nav bar, which will show a view like this:
+In this example, we'll view cost analysis of an individual Batch pool.
+
+1. In the Azure portal, search for and select **Cost Management + Billing**.
+1. Select your subscription in the **Billing scopes** section.
+1. Under **Cost Management**, select **Cost analysis**.
1. Select **Add Filter**. In the first drop-down, select **Resource**.
- ![Select the resource filter](./media/batch-budget/resource-filter.png)
-1. In the second drop-down, select the Batch pool. When the pool is selected, the cost analysis will look similar to the following analysis.
- ![Cost analysis of a pool](./media/batch-budget/pool-cost-analysis.png)
+1. In the second drop-down, select the Batch pool. When the pool is selected, you will see the cost analysis for the pool, similar to the example shown here.
+ ![Screenshot showing cost analysis of a pool in the Azure portal.](./media/batch-budget/pool-cost-analysis.png)
The resulting cost analysis shows the cost of the pool as well as the resources that contribute to this cost. In this example, the VMs used in the pool are the most costly resource.
-To create a budget for the pool select **Budget: none**, and then select **Create new budget >**. Now use the window to configure a budget specifically for your pool.
-
-For more information on configuring a budget, see [Create and manage Azure budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md).
+To create a budget for the pool, select **Budget: none**, then select **Create new budget >**. Now use the window to [configure a budget](../cost-management-billing/costs/tutorial-acm-create-budgets.md) specifically for your pool.
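If you prefer scripting, a budget can also be created with the Azure CLI. A minimal sketch, assuming the `az consumption budget` command is available in your CLI version; the budget name, amount, and dates are placeholders:

```azurecli
# Create a monthly cost budget of $500 for the current subscription scope.
az consumption budget create \
    --budget-name batch-pool-budget \
    --amount 500 \
    --category cost \
    --time-grain monthly \
    --start-date 2021-02-01 \
    --end-date 2021-12-31
```

Alert thresholds and notification recipients can then be attached to the budget in the portal, as described above.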
> [!NOTE]
-> Azure Batch is built on Azure Cloud Services and Azure Virtual Machines technology. When you choose **Cloud Services Configuration**, you are charged based on the Cloud Services pricing structure. When you choose **Virtual Machine Configuration**, you are charged based on the Virtual Machines pricing structure. The example on this page uses the **Virtual Machine Configuration**.
+> Azure Batch is built on Azure Cloud Services and Azure Virtual Machines technology. When you choose **Cloud Services Configuration**, you are charged based on the Cloud Services pricing structure. When you choose **Virtual Machine Configuration**, you are charged based on the Virtual Machines pricing structure. The example on this page uses the **Virtual Machine Configuration**, which is recommended for most Batch pools.
## Minimize cost
-Using several VMs and Azure services for extended periods of time can be costly. Fortunately, there are services available to help reduce your spending, as well as strategies for maximizing the efficiency of your workload.
+Using several VMs and Azure services for extended periods of time can be costly. Consider using these strategies to maximize the efficiency of your workloads and reduce your costs.
### Low-priority virtual machines
-Low-priority VMs reduce the cost of Batch workloads by taking advantage of surplus computing capacity in Azure. When you specify low-priority VMs in your pools, Batch uses this surplus to run your workload. There is a substantial cost saving by using low-priority VMs in place of dedicated VMs.
-
-Learn more about how to set up low-priority VMs for your workload at [Use low-priority VMs with Batch](batch-low-pri-vms.md).
+[Low-priority VMs](batch-low-pri-vms.md) reduce the cost of Batch workloads by taking advantage of surplus computing capacity in Azure. When you specify low-priority VMs in your pools, Batch uses this surplus to run your workload. There can be substantial cost savings when you use low-priority VMs instead of dedicated VMs.
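As a sketch, a pool that uses only low-priority nodes might be created with the Azure CLI as follows; the pool ID, node counts, and image details are illustrative placeholders:

```azurecli
# Create a pool with zero dedicated nodes and ten low-priority nodes.
az batch pool create \
    --id mylowpripool \
    --vm-size Standard_D2_v3 \
    --target-dedicated-nodes 0 \
    --target-low-priority-nodes 10 \
    --image canonical:ubuntuserver:18.04-lts \
    --node-agent-sku-id "batch.node.ubuntu 18.04"
```

Because low-priority nodes can be preempted, this configuration suits workloads that tolerate interruption, such as batch jobs with retryable tasks.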
### Virtual machine OS disk type
-There are multiple [VM OS disk types](../virtual-machines/disks-types.md). Most VM-series have sizes that support both premium and standard storage. When an 's' VM size is selected for a pool, Batch configures premium SSD OS disks. When the 'non-s' VM size is selected, then the cheaper, standard HDD disk type is used. For example, premium SSD OS disks are used for `Standard_D2s_v3` and standard HDD OS disks are used for `Standard_D2_v3`.
+Azure offers multiple [VM OS disk types](../virtual-machines/disks-types.md). Most VM-series have sizes that support both premium and standard storage. When an 's' VM size is selected for a pool, Batch configures premium SSD OS disks. When a 'non-s' VM size is selected, the cheaper, standard HDD disk type is used. For example, premium SSD OS disks are used for `Standard_D2s_v3` and standard HDD OS disks are used for `Standard_D2_v3`.
-Premium SSD OS disks are more expensive, but have higher performance and VMs with premium disks can start slightly quicker than VMs with standard HDD OS disks. With Batch, the OS disk is often not used much as the applications and task files are located on the VMs temporary SSD disk. Therefore in many cases, there's no need to pay the increased cost for the premium SSD that is provisioned when a 's' VM size is specified.
+Premium SSD OS disks are more expensive, but have higher performance. VMs with premium disks can start slightly quicker than VMs with standard HDD OS disks. With Batch, the OS disk is often not used much, since the applications and task files are located on the VM's temporary SSD disk. Because of this, you can often select the 'non-s' VM size to avoid paying the increased cost for the premium SSD that is provisioned when an 's' VM size is specified.
### Reserved virtual machine instances
-If you intend to use Batch for a long period of time, you can save on the cost of VMs by using [Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) for your workloads. A reservation rate is considerably lower than a pay-as-you-go rate. Virtual machine instances used without a reservation are charged at pay-as-you-go rate. If you purchase a reservation, the reservation discount is applied and you are no longer charged at the pay-as-you-go rates.
+If you intend to use Batch for a long period of time, you can reduce the cost of VMs by using [Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) for your workloads. A reservation rate is considerably lower than a pay-as-you-go rate. Virtual machine instances used without a reservation are charged at the pay-as-you-go rate. When you purchase a reservation, the reservation discount is applied.
### Automatic scaling
-[Automatic scaling](batch-automatic-scaling.md) dynamically scales the number of VMs in your Batch pool based on demands of the current job. By scaling the pool based on the lifetime of a job, automatic scaling ensures that VMs scaled up and used only when there is a job to perform. When the job is complete, or there are no jobs, the VMs are automatically scaled down to save compute resources. Scaling allows you to lower the overall cost of your Batch solution by using only the resources you need.
-
-For more information about automatic scaling, see [Automatically scale compute nodes in an Azure Batch pool](batch-automatic-scaling.md).
+[Automatic scaling](batch-automatic-scaling.md) dynamically scales the number of VMs in your Batch pool based on demands of the current job. By scaling the pool based on the lifetime of a job, automatic scaling ensures that VMs are scaled up and used only when there is a job to perform. When the job is complete, or when there are no jobs, the VMs are automatically scaled down to save compute resources. Scaling allows you to lower the overall cost of your Batch solution by using only the resources you need.
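As a sketch, autoscaling can be enabled on an existing pool with a formula like the following; the pool ID, node cap, and sampling window are illustrative placeholders:

```azurecli
# Size the pool from the average pending-task count over the last 5 minutes,
# capped at 10 dedicated nodes; let running tasks finish before removing nodes.
az batch pool autoscale enable \
    --pool-id mypool \
    --auto-scale-formula '$TargetDedicatedNodes = min(10, avg($PendingTasks.GetSample(TimeInterval_Minute * 5))); $NodeDeallocationOption = taskcompletion;'
```

See [Automatically scale compute nodes in an Azure Batch pool](batch-automatic-scaling.md) for the full formula language and evaluation intervals.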
## Next steps

- Learn more about the [Batch APIs and tools](batch-apis-tools.md) available for building and monitoring Batch solutions.
-
-- Learn about [low-priority VMs with Batch](batch-low-pri-vms.md).
+- Learn about [using low-priority VMs with Batch](batch-low-pri-vms.md).
batch https://docs.microsoft.com/en-us/azure/batch/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
@@ -1,7 +1,7 @@
---
title: Azure Policy Regulatory Compliance controls for Azure Batch
description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample
author: JnHs
ms.author: jenhayes
cdn https://docs.microsoft.com/en-us/azure/cdn/endpoint-multiorigin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/endpoint-multiorigin.md
@@ -10,7 +10,7 @@ ms.date: 9/06/2020
ms.author: allensu
---
-# Azure CDN endpoint multi-origin (Preview)
+# Azure CDN endpoint multi-origin
Multi-origin support reduces the risk of downtime by establishing global redundancy.
@@ -21,11 +21,6 @@ Setup one or more origin groups and choose a default origin group. Each origin g
> [!NOTE]
> Currently this feature is only available from Azure CDN from Microsoft.
-> [!IMPORTANT]
-> Azure CDN endpoint multi-origin is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
## Create the origin group

1. Sign in to the [Azure portal](https://portal.azure.com)
cloudfoundry https://docs.microsoft.com/en-us/azure/cloudfoundry/create-cloud-foundry-on-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloudfoundry/create-cloud-foundry-on-azure.md
@@ -38,11 +38,13 @@ For more information, see [Use SSH keys with Windows on Azure](../virtual-machin
> [!NOTE]
>
-> To create a service principal, you need owner account permission. You also can write a script to automate creating the service principal. For example, you can use the Azure CLI [az ad sp create-for-rbac](/cli/azure/ad/sp?view=azure-cli-latest).
+> To create a service principal, you need owner account permission. You also can write a script to automate creating the service principal. For example, you can use the Azure CLI [az ad sp create-for-rbac](/cli/azure/ad/sp).
1. Sign in to your Azure account.
- `az login`
+ ```azurecli
+ az login
+ ```
![Azure CLI login](media/deploy/az-login-output.png)
@@ -50,11 +52,15 @@ For more information, see [Use SSH keys with Windows on Azure](../virtual-machin
2. Set your default subscription for this configuration.
- `az account set -s {id}`
+ ```azurecli
+ az account set -s {id}
+ ```
3. Create an Azure Active Directory application for your PCF. Specify a unique alphanumeric password. Store the password as your **clientSecret** to use later.
- `az ad app create --display-name "Svc Principal for OpsManager" --password {enter-your-password} --homepage "{enter-your-homepage}" --identifier-uris {enter-your-homepage}`
+ ```azurecli
+ az ad app create --display-name "Svc Principal for OpsManager" --password {enter-your-password} --homepage "{enter-your-homepage}" --identifier-uris {enter-your-homepage}
+ ```
Copy the "appId" value in the output as your **clientID** to use later.
@@ -64,21 +70,29 @@ For more information, see [Use SSH keys with Windows on Azure](../virtual-machin
4. Create a service principal with your new app ID.
- `az ad sp create --id {appId}`
+ ```azurecli
+ az ad sp create --id {appId}
+ ```
5. Set the permission role of your service principal as a Contributor.
- `az role assignment create --assignee "{enter-your-homepage}" --role "Contributor"`
+ ```azurecli
+ az role assignment create --assignee "{enter-your-homepage}" --role "Contributor"
+ ```
Or you can also use
- `az role assignment create --assignee {service-principal-name} --role "Contributor"`
+ ```azurecli
+ az role assignment create --assignee {service-principal-name} --role "Contributor"
+ ```
![Service principal role assignment](media/deploy/svc-princ.png)

6. Verify that you can successfully sign in to your service principal by using the app ID, password, and tenant ID.
- `az login --service-principal -u {appId} -p {your-password} --tenant {tenantId}`
+ ```azurecli
+ az login --service-principal -u {appId} -p {your-password} --tenant {tenantId}
+ ```
7. Create a .json file in the following format. Use the **subscription ID**, **tenantID**, **clientID**, and **clientSecret** values you copied previously. Save the file.
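As a sketch, the file typically maps the four copied values to JSON keys like these. The file name and exact key names depend on your PCF deployment template, so treat them as placeholders:

```shell
# Write the service principal credentials to a JSON file (placeholder values).
cat > azure-credentials.json <<'EOF'
{
  "subscriptionID": "{subscription-id}",
  "tenantID": "{tenantId}",
  "clientID": "{appId}",
  "clientSecret": "{your-password}"
}
EOF
```

Keep this file out of source control, since the client secret grants Contributor access to your subscription.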
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md
@@ -37,7 +37,11 @@ var faces = await client.Face.DetectWithUrlAsync("https://www.biography.com/.ima
The Face service must then download the image from the remote server. If the connection from the Face service to the remote server is slow, that will impact the response time of the Detect method.
-To mitigate this, consider [storing the image in Azure Premium Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-upload-process-images?tabs=dotnet).
+To mitigate this, consider [storing the image in Azure Premium Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-upload-process-images?tabs=dotnet). For example:
+
+``` csharp
+var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+```
### Large upload size
@@ -53,7 +57,10 @@ If the file to upload is large, that will impact the response time of the `Detec
- It takes the service longer to process the file, in proportion to the file size.

Mitigations:
-- Consider [storing the image in Azure Premium Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-upload-process-images?tabs=dotnet).
+- Consider [storing the image in Azure Premium Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-upload-process-images?tabs=dotnet). For example:
+``` csharp
+var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+```
- Consider uploading a smaller file.
- See the guidelines regarding [input data for face detection](https://docs.microsoft.com/azure/cognitive-services/face/concepts/face-detection#input-data) and [input data for face recognition](https://docs.microsoft.com/azure/cognitive-services/face/concepts/face-recognition#input-data).
- For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size will increase processing speed. When using detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080.
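The same mitigation applies when calling the Face REST API directly: pass the blob URL in the request body instead of uploading the image bytes. A sketch using curl, where the region, subscription key, and blob URL are placeholders:

```shell
# Detect faces from an image already stored in Azure Blob Storage.
curl -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect" \
    -H "Ocp-Apim-Subscription-Key: {subscription-key}" \
    -H "Content-Type: application/json" \
    -d '{"url": "https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg"}'
```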
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/includes/luis-portal-note https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/includes/luis-portal-note.md
@@ -7,10 +7,10 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: language-understanding
ms.topic: include
-ms.date: 01/08/2021
+ms.date: 01/21/2021
---

> [!NOTE]
-> Starting January 18th, the regional portals (au.luis.ai and eu.luis.ai) will be consolidated into a single portal and URL. If you were using one of these portals, you will be automatically re-directed to [luis.ai](https://luis.ai/). You will continue using the same regional resources you created and your data will continue to be saved and processed in the same region as your resource.
+> Starting January 20th, the regional portals (au.luis.ai and eu.luis.ai) will be consolidated into a single portal and URL. If you were using one of these portals, you will be automatically re-directed to [luis.ai](https://luis.ai/). You will continue using the same regional resources you created and your data will continue to be saved and processed in the same region as your resource.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/includes/portal-consolidation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/includes/portal-consolidation.md new file mode 100644
@@ -0,0 +1,16 @@
+---
+title: portal consolidation include file
+description: portal consolidation include file
+services: cognitive-services
+manager: nitinme
+author: aahill
+ms.author: aahi
+ms.service: cognitive-services
+ms.subservice: language-understanding
+ms.date: 01/21/2021
+ms.topic: include
+
+---
+
+> [!NOTE]
+> As of January 20th, 2021, the au.luis.ai and eu.luis.ai portals have been consolidated into a single LUIS portal. If you were using one of these portals, access LUIS at [luis.ai](https://luis.ai) instead.
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-how-to-collaborate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-collaborate.md
@@ -3,13 +3,14 @@ title: Collaborate with others - LUIS
titleSuffix: Azure Cognitive Services
description: An app owner can add contributors to the authoring resource. These contributors can modify the model, train, and publish the app.
services: cognitive-services
-
+author: aahill
+ms.author: aahi
manager: nitinme
ms.custom: seodec18
ms.service: cognitive-services
ms.subservice: language-understanding
ms.topic: how-to
-ms.date: 12/08/2020
+ms.date: 01/21/2021
---
@@ -63,9 +64,7 @@ LUIS uses standard Azure Active Directory (Azure AD) consent flow.
The tenant admin should work directly with the user who needs access granted to use LUIS in the Azure AD. * First, the user signs into LUIS, and sees the pop-up dialog needing admin approval. The user contacts the tenant admin before continuing.
-* Second, the tenant admin signs into LUIS, and sees a consent flow pop-up dialog. This is the dialog the admin needs to give permission for the user. Once the admin accepts the permission, the user is able to continue with LUIS. If the tenant admin will not sign in to LUIS, the admin can access [consent](https://account.activedirectory.windowsazure.com/r#/applications) for LUIS, shown in the following screenshot. Notice the list is filtered to items that include the name `LUIS`.
-
-![Azure active directory permission by app website](./media/luis-how-to-collaborate/tenant-permissions.png)
+* Second, the tenant admin signs into LUIS, and sees a consent flow pop-up dialog. This is the dialog the admin needs to give permission for the user. Once the admin accepts the permission, the user is able to continue with LUIS. If the tenant admin will not sign in to LUIS, the admin can access [consent](https://account.activedirectory.windowsazure.com/r#/applications) for LUIS. On this page you can filter the list to items that include the name `LUIS`.
If the tenant admin only wants certain users to use LUIS, there are a couple of possible solutions: * Giving the "admin consent" (consent to all users of the Azure AD), but then set to "Yes" the "User assignment required" under Enterprise Application Properties, and finally assign/add only the wanted users to the Application. With this method, the Administrator is still providing "admin consent" to the App, however, it's possible to control the users that can access it.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-reference-regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-reference-regions.md
@@ -3,31 +3,31 @@ title: Publishing regions & endpoints - LUIS
description: The region specified in the Azure portal is the same where you will publish the LUIS app and an endpoint URL is generated for this same region.
ms.service: cognitive-services
ms.subservice: language-understanding
+author: aahill
+ms.author: aahi
ms.topic: reference
-ms.date: 11/09/2020
+ms.date: 01/21/2021
+ms.custom: references_regions
---
-# Authoring and publishing regions and the associated keys
-[!INCLUDE [LUIS Free account](includes/luis-portal-note.md)]
+# Authoring and publishing regions and the associated keys
-Three authoring regions are supported by corresponding LUIS portals. To publish a LUIS app to more than one region, you need at least one key per region.
+LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one key per region.
<a name="luis-website"></a>

## LUIS Authoring regions
-There are three LUIS authoring portals, based on region. You must author and publish in the same region.
-|LUIS|Authoring region|Azure region name|
-|--|--|--|
-|[www.luis.ai][www.luis.ai] |U.S.<br>not Europe<br>not Australia| `westus`|
-|[au.luis.ai][au.luis.ai] |Australia| `australiaeast`|
-|[eu.luis.ai][eu.luis.ai] |Europe|`westeurope`|
+[!INCLUDE [portal consolidation](includes/portal-consolidation.md)]
-Authoring regions have [paired fail-over regions](../../best-practices-availability-paired-regions.md).
+LUIS has one portal you can use regardless of region, [www.luis.ai](https://www.luis.ai). You must still author and publish in the same region.
+
+Authoring regions have [paired fail-over regions](../../best-practices-availability-paired-regions.md).
<a name="regions-and-azure-resources"></a>

## Publishing regions and Azure resources
+
The app is published to all regions associated with the LUIS resources added in the LUIS portal. For example, for an app created on [www.luis.ai][www.luis.ai], if you create a LUIS or Cognitive Service resource in **westus** and [add it to the app as a resource](luis-how-to-azure-subscription.md), the app is published in that region.

## Public apps
@@ -39,57 +39,46 @@ A public app is published in all regions so that a user with a region-based LUIS
The authoring region app can only be published to a corresponding publish region. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region for your publishing region.
-LUIS apps created on https://www.luis.ai can be published to all endpoints except the [European](#publishing-to-europe) and [Australian](#publishing-to-australia) regions.
+> [!NOTE]
+> LUIS apps created on https://www.luis.ai can now be published to all endpoints including the [European](#publishing-to-europe) and [Australian](#publishing-to-australia) regions.
## Publishing to Europe
-To publish to the European regions, you create LUIS apps at https://eu.luis.ai only. If you attempt to publish anywhere else using a key in the Europe region, LUIS displays a warning message. Instead, use https://eu.luis.ai. LUIS apps created at [https://eu.luis.ai][eu.luis.ai] don't automatically migrate to other regions. Export and then import the LUIS app in order to migrate it.
-
-## Europe publishing regions
-
- Global region | Authoring API region & authoring website| Publishing & querying region<br>`API region name` | Endpoint URL format |
+ Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
|-----|------|------|------|
-| [Europe](#publishing-to-europe)| `westeurope`<br>[eu.luis.ai][eu.luis.ai]| France Central<br>`francecentral` | `https://francecentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| [Europe](#publishing-to-europe)| `westeurope`<br>[eu.luis.ai][eu.luis.ai]| North Europe<br>`northeurope` | `https://northeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| [Europe](#publishing-to-europe) | `westeurope`<br>[eu.luis.ai][eu.luis.ai]| West Europe<br>`westeurope` | `https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| [Europe](#publishing-to-europe) | `westeurope`<br>[eu.luis.ai][eu.luis.ai]| UK South<br>`uksouth` | `https://uksouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| France Central<br>`francecentral` | `https://francecentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| North Europe<br>`northeurope` | `https://northeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| West Europe<br>`westeurope` | `https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| UK South<br>`uksouth` | `https://uksouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
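Once published, an endpoint in any of these regions can be queried over HTTPS with the `q` parameter carrying the utterance. A sketch using curl, where the app ID, subscription key, and utterance are placeholders:

```shell
# Query a LUIS app published to the West Europe region.
curl "https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subscription-key}&q=book%20a%20flight"
```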
## Publishing to Australia
-To publish to the Australian regions, you create LUIS apps at https://au.luis.ai only. If you attempt to publish anywhere else using a key in the Australian region, LUIS displays a warning message. Instead, use https://au.luis.ai. LUIS apps created at [https://au.luis.ai][au.luis.ai] don't automatically migrate to other regions. Export and then import the LUIS app in order to migrate it.
-
-## Australia publishing regions
-
- Global region | Authoring API region & authoring website| Publishing & querying region<br>`API region name` | Endpoint URL format |
+ Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
|-----|------|------|------|
-| [Australia](#publishing-to-australia) | `australiaeast`<br>[au.luis.ai][au.luis.ai]| Australia East<br>`australiaeast` | `https://australiaeast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-
-## Publishing to other regions
-
-To publish to the other regions, you create LUIS apps at [https://www.luis.ai](https://www.luis.ai) only.
+| Australia | `australiaeast` | Australia East<br>`australiaeast` | `https://australiaeast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
## Other publishing regions
- Global region | Authoring API region & authoring website| Publishing & querying region<br>`API region name` | Endpoint URL format |
+ Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
|-----|------|------|------|
-| Africa | `westus`<br>[www.luis.ai][www.luis.ai]| South Africa North<br>`southafricanorth` | `https://southafricanorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Central India<br>`centralindia` | `https://centralindia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| East Asia<br>`eastasia` | `https://eastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan East<br>`japaneast` | `https://japaneast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan West<br>`japanwest` | `https://japanwest.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Korea Central<br>`koreacentral` | `https://koreacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Southeast Asia<br>`southeastasia` | `https://southeastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| North UAE<br>`northuae` | `https://northuae.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Canada Central<br>`canadacentral` | `https://canadacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Central US<br>`centralus` | `https://centralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | East US<br>`eastus` | `https://eastus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America | `westus`<br>[www.luis.ai][www.luis.ai] | East US 2<br>`eastus2` | `https://eastus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America | `westus`<br>[www.luis.ai][www.luis.ai] | North Central US<br>`northcentralus` | `https://northcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America | `westus`<br>[www.luis.ai][www.luis.ai] | South Central US<br>`southcentralus` | `https://southcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West Central US<br>`westcentralus` | `https://westcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America | `westus`<br>[www.luis.ai][www.luis.ai] | West US<br>`westus` | `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West US 2<br>`westus2` | `https://westus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| South America | `westus`<br>[www.luis.ai][www.luis.ai] | Brazil South<br>`brazilsouth` | `https://brazilsouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Africa | `westus`<br>[www.luis.ai][www.luis.ai]| South Africa North<br>`southafricanorth` | `https://southafricanorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Central India<br>`centralindia` | `https://centralindia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| East Asia<br>`eastasia` | `https://eastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan East<br>`japaneast` | `https://japaneast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan West<br>`japanwest` | `https://japanwest.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Korea Central<br>`koreacentral` | `https://koreacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Southeast Asia<br>`southeastasia` | `https://southeastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| North UAE<br>`northuae` | `https://northuae.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Canada Central<br>`canadacentral` | `https://canadacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Central US<br>`centralus` | `https://centralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | East US<br>`eastus` | `https://eastus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America | `westus`<br>[www.luis.ai][www.luis.ai] | East US 2<br>`eastus2` | `https://eastus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America | `westus`<br>[www.luis.ai][www.luis.ai] | North Central US<br>`northcentralus` | `https://northcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America | `westus`<br>[www.luis.ai][www.luis.ai] | South Central US<br>`southcentralus` | `https://southcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West Central US<br>`westcentralus` | `https://westcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America | `westus`<br>[www.luis.ai][www.luis.ai] | West US<br>`westus` | `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West US 2<br>`westus2` | `https://westus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| South America | `westus`<br>[www.luis.ai][www.luis.ai] | Brazil South<br>`brazilsouth` | `https://brazilsouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
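The endpoint URL pattern in these tables is identical across regions; only the region subdomain changes. A minimal sketch of building a prediction URL for a published app, assuming the standard LUIS v2.0 `q` query-string parameter (the helper name is illustrative, not part of the documented API):

```python
from urllib.parse import quote

def luis_prediction_url(region: str, app_id: str, key: str, utterance: str) -> str:
    """Build a LUIS v2.0 prediction URL for an app published to `region`,
    following the endpoint URL format shown in the tables above. The `q`
    parameter carries the utterance to analyze."""
    return (
        f"https://{region}.api.cognitive.microsoft.com"
        f"/luis/v2.0/apps/{app_id}"
        f"?subscription-key={key}&q={quote(utterance)}"
    )

url = luis_prediction_url("westus", "YOUR-APP-ID", "YOUR-SUBSCRIPTION-KEY", "book a flight")
```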
## Endpoints
@@ -106,6 +95,4 @@ Authoring regions have [paired fail-over regions](../../best-practices-availabil
> [!div class="nextstepaction"] > [Prebuilt entities reference](./luis-reference-prebuilt-entities.md)
- [www.luis.ai]: https://www.luis.ai
- [au.luis.ai]: https://au.luis.ai
- [eu.luis.ai]: https://eu.luis.ai
+ [www.luis.ai]: https://www.luis.ai
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-user-privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-user-privacy.md
@@ -5,11 +5,11 @@ description: You have full control over viewing, exporting, and deleting their d
services: cognitive-services
manager: nitinme
-ms.custom: seodec18
+ms.custom: seodec18, references_regions
ms.service: cognitive-services
ms.subservice: language-understanding
ms.topic: reference
-ms.date: 12/08/2020
+ms.date: 12/10/2020
---
@@ -55,31 +55,38 @@ To enable [active learning](luis-how-to-review-endpoint-utterances.md#log-user-q
With the exception of active learning data (detailed below), LUIS follows the [data storage practices for regional services](https://azuredatacentermap.azurewebsites.net/).
+[!INCLUDE [portal consolidation](includes/portal-consolidation.md)]
+
### Europe
-The [eu.luis.ai](https://eu.luis.ai) portal and Europe Authoring (also known as Programmatic APIs ) are hosted in Azure's Europe geography. The eu.luis.ai portal and Europe Authoring (also known as Programmatic APIs) support deployment of endpoints to the following Azure geographies:
+Europe Authoring (also known as Programmatic APIs) resources are hosted in Azure's Europe geography, and support deployment of endpoints to the following Azure geographies:
* Europe
* France
* United Kingdom
-When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Europe geography for active learning. You can disable active learning, see [Disable active learning](luis-how-to-review-endpoint-utterances.md#disable-active-learning). To manage stored utterances, see [Delete utterance](luis-how-to-review-endpoint-utterances.md#delete-utterance).
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Europe geography for active learning.
### Australia
-The [au.luis.ai](https://au.luis.ai) portal and Australia Authoring (also known as Programmatic APIs) are hosted in Azure's Australia geography. The au.luis.ai portal and Australia Authoring (also known as Programmatic APIs) support deployment of endpoints to the following Azure geographies:
+Australia Authoring (also known as Programmatic APIs) resources are hosted in Azure's Australia geography, and support deployment of endpoints to the following Azure geographies:
* Australia
-When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Australia geography for active learning. You can disable active learning, see [Disable active learning](luis-how-to-review-endpoint-utterances.md#disable-active-learning). To manage stored utterances, see [Delete utterance](luis-how-to-review-endpoint-utterances.md#delete-utterance).
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Australia geography for active learning.
### United States
-The [luis.ai](https://www.luis.ai) portal and United States Authoring (also known as Programmatic APIs) are hosted in Azure's United States geography. The luis.ai portal and United States Authoring (also known as Programmatic APIs) support deployment of endpoints to the following Azure geographies:
+United States Authoring (also known as Programmatic APIs) resources are hosted in Azure's United States geography, and support deployment of endpoints to the following Azure geographies:
* Azure geographies not supported by the Europe or Australia authoring regions
-When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's United States geography for active learning. You can disable active learning, see [Disable active learning](luis-how-to-review-endpoint-utterances.md#disable-active-learning). To manage stored utterances, see [Delete utterance](luis-how-to-review-endpoint-utterances.md#delete-utterance).
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's United States geography for active learning.
+
+## Disable active learning
+
+To disable active learning, see [Disable active learning](luis-how-to-review-endpoint-utterances.md#disable-active-learning). To manage stored utterances, see [Delete utterance](luis-how-to-review-endpoint-utterances.md#delete-utterance).
## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/batch-transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/batch-transcription.md
@@ -209,7 +209,8 @@ Each transcription result file has this format:
], "recognizedPhrases": [ // results for each phrase and each channel individually {
- "recognitionStatus": "Success", // recognition state, e.g. "Success", "Failure"
+ "recognitionStatus": "Success", // recognition state, e.g. "Success", "Failure"
+ "speaker": 1, // if `diarizationEnabled` is `true`, this is the identified speaker (1 or 2), otherwise this property is not present
"channel": 0, // channel number of the result "offset": "PT0.07S", // offset in audio of this phrase, ISO 8601 encoded duration "duration": "PT1.59S", // audio duration of this phrase, ISO 8601 encoded duration
@@ -220,7 +221,6 @@ Each transcription result file has this format:
"nBest": [ { "confidence": 0.898652852, // confidence value for the recognition of the whole phrase
- "speaker": 1, // if `diarizationEnabled` is `true`, this is the identified speaker (1 or 2), otherwise this property is not present
"lexical": "hello world", "itn": "hello world", "maskedITN": "hello world",
@@ -422,4 +422,4 @@ This sample code doesn't specify a custom model. The service uses the baseline m
## Next steps

-- [Speech to text v3 API reference](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)
\ No newline at end of file
+- [Speech to text v3 API reference](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)
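This change moves the `speaker` property from each `nBest` hypothesis up to the `recognizedPhrases` element, so a consumer can group phrases by speaker directly. A hedged sketch using only the field names shown in the format above (`phrases_by_speaker` is an illustrative helper, not part of the service API):

```python
import json

def phrases_by_speaker(result_text: str) -> dict:
    """Group the top-hypothesis lexical text of each successfully recognized
    phrase by its `speaker` value (present only when diarizationEnabled is true)."""
    result = json.loads(result_text)
    grouped: dict = {}
    for phrase in result.get("recognizedPhrases", []):
        if phrase.get("recognitionStatus") != "Success":
            continue  # skip phrases the service failed to recognize
        speaker = phrase.get("speaker")  # 1 or 2 with diarization, else None
        text = phrase["nBest"][0]["lexical"] if phrase.get("nBest") else ""
        grouped.setdefault(speaker, []).append(text)
    return grouped
```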
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice-create-voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
@@ -47,22 +47,17 @@ The following table shows the processing states for imported datasets:
After validation is complete, you can see the total number of matched utterances for each of your datasets in the **Utterances** column. If the data type you selected requires long-audio segmentation, this column only reflects the utterances we have segmented for you, either based on your transcripts or through the speech transcription service. You can also download the validated dataset to view the detailed results of the successfully imported utterances and their mapped transcripts. Hint: long-audio segmentation can take more than an hour to complete.
-In the data detail view, you can further check the pronunciation scores and the noise level for each of your datasets. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and impact the generated digital voice.
+For en-US and zh-CN datasets, you can further download a report to check the pronunciation scores and the noise level for each of your recordings. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and impact the generated digital voice.
A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 50+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice. Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, you might exclude those utterances from your dataset.
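The SNR thresholds above can be checked on your own recordings before upload. A rough sketch of the standard decibel formula, assuming you can supply a noise-only segment (for example, room tone captured before the utterance); this illustrates the metric, not the portal's exact measurement:

```python
import math

def snr_db(speech: list, noise: list) -> float:
    """Signal-to-noise ratio in dB: 10 * log10(P_signal / P_noise),
    where power is the mean of squared sample amplitudes."""
    def power(samples: list) -> float:
        return sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(power(speech) / power(noise))
```

With this definition, speech samples 10x the amplitude of the noise floor yield 20 dB, well below the 50+ dB achievable in a professional studio.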
-> [!NOTE]
-> It is required that if you are using Custom Neural Voice, you must register your voice talent in the **Voice Talent** tab. When preparing your recording script, make sure you include the below sentence to acquire the voice talent acknowledgement of using their voice data to create a TTS voice model and generate synthetic speech.
"I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
-This sentence will be used to verify if the recordings in your training datasets are done by the same person that makes the consent. [Read more about how your data will be processed and how voice talent verification is done here](https://aka.ms/CNV-data-privacy).
-
## Build your custom voice model

After your dataset has been validated, you can use it to build your custom voice model.
-1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Model**.
+1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Training**.
2. Click **Train model**.
@@ -72,22 +67,15 @@ After your dataset has been validated, you can use it to build your custom voice
A common use of the **Description** field is to record the names of the datasets that were used to create the model.
-4. From the **Select training data** page, choose one or multiple datasets that you would like to use for training. Check the number of utterances before you submit them. You can start with any number of utterances for en-US and zh-CN voice models using the "Adaptive" training method. For other locales, you must select more than 2,000 utterances to be able to train a voice using a standard tier including the "Statistical parametric" and "Concatenative" training methods, and more than 300 utterances to train a custom neural voice.
+4. From the **Select training data** page, choose one or multiple datasets that you would like to use for training. Check the number of utterances before you submit them. You can start with any number of utterances for en-US and zh-CN voice models. For other locales, you must select more than 2,000 utterances to be able to train a voice.
> [!NOTE]
> Duplicate audio names will be removed from the training. Make sure the datasets you select do not contain the same audio names across multiple .zip files.

> [!TIP]
- > Using the datasets from the same speaker is required for quality results. Different training methods require different training data size. To train a model with the "Statistical parametric" method, at least 2,000 distinct utterances are required. For the "Concatenative" method, it's 6,000 utterances, while for "Neural", the minimum data size requirement is 300 utterances.
+ > Using datasets from the same speaker is required for quality results. When the datasets you submit for training contain fewer than 6,000 distinct utterances in total, your voice model is trained with the Statistical Parametric Synthesis technique. If your training data exceeds 6,000 distinct utterances, training uses the Concatenation Synthesis technique, which normally produces more natural, higher-fidelity voice results. [Contact the Custom Voice team](https://go.microsoft.com/fwlink/?linkid=2108737) if you want to train a model with the latest Neural TTS technology, which can produce a digital voice equivalent to the publicly available [neural voices](language-support.md#neural-voices).
-5. Select the **training method** in the next step.
-
- > [!NOTE]
- > If you would like to train a neural voice, you must specify a voice talent profile with the audio consent file provided of the voice talent acknowledging to use his/her speech data to train a custom voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply the access here](https://aka.ms/customneural).
-
- On this page you can also select to upload your script for testing. The testing script must be a txt file, less than 1Mb. Supported encoding format includes ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. Each paragraph of the utterance will result in a separate audio. If you want to combine all sentences into one audio, make them in one paragraph.
-
-6. Click **Train** to begin creating your voice model.
+5. Click **Train** to begin creating your voice model.
The Training table displays a new entry that corresponds to this newly created model. The table also displays the status: Processing, Succeeded, Failed.
@@ -99,14 +87,11 @@ The status that's shown reflects the process of converting your dataset to a voi
| Succeeded | Your voice model has been created and can be deployed. |
| Failed | Your voice model failed in training for various reasons, for example unseen data problems or network issues. |
-Training time varies depending on the volume of audio data processed and the training method you have selected. It can range from 30 minutes to 40 hours. Once your model training is succeeded, you can start to test it.
+Training time varies depending on the volume of audio data processed. Typical times range from about 30 minutes for hundreds of utterances to 40 hours for 20,000 utterances. Once your model training succeeds, you can start to test it.
> [!NOTE]
> Free subscription (F0) users can train one voice font simultaneously. Standard subscription (S0) users can train three voices simultaneously. If you reach the limit, wait until at least one of your voice fonts finishes training, and then try again.
-> [!NOTE]
-> Training of custom neural voices is not free. Check the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) here.
-
> [!NOTE]
> The maximum number of voice models allowed to be trained per subscription is 10 models for free subscription (F0) users and 100 for standard subscription (S0) users.
@@ -114,28 +99,33 @@ If you are using the neural voice training capability, you can select to train a
## Test your voice model
-Each training will generate 100 sample audios automatically to help you test the model. After your voice model is successfully built, you can test it before deploying it for use.
+After your voice font is successfully built, you can test it before deploying it for use.
-1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Model**.
+1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Testing**.
-2. Click the name of the model you would like to test.
+2. Click **Add test**.
-3. On the model detail page, you can find the sample audios under the **Testing** tab.
+3. Select one or multiple models that you would like to test.
-The quality of the voice depends on a number of factors, including the size of the training data, the quality of the recording, the accuracy of the transcript file, how well the recorded voice in the training data matches the personality of the designed voice for your intended use case, and more. [Check here to learn more about the capabilities and limits of our technology and the best practice to improve your model quality](https://aka.ms/CNV-limits).
+4. Provide the text you want the voice(s) to speak. If you have selected to test multiple models at one time, the same text is used to test each model.
+
+ > [!NOTE]
+ > The language of your text must be the same as the language of your voice font. Only successfully trained models can be tested. Only plain text is supported in this step.
+
+5. Click **Create**.
+
+Once you have submitted your test request, you will return to the test page. The table now includes an entry that corresponds to your new request and the status column. It can take a few minutes to synthesize speech. When the status column says **Succeeded**, you can play the audio, or download the text input (a .txt file) and audio output (a .wav file), and further audition the latter for quality.
+
+You can also find the test results on the detail page of each model you selected for testing. Go to the **Training** tab, and click the model name to enter the model detail page.
## Create and use a custom voice endpoint

After you've successfully created and tested your voice model, you deploy it in a custom Text-to-Speech endpoint. You then use this endpoint in place of the usual endpoint when making Text-to-Speech requests through the REST API. Your custom endpoint can be called only by the subscription that you have used to deploy the font.
-To create a new custom voice endpoint, go to **Text-to-Speech > Custom Voice > Endpoint**. Select **Add endpoint** and enter a **Name** and **Description** for your custom endpoint. Then select the custom voice model you would like to associate with this endpoint.
+To create a new custom voice endpoint, go to **Text-to-Speech > Custom Voice > Deployment**. Select **Add endpoint** and enter a **Name** and **Description** for your custom endpoint. Then select the custom voice model you would like to associate with this endpoint.
After you have clicked the **Add** button, in the endpoint table, you will see an entry for your new endpoint. It may take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
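Once the endpoint status is **Succeeded**, requests use the same SSML body as the standard Text-to-Speech REST API, with your custom voice's name in the `voice` element. A minimal sketch of building that payload (the helper and the voice name are illustrative; see the [Text-to-Speech API reference](rest-text-to-speech.md) for the full request headers):

```python
def ssml_payload(voice_name: str, text: str, lang: str = "en-US") -> str:
    """Build the SSML body for a Text-to-Speech REST request that targets
    a custom voice by its deployed voice name."""
    return (
        f"<speak version='1.0' xml:lang='{lang}'>"
        f"<voice xml:lang='{lang}' name='{voice_name}'>{text}</voice>"
        "</speak>"
    )

body = ssml_payload("MyCustomVoice", "Hello from my custom voice.")
```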
-You can **Suspend** and **Resume** your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL will be kept the same so you don't need to change your code in your apps.
-
-You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.
- > [!NOTE] > Free subscription (F0) users can have only one model deployed. Standard subscription (S0) users can create up to 50 endpoints, each with its own custom voice.
@@ -152,4 +142,4 @@ The custom endpoint is functionally identical to the standard endpoint that's us
* [Guide: Record your voice samples](record-custom-voice-samples.md)
* [Text-to-Speech API reference](rest-text-to-speech.md)
-* [Long Audio API](long-audio-api.md)
+* [Long Audio API](long-audio-api.md)
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
@@ -17,15 +17,7 @@ ms.author: erhopf
When you're ready to create a custom text-to-speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
-Before you can train your own text-to-speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they are used, and how to manage each.
-
-> [!NOTE]
-> If you would like to train a neural voice, you must specify a voice talent profile with the audio consent file provided of the voice talent acknowledging to use his/her speech data to train a custom voice model. When preparing your recording script, make sure you include the below sentence.
-
> "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
-This sentence will be used to verify if the training data is done by the same person that makes the consent. Read more about the [voice talent verification](https://aka.ms/CNV-data-privacy) here.
-
-> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply the access here](https://aka.ms/customneural).
+You can start with a small amount of data to create a proof of concept. However, the more data that you provide, the more natural your custom voice will sound. Before you can train your own text-to-speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they are used, and how to manage each.
## Data types
@@ -35,22 +27,22 @@ In some cases, you may not have the right dataset ready and will want to test th
This table lists data types and how each is used to create a custom text-to-speech voice model.
-| Data type | Description | When to use | Additional processing required |
-| --------- | ----------- | ----------- | --------------------------- |
-| **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
-| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a transcript (.txt) that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. |
-| **Audio only (beta)** | A collection (.zip) of audio files without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.|
+| Data type | Description | When to use | Additional service required | Quantity for training a model | Locale(s) |
+| --------- | ----------- | ----------- | --------------------------- | ----------------------------- | --------- |
+| **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. | No hard requirement for en-US and zh-CN. More than 2,000+ distinct utterances for other locales. | [All Custom Voice locales](language-support.md#customization) |
+| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a transcript (.txt) that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. | No hard requirement | [All Custom Voice locales](language-support.md#customization) |
+| **Audio only (beta)** | A collection (.zip) of audio files without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.| No hard requirement | [All Custom Voice locales](language-support.md#customization) |
Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type.

> [!NOTE]
-> The maximum number of datasets allowed to be imported per subscription is 10 zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
+> The maximum number of datasets allowed to be imported per subscription is 10 .zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
## Individual utterances + matching transcript

You can prepare recordings of individual utterances and the matching transcript in two ways. Either write a script and have it read by a voice talent or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
-To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
+To produce a good voice font, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
> [!TIP] > To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [How to record voice samples for a custom voice](record-custom-voice-samples.md).
@@ -94,6 +86,9 @@ Below is an example of how the transcripts are organized utterance by utterance
```

It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during the training.
+> [!TIP]
+> When building production text-to-speech voices, select utterances (or write scripts) that take into account both phonetic coverage and efficiency. Having trouble getting the results you want? [Contact the Custom Voice team](mailto:speechsupport@microsoft.com) to find out more about having us consult.
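Before uploading, a quick script can enforce that every transcript line is well formed and maps to an audio file — cheap insurance against the quality loss described above. This is a sketch that assumes a one-utterance-per-line, tab-separated layout (`<id>` then `<text>`, one WAV per id); verify the exact layout against the data requirements described above.

```python
import re

# Assumed transcript layout: "<utterance_id>\t<text>" per line, where each
# id matches a WAV file named "<utterance_id>.wav" in the same dataset.
LINE_RE = re.compile(r"^(\d+)\t(\S.*)$")

def check_transcript(transcript_text, wav_names):
    """Return a list of problems found in a transcript string.

    wav_names is the set of WAV file names (without extension) in the dataset.
    """
    problems = []
    seen = set()
    for n, line in enumerate(transcript_text.splitlines(), start=1):
        m = LINE_RE.match(line)
        if not m:
            problems.append(f"line {n}: not '<id>\\t<text>'")
            continue
        uid = m.group(1)
        if uid in seen:
            problems.append(f"line {n}: duplicate id {uid}")
        seen.add(uid)
        if uid not in wav_names:
            problems.append(f"line {n}: no matching WAV for id {uid}")
    return problems
```

This catches format slips and orphaned lines, but accuracy of the text itself still needs a human review pass.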
+ ## Long audio + transcript (beta) In some cases, you may not have segmented audio available. We provide a service (beta) through the custom voice portal to help you segment long audio files and create transcriptions. Keep in mind, this service will be charged toward your speech-to-text subscription usage.
@@ -153,4 +148,4 @@ All audio files should be grouped into a zip file. Once your dataset is successf
## Next steps - [Create a Custom Voice](how-to-custom-voice-create-voice.md)-- [Guide: Record your voice samples](record-custom-voice-samples.md)
+- [Guide: Record your voice samples](record-custom-voice-samples.md)
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
@@ -34,11 +34,10 @@ The diagram below highlights the steps to create a custom voice model using the
## Custom Neural voices
-Custom Voice currently supports both standard and neural tiers. Custom Neural Voice empowers users to build higher quality voice models while requiring less data, and provides measures to help you deploy AI responsibly. We recommend you should use Custom Neural Voice to develop more realistic voices for more natural conversational interfaces and enable your customers and end users to benefit from the latest Text-to-Speech technology, in a responsible way. [Learn more about Custom Neural Voice](https://aka.ms/CNV-Transparency-Note).
+The neural voice customization capability is currently in public preview, limited to selected customers. Fill out this [application form](https://go.microsoft.com/fwlink/?linkid=2108737) to get started.
> [!NOTE]
-> As part of Microsoft's commitment to designing responsible AI, we have limited the use of Custom Neural Voice. You may gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our responsible AI principles. Learn more about our [policy on the limit access](https://aka.ms/gating-overview) and [apply here](https://aka.ms/customneural).
-> The [languages](language-support.md#customization) and [regions](regions.md#custom-voices) supported for the standard and neural version of Custom Voice are different. Check the details before you start.
+> As part of Microsoft's commitment to designing responsible AI, our intent is to protect the rights of individuals and society, and foster transparent human-computer interactions. For this reason, Custom Neural Voice is not generally available to all customers. You may gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our ethics principles. Learn more about our [application gating process](./concepts-gating-overview.md).
## Set up your Azure account
@@ -52,7 +51,7 @@ Once you've created an Azure account and a Speech service subscription, you'll n
4. If you'd like to switch to another Speech subscription, use the cog icon located in the top navigation. > [!NOTE]
-> You must have a F0 or a S0 Speech service key created in Azure before you can use the service. Custom Neural Voice only supports the S0 tier.
+> You must have a F0 or a S0 key created in Azure before you can use the service.
## How to create a project
@@ -67,4 +66,4 @@ To create your first project, select the **Text-to-Speech/Custom Voice** tab, th
- [Prepare Custom Voice data](how-to-custom-voice-prepare-data.md) - [Create a Custom Voice](how-to-custom-voice-create-voice.md)-- [Guide: Record your voice samples](record-custom-voice-samples.md)
+- [Guide: Record your voice samples](record-custom-voice-samples.md)
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
@@ -124,8 +124,6 @@ https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
Both the Microsoft Speech SDK and REST APIs support these voices, each of which supports a specific language and dialect, identified by locale. You can also get a full list of languages and voices supported for each specific region/endpoint through the [voices/list API](rest-text-to-speech.md#get-a-list-of-voices).
-To learn how you can configure and adjust speaking styles, including neural voices, see the [how-to](speech-synthesis-markup.md#adjust-speaking-styles) on Speech Synthesis Markup Language.
- > [!IMPORTANT] > Pricing varies for standard, custom and neural voices. Please visit the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page for additional information.
@@ -284,6 +282,8 @@ Below neural voices are in public preview.
For more information about regional availability, see [regions](regions.md#standard-and-neural-voices).
+To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
+ > [!IMPORTANT] > The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert over to "Aria".
@@ -387,30 +387,10 @@ More than 75 standard voices are available in over 45 languages and locales, whi
### Customization
-Custom Voice is available in the standard and the neural tier. The languages supported are different for these two tiers.
-
-| Language | Locale | Standard | Neural |
-|--|--|--|--|
-| Chinese (Mandarin, Simplified) | `zh-CN` | Yes | Yes |
-| Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes | Yes |
-| English (Australia) | `en-AU` | No | Yes |
-| English (India) | `en-IN` | Yes | Yes |
-| English (United Kingdom) | `en-GB` | Yes | Yes |
-| English (United States) | `en-US` | Yes | Yes |
-| French (Canada) | `fr-CA` | No | Yes |
-| French (France) | `fr-FR` | Yes | Yes |
-| German (Germany) | `de-DE` | Yes | Yes |
-| Italian (Italy) | `it-IT` | Yes | Yes |
-| Japanese (Japan) | `ja-JP` | No | Yes |
-| Korean (Korea) | `ko-KR` | No | Yes |
-| Portuguese (Brazil) | `pt-BR` | Yes | Yes |
-| Spanish (Mexico) | `es-MX` | Yes | Yes |
-| Spanish (Spain) | `es-ES` | No | Yes |
-
-Select the right locale that matches the training data you have to train a custom voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
+Voice customization is available for `de-DE`, `en-GB`, `en-IN`, `en-US`, `es-MX`, `fr-FR`, `it-IT`, `pt-BR`, and `zh-CN`. Select the right locale that matches the training data you have to train a custom voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
> [!NOTE]
-> We do not support bi-lingual model training in Custom Voice, except for the Chinese-English bi-lingual. Select "Chinese-English bilingual" if you want to train a Chinese voice that can speak English as well. Chinese-English bilingual model training using the standard method is available in North Europe and North Central US only. Custom Neural Voice training is available in UK South and East US.
+> We do not support bi-lingual model training in Custom Voice, except for the Chinese-English bi-lingual. Select "Chinese-English bilingual" if you want to train a Chinese voice that can speak English as well. Voice training in all locales starts with a data set of 2,000+ utterances, except for the `en-US` and `zh-CN` where you can start with any size of training data.
## Speech translation
@@ -515,4 +495,4 @@ See the following table for supported languages for the various Speaker Recognit
## Next steps * [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
-* [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-chsarp)
+* [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-csharp)
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/record-custom-voice-samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
@@ -20,14 +20,6 @@ Before you can make these recordings, though, you need a script: the words that
Many small but important details go into creating a professional voice recording. This guide is a roadmap for a process that will help you get good, consistent results.
-> [!NOTE]
-> If you would like to train a neural voice, you must specify a voice talent profile with the audio consent file provided of the voice talent acknowledging to use his/her speech data to train a custom voice model. When preparing your recording script, make sure you include the below sentence.
-
> "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
-This sentence will be used to verify if the training data is done by the same person that makes the consent. Read more about the [voice talent verification](https://aka.ms/CNV-data-privacy) here.
-
-> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply the access here](https://aka.ms/customneural).
- > [!TIP] > For the highest quality results, consider engaging Microsoft to help develop your custom voice. Microsoft has extensive experience producing high-quality voices for its own products, including Cortana and Office.
@@ -59,7 +51,7 @@ Your voice talent is the other half of the equation. They must be able to speak
Recording custom voice samples can be more fatiguing than other kinds of voice work. Most voice talent can record for two or three hours a day. Limit sessions to three or four a week, with a day off in-between if possible.
-Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom voice. In the process, you'll pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotions. Define the "speaking styles" and ask your voice talent to read the script in a way that resonate the styles you want.
+Recordings made for a voice model should be emotionally neutral. That is, a sad utterance should not be read in a sad way. Mood can be added to the synthesized speech later through prosody controls. Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom voice. In the process, you'll pinpoint what "neutral" sounds like for that persona.
A persona might have, for example, a naturally upbeat personality. So "their" voice might carry a note of optimism even when they speak neutrally. However, such a personality trait should be subtle and consistent. Listen to readings by existing voices to get an idea of what you're aiming for.
@@ -214,7 +206,7 @@ Listen to each file carefully. At this stage, you can edit out small unwanted so
Convert each file to 16 bits and a sample rate of 16 kHz before saving and, if you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
-Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Creating custom voices](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
+Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Creating custom voice fonts](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
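The packaging step above can be sketched in a few lines of Python. The `transcript.txt` file name and the tab-separated `<id>` plus text layout are assumptions here — [Creating custom voice fonts](./how-to-custom-voice-create-voice.md) has the authoritative format:

```python
import io
import zipfile

def build_dataset_zip(utterances, wav_bytes):
    """Package WAV data and a matching transcript into an in-memory zip.

    utterances maps an utterance id (e.g. "0001") to its script text;
    wav_bytes maps the same ids to the raw bytes of the converted WAV file.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for uid in sorted(utterances):
            # One WAV per utterance, named by its utterance number.
            zf.writestr(f"{uid}.wav", wav_bytes[uid])
        transcript = "".join(f"{uid}\t{utterances[uid]}\n" for uid in sorted(utterances))
        zf.writestr("transcript.txt", transcript)
    return buf.getvalue()
```

Writing the zip in memory keeps the sketch self-contained; in practice you would write it to disk and upload it through the Custom Voice portal.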
Archive the original recordings in a safe place in case you need them later. Preserve your script and notes, too.
@@ -223,4 +215,4 @@ Archive the original recordings in a safe place in case you need them later. Pre
You're ready to upload your recordings and create your custom voice. > [!div class="nextstepaction"]
-> [Create custom voice fonts](./how-to-custom-voice-create-voice.md)
+> [Create custom voice fonts](./how-to-custom-voice-create-voice.md)
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/rest-text-to-speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
@@ -55,11 +55,9 @@ The `voices/list` endpoint allows you to get a full list of voices for a specifi
| Korea Central | `https://koreacentral.tts.speech.microsoft.com/cognitiveservices/voices/list` | | North Central US | `https://northcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` | | North Europe | `https://northeurope.tts.speech.microsoft.com/cognitiveservices/voices/list` |
-| South Africa North | `https://southafricanorth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| South Central US | `https://southcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` | | Southeast Asia | `https://southeastasia.tts.speech.microsoft.com/cognitiveservices/voices/list` | | UK South | `https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
-| West Central US | `https://westcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West Europe | `https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list` | | West US | `https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list` | | West US 2 | `https://westus2.tts.speech.microsoft.com/cognitiveservices/voices/list` |
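Since every regional endpoint in the table follows the same URL pattern, a small helper can construct them instead of hard-coding each one. A minimal sketch — the actual request would also need an `Ocp-Apim-Subscription-Key` header or a bearer token:

```python
def voices_list_url(region):
    """Build the voices/list endpoint URL for a given Azure region.

    Follows the pattern shown in the regions table above.
    """
    return f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"
```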
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/language-support.md
@@ -189,24 +189,38 @@ The Transliterate method supports the following languages. In the "To/From", "<-
|:----------- |:-------------:|:-------------:|:-------------:|:-------------:| | Arabic | `ar` | Arabic `Arab` | <--> | Latin `Latn` | | Bangla | `bn` | Bengali `Beng` | <--> | Latin `Latn` |
+|Belarusian| `be` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+|Bulgarian| `bg` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
| Chinese (Simplified) | `zh-Hans` | Chinese Simplified `Hans`| <--> | Latin `Latn` | | Chinese (Simplified) | `zh-Hans` | Chinese Simplified `Hans`| <--> | Chinese Traditional `Hant`| | Chinese (Traditional) | `zh-Hant` | Chinese Traditional `Hant`| <--> | Latin `Latn` | | Chinese (Traditional) | `zh-Hant` | Chinese Traditional `Hant`| <--> | Chinese Simplified `Hans` |
+|Greek| `el` | Greek `Grek` | <--> | Latin `Latn` |
| Gujarati | `gu` | Gujarati `Gujr` | <--> | Latin `Latn` | | Hebrew | `he` | Hebrew `Hebr` | <--> | Latin `Latn` | | Hindi | `hi` | Devanagari `Deva` | <--> | Latin `Latn` | | Japanese | `ja` | Japanese `Jpan` | <--> | Latin `Latn` | | Kannada | `kn` | Kannada `Knda` | <--> | Latin `Latn` |
+|Kazakh| `kk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+|Kyrgyz| `ky` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+|Macedonian| `mk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
| Malayalam | `ml` | Malayalam `Mlym` | <--> | Latin `Latn` | | Marathi | `mr` | Devanagari `Deva` | <--> | Latin `Latn` |
+|Mongolian| `mn` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
| Odia | `or` | Oriya `Orya` | <--> | Latin `Latn` |
+|Persian| `fa` | Arabic `Arab` | <--> | Latin `Latn` |
| Punjabi | `pa` | Gurmukhi `Guru` | <--> | Latin `Latn` |
+|Russian| `ru` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
| Serbian (Cyrillic) | `sr-Cyrl` | Cyrillic `Cyrl` | --> | Latin `Latn` | | Serbian (Latin) | `sr-Latn` | Latin `Latn` | --> | Cyrillic `Cyrl`|
+|Sindhi| `sd` | Arabic `Arab` | <--> | Latin `Latn` |
+|Tajik| `tg` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
| Tamil | `ta` | Tamil `Taml` | <--> | Latin `Latn` |
+|Tatar| `tt` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
| Telugu | `te` | Telugu `Telu` | <--> | Latin `Latn` | | Thai | `th` | Thai `Thai` | --> | Latin `Latn` |
+|Ukrainian| `uk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+|Urdu| `ur` | Arabic `Arab` | <--> | Latin `Latn` |
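A Transliterate call combines a language code and a script-code pair from the table above as query parameters. This sketch only assembles the request URL and JSON body for the Translator v3 `transliterate` endpoint; sending it would additionally require an `Ocp-Apim-Subscription-Key` header:

```python
import json
from urllib.parse import urlencode

ENDPOINT = "https://api.cognitive.microsofttranslator.com/transliterate"

def transliterate_request(texts, language, from_script, to_script):
    """Build the URL and JSON body for a Translator v3 transliterate call."""
    query = urlencode({
        "api-version": "3.0",
        "language": language,
        "fromScript": from_script,
        "toScript": to_script,
    })
    # The body is a JSON array of objects, each with a "Text" property.
    body = json.dumps([{"Text": t} for t in texts], ensure_ascii=False)
    return f"{ENDPOINT}?{query}", body
```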
### Dictionary
@@ -427,4 +441,4 @@ Convert text to speech. Text-to-speech is used to add audible output of translat
For a quick look at the languages, the Microsoft Translator website shows all the languages supported by Translator for text translation and Speech service for speech translation. This list doesn't include developer-specific information such as language codes.
-[See the list of languages](https://www.microsoft.com/translator/languages.aspx)
\ No newline at end of file
+[See the list of languages](https://www.microsoft.com/translator/languages.aspx)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/immersive-reader/how-to-create-immersive-reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-create-immersive-reader.md
@@ -139,21 +139,27 @@ The script is designed to be flexible. It will first look for existing Immersive
} ```
-1. Run the function `Create-ImmersiveReaderResource`, supplying the parameters as appropriate.
+1. Run the function `Create-ImmersiveReaderResource`, replacing the '<PARAMETER_VALUES>' placeholders below with your own values as appropriate.
```azurepowershell-interactive
+ Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' -AADAppClientSecret '<AAD_APP_CLIENT_SECRET>' -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>'
+ ```
+
+    The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. Do not copy or run this command as-is; copy the command above and use your own values. This example shows dummy values for the '<PARAMETER_VALUES>' above; yours will differ, as you will come up with your own names for these values.
+
+ ```
Create-ImmersiveReaderResource
- -SubscriptionName '<SUBSCRIPTION_NAME>' `
- -ResourceName '<RESOURCE_NAME>' `
- -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' `
- -ResourceSKU '<RESOURCE_SKU>' `
- -ResourceLocation '<RESOURCE_LOCATION>' `
- -ResourceGroupName '<RESOURCE_GROUP_NAME>' `
- -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' `
- -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' `
- -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' `
- -AADAppClientSecret '<AAD_APP_CLIENT_SECRET>'
- -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>'
+ -SubscriptionName 'MyOrganizationSubscriptionName'
+ -ResourceName 'MyOrganizationImmersiveReader'
+ -ResourceSubdomain 'MyOrganizationImmersiveReader'
+ -ResourceSKU 'S0'
+ -ResourceLocation 'westus2'
+ -ResourceGroupName 'MyResourceGroupName'
+ -ResourceGroupLocation 'westus2'
+ -AADAppDisplayName 'MyOrganizationImmersiveReaderAADApp'
+ -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'
+ -AADAppClientSecret 'SomeStrongPassword'
+ -AADAppClientSecretExpiration '2021-12-31'
``` | Parameter | Comments |
@@ -161,7 +167,7 @@ The script is designed to be flexible. It will first look for existing Immersive
| SubscriptionName |Name of the Azure subscription to use for your Immersive Reader resource. You must have a subscription in order to create a resource. | | ResourceName | Must be alphanumeric, and may contain '-', as long as the '-' is not the first or last character. Length may not exceed 63 characters.| | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and may contain '-', as long as the '-' is not the first or last character. Length may not exceed 63 characters. This parameter is optional if the resource already exists. |
- | ResourceSKU |Options: `S0`. Visit our [Cognitive Services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/) to learn more about each available SKU. This parameter is optional if the resource already exists. |
+ | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). Visit our [Cognitive Services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/) to learn more about each available SKU. This parameter is optional if the resource already exists. |
| ResourceLocation |Options: `eastus`, `eastus2`, `southcentralus`, `westus`, `westus2`, `australiaeast`, `southeastasia`, `centralindia`, `japaneast`, `northeurope`, `uksouth`, `westeurope`. This parameter is optional if the resource already exists. | | ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group does not already exist, a new one with this name will be created. | | ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/concepts/data-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/data-limits.md
@@ -17,7 +17,7 @@ ms.reviewer: chtufts
# Data and rate limits for the Text Analytics API <a name="data-limits"></a>
-Use this article to find the limits for the size, and rates that you can send data to Text Analytics API.
+Use this article to find the size and rate limits for sending data to the Text Analytics API. Note that pricing is not affected by the data limits or rate limits. Pricing is subject to your Text Analytics resource's [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
## Data limits
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/includes/docker-run-health-container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/includes/docker-run-health-container.md
@@ -79,7 +79,7 @@ Azure [Web App for Containers](https://azure.microsoft.com/services/app-service/
Run this PowerShell script using the Azure CLI to create a Web App for Containers, using your subscription and the container image over HTTPS. Wait for the script to complete (approximately 25-30 minutes) before submitting the first request.
-```bash
+```azurecli
$subscription_name = "" # The name of the subscription you want your resource to be created in. $resource_group_name = "" # The name of the resource group you want the AppServicePlan # and AppService to be attached to.
@@ -113,7 +113,7 @@ See the [ACI regional support](../../../container-instances/container-instances-
> [!NOTE] > Azure Container Instances don't include HTTPS support for the builtin domains. If you need HTTPS, you will need to manually configure it, including creating a certificate and registering a domain. You can find instructions to do this with NGINX below.
-```bash
+```azurecli
$subscription_name = "" # The name of the subscription you want your resource to be created in. $resource_group_name = "" # The name of the resource group you want the AppServicePlan # and AppService to be attached to.
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-region-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-region-availability.md
@@ -35,6 +35,7 @@ The following regions and maximum resources are available to container groups wi
| East US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | | East US 2 | 4 | 16 | 4 | 16 | 50 | N/A | | France Central | 4 | 16 | 4 | 16 | 50 | N/A |
+| Germany West Central | 3 | 16 | N/A | N/A | 50 | N/A |
| Japan East | 2 | 8 | 4 | 16 | 50 | N/A | | Korea Central | 4 | 16 | N/A | N/A | 50 | N/A | | North Central US | 2 | 3.5 | 4 | 16 | 50 | K80, P100, V100 |
@@ -43,6 +44,7 @@ The following regions and maximum resources are available to container groups wi
| Southeast Asia | 4 | 16 | 4 | 16 | 50 | P100, V100 | | South India | 4 | 16 | N/A | N/A | 50 | N/A | | UK South | 4 | 16 | 4 | 16 | 50 | N/A |
+| UAE North | 3 | 16 | N/A | N/A | 50 | N/A |
| West Central US| 4 | 16 | 4 | 16 | 50 | N/A | | West Europe | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | | West US | 4 | 16 | 4 | 16 | 50 | N/A |
container-registry https://docs.microsoft.com/en-us/azure/container-registry/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: dlepow ms.author: danlep
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cosmosdb-monitor-resource-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-monitor-resource-logs.md
@@ -28,7 +28,7 @@ Platform metrics and the Activity logs are collected automatically, whereas you
1. When you create a diagnostic setting, you specify which category of logs to collect. The categories of logs supported by Azure Cosmos DB are listed below along with sample log collected by them:
- * **DataPlaneRequests**: Select this option to log back-end requests to all APIs, which include SQL, Graph, MongoDB, Cassandra, and Table API accounts in Azure Cosmos DB. Key properties to note are: `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId`, and `resourceTokenPermissionMode`.
+ * **DataPlaneRequests**: Select this option to log back-end requests to the SQL API accounts in Azure Cosmos DB. Key properties to note are: `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId`, and `resourceTokenPermissionMode`.
```json { "time": "2019-04-23T23:12:52.3814846Z", "resourceId": "/SUBSCRIPTIONS/<your_subscription_ID>/RESOURCEGROUPS/<your_resource_group>/PROVIDERS/MICROSOFT.DOCUMENTDB/DATABASEACCOUNTS/<your_database_account>", "category": "DataPlaneRequests", "operationName": "ReadFeed", "properties": {"activityId": "66a0c647-af38-4b8d-a92a-c48a805d6460","requestResourceType": "Database","requestResourceId": "","collectionRid": "","statusCode": "200","duration": "0","userAgent": "Microsoft.Azure.Documents.Common/2.2.0.0","clientIpAddress": "10.0.0.24","requestCharge": "1.000000","requestLength": "0","responseLength": "372", "resourceTokenPermissionId": "perm-prescriber-app","resourceTokenPermissionMode": "all", "resourceTokenUserRid": "","region": "East US","partitionId": "062abe3e-de63-4aa5-b9de-4a77119c59f8","keyType": "PrimaryReadOnlyMasterKey","databaseName": "","collectionName": ""}}
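Once DataPlaneRequests records are flowing, the key properties called out above can be pulled out programmatically. A minimal sketch, with field names taken from the sample record shown:

```python
import json

def summarize_data_plane_request(record_json):
    """Extract the key DataPlaneRequests properties from one log record."""
    record = json.loads(record_json)
    props = record["properties"]
    return {
        "operation": record["operationName"],
        "statusCode": props["statusCode"],
        # requestCharge arrives as a string such as "1.000000".
        "requestCharge": float(props["requestCharge"]),
        "clientIpAddress": props["clientIpAddress"],
        "partitionId": props["partitionId"],
    }
```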
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/index-policy.md
@@ -5,7 +5,7 @@ author: timsander1
ms.service: cosmos-db ms.subservice: cosmosdb-sql ms.topic: conceptual
-ms.date: 12/07/2020
+ms.date: 01/21/2021
ms.author: tisande ---
@@ -31,6 +31,17 @@ Azure Cosmos DB supports two indexing modes:
By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index documents as they are written.
+## <a id="index-size"></a>Index size
+
+In Azure Cosmos DB, the total consumed storage is the combination of the data size and the index size. The following are some characteristics of the index size:
+
+* The index size depends on the indexing policy. If all the properties are indexed, then the index size can be larger than the data size.
+* When data is deleted, indexes are compacted on a near continuous basis. However, for small data deletions, you may not immediately observe a decrease in index size.
+* The index size can grow in the following cases:
+
+   * While a partition is splitting, the index space temporarily increases.
+   * The extra index space is released after the partition split is completed.
+ ## <a id="include-exclude-paths"></a>Including and excluding property paths A custom indexing policy can specify property paths that are explicitly included or excluded from indexing. By optimizing the number of paths that are indexed, you can substantially reduce the latency and RU charge of write operations. These paths are defined following [the method described in the indexing overview section](index-overview.md#from-trees-to-property-paths) with the following additions:
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/online-backup-and-restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/online-backup-and-restore.md
@@ -111,6 +111,13 @@ If you have accidentally deleted or corrupted your data, you should contact [Azu
If you provision throughput at the database level, the backup and restore process happens at the entire database level, not at the individual container level. In such cases, you can't select a subset of containers to restore.
+## Required permissions to change retention or restore from the portal
+Principals who are part of the [CosmosdbBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner, or contributor roles can request a restore or change the retention period.
+
+## Understanding costs of extra backups
+Two backups are provided free, and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/en-us/pricing/details/cosmos-db/). For example, if the backup retention is configured to 240 hours (that is, 10 days) and the backup interval to 24 hours, this implies 10 copies of the backup data. Assuming 1 TB of data in West US 2, the cost would be 1000 * $0.12 ≈ $120 for backup storage in a given month.
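The example's arithmetic generalizes: the number of retained copies is the retention period divided by the backup interval, and each copy is priced per GB per month. Whether the two free copies are netted out of the bill is an assumption in this sketch — confirm against the pricing page:

```python
def extra_backup_cost_gb_month(data_gb, retention_hours, interval_hours,
                               price_per_gb=0.12, free_copies=2):
    """Estimate the monthly charge for backup copies beyond the free ones.

    price_per_gb defaults to the West US 2 figure used in the example above;
    treating the two free copies as unbilled is an assumption.
    """
    copies = retention_hours // interval_hours
    billable = max(copies - free_copies, 0)
    return billable * data_gb * price_per_gb
```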
++ ## Options to manage your own backups With Azure Cosmos DB SQL API accounts, you can also maintain your own backups by using one of the following approaches:
@@ -143,4 +150,3 @@ Next you can learn about how to restore data from an Azure Cosmos account or lea
* To make a restore request, contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) * [Use Cosmos DB change feed](change-feed.md) to move data to Azure Cosmos DB. * [Use Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data to Azure Cosmos DB.-
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/powershell-samples-cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-cassandra.md
@@ -5,14 +5,14 @@ author: markjbrown
ms.service: cosmos-db ms.subservice: cosmosdb-cassandra ms.topic: sample
-ms.date: 10/13/2020
+ms.date: 01/20/2021
ms.author: mjbrown --- # Azure PowerShell samples for Azure Cosmos DB Cassandra API [!INCLUDE[appliesto-cassandra-api](includes/appliesto-cassandra-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). Please check for updates to `Az.CosmosDB` regularly. You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of the `Az` module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/powershell-samples-gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-gremlin.md
@@ -5,14 +5,14 @@ author: markjbrown
ms.service: cosmos-db ms.subservice: cosmosdb-graph ms.topic: sample
-ms.date: 10/13/2020
+ms.date: 01/20/2021
ms.author: mjbrown --- # Azure PowerShell samples for Azure Cosmos DB Gremlin API [!INCLUDE[appliesto-gremlin-api](includes/appliesto-gremlin-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). Please check for updates to `Az.CosmosDB` regularly. You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of the `Az` module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/powershell-samples-mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-mongodb.md
@@ -5,14 +5,14 @@ author: markjbrown
ms.service: cosmos-db ms.subservice: cosmosdb-mongo ms.topic: sample
-ms.date: 10/13/2020
+ms.date: 01/20/2021
ms.author: mjbrown --- # Azure PowerShell samples for Azure Cosmos DB API for MongoDB [!INCLUDE[appliesto-mongodb-api](includes/appliesto-mongodb-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). Please check for updates to `Az.CosmosDB` regularly. You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of the `Az` module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/powershell-samples-table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-table.md
@@ -5,14 +5,14 @@ author: markjbrown
ms.service: cosmos-db ms.subservice: cosmosdb-table ms.topic: sample
-ms.date: 10/13/2020
+ms.date: 01/20/2021
ms.author: mjbrown --- # Azure PowerShell samples for Azure Cosmos DB Table API [!INCLUDE[appliesto-table-api](includes/appliesto-table-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). Please check for updates to `Az.CosmosDB` regularly. You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of the `Az` module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/powershell-samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples.md
@@ -5,14 +5,14 @@ author: markjbrown
ms.service: cosmos-db ms.subservice: cosmosdb-sql ms.topic: sample
-ms.date: 10/13/2020
+ms.date: 01/20/2021
ms.author: mjbrown --- # Azure PowerShell samples for Azure Cosmos DB Core (SQL) API [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). Please check for updates to `Az.CosmosDB` regularly. You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of the `Az` module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
For PowerShell cmdlets for other APIs see [PowerShell Samples for Cassandra](powershell-samples-cassandra.md), [PowerShell Samples for MongoDB API](powershell-samples-mongodb.md), [PowerShell Samples for Gremlin](powershell-samples-gremlin.md), [PowerShell Samples for Table](powershell-samples-table.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/cassandra/autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/cassandra/autoscale.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/cassandra/create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/cassandra/create.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/cassandra/list-get https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/cassandra/list-get.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/cassandra/lock https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/cassandra/lock.md
@@ -14,7 +14,10 @@ ms.date: 06/12/2020
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
> [!IMPORTANT] > Resource locks do not work for changes made by users connecting using any Cassandra SDK, CQL Shell, or the Azure portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/cassandra/throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/cassandra/throughput.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Get throughput
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/common/account-update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/common/account-update.md
@@ -13,7 +13,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/common/failover-priority-update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/common/failover-priority-update.md
@@ -13,7 +13,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/common/firewall-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/common/firewall-create.md
@@ -13,7 +13,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/common/keys-connection-strings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/common/keys-connection-strings.md
@@ -13,7 +13,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/common/update-region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/common/update-region.md
@@ -13,7 +13,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/gremlin/autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/gremlin/autoscale.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/gremlin/create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/gremlin/create.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/gremlin/list-get https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/gremlin/list-get.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/gremlin/lock https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/gremlin/lock.md
@@ -14,7 +14,10 @@ ms.date: 06/12/2020
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
> [!IMPORTANT] > Resource locks do not work for changes made by users connecting using any Gremlin SDK or the Azure portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/gremlin/throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/gremlin/throughput.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Get throughput
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/mongodb/autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/mongodb/autoscale.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/mongodb/create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/mongodb/create.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/mongodb/list-get https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/mongodb/list-get.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/mongodb/lock https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/mongodb/lock.md
@@ -14,7 +14,10 @@ ms.date: 06/12/2020
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
> [!IMPORTANT] > Resource locks do not work for changes made by users connecting using any MongoDB SDK, Mongo shell, other tools, or the Azure portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/mongodb/throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/mongodb/throughput.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Get throughput
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/sql/autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/sql/autoscale.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/sql/create-index-none https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/sql/create-index-none.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/sql/create-large-partition-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/sql/create-large-partition-key.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/sql/create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/sql/create.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/sql/list-get https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/sql/list-get.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/sql/lock https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/sql/lock.md
@@ -14,7 +14,10 @@ ms.date: 06/12/2020
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
> [!IMPORTANT] > Resource locks do not work for changes made by users connecting using any Cosmos DB SDK, any tools that connect via account keys, or the Azure portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/sql/throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/sql/throughput.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Get throughput
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/table/autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/table/autoscale.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/table/create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/table/create.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/table/list-get https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/table/list-get.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/table/lock https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/table/lock.md
@@ -14,7 +14,10 @@ ms.date: 06/12/2020
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-PowerShell-install](../../../../../includes/sample-PowerShell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
> [!IMPORTANT] > Resource locks do not work for changes made by users connecting using any Cosmos DB SDK, any tools that connect via account keys, or the Azure portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/table/throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/scripts/powershell/table/throughput.md
@@ -14,7 +14,10 @@ ms.author: mjbrown
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-[!INCLUDE [sample-powershell-install](../../../../../includes/sample-powershell-install-no-ssh.md)]
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Get throughput
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: SnehaGunda ms.author: sngun
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/unique-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/unique-keys.md
@@ -17,7 +17,7 @@ Unique keys add a layer of data integrity to an Azure Cosmos container. You crea
After you create a container with a unique key policy, the creation of a new or an update of an existing item resulting in a duplicate within a logical partition is prevented, as specified by the unique key constraint. The partition key combined with the unique key guarantees the uniqueness of an item within the scope of the container.
-For example, consider an Azure Cosmos container with email address as the unique key constraint and `CompanyID` as the partition key. When you configure the user's email address with a unique key, each item has a unique email address within a given `CompanyID`. Two items can't be created with duplicate email addresses and with the same partition key value. In Azure Cosmos DB's SQL (Core) API, items are stored as JSON values. These JSON values are case sensitive. When you choose a property as a unique key, you can insert case sensitive values for that property. For example, If you have a unique key defined on the name property, "Gaby" is different from "gaby" and you can insert both into the container.
+For example, consider an Azure Cosmos container with `Email address` as the unique key constraint and `CompanyID` as the partition key. When you configure the user's email address with a unique key, each item has a unique email address within a given `CompanyID`. Two items can't be created with duplicate email addresses and with the same partition key value. In Azure Cosmos DB's SQL (Core) API, items are stored as JSON values. These JSON values are case-sensitive. When you choose a property as a unique key, you can insert case-sensitive values for that property. For example, if you have a unique key defined on the name property, "Gaby" is different from "gaby" and you can insert both into the container.
To create items with the same email address, but not the same first name, last name, and email address, add more paths to the unique key policy. Instead of creating a unique key based on the email address only, you also can create a unique key with a combination of the first name, last name, and email address. This key is known as a composite unique key. In this case, each unique combination of the three values within a given `CompanyID` is allowed.
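The composite unique key described above can be expressed as a policy document passed at container creation time. A hedged sketch, assuming the `azure-cosmos` Python SDK (the container name and property paths are hypothetical, not from the article):

```python
# Sketch: a composite unique key policy combining first name, last name,
# and email. Multiple paths in a single entry form one composite key,
# enforced per logical partition (here, per CompanyID).
unique_key_policy = {
    "uniqueKeys": [
        {"paths": ["/firstName", "/lastName", "/email"]}
    ]
}

# Assumed usage with the azure-cosmos SDK (database client elided):
# container = database.create_container(
#     id="users",
#     partition_key=PartitionKey(path="/CompanyID"),
#     unique_key_policy=unique_key_policy,
# )
```

With this policy, two items in the same `CompanyID` partition may share an email address as long as the first name or last name differs.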
@@ -53,4 +53,4 @@ You can define unique keys only when you create an Azure Cosmos container. A uni
## Next steps * Learn more about [logical partitions](partitioning-overview.md)
-* Explore [how to define unique keys](how-to-define-unique-keys.md) when creating a container
\ No newline at end of file
+* Explore [how to define unique keys](how-to-define-unique-keys.md) when creating a container
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/use-metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/use-metrics.md
@@ -71,7 +71,7 @@ After identifying which partition key is causing the skew in distribution, you m
## Compare data size against index size
-In Azure Cosmos DB, the total consumed storage is the combination of both the Data size and Index size. Typically, the index size is a fraction of the data size. In the Metrics blade in the [Azure portal](https://portal.azure.com), the Storage tab showcases the breakdown of storage consumption based on data and index.
+In Azure Cosmos DB, the total consumed storage is the combination of both the Data size and Index size. Typically, the index size is a fraction of the data size. To learn more, see the [Index size](index-policy.md#index-size) article. In the Metrics blade in the [Azure portal](https://portal.azure.com), the Storage tab showcases the breakdown of storage consumption based on data and index.
```csharp // Measure the document size usage (which includes the index size)
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/ea-portal-rest-apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-rest-apis.md
@@ -3,7 +3,7 @@ title: Azure Enterprise REST APIs
description: This article describes the REST APIs for use with your Azure enterprise enrollment. author: bandersmsft ms.author: banders
-ms.date: 09/03/2020
+ms.date: 01/21/2021
ms.topic: conceptual ms.service: cost-management-billing ms.subservice: enterprise
@@ -89,14 +89,6 @@ When you're using an API, response status codes are shown. The following table d
Usage and billing data files are updated every 24 hours for the current billing month. However, data latency can occur for up to three days. For example, if usage is incurred on Monday, data might not appear in the data file until Thursday.
-### Test enrollment for development
-
-If you're a partner or a developer without an Azure enterprise enrollment and you want to access the API, you can use the test enrollment. The enrollment name is _EnrollmentNumber 100_, you can find and test usage information up to June 2018. Then you can use the following key to call the API and see sample data.
-
-```
-eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImpoeXA2UU9DWlZmY1pmdmhDVGR1OFdxeTJ5byJ9.eyJFbnJvbGxtZW50TnVtYmVyIjoiMTAwIiwiSWQiOiI1ZTc2ZmNiMy0xN2I4LTQ5ZDItYjdkOC0zMDU0YjUwOWY0MWYiLCJSZXBvcnRWaWV3IjoiU3lzdGVtIiwiUGFydG5lcklkIjoiIiwiRGVwYXJ0bWVudElkIjoiIiwiQWNjb3VudElkIjoiIiwiaXNzIjoiZWEubWljcm9zb2Z0YXp1cmUuY29tIiwiYXVkIjoiY2xpZW50LmVhLm1pY3Jvc29mdGF6dXJlLmNvbSIsImV4cCI6MTU4NjM5MDA2OSwibmJmIjoxNTcwNTc4ODY5fQ.lENR5pCBph6iZCVexUlN1b-j7StaILCyBewVHoILD-_fn8S2o2bHY1qUseGOkBwNlaFQfk2OZIo-jQYvnf3eP3UNrNVTCINT0APbc1RqgwSjZSxugVVHH9jnSzEjONkJaSKmi4tlidk6zkF1-uY-TPJkKxYN_9ar7BgLshF9JGXk7t8OZhxSCxDZc-smntu6ORFDl4gRZZVBKXhqOGjOAdYX5tPiGDF2Bxb68RSzh9Xyr5PXxKLx5yivZzUdo0-GFHo13V9w6a5VQM4R1w4_ro8jF8WAo3mpGZ_ovx_U5IY6zMNmi_AoA1mUyvTGotgcu94RragutoJRxAGHbNJZ0Q
-```
- ### Azure service catalog All Azure services are posted to a catalog in CSV format in an Azure storage blob. The catalog is useful if you need to build a curated catalog of all Azure services for your system. The current catalog is at [https://azurecatalog.blob.core.windows.net/catalog/AzureCatalog.csv](https://azurecatalog.blob.core.windows.net/catalog/AzureCatalog.csv).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/subscription-disabled https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/subscription-disabled.md
@@ -8,7 +8,7 @@ tags: billing
ms.service: cost-management-billing ms.subservice: billing ms.topic: how-to
-ms.date: 11/17/2020
+ms.date: 01/19/2021
ms.author: banders ---
@@ -18,41 +18,43 @@ Your Azure subscription can get disabled because your credit has expired, you re
## Your credit is expired
-When you sign up for an Azure free account, you get a Free Trial subscription, which provides you $200 in Azure credits for 30 days and 12 months of free services. At the end of 30 days, Azure disables your subscription. Your subscription is disabled to protect you from accidentally incurring charges for usage beyond the credit and free services included with your subscription. To continue using Azure services, you must [upgrade your subscription](upgrade-azure-subscription.md). After you upgrade, your subscription still has access to free services for 12 months. You only get charged for usage beyond the free services and quantities.
+When you sign up for an Azure free account, you get a Free Trial subscription, which provides you $200 in Azure credits for 30 days and 12 months of free services. At the end of 30 days, Azure disables your subscription. Your subscription is disabled to protect you from accidentally incurring charges for usage beyond the credit and free services included with your subscription. To continue using Azure services, you must [upgrade your subscription](upgrade-azure-subscription.md). After you upgrade, your subscription still has access to free services for 12 months. You only get charged for usage beyond the free service quantity limits.
## You reached your spending limit
-Azure subscriptions with credit such as Free Trial and Visual Studio Enterprise have spending limits on them. This means you can only use services up to the included credit. When your usage reaches the spending limit, Azure disables your subscription for the remainder of that billing period. Your subscription is disabled to protect you from accidentally incurring charges for usage beyond the credit included with your subscription. To remove your spending limit, see [Remove the spending limit in Account Center](spending-limit.md#remove).
+Azure subscriptions with credit such as Free Trial and Visual Studio Enterprise have spending limits on them. You can only use services up to the included credit. When your usage reaches the spending limit, Azure disables your subscription for the rest of that billing period. Your subscription is disabled to protect you from accidentally incurring charges for usage beyond the credit included with your subscription. To remove your spending limit, see [Remove the spending limit in the Azure portal](spending-limit.md#remove).
> [!NOTE] > If you have a Free Trial subscription and you remove the spending limit, your subscription converts to an individual subscription with pay-as-you-go rates at the end of the Free Trial. You keep your remaining credit for the full 30 days after you created the subscription. You also have access to free services for 12 months. To monitor and manage billing activity for Azure, see [Plan to manage Azure costs](../understand/plan-manage-costs.md). - ## Your bill is past due
-To resolve past due balance, see [Resolve past due balance for your Azure subscription after getting an email from Azure](resolve-past-due-balance.md).
+To resolve a past due balance, see one of the following articles:
+
+- For Microsoft Online Subscription Program subscriptions including pay-as-you-go, see [Resolve past due balance for your Azure subscription after getting an email from Azure](resolve-past-due-balance.md).
+- For Microsoft Customer Agreement subscriptions, see [How to pay your bill for Microsoft Azure](../understand/pay-bill.md).
## The bill exceeds your credit card limit
-To resolve this issue, [switch to a different credit card](change-credit-card.md). Or if you're representing a business, you can [switch to pay by invoice](pay-by-invoice.md).
+To resolve the issue, [switch to a different credit card](change-credit-card.md). Or if you're representing a business, you can [switch to pay by invoice](pay-by-invoice.md).
## The subscription was accidentally canceled
-If you're the Account Administrator and accidentally canceled an individual subscription with pay-as-you-go rates, you can reactivate it in the Account Center.
-
-1. Sign in to the [Account Center](https://account.windowsazure.com/Subscriptions).
-1. Select the canceled subscription.
-1. Click **Reactivate**.
+If you're the Account Administrator and accidentally canceled a pay-as-you-go subscription, you can reactivate it in the Azure portal.
- ![Screenshot that shows reactivate links on the right pane](./media/subscription-disabled/reactivate-sub.png)
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to Subscriptions and then select the canceled subscription.
+1. Select **Reactivate**.
+1. Confirm reactivation by selecting **OK**.
+ :::image type="content" source="./media/subscription-disabled/reactivate-sub.png" alt-text="Screenshot that shows Confirm reactivation" :::
For other subscription types, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to have your subscription reactivated. ## After reactivation
-After your subscription is reactivated, there might be a delay in creating or managing resources. If the delay exceeds 30 minutes, please contact [Azure Billing Support](https://go.microsoft.com/fwlink/?linkid=2083458) for assistance. Most Azure resources automatically resume and don't require any action. However, we recommend that you check your Azure service resources and restart any that don't resume automatically.
+After your subscription is reactivated, there might be a delay in creating or managing resources. If the delay exceeds 30 minutes, contact [Azure Billing Support](https://go.microsoft.com/fwlink/?linkid=2083458) for assistance. Most Azure resources automatically resume and don't require any action. However, we recommend that you check your Azure service resources and restart any that don't resume automatically.
## Need help? Contact us.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/switch-azure-offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/switch-azure-offer.md
@@ -1,25 +1,25 @@
--- title: Change Azure subscription offer
-description: Learn about how to change your Azure subscription and switch to a different offer using the Azure Account Center.
+description: Learn about how to change your Azure subscription and switch to a different offer.
author: bandersmsft ms.reviewer: amberb tags: billing,top-support-issue ms.service: cost-management-billing ms.subservice: billing ms.topic: conceptual
-ms.date: 08/20/2020
+ms.date: 01/20/2021
ms.author: banders --- # Change your Azure subscription to a different offer
-As a customer with an [individual subscription with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/), you can switch your Azure subscription to another offer in the [Account Center](https://account.windowsazure.com/Subscriptions). For example, you can use this feature to take advantage of the [monthly credits for Visual Studio subscribers](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/).
+As a customer with a [pay-as-you-go subscription](https://azure.microsoft.com/offers/ms-azr-0003p/), you can switch your Azure subscription to another offer in the Azure portal. For example, you can use this feature to take advantage of the [monthly credits for Visual Studio subscribers](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/).
**Just want to upgrade from Free Trial?** See [upgrade your subscription](upgrade-azure-subscription.md). ## What's supported:
-You can switch from an individual subscription with pay-as-you-go rates to:
+You can switch from a pay-as-you-go subscription to:
- [Pay-As-You-Go Dev/Test](https://azure.microsoft.com/offers/ms-azr-0023p/) - [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)
@@ -30,43 +30,31 @@ You can switch from an individual subscription with pay-as-you-go rates to:
> [!NOTE] > For other offer changes, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
->
->
## Switch subscription offer
-> [!VIDEO https://channel9.msdn.com/Series/Microsoft-Azure-Tutorials/Switch-to-a-different-Azure-offer/player]
->
->
-
-1. Sign in at [Azure Account Center](https://account.windowsazure.com/Subscriptions).
-1. Select your individual subscription with pay-as-you-go rates.
-1. Click **Switch to another offer**. The option is only available if you have an individual subscription with pay-as-you-go rates and have completed your first billing period.
-
- ![Notice the Switch offer button on the right side of the page](./media/switch-azure-offer/switchbutton.png)
-1. **Select the offer you want** from the list of offers your subscription can be switched to. This list varies based on the memberships that your account is associated with. If nothing is available, check the [list of available offers you can switch to](#whats-supported) and make sure you have the right memberships.
-
- ![Select an offer that you want to switch to](./media/switch-azure-offer/selectoffer.png)
-1. Depending on the offer you're switching to, you may see a note about the impact of switching. Go through the list carefully and follow the instructions before you continue.
-
- ![Review the notes](./media/switch-azure-offer/thingstonote.png)
-1. You can rename your subscription. By default, it isn't set to the new offer name. Click **Switch Offer** to complete the process.
-
- ![Click the green button](./media/switch-azure-offer/confirmpage.png)
-1. Success! Your subscription is now switched to the new offer.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Subscriptions** and then select your pay-as-you-go subscription.
+1. At the top of the page, select **Switch Offer**. The option is only available if you have a pay-as-you-go subscription and have completed your first billing period.
+    :::image type="content" source="./media/switch-azure-offer/switch-offer.png" alt-text="Image showing subscription details with the Switch Offer option" lightbox="./media/switch-azure-offer/switch-offer.png" :::
+1. Select the offer that you want from the list of offers your subscription can be switched to. This list varies based on the memberships that your account is associated with. If nothing is available, check the [list of available offers you can switch to](#whats-supported) and make sure you have the right memberships. Then select **Next**.
+ :::image type="content" source="./media/switch-azure-offer/select-offer.png" alt-text="Select an offer that you want to switch to" lightbox="./media/switch-azure-offer/select-offer.png" :::
+    Depending on the offer you're switching to, you may see a note about the impact of switching. Go through the list carefully and follow the instructions before you continue. You might also need to verify your phone number.
+1. After reviewing any notes or verifying your phone number, select **Switch Offer**.
+1. Your subscription is now switched to the new offer.
## Frequently asked questions The following sections answer commonly asked questions. ### What is an Azure offer?
-An Azure offer is the *type* of the Azure subscription you have. For example, [an subscription with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/), [Azure in Open](https://azure.microsoft.com/offers/ms-azr-0111p/), and [Visual Studio Enterprise](https://azure.microsoft.com/offers/ms-azr-0063p/) are all Azure offers. Each offer has different [terms](https://azure.microsoft.com/support/legal/offer-details/) and some have special benefits. The offer of your subscription can be found in the Account Center subscription page. Click the offer name to get more details.
+An Azure offer is the *type* of the Azure subscription you have. For example, [a subscription with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/), [Azure in Open](https://azure.microsoft.com/offers/ms-azr-0111p/), and [Visual Studio Enterprise](https://azure.microsoft.com/offers/ms-azr-0063p/) are all Azure offers. Each offer has different [terms](https://azure.microsoft.com/support/legal/offer-details/) and some have special benefits. The offer of your subscription is shown on the subscription details page.
- ![Click the Offer link in Account Center to get more details](./media/switch-azure-offer/offerlink01.png)
+:::image type="content" source="./media/switch-azure-offer/subscription-details.png" alt-text="Subscription details page showing the offer type" lightbox="./media/switch-azure-offer/subscription-details.png" :::
### Why don't I see the button?
-You might not see the **Switch to another offer** option if:
+You might not see the **Switch Offer** option if:
* You don't have a [subscription with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/). Currently only subscriptions with pay-as-you-go rates can be converted to another offer. * If you have a [Free Trial](https://azure.microsoft.com/free/), learn how to [upgrade to Pay-As-You-Go](upgrade-azure-subscription.md).
@@ -80,7 +68,7 @@ You might not see the **Switch to another offer** option if:
### What does switching Azure offers do to my service and billing?
-Here are the details of what happens when you switch Azure offers in the Account Center.
+Here are the details of what happens when you switch Azure offers.
#### No service downtime
@@ -97,7 +85,7 @@ On the day you switch, an invoice is generated for all outstanding charges. Then
### Can I migrate from a subscription with pay-as-you-go rates to Cloud Solution Provider (CSP) or Enterprise Agreement (EA)? * To migrate to CSP, see [Transfer Azure subscriptions between subscribers and CSPs](transfer-subscriptions-subscribers-csp.md).
-* To migrate to EA, have your Enrollment Admin add your account into the EA. Follow instructions in the invitation email to have your subscriptions moved under EA enrollment. To learn more, see [Associate an Existing Account](https://ea.azure.com/helpdocs/associateExistingAccount) in the EA portal.
+* To migrate to EA, have your Enrollment Admin add your account into the EA. Follow instructions in the invitation email to have your subscriptions moved under the EA enrollment.
### Can I migrate data and services to a new subscription?
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-reserved-instance-usage-ea https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/understand-reserved-instance-usage-ea.md
@@ -1,19 +1,19 @@
---
-title: Understand Azure reservations usage for Enterprise Agreements
-description: Learn how to read your usage to understand how the Azure reservation for your Enterprise enrollment is applied.
+title: Understand Azure reservations usage for Enterprise Agreement and Microsoft Customer Agreement
+description: Learn how to read your usage information to understand how an Azure reservation applies to Enterprise Agreement and Microsoft Customer Agreement usage.
author: bandersmsft ms.reviewer: yashar tags: billing ms.service: cost-management-billing ms.subservice: reservations ms.topic: conceptual
-ms.date: 12/02/2020
ms.date: 01/19/2021
ms.author: banders ---
-# Get Enterprise Agreement reservation costs and usage
+# Get Enterprise Agreement and Microsoft Customer Agreement reservation costs and usage
-Reservation costs and usage data are available for Enterprise Agreement customers in the Azure portal and REST APIs. This article helps you:
+Enhanced data for reservation costs and usage is available for Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) usage in Cost management. This article helps you:
- Get reservation purchase data - Know which subscription, resource group or resource used the reservation
@@ -56,9 +56,7 @@ Other information available in Azure usage data has changed:
You can get the data using the API or download it from Azure portal.
-You call the [Usage Details API](/rest/api/consumption/usagedetails/list) to get the new data. For details about terminology, see [usage terms](../understand/understand-usage.md). The caller should be an Enterprise Administrator for the enterprise agreement using the [EA portal](https://ea.azure.com). Read-only Enterprise Administrators can also get the data.
-
-Please note that this data is not available in [Reporting APIs for Enterprise customers - Usage Details](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail).
+You call the [Usage Details API](/rest/api/consumption/usagedetails/list) to get the new data. For details about terminology, see [usage terms](../understand/understand-usage.md).
Here's an example call to the Usage Details API:
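A sketch of how such a request URL might be assembled (the scope format, enrollment number, and `api-version` are assumptions for illustration; check the current API reference before use):

```python
# Sketch: building the Usage Details request URL for an EA billing account.
billing_account_id = "1234567"  # hypothetical EA enrollment number
api_version = "2019-10-01"      # assumed; verify against the API reference

scope = f"providers/Microsoft.Billing/billingAccounts/{billing_account_id}"
url = (
    f"https://management.azure.com/{scope}"
    f"/providers/Microsoft.Consumption/usageDetails"
    f"?metric=AmortizedCost&api-version={api_version}"
)

# Send as a GET with a bearer token, e.g.:
# requests.get(url, headers={"Authorization": f"Bearer {token}"})
```

The `metric` query parameter selects between actual and amortized cost views, which is what the table below elaborates on.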
@@ -82,7 +80,7 @@ Information in the following table about metric and filter can help solve for co
## Download the usage CSV file with new data
-If you are an EA admin, you can download the CSV file that contains new usage data from Azure portal. This data isn't available from the EA portal (ea.azure.com), you must download the usage file from Azure portal (portal.azure.com) to see the new data.
+If you're an EA admin, you can download the CSV file that contains new usage data from the Azure portal. This data isn't available from the EA portal (ea.azure.com); you must download the usage file from the Azure portal (portal.azure.com) to see the new data.
In the Azure portal, navigate to [Cost management + billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts).
@@ -145,7 +143,7 @@ Reservation costs are available in [cost analysis](https://aka.ms/costanalysis).
Group by charge type to see a breakdown of usage, purchases, and refunds; or by reservation for a breakdown of reservation and on-demand costs. Remember that the only reservation costs you will see when looking at actual cost are purchases, but costs will be allocated to the individual resources that used the benefit when looking at amortized cost. You will also see a new **UnusedReservation** charge type when looking at amortized cost.
-## Need help? Contact us.
+## Need help? Contact us
If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-power-query-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-power-query-activity.md new file mode 100644
@@ -0,0 +1,27 @@
+---
+title: Power Query activity in Azure Data Factory
+description: Learn how to use the Power Query activity for data wrangling features in a Data Factory pipeline
+services: data-factory
+author: kromerm
+ms.author: makromer
+ms.service: data-factory
+ms.workload: data-services
+ms.topic: conceptual
+ms.date: 01/18/2021
+---
+
+# Power Query activity in Azure Data Factory
+
+The Power Query activity allows you to build and execute Power Query mash-ups that run data wrangling at scale in a Data Factory pipeline. You can create a new Power Query mash-up from the New resources menu option or by adding a Power Query activity to your pipeline.
+
+![Screenshot that shows Power Query in the factory resources pane.](media/data-flow/power-query-wrangling.png)
+
+Previously, data wrangling in Azure Data Factory was authored from the Data Flow menu option. Data wrangling is now authored from the new Power Query activity. You can work directly inside the Power Query mash-up editor to perform interactive data exploration and then save your work. Once complete, you can take your Power Query activity and add it to a pipeline. Azure Data Factory will automatically scale it out and operationalize your data wrangling using Azure Data Factory's data flow Spark environment.
+
+## Translation to data flow script
+
+To achieve scale with your Power Query activity, Azure Data Factory translates your ```M``` script into a data flow script so that you can execute your Power Query at scale using the Azure Data Factory data flow Spark environment. Author your Power Query mash-up using code-free data preparation. For the list of available functions, see [transformation functions](wrangling-functions.md).
+
+## Next steps
+
+Learn more about data wrangling concepts using [Power Query in Azure Data Factory](wrangling-tutorial.md)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-portal.md
@@ -5,7 +5,6 @@ services: data-factory
documentationcenter: '' author: linda33wj manager: shwang
-ms.reviewer: douglasl
ms.service: data-factory ms.workload: data-services ms.topic: quickstart
@@ -21,7 +20,7 @@ ms.author: jingwang
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This quickstart describes how to use the Azure Data Factory UI to create and monitor a data factory. The pipeline that you create in this data factory *copies* data from one folder to another folder in Azure Blob storage. To *transform* data by using Azure Data Factory, see [Mapping data flow](concepts-data-flow-overview.md) and [Wrangling data flow (Preview)](wrangling-data-flow-overview.md).
+This quickstart describes how to use the Azure Data Factory UI to create and monitor a data factory. The pipeline that you create in this data factory *copies* data from one folder to another folder in Azure Blob storage. To *transform* data by using Azure Data Factory, see [Mapping data flow](concepts-data-flow-overview.md).
> [!NOTE] > If you are new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md) before doing this quickstart.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/transform-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data.md
@@ -42,9 +42,9 @@ Data Factory supports the following data transformation activities that can be a
Mapping data flows are visually designed data transformations in Azure Data Factory. Data flows allow data engineers to develop graphical data transformation logic without writing code. The resulting data flows are executed as activities within Azure Data Factory pipelines that use scaled-out Spark clusters. Data flow activities can be operationalized via existing Data Factory scheduling, control, flow, and monitoring capabilities. For more information, see [mapping data flows](concepts-data-flow-overview.md).
-### Wrangling data flows
+### Data wrangling
-Wrangling data flows in Azure Data Factory allow you to do code-free data preparation at cloud scale iteratively. Wrangling data flows integrate with [Power Query Online](/power-query/) and makes Power Query M functions available for data wrangling at cloud scale via spark execution. For more information, see [wrangling data flows](wrangling-data-flow-overview.md).
+Power Query in Azure Data Factory enables cloud-scale data wrangling, which allows you to do code-free data preparation iteratively. Data wrangling integrates with [Power Query Online](/power-query/) and makes Power Query M functions available for data wrangling at cloud scale via Spark execution. For more information, see [data wrangling in ADF](wrangling-overview.md).
## External transformations
@@ -104,4 +104,4 @@ You create a linked service for the compute environment and then use the linked
See [Compute Linked Services](compute-linked-services.md) article to learn about compute services supported by Data Factory. ## Next steps
-See the following tutorial for an example of using a transformation activity: [Tutorial: transform data using Spark](tutorial-transform-data-spark-powershell.md)
\ No newline at end of file
+See the following tutorial for an example of using a transformation activity: [Tutorial: transform data using Spark](tutorial-transform-data-spark-powershell.md)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/wrangling-data-flow-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-data-flow-overview.md deleted file mode 100644
@@ -1,69 +0,0 @@
-title: Wrangling data flows in Azure Data Factory
-description: An overview of wrangling data flows in Azure Data Factory
-author: dcstwh
-ms.author: weetok
-ms.reviewer: gamal
-ms.service: data-factory
-ms.topic: conceptual
-ms.date: 11/01/2019
-
-# What are wrangling data flows?
-
-[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
--
-Organizations need to do data preparation and wrangling for accurate analysis of complex data that continues to grow every day. Data preparation is required so that organizations can use the data in various business processes and reduce the time to value.
-
-Wrangling data flows in Azure Data Factory allow you to do code-free data preparation at cloud scale iteratively. Wrangling data flows integrate with [Power Query Online](/power-query/) and makes Power Query M functions available for data factory users.
-
-Wrangling data flow translates M generated by the Power Query Online Mashup Editor into spark code for cloud scale execution.
-
-Wrangling data flows are especially useful for data engineers or 'citizen data integrators'.
-
-> [!NOTE]
-> Wrangling data flow is currently available in public preview
-
-## Use cases
-
-### Fast interactive data exploration and preparation
-
-Multiple data engineers and citizen data integrators can interactively explore and prepare datasets at cloud scale. With the rise of volume, variety and velocity of data in data lakes, users need an effective way to explore and prepare data sets. For example, you may need to create a dataset that 'has all customer demographic info for new customers since 2017'. You aren't mapping to a known target. You're exploring, wrangling, and prepping datasets to meet a requirement before publishing it in the lake. Wrangling data flows are often used for less formal analytics scenarios. The prepped datasets can be used for doing transformations and machine learning operations downstream.
-
-### Code-free agile data preparation
-
-Citizen data integrators spend more than 60% of their time looking for and preparing data. They're looking to do it in a code free manner to improve operational productivity. Allowing citizen data integrators to enrich, shape, and publish data using known tools like Power Query Online in a scalable manner drastically improves their productivity. Wrangling data flow in Azure Data Factory enables the familiar Power Query Online mashup editor to allow citizen data integrators to fix errors quickly, standardize data, and produce high-quality data to support business decisions.
-
-### Data validation
-
-Visually scan your data in a code-free manner to remove any outliers, anomalies,
-and conform it to a shape for fast analytics.
-
-## Supported sources
-
-| Connector | Data format | Authentication type |
-| -- | -- | --|
-| [Azure Blob Storage](connector-azure-blob-storage.md) | CSV, Parquet | Account Key |
-| [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md) | CSV | Service Principal |
-| [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) | CSV, Parquet | Account Key, Service Principal |
-| [Azure SQL Database](connector-azure-sql-database.md) | - | SQL authentication |
-| [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md) | - | SQL authentication |
-
-## The mashup editor
-
-When you create a wrangling data flow, all source datasets become dataset queries and are placed in the **ADFResource** folder. By default, the UserQuery will point to the first dataset query. All transformations should be done on the UserQuery as changes to dataset queries are not supported nor will they be persisted. Renaming, adding and deleting queries is currently not supported.
-
-![Wrangling](media/wrangling-data-flow/editor.png)
-
-Currently not all Power Query M functions are supported for data wrangling despite being available during authoring. While building your wrangling data flows, you'll be prompted with the following error message if a function isn't supported:
-
-`The wrangling data flow is invalid. Expression.Error: The transformation logic isn't supported. Please try a simpler expression`
-
-For more information on supported transformations, see [wrangling data flow functions](wrangling-data-flow-functions.md).
-
-Currently wrangling data flow only supports writing to one sink.
-
-## Next steps
-
-Learn how to [create a wrangling data flow](wrangling-data-flow-tutorial.md).
\ No newline at end of file
data-factory https://docs.microsoft.com/en-us/azure/data-factory/wrangling-data-flow-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-data-flow-tutorial.md deleted file mode 100644
@@ -1,60 +0,0 @@
-title: Getting started with wrangling data flow in Azure Data Factory
-description: A tutorial on how to prepare data in Azure Data Factory using wrangling data flow
-author: dcstwh
-ms.author: weetok
-ms.reviewer: gamal
-ms.service: data-factory
-ms.topic: conceptual
-ms.date: 11/01/2019
-
-# Prepare data with wrangling data flow
-
-[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-
-> [!NOTE]
-> Wrangling data flow is currently avilable in public preview
-
-## Create a wrangling data flow
-
-There are two ways to create a wrangling data flow in Azure Data Factory. One way is to click the plus icon and select **Data Flow** in the factory resources pane.
-
-![Screenshot that shows Data Flow in the factory resources pane.](media/wrangling-data-flow/tutorial7.png)
-
-The other method is in the activities pane of the pipeline canvas. Open the **Move and Transform** accordion and drag the **Data flow** activity onto the canvas.
-
-In both methods, in the side pane that opens, select **Create new data flow** and choose **Wrangling data flow**. Click OK.
-
-![Screenshot that highlights the Wrangling data flow option.](media/wrangling-data-flow/tutorial1.png)
-
-## Author a wrangling data flow
-
-Add a **Source dataset** for your wrangling data flow. You can either choose an existing dataset or create a new one. You can also select a sink dataset. You can choose one or more source datasets, but only one sink is allowed at this time. Choosing a sink dataset is optional, but at least one source dataset is required.
-
-> [!NOTE]
-> Only ADLS Gen 2 Delimited Text are supported for limited preview.
-
-![Wrangling](media/wrangling-data-flow/tutorial4.png)
-
-Click **Create** to open the Power Query Online mashup editor.
-
-![Screenshot that shows the Create button that opens the Power Query Online mashup editor.](media/wrangling-data-flow/tutorial5.png)
-
-Author your wrangling data flow using code-free data preparation. For the list of available functions, see [transformation functions](wrangling-data-flow-functions.md).
-
-![Screenshot that shows the process for authoring your wrangling data flow.](media/wrangling-data-flow/tutorial6.png)
-
-## Running and monitoring a wrangling data flow
-
-To execute a pipeline debug run of a wrangling data flow, click **Debug** in the pipeline canvas. Once you publish your data flow, **Trigger now** executes an on-demand run of the last published pipeline. Wrangling data flows can be schedule with all existing Azure Data Factory triggers.
-
-![Screenshot that shows how to add a wrangling data flow.](media/wrangling-data-flow/tutorial3.png)
-
-Go to the **Monitor** tab to visualize the output of a triggered wrangling data flow activity run.
-
-![Screenshot that shows the output of a triggered wrangling data flow activity run.](media/wrangling-data-flow/tutorial2.png)
-
-## Next steps
-
-Learn how to [create a mapping data flow](tutorial-data-flow.md).
\ No newline at end of file
data-factory https://docs.microsoft.com/en-us/azure/data-factory/wrangling-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-functions.md new file mode 100644
@@ -0,0 +1,131 @@
+---
+title: Data wrangling functions in Azure Data Factory
+description: An overview of available Data Wrangling functions in Azure Data Factory
+author: kromerm
+ms.author: makromer
+ms.service: data-factory
+ms.topic: conceptual
+ms.date: 01/19/2021
+---
+
+# Transformation functions in Power Query for data wrangling
+
+[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
+
+Data Wrangling in Azure Data Factory allows you to do code-free agile data preparation and wrangling at cloud scale by translating Power Query ```M``` scripts into Data Flow script. ADF integrates with [Power Query Online](/powerquery-m/power-query-m-reference) and makes Power Query ```M``` functions available for data wrangling via Spark execution using the data flow Spark infrastructure.
+
+> [!NOTE]
+> Power Query in ADF is currently available in public preview
+
+Currently not all Power Query M functions are supported for data wrangling despite being available during authoring. While building your mash-ups, you'll be prompted with the following error message if a function isn't supported:
+
+`The Wrangling Data Flow is invalid. Expression.Error: The transformation logic is not supported. Please try a simpler expression.`
+
+Below is a list of supported Power Query M functions.
+
+## Column Management
+
+* Selection: [Table.SelectColumns](/powerquery-m/table-selectcolumns)
+* Removal: [Table.RemoveColumns](/powerquery-m/table-removecolumns)
+* Renaming: [Table.RenameColumns](/powerquery-m/table-renamecolumns), [Table.PrefixColumns](/powerquery-m/table-prefixcolumns), [Table.TransformColumnNames](/powerquery-m/table-transformcolumnnames)
+* Reordering: [Table.ReorderColumns](/powerquery-m/table-reordercolumns)
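+
+As an illustrative sketch (the inline #table rows and column names are made up for the example), these column-management functions compose as:
+
+```m
+let
+    // Hypothetical inline data for illustration only
+    Source = #table({"FirstName", "LastName", "Email"}, {{"Ada", "Lovelace", "ada@example.com"}}),
+    // Keep only the name columns
+    Kept = Table.SelectColumns(Source, {"FirstName", "LastName"}),
+    // Rename a column
+    Renamed = Table.RenameColumns(Kept, {{"FirstName", "GivenName"}}),
+    // Reorder the remaining columns
+    Reordered = Table.ReorderColumns(Renamed, {"LastName", "GivenName"})
+in
+    Reordered
+```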
+
+## Row Filtering
+
+Use M function [Table.SelectRows](/powerquery-m/table-selectrows) to filter on the following conditions:
+
+* Equality and inequality
+* Numeric, text, and date comparisons (but not DateTime)
+* Numeric information such as [Number.IsEven](/powerquery-m/number-iseven)/[Odd](/powerquery-m/number-isodd)
+* Text containment using [Text.Contains](/powerquery-m/text-contains), [Text.StartsWith](/powerquery-m/text-startswith), or [Text.EndsWith](/powerquery-m/text-endswith)
+* Date ranges including all the 'IsIn' [Date functions](/powerquery-m/date-functions)
+* Combinations of these using and, or, or not conditions
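+
+A minimal M sketch (the sample rows are hypothetical) combining several of these conditions in a single filter:
+
+```m
+let
+    Source = #table({"Amount", "Region"}, {{120, "West"}, {45, "East"}, {300, "West"}}),
+    // Keep rows where Amount exceeds 100 and Region starts with "W"
+    Filtered = Table.SelectRows(Source, each [Amount] > 100 and Text.StartsWith([Region], "W"))
+in
+    Filtered
+```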
+
+## Adding and Transforming Columns
+
+The following M functions add or transform columns: [Table.AddColumn](/powerquery-m/table-addcolumn), [Table.TransformColumns](/powerquery-m/table-transformcolumns), [Table.ReplaceValue](/powerquery-m/table-replacevalue), [Table.DuplicateColumn](/powerquery-m/table-duplicatecolumn). Below are the supported transformation functions.
+
+* Numeric arithmetic
+* Text concatenation
+* Date and time arithmetic (Arithmetic operators, [Date.AddDays](/powerquery-m/date-adddays), [Date.AddMonths](/powerquery-m/date-addmonths), [Date.AddQuarters](/powerquery-m/date-addquarters), [Date.AddWeeks](/powerquery-m/date-addweeks), [Date.AddYears](/powerquery-m/date-addyears))
+* Durations can be used for date and time arithmetic, but must be transformed into another type before being written to a sink (Arithmetic operators, [#duration](/powerquery-m/sharpduration), [Duration.Days](/powerquery-m/duration-days), [Duration.Hours](/powerquery-m/duration-hours), [Duration.Minutes](/powerquery-m/duration-minutes), [Duration.Seconds](/powerquery-m/duration-seconds), [Duration.TotalDays](/powerquery-m/duration-totaldays), [Duration.TotalHours](/powerquery-m/duration-totalhours), [Duration.TotalMinutes](/powerquery-m/duration-totalminutes), [Duration.TotalSeconds](/powerquery-m/duration-totalseconds))
+* Most standard, scientific, and trigonometric numeric functions (All functions under [Operations](/powerquery-m/number-functions#operations), [Rounding](/powerquery-m/number-functions#rounding), and [Trigonometry](/powerquery-m/number-functions#trigonometry) *except* Number.Factorial, Number.Permutations, and Number.Combinations)
+* Replacement ([Replacer.ReplaceText](/powerquery-m/replacer-replacetext), [Replacer.ReplaceValue](/powerquery-m/replacer-replacevalue), [Text.Replace](/powerquery-m/text-replace), [Text.Remove](/powerquery-m/text-remove))
+* Positional text extraction ([Text.PositionOf](/powerquery-m/text-positionof), [Text.Length](/powerquery-m/text-length), [Text.Start](/powerquery-m/text-start), [Text.End](/powerquery-m/text-end), [Text.Middle](/powerquery-m/text-middle), [Text.ReplaceRange](/powerquery-m/text-replacerange), [Text.RemoveRange](/powerquery-m/text-removerange))
+* Basic text formatting ([Text.Lower](/powerquery-m/text-lower), [Text.Upper](/powerquery-m/text-upper),
+ [Text.Trim](/powerquery-m/text-trim)/[Start](/powerquery-m/text-trimstart)/[End](/powerquery-m/text-trimend), [Text.PadStart](/powerquery-m/text-padstart)/[End](/powerquery-m/text-padend), [Text.Reverse](/powerquery-m/text-reverse))
+* Date/Time Functions ([Date.Day](/powerquery-m/date-day), [Date.Month](/powerquery-m/date-month), [Date.Year](/powerquery-m/date-year), [Time.Hour](/powerquery-m/time-hour), [Time.Minute](/powerquery-m/time-minute), [Time.Second](/powerquery-m/time-second), [Date.DayOfWeek](/powerquery-m/date-dayofweek), [Date.DayOfYear](/powerquery-m/date-dayofyear), [Date.DaysInMonth](/powerquery-m/date-daysinmonth))
+* If expressions (but branches must have matching types)
+* Row filters as a logical column
+* Number, text, logical, date, and datetime constants
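+
+For example, assuming hypothetical columns and sample data, a sketch that adds columns with numeric and date arithmetic:
+
+```m
+let
+    Source = #table({"UnitPrice", "Qty", "OrderDate"}, {{9.5, 3, #date(2021, 1, 5)}}),
+    // Numeric arithmetic in a new column
+    WithTotal = Table.AddColumn(Source, "Total", each [UnitPrice] * [Qty], type number),
+    // Date arithmetic with Date.AddDays
+    WithDue = Table.AddColumn(WithTotal, "DueDate", each Date.AddDays([OrderDate], 30), type date)
+in
+    WithDue
+```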
+
+## Merging/Joining tables
+* Power Query will generate a nested join (Table.NestedJoin; users can also manually write [Table.AddJoinColumn](/powerquery-m/table-addjoincolumn)). Users must then expand the nested join column into a non-nested join (Table.ExpandTableColumn, not supported in any other context).
+* The M function [Table.Join](/powerquery-m/table-join) can be written directly to avoid the need for an additional expansion step, but the user must ensure that there are no duplicate column names among the joined tables.
+* Supported join kinds: [Inner](/powerquery-m/joinkind-inner), [LeftOuter](/powerquery-m/joinkind-leftouter), [RightOuter](/powerquery-m/joinkind-rightouter), [FullOuter](/powerquery-m/joinkind-fullouter)
+* Both [Value.Equals](/powerquery-m/value-equals) and [Value.NullableEquals](/powerquery-m/value-nullableequals) are supported as key equality comparers
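+
+A direct Table.Join sketch (table contents are hypothetical); note the two tables deliberately share no column names, so no expansion step is needed:
+
+```m
+let
+    Orders = #table({"CustomerId", "Amount"}, {{1, 100}, {2, 50}}),
+    Customers = #table({"Id", "Name"}, {{1, "Ada"}, {2, "Grace"}}),
+    // Inner join on CustomerId = Id; avoids the nested-join/expand pattern
+    Joined = Table.Join(Orders, "CustomerId", Customers, "Id", JoinKind.Inner)
+in
+    Joined
+```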
+
+## Group by
+
+Use [Table.Group](/powerquery-m/table-group) to aggregate values.
+* Must be used with an aggregation function
+* Supported aggregation functions:
+ [List.Sum](/powerquery-m/list-sum),
+ [List.Count](/powerquery-m/list-count),
+ [List.Average](/powerquery-m/list-average),
+ [List.Min](/powerquery-m/list-min),
+ [List.Max](/powerquery-m/list-max),
+ [List.StandardDeviation](/powerquery-m/list-standarddeviation),
+ [List.First](/powerquery-m/list-first),
+ [List.Last](/powerquery-m/list-last)
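+
+As an illustrative sketch (sample rows are made up), grouping with a supported aggregation function:
+
+```m
+let
+    Source = #table({"Region", "Amount"}, {{"West", 120}, {"West", 300}, {"East", 45}}),
+    // Aggregate Amount per Region with List.Sum
+    Grouped = Table.Group(Source, {"Region"}, {{"TotalAmount", each List.Sum([Amount]), type number}})
+in
+    Grouped
+```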
+
+## Sorting
+
+Use [Table.Sort](/powerquery-m/table-sort) to sort values.
+
+## Reducing Rows
+
+The Keep Top, Remove Top, and Keep Range operations map to the corresponding M functions, which support only counts, not conditions: [Table.FirstN](/powerquery-m/table-firstn), [Table.Skip](/powerquery-m/table-skip), [Table.RemoveFirstN](/powerquery-m/table-removefirstn), [Table.Range](/powerquery-m/table-range), [Table.MinN](/powerquery-m/table-minn), [Table.MaxN](/powerquery-m/table-maxn)
+
+## Known unsupported functions
+
+| Function | Status |
+| -- | -- |
+| Table.PromoteHeaders | Not supported. The same result can be achieved by setting "First row as header" in the dataset. |
+| Table.CombineColumns | This is a common scenario that isn't directly supported but can be achieved by adding a new column that concatenates two given columns. For example, Table.AddColumn(RemoveEmailColumn, "Name", each [FirstName] & " " & [LastName]) |
+| Table.TransformColumnTypes | This is supported in most cases. The following scenarios are unsupported: transforming string to currency type, transforming string to time type, transforming string to Percentage type. |
+| Table.NestedJoin | Just doing a join will result in a validation error. The columns must be expanded for it to work. |
+| Table.Distinct | Remove duplicate rows isn't supported. |
+| Table.RemoveLastN | Remove bottom rows isn't supported. |
+| Table.RowCount | Not supported, but can be achieved by adding a custom column containing the value 1, then aggregating that column with List.Sum. Table.Group is supported. |
+| Row level error handling | Row level error handling is currently not supported. For example, to filter out non-numeric values from a column, one approach would be to transform the text column to a number. Every cell which fails to transform will be in an error state and need to be filtered. This scenario isn't possible in wrangling data flow. |
+| Table.Transpose | Not supported |
+| Table.Pivot | Not supported |
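+
+For example, the Table.RowCount workaround described in the table above can be sketched as follows (sample data is hypothetical):
+
+```m
+let
+    Source = #table({"Name"}, {{"a"}, {"b"}, {"c"}}),
+    // Add a constant column of 1s...
+    WithOne = Table.AddColumn(Source, "One", each 1, Int64.Type),
+    // ...then sum it with Table.Group (no grouping keys) to get a row count
+    Counted = Table.Group(WithOne, {}, {{"RowCount", each List.Sum([One]), Int64.Type}})
+in
+    Counted
+```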
+
+## Next steps
+
+Learn how to [create a data wrangling Power Query in ADF](wrangling-tutorial.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/wrangling-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-overview.md new file mode 100644
@@ -0,0 +1,62 @@
+---
+title: Data wrangling in Azure Data Factory
+description: An overview of Data Wrangling in Azure Data Factory
+author: kromerm
+ms.author: makromer
+ms.service: data-factory
+ms.topic: conceptual
+ms.date: 01/19/2021
+---
+
+# What is data wrangling?
+
+[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
+
+Organizations need the ability to explore their critical business data for data preparation and wrangling in order to provide accurate analysis of complex data that continues to grow every day. Data preparation is required so that organizations can use the data in various business processes and reduce the time to value.
+
+Data Factory empowers you with code-free data preparation at cloud scale iteratively using Power Query. Data Factory integrates with [Power Query Online](/power-query/) and makes Power Query M functions available as a pipeline activity.
+
+Data Factory translates the M generated by the Power Query Online Mashup Editor into Azure Data Factory data flows, which run as Spark code for cloud-scale execution. Wrangling data with Power Query and data flows is especially useful for data engineers or 'citizen data integrators'.
+
+> [!NOTE]
+> The Power Query activity in Azure Data Factory is currently available in public preview
+
+## Use cases
+
+### Fast interactive data exploration and preparation
+
+Multiple data engineers and citizen data integrators can interactively explore and prepare datasets at cloud scale. With the rise of volume, variety, and velocity of data in data lakes, users need an effective way to explore and prepare data sets. For example, you may need to create a dataset that 'has all customer demographic info for new customers since 2017'. You aren't mapping to a known target. You're exploring, wrangling, and prepping datasets to meet a requirement before publishing them in the lake. Wrangling is often used for less formal analytics scenarios. The prepped datasets can be used for doing transformations and machine learning operations downstream.
+
+### Code-free agile data preparation
+
+Citizen data integrators spend more than 60% of their time looking for and preparing data. They're looking to do it in a code-free manner to improve operational productivity. Allowing citizen data integrators to enrich, shape, and publish data using known tools like Power Query Online in a scalable manner drastically improves their productivity. Wrangling in Azure Data Factory surfaces the familiar Power Query Online mashup editor so that citizen data integrators can fix errors quickly, standardize data, and produce high-quality data to support business decisions.
+
+### Data validation and exploration
+
+Visually scan your data in a code-free manner to remove any outliers, anomalies, and conform it to a shape for fast analytics.
+
+## Supported sources
+
+| Connector | Data format | Authentication type |
+| -- | -- | --|
+| [Azure Blob Storage](connector-azure-blob-storage.md) | CSV, Parquet | Account Key |
+| [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md) | CSV | Service Principal |
+| [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) | CSV, Parquet | Account Key, Service Principal |
+| [Azure SQL Database](connector-azure-sql-database.md) | - | SQL authentication |
+| [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md) | - | SQL authentication |
+
+## The mashup editor
+
+When you create a Power Query activity, all source datasets become dataset queries and are placed in the **ADFResource** folder. By default, the UserQuery points to the first dataset query. All transformations should be done on the UserQuery, because changes to dataset queries aren't supported, nor are they persisted. Renaming, adding, and deleting queries is currently not supported.
+
+![Screenshot that shows the Power Query Online mashup editor.](media/wrangling-data-flow/editor.png)
+
+Currently not all Power Query M functions are supported for data wrangling despite being available during authoring. While building your Power Query activities, you'll be prompted with the following error message if a function isn't supported:
+
+`The wrangling data flow is invalid. Expression.Error: The transformation logic isn't supported. Please try a simpler expression`
+
+For more information on supported transformations, see [data wrangling functions](wrangling-functions.md).
+
+## Next steps
+
+Learn how to [create a data wrangling Power Query mash-up](wrangling-tutorial.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/wrangling-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-tutorial.md new file mode 100644
@@ -0,0 +1,59 @@
+---
+title: Getting started with wrangling data flow in Azure Data Factory
+description: A tutorial on how to prepare data in Azure Data Factory using wrangling data flow
+author: kromerm
+ms.author: makromer
+ms.service: data-factory
+ms.topic: conceptual
+ms.date: 01/19/2021
+---
+
+# Prepare data with data wrangling
+
+[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
+
+Data wrangling in Azure Data Factory allows you to build interactive Power Query mash-ups natively in ADF and then execute them at scale inside an ADF pipeline.
+
+> [!NOTE]
+> The Power Query activity in ADF is currently available in public preview
+
+## Create a Power Query activity
+
+There are two ways to create a Power Query activity in Azure Data Factory. One way is to click the plus icon and select **Power Query** in the factory resources pane.
+
+> [!NOTE]
+> Previously, the data wrangling feature was located in the data flow workflow. Now, you will build your data wrangling mash-up from ```New > Power Query```
+
+![Screenshot that shows Power Query in the factory resources pane.](media/data-flow/power-query-wrangling.png)
+
+The other method is in the activities pane of the pipeline canvas. Open the **Power Query** accordion and drag the **Power Query** activity onto the canvas.
+
+![Screenshot that highlights the data wrangling option.](media/data-flow/power-query-activity.png)
+
+## Author a Power Query data wrangling activity
+
+Add a **Source dataset** for your Power Query mash-up. You can either choose an existing dataset or create a new one. You can also select a sink dataset. You can choose one or more source datasets, but only one sink is allowed at this time. Choosing a sink dataset is optional, but at least one source dataset is required.
+
+![Screenshot that shows adding a source dataset for the Power Query mash-up.](media/wrangling-data-flow/tutorial4.png)
+
+Click **Create** to open the Power Query Online mashup editor.
+
+![Screenshot that shows the Create button that opens the Power Query Online mashup editor.](media/wrangling-data-flow/tutorial5.png)
+
+Author your wrangling Power Query using code-free data preparation. For the list of available functions, see [transformation functions](wrangling-functions.md). ADF translates the M script into a data flow script so that you can execute your Power Query at scale using the Azure Data Factory data flow Spark environment.
+
+![Screenshot that shows the process for authoring your data wrangling Power Query.](media/wrangling-data-flow/tutorial6.png)
+
+## Running and monitoring a Power Query data wrangling activity
+
+To execute a pipeline debug run of a Power Query activity, click **Debug** in the pipeline canvas. Once you publish your pipeline, **Trigger now** executes an on-demand run of the last published pipeline. Power Query pipelines can be scheduled with all existing Azure Data Factory triggers.
+
+![Screenshot that shows how to add a Power Query data wrangling activity.](media/wrangling-data-flow/tutorial3.png)
+
+Go to the **Monitor** tab to visualize the output of a triggered Power Query activity run.
+
+![Screenshot that shows the output of a triggered wrangling Power Query activity run.](media/wrangling-data-flow/tutorial2.png)
+
+## Next steps
+
+Learn how to [create a mapping data flow](tutorial-data-flow.md).
data-lake-analytics https://docs.microsoft.com/en-us/azure/data-lake-analytics/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: hrasheed-msft ms.author: hrasheed
data-lake-store https://docs.microsoft.com/en-us/azure/data-lake-store/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: normesta ms.author: normesta
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-deploy-virtual-machine-cli-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-virtual-machine-cli-python.md
@@ -120,7 +120,7 @@ Before you begin creating and managing a VM on your Azure Stack Edge Pro device
The following is a sample output of the above command:
- ```powershell
+ ```output
PS C:\windows\system32> az --version azure-cli 2.0.80
@@ -144,7 +144,7 @@ Before you begin creating and managing a VM on your Azure Stack Edge Pro device
PS C:\windows\system32> ```
- If you do not have Azure CLI, download and [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows?view=azure-cli-latest). You can run Azure CLI using Windows command prompt or through Windows PowerShell.
+ If you do not have Azure CLI, download and [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows). You can run Azure CLI using Windows command prompt or through Windows PowerShell.
2. Make a note of the CLI's Python location. You need this to determine the location of trusted root certificate store for Azure CLI.
@@ -168,7 +168,7 @@ Before you begin creating and managing a VM on your Azure Stack Edge Pro device
The following sample output shows the installation of Haikunator:
- ```powershell
+ ```output
PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> .\python.exe -m pip install haikunator
Collecting haikunator
@@ -184,7 +184,7 @@ Before you begin creating and managing a VM on your Azure Stack Edge Pro device
The following sample output shows the installation of pip for `msrestazure`:
- ```powershell
+ ```output
PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> .\python.exe -m pip install msrestazure==0.6.2
Requirement already satisfied: msrestazure==0.6.2 in c:\program files (x86)\microsoft sdks\azure\cli2\lib\site-packages (0.6.2)
Requirement already satisfied: msrest<2.0.0,>=0.6.0 in c:\program files (x86)\microsoft sdks\azure\cli2\lib\site-packages (from msrestazure==0.6.2) (0.6.10)
@@ -208,7 +208,7 @@ Before you begin creating and managing a VM on your Azure Stack Edge Pro device
The cmdlet returns the certificate location, as seen below:
- ```powershell
+ ```output
PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> .\python -c "import certifi; print(certifi.where())"
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\certifi\cacert.pem
PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2>
@@ -320,7 +320,7 @@ Before you begin creating and managing a VM on your Azure Stack Edge Pro device
The following shows sample output for a successful sign in after supplying the password:
- ```powershell
+ ```output
PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> az login -u EdgeARMuser
Password:
[
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/concept-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-pricing.md deleted file mode 100644
@@ -1,75 +0,0 @@
-title: Pricing and associated costs
-description: Learn about the costs associated with Defender for IoT, and how to control them.
-services: defender-for-iot
-ms.service: defender-for-iot
-documentationcenter: na
-author: shhazam-ms
-manager: rkarlin
-editor: ''
-
-ms.devlang: na
-ms.topic: conceptual
-ms.tgt_pltfrm: na
-ms.workload: na
-ms.date: 12/08/2020
-ms.author: shhazam
-
-# Pricing and associated costs
-
-This article explains Defender for IoT pricing model, summarizes all associated costs and explains how to manage them.
-
-## Pricing
-
-The Defender for IoT pricing model is comprised of two parts, and is billed once an IoT Hub is [enabled](quickstart-onboard-iot-hub.md) in Defender for IoT:
-
-- Cost by device - built-in security capabilities based on analysis of IoT Hub logs.
-
-- Cost by message - enhanced security capabilities based on security messages from IoT Edge or leaf devices.
-
-For more information, see [Security Center pricing](https://azure.microsoft.com/pricing/details/security-center/).
-
-## Associated costs
-
-Defender for IoT has associated costs, which are not part of the direct pricing:
-
-- Log Analytics storage costs
-
-You can reduce associated costs by opting out of certain solution features. Opt out by changing your settings.
-
-To change your settings:
-
-1. Open IoT Hub.
-
-1. Under **Security**, click **Settings**.
-
-1. Click **Data Collection**.
-
-The following table provides a summary of associated costs and implications of each option.
-
-| Option | Usage | Comment |
-| --- | --- | --- |
-| **Log Analytics storage** | |
-| Device recommendation and alerts| Security recommendation and alerts generated by the service | Not optional |
-| Raw security data| Raw security data from IoT devices, collected by security agents | Disable _store raw device security events_ |
-|
-
->[!Important]
-> Opting out has severe implications to Defender for IoT security feature availability.
-
-| Opt out | Implications |
-| --- | --- |
-| _Twin metadata collection_ | Disable [custom alerts](quickstart-create-custom-alerts.md) |
-| | Disable IoT Edge manifest recommendations |
-| | Disable device identity-based recommendations and alerts |
-| _Store raw device security events_ | Details on device OS baseline recommendations are not available |
-| | Details on [alert](concept-security-alerts.md) and [recommendation](concept-recommendations.md) investigations are not available |
-|
-
-## See also
-
-- Access your [raw security data](how-to-security-data-access.md)
-- [Investigate a device](how-to-investigate-device.md)
-- Understand and explore [security recommendations](concept-recommendations.md)
-- Understand and explore [security alerts](concept-security-alerts.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/release-notes.md
@@ -13,11 +13,11 @@ ms.devlang: na
ms.topic: how-to
ms.tgt_pltfrm: na
ms.workload: na
-ms.date: 01/03/2021
+ms.date: 01/06/2021
ms.author: shhazam
---
-# What's new
+# What's new?
Defender for IoT 10.0 provides feature enhancements that improve security, management, and usability.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/concepts-twins-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-twins-graph.md
@@ -26,7 +26,9 @@ In an Azure Digital Twins solution, the entities in your environment are represe
Before you can create a digital twin in your Azure Digital Twins instance, you need to have a *model* uploaded to the service. A model describes the set of properties, telemetry messages, and relationships that a particular twin can have, among other things. For the types of information that are defined in a model, see [*Concepts: Custom models*](concepts-models.md).
-After creating and uploading a model, your client app can create an instance of the type; this is a digital twin. For example, after creating a model of *Floor*, you may create one or several digital twins that use this type (like a *Floor*-type twin called *GroundFloor*, another called *Floor2*, etc.).
+After creating and uploading a model, your client app can create an instance of the type; this is a digital twin. For example, after creating a model of *Floor*, you may create one or several digital twins that use this type (like a *Floor*-type twin called *GroundFloor*, another called *Floor2*, etc.).
+
+[!INCLUDE [digital-twins-versus-device-twins](../../includes/digital-twins-versus-device-twins.md)]
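As a concrete illustration of "an instance of the type": a digital twin is ultimately a JSON document whose `$metadata.$model` field points at an uploaded model. The model ID and property below are hypothetical, not taken from this article; they would need to match whatever the uploaded *Floor* model actually defines.

```python
# Hypothetical payload for a Floor-type twin named "GroundFloor".
# The model ID and property name are illustrative only.
ground_floor = {
    "$metadata": {"$model": "dtmi:example:Floor;1"},
    "AverageTemperature": 21.5,
}

# A second twin of the same type (like "Floor2") shares the model
# reference and differs only in its property values.
floor_2 = {**ground_floor, "AverageTemperature": 19.0}
```

Both twins reference the same model, which is what makes them instances of the same type.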
## Relationships: a graph of digital twins
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-create-azure-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-azure-function.md
@@ -37,7 +37,7 @@ Here is an overview of the steps it contains:
## Create a function app in Visual Studio
-In Visual Studio 2019, select _File > New > Project_ and search for the _Azure Functions_ template, select _Next_.
+In Visual Studio 2019, select _File > New > Project_ and search for the _Azure Functions_ template. Select _Next_.
:::image type="content" source="media/how-to-create-azure-function/create-azure-function-project.png" alt-text="Visual Studio: new project dialog":::
@@ -45,11 +45,11 @@ Specify a name for the function app and select _Create_.
:::image type="content" source="media/how-to-create-azure-function/configure-new-project.png" alt-text="Visual Studio: configure new project":::
-Select the type of the function app *Event Grid trigger* and select _Create_.
+Select the function app type of *Event Grid trigger* and select _Create_.
-:::image type="content" source="media/how-to-create-azure-function/eventgridtrigger-function.png" alt-text="Visual Studio: Azure Functions project trigger dialog":::
+:::image type="content" source="media/how-to-create-azure-function/event-grid-trigger-function.png" alt-text="Visual Studio: Azure Functions project trigger dialog":::
-Once your function app is created, your visual studio will have auto populated code sample in **function.cs** file in your project folder. This short function is used to log events.
+Once your function app is created, Visual Studio will generate a code sample in a **Function1.cs** file in your project folder. This short function is used to log events.
:::image type="content" source="media/how-to-create-azure-function/visual-studio-sample-code.png" alt-text="Visual Studio: Project window with sample code":::
@@ -57,11 +57,11 @@ Once your function app is created, your visual studio will have auto populated c
You can write a function by adding the SDK to your function app. The function app interacts with Azure Digital Twins using the [Azure Digital Twins SDK for .NET (C#)](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
-In order to use the SDK, you'll need to include the following packages into your project. You can either install the packages using visual studio NuGet package manager or add the packages using `dotnet` command-line tool. Choose either of these methods:
+In order to use the SDK, you'll need to include the following packages into your project. You can either install the packages using Visual Studio's NuGet package manager, or add the packages using `dotnet` in a command-line tool. Follow the steps below for your preferred method.
**Option 1. Add packages using Visual Studio package manager:**
-You can do this by right-selecting on your project and select _Manage NuGet Packages_ from the list. Then, in the window that opens, select _Browse_ tab and search for the following packages. Select _Install_ and _accept_ the License agreement to install the packages.
+Right-select your project and select _Manage NuGet Packages_ from the list. Then, in the window that opens, select the _Browse_ tab and search for the following packages. Select _Install_ and _Accept_ the License agreement to install the packages.
* `Azure.DigitalTwins.Core`
* `Azure.Identity`
@@ -79,15 +79,15 @@ dotnet add package System.Net.Http
dotnet add package Azure.Core
```
-Next, in your Visual Studio Solution Explorer, open _function.cs_ file where you have sample code and add the following _using_ statements to your function.
+Next, in your Visual Studio Solution Explorer, open the _Function1.cs_ file where you have sample code and add the following `using` statements to your function.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="Function_dependencies":::

## Add authentication code to the function
-You will now declare class level variables and add authentication code that will allow the function to access Azure Digital Twins. You will add the following to your function in the {your function name}.cs file.
+You will now declare class level variables and add authentication code that will allow the function to access Azure Digital Twins. You will add the following to your function in the _Function1.cs_ file.
-* Read ADT service URL as an environment variable. It is a good practice to read the service URL from an environment variable, rather than hard-coding it in the function.
+* Code to read the Azure Digital Twins service URL as an environment variable. It is a good practice to read the service URL from an environment variable, rather than hard-coding it in the function.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="ADT_service_URL":::
@@ -98,43 +98,24 @@ You will now declare class level variables and add authentication code that will
* You can use the managed identity credentials in Azure Functions.

    :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="ManagedIdentityCredential":::
-* Add a local variable _DigitalTwinsClient_ inside of your function to hold your Azure Digital Twins client instance to the function project. Do *not* make this variable static inside your class.
+* Add a local variable _DigitalTwinsClient_ inside of your function to hold your Azure Digital Twins client instance. Do *not* make this variable static inside your class.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="DigitalTwinsClient":::
-* Add a null check for _adtInstanceUrl_ and wrap your function logic in a try catch block to catch any exceptions.
+* Add a null check for _adtInstanceUrl_ and wrap your function logic in a try/catch block to catch any exceptions.
After these changes, your function code will be similar to the following:

:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs":::
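The linked C# sample is the authoritative implementation; as a language-neutral sketch of the same pattern (read the service URL from an environment variable and fail fast when it is missing), here is the idea in Python. The setting name `ADT_SERVICE_URL` matches the application setting created later in this article.

```python
import os

def get_adt_service_url() -> str:
    # Read the Azure Digital Twins service URL from an environment
    # variable rather than hard-coding it in the function.
    url = os.environ.get("ADT_SERVICE_URL")
    if url is None:
        # The null check described above: fail fast with a clear
        # message instead of letting a later SDK call fail obscurely.
        raise RuntimeError("Application setting 'ADT_SERVICE_URL' is not set")
    return url
```

Any exception raised here surfaces in the function's logs, which is the point of checking up front.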
-## Publish the function app to Azure
-
-To publish the project to a function app in Azure, right-select the function project (not the solution) in Solution Explorer, and choose **Publish**.
-
-> [!IMPORTANT]
-> Publishing to a function app in Azure incurs additional charges on your subscription, independent of Azure Digital Twins.
-
-:::image type="content" source="media/how-to-create-azure-function/publish-azure-function.png" alt-text="Visual Studio: publish function to Azure":::
-
-Select **Azure** as the publishing target and select **Next**.
-
-:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-1.png" alt-text="Visual Studio: publish Azure Functions dialog, select Azure ":::
-
-:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-2.png" alt-text="Visual Studio: publish function dialog, select Azure Function App(Windows) or (Linux) based on your machine":::
+Now that your application is written, you can publish it to Azure using the steps in the next section.
-:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-3.png" alt-text="Visual Studio: publish function dialog, Create a new Azure Function":::
-
-:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-4.png" alt-text="Visual Studio: publish function dialog, Fill in the fields, and select create":::
-
-:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-5.png" alt-text="Visual Studio: publish function dialog, Select your function app from the list, and finish":::
-
-On the following page, enter the desired name for the new function app, a resource group, and other details.
-For your function app to be able to access Azure Digital Twins, it needs to have a system-managed identity and have permissions to access your Azure Digital Twins instance.
+## Publish the function app to Azure
-Next, you can set up security access for the function using CLI or Azure portal. Choose either of these methods:
+[!INCLUDE [digital-twins-publish-azure-function.md](../../includes/digital-twins-publish-azure-function.md)]
## Set up security access for the function app
-You can set up security access for the function app using one of these options:
+
+You can set up security access for the function app using either the Azure CLI or the Azure portal. Follow the steps for your preferred option below.
### Option 1: Set up security access for the function app using CLI
@@ -170,7 +151,7 @@ A system assigned managed identity enables Azure resources to authenticate to cl
In the [Azure portal](https://portal.azure.com/), search the portal search bar for the name of the function app that you created earlier. Select the *Function App* from the list.
-:::image type="content" source="media/how-to-create-azure-function/portal-search-for-functionapp.png" alt-text="Azure portal: Search function app":::
+:::image type="content" source="media/how-to-create-azure-function/portal-search-for-function-app.png" alt-text="Azure portal: Search function app":::
On the function app window, select _Identity_ in the navigation bar on the left to enable managed identity. Under the _System assigned_ tab, toggle the _Status_ to **On** and save. You will see a pop-up to _Enable system assigned managed identity_.
@@ -207,31 +188,29 @@ Then, save your details by hitting the _Save_ button.
You can make the URL of your Azure Digital Twins instance accessible to your function by setting an environment variable. For more information on this, see [*Environment variables*](/sandbox/functions-recipes/environment-variables). Application settings are exposed as environment variables to access the digital twins instance.
-You'll need ADT_INSTANCE_URL to create an application setting.
-
-You can get ADT_INSTANCE_URL by appending **_https://_** to your instance host name. In the Azure portal, you can find your digital twins instance host name by searching for your instance in the search bar. Then, select _Overview_ on the left navigation bar to view the _Host name_. Copy this value to create an application setting.
+To set an environment variable with the URL of your instance, first get the URL by finding your Azure Digital Twins instance's host name. Search for your instance in the [Azure portal](https://portal.azure.com) search bar. Then, select _Overview_ on the left navigation bar to view the _Host name_. Copy this value.
:::image type="content" source="media/how-to-create-azure-function/adt-hostname.png" alt-text="Azure portal: Overview-> Copy hostname to use in the _Value_ field.":::

You can now create an application setting following the steps below:
-* Search for your app using the function app name in the search bar and select the function app from the list
-* Select _Configuration_ on the navigation bar on the left to create a new application setting
-* In the _Application settings_ tab, select _+ New application setting_
+1. Search for your app using the function app name in the search bar and select the function app from the list
+1. Select _Configuration_ on the navigation bar on the left to create a new application setting
+1. In the _Application settings_ tab, select _+ New application setting_
-:::image type="content" source="media/how-to-create-azure-function/search-for-azure-function.png" alt-text="Azure portal: Search for an existing function app":::
+:::image type="content" source="media/how-to-create-azure-function/search-for-azure-function.png" alt-text="Azure portal: Search for an existing function app" lightbox="media/how-to-create-azure-function/search-for-azure-function.png":::
:::image type="content" source="media/how-to-create-azure-function/application-setting.png" alt-text="Azure portal: Configure application settings":::
-In the window that opens up, use the value copied from above to create an application setting. \
-_Name_ : ADT_SERVICE_URL \
-_Value_ : https://{your-azure-digital-twins-hostname}
+In the window that opens up, use the host name value copied above to create an application setting.
+* _Name_ : ADT_SERVICE_URL
+* _Value_: https://{your-azure-digital-twins-host-name}
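The _Value_ above is simply the instance host name with an `https://` scheme prepended. A minimal sketch of that assembly, using a made-up host name:

```python
def adt_service_url(host_name: str) -> str:
    # The application setting value is the bare host name
    # (no scheme) prefixed with https://.
    return f"https://{host_name}"

# Hypothetical host name, for illustration only:
url = adt_service_url("myinstance.api.wcus.digitaltwins.azure.net")
```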
Select _OK_ to create an application setting.

:::image type="content" source="media/how-to-create-azure-function/add-application-setting.png" alt-text="Azure portal: Add application settings.":::
-You can view your application settings with application name under the _Name_ field. Then, save your application settings by selecting _Save_ button.
+You can view your application settings with application name under the _Name_ field. Then, save your application settings by selecting the _Save_ button.
:::image type="content" source="media/how-to-create-azure-function/application-setting-save-details.png" alt-text="Azure portal: View the application created and restart the application":::
@@ -245,10 +224,7 @@ You can view that application settings are updated by selecting _Notifications_
## Next steps
-In this article, you followed the steps to set up a function app in Azure for use with Azure Digital Twins. Next, you can subscribe your function to Event Grid, to listen on an endpoint. This endpoint could be:
-* An Event Grid endpoint attached to Azure Digital Twins to process messages coming from Azure Digital Twins itself (such as property change messages, telemetry messages generated by [digital twins](concepts-twins-graph.md) in the twin graph, or life-cycle messages)
-* The IoT system topics used by IoT Hub to send telemetry and other device events
-* An Event Grid endpoint receiving messages from other services
+In this article, you followed the steps to set up a function app in Azure for use with Azure Digital Twins.
Next, see how to build on your basic function to ingest IoT Hub data into Azure Digital Twins:

* [*How-to: Ingest telemetry from IoT Hub*](how-to-ingest-iot-hub-data.md)
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-azure-signalr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-azure-signalr.md
@@ -41,7 +41,11 @@ You will be attaching Azure SignalR Service to Azure Digital Twins through the p
First, download the required sample apps. You will need both of the following:

* [**Azure Digital Twins end-to-end samples**](/samples/azure-samples/digital-twins-samples/digital-twins-samples/): This sample contains an *AdtSampleApp* holding two Azure functions for moving data around an Azure Digital Twins instance (you can learn about this scenario in more detail in [*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)). It also contains a *DeviceSimulator* sample application that simulates an IoT device, generating a new temperature value every second.
- - Navigate to the sample link and hit the *Download ZIP* button to download a copy of the sample to your machine, as _**Azure_Digital_Twins_end_to_end_samples.zip**_. Unzip the folder.
+ - If you haven't already downloaded the sample as part of the tutorial in [*Prerequisites*](#prerequisites), navigate to the sample link and select the *Browse code* button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a *.ZIP* by selecting the *Code* button and *Download ZIP*.
+
+ :::image type="content" source="media/includes/download-repo-zip.png" alt-text="View of the digital-twins-samples repo on GitHub. The Code button is selected, producing a small dialog box where the Download ZIP button is highlighted." lightbox="media/includes/download-repo-zip.png":::
+
+ This will download a copy of the sample repo to your machine, as **digital-twins-samples-master.zip**. Unzip the folder.
* [**SignalR integration web app sample**](/samples/azure-samples/digitaltwins-signalr-webapp-sample/digital-twins-samples/): This is a sample React web app that will consume Azure Digital Twins telemetry data from an Azure SignalR service.
    - Navigate to the sample link and hit the *Download ZIP* button to download a copy of the sample to your machine, as _**Azure_Digital_Twins_SignalR_integration_web_app_sample.zip**_. Unzip the folder.
@@ -64,7 +68,7 @@ First, go to the browser where the Azure portal is opened, and complete the foll
:::image type="content" source="media/how-to-integrate-azure-signalr/signalr-keys.png" alt-text="Screenshot of the Azure portal that shows the Keys page for the SignalR instance. The 'Copy to clipboard' icon next to the Primary CONNECTION STRING is highlighted." lightbox="media/how-to-integrate-azure-signalr/signalr-keys.png":::
-Next, start Visual Studio (or another code editor of your choice), and open the code solution in the *Azure_Digital_Twins_end_to_end_samples > ADTSampleApp* folder. Then do the following steps to create the functions:
+Next, start Visual Studio (or another code editor of your choice), and open the code solution in the *digital-twins-samples-master > ADTSampleApp* folder. Then do the following steps to create the functions:
1. Create a new C# class called **SignalRFunctions.cs** in the *SampleFunctionsApp* project.
@@ -72,7 +76,7 @@ Next, start Visual Studio (or another code editor of your choice), and open the
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/signalRFunction.cs":::
-1. In Visual Studio's *Package Manager Console* window, or any command window on your machine in the *Azure_Digital_Twins_end_to_end_samples\AdtSampleApp\SampleFunctionsApp* folder, run the following command to install the `SignalRService` NuGet package to the project:
+1. In Visual Studio's *Package Manager Console* window, or any command window on your machine in the *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp* folder, run the following command to install the `SignalRService` NuGet package to the project:
    ```cmd
    dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService --version 1.2.0
    ```
@@ -127,7 +131,7 @@ In this section, you will see the result in action. First, you'll start up the *
During the end-to-end tutorial prerequisite, you [configured the device simulator](tutorial-end-to-end.md#configure-and-run-the-simulation) to send data through an IoT Hub and to your Azure Digital Twins instance.
-Now, all you have to do is start the simulator project, located in *Azure_Digital_Twins_end_to_end_samples > DeviceSimulator > DeviceSimulator.sln*. If you're using Visual Studio, you can open the project and then run it with this button in the toolbar:
+Now, all you have to do is start the simulator project, located in *digital-twins-samples-master > DeviceSimulator > DeviceSimulator.sln*. If you're using Visual Studio, you can open the project and then run it with this button in the toolbar:
:::image type="content" source="media/how-to-integrate-azure-signalr/start-button-simulator.png" alt-text="The Visual Studio start button (DeviceSimulator project)":::
@@ -189,7 +193,7 @@ Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resourc
az group delete --name <your-resource-group>
```
-Finally, delete the project sample folders that you downloaded to your local machine (*Azure_Digital_Twins_end_to_end_samples.zip* and *Azure_Digital_Twins_SignalR_integration_web_app_sample.zip*).
+Finally, delete the project sample folders that you downloaded to your local machine (*digital-twins-samples-master.zip* and *Azure_Digital_Twins_SignalR_integration_web_app_sample.zip*).
## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-time-series-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
@@ -66,7 +66,7 @@ The Azure Digital Twins [*Tutorial: Connect an end-to-end solution*](./tutorial-
4. Create an Azure Digital Twins [endpoint](concepts-route-events.md#create-an-endpoint) that links your event hub to your Azure Digital Twins instance.

    ```azurecli-interactive
- az dt endpoint create eventhub --endpoint-name <name for your Event Hubs endpoint> --eventhub-resource-group <resource group name> --eventhub-namespace <Event Hubs namespace from above> --eventhub <Twins event hub name from above> --eventhub-policy <Twins auth rule from above> -n <your Azure Digital Twins instance name>
+ az dt endpoint create eventhub -n <your Azure Digital Twins instance name> --endpoint-name <name for your Event Hubs endpoint> --eventhub-resource-group <resource group name> --eventhub-namespace <Event Hubs namespace from above> --eventhub <Twins event hub name from above> --eventhub-policy <Twins auth rule from above>
    ```

5. Create a [route](concepts-route-events.md#create-an-event-route) in Azure Digital Twins to send twin update events to your endpoint. The filter in this route will only allow twin update messages to be passed to your endpoint.
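The effect of that filter can be sketched as a predicate over the event type. The type string below is the standard Azure Digital Twins twin-update event type, shown here for illustration:

```python
TWIN_UPDATE_TYPE = "Microsoft.DigitalTwins.Twin.Update"

def route_admits(event_type: str) -> bool:
    # The route filter passes only twin update events to the endpoint;
    # telemetry, lifecycle, and other event types are not forwarded.
    return event_type == TWIN_UPDATE_TYPE
```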
@@ -90,11 +90,16 @@ This function will convert those twin update events from their original form as
For more information about using Event Hubs with Azure Functions, see [*Azure Event Hubs trigger for Azure Functions*](../azure-functions/functions-bindings-event-hubs-trigger.md).
-Inside your published function app, replace the function code with the following code.
+Inside your published function app, add a new function called **ProcessDTUpdatetoTSI** with the following code.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/updateTSI.cs":::
-From here, the function will then send the JSON objects it creates to a second event hub, which you will connect to Time Series Insights.
+>[!NOTE]
+>You may need to add the packages to your project using the `dotnet add package` command or the Visual Studio NuGet package manager.
+
+Next, **publish** the new Azure function. For instructions on how to do this, see [*How-to: Set up an Azure function for processing data*](how-to-create-azure-function.md#publish-the-function-app-to-azure).
+
+Looking ahead, this function will send the JSON objects it creates to a second event hub, which you will connect to Time Series Insights. You'll create that event hub in the next section.
Later, you'll also set some environment variables that this function will use to connect to your own event hubs.
@@ -131,7 +136,7 @@ Next, you'll need to set environment variables in your function app from earlier
az eventhubs eventhub authorization-rule keys list --resource-group <resource group name> --namespace-name <Event Hubs namespace> --eventhub-name <Twins event hub name from earlier> --name <Twins auth rule from earlier> ```
-2. Use the connection string you get as a result to create an app setting in your function app that contains your connection string:
+2. Use the *primaryConnectionString* value from the result to create an app setting in your function app that contains your connection string:
```azurecli-interactive az functionapp config appsettings set --settings "EventHubAppSetting-Twins=<Twins event hub connection string>" -g <resource group> -n <your App Service (function app) name>
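If you script these two steps together, the *primaryConnectionString* value can be pulled out of the first command's JSON output before passing it to the second. A sketch in Python, with an illustrative (not real) JSON shape:

```python
import json

# Illustrative shape of the `az eventhubs ... keys list` JSON output;
# the endpoint, key names, and key values here are made up.
keys_output = """{
  "primaryConnectionString": "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=twins-rule;SharedAccessKey=abc123;EntityPath=twins-hub",
  "secondaryConnectionString": "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=twins-rule;SharedAccessKey=def456;EntityPath=twins-hub"
}"""

# Lift out the primary connection string and build the app setting value.
conn = json.loads(keys_output)["primaryConnectionString"]
setting = f"EventHubAppSetting-Twins={conn}"
```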
@@ -153,15 +158,15 @@ Next, you'll need to set environment variables in your function app from earlier
## Create and connect a Time Series Insights instance
-Next, you will set up a Time Series Insights instance to receive the data from your second event hub. Follow the steps below, and for more details about this process, see [*Tutorial: Set up an Azure Time Series Insights Gen2 PAYG environment*](../time-series-insights/tutorials-set-up-tsi-environment.md).
+Next, you will set up a Time Series Insights instance to receive the data from your second (TSI) event hub. Follow the steps below, and for more details about this process, see [*Tutorial: Set up an Azure Time Series Insights Gen2 PAYG environment*](../time-series-insights/tutorials-set-up-tsi-environment.md).
-1. In the Azure portal, begin creating a Time Series Insights resource.
+1. In the Azure portal, begin creating a Time Series Insights environment.
    1. Select the **Gen2(L1)** pricing tier.
    2. You will need to choose a **time series ID** for this environment. Your time series ID can be up to three values that you will use to search for your data in Time Series Insights. For this tutorial, you can use **$dtId**. Read more about selecting an ID value in [*Best practices for choosing a Time Series ID*](../time-series-insights/how-to-select-tsid.md).

    :::image type="content" source="media/how-to-integrate-time-series-insights/create-twin-id.png" alt-text="The creation portal UX for a Time Series Insights environment. The Gen2(L1) pricing tier is selected and the time series ID property name is $dtId" lightbox="media/how-to-integrate-time-series-insights/create-twin-id.png":::
-2. Select **Next: Event Source** and select your Event Hubs information from above. You will also need to create a new Event Hubs consumer group.
+2. Select **Next: Event Source** and select your TSI event hub information from earlier. You will also need to create a new Event Hubs consumer group.
:::image type="content" source="media/how-to-integrate-time-series-insights/event-source-twins.png" alt-text="The creation portal UX for a Time Series Insights environment event source. You are creating an event source with the event hub information from above. You are also creating a new consumer group." lightbox="media/how-to-integrate-time-series-insights/event-source-twins.png":::
@@ -175,7 +180,7 @@ If you are using the end-to-end tutorial ([*Tutorial: Connect an end-to-end solu
Now, data should be flowing into your Time Series Insights instance, ready to be analyzed. Follow the steps below to explore the data coming in.
-1. Open your Time Series Insights instance in the [Azure portal](https://portal.azure.com) (you can search for the name of your instance in the portal search bar). Visit the *Time Series Insights Explorer URL* shown in the instance overview.
+1. Open your Time Series Insights environment in the [Azure portal](https://portal.azure.com) (you can search for the name of your environment in the portal search bar). Visit the *Time Series Insights Explorer URL* shown in the instance overview.
:::image type="content" source="media/how-to-integrate-time-series-insights/view-environment.png" alt-text="Select the Time Series Insights explorer URL in the overview tab of your Time Series Insights environment":::
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-routes-apis-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
@@ -27,10 +27,12 @@ Alternatively, you can also manage endpoints and routes with the [Azure portal](
## Prerequisites
-* You'll need an **Azure account** (you can set one up for free [here](https://azure.microsoft.com/free/?WT.mc_id=A261C142F))
-* You'll need an **Azure Digital Twins instance** in your Azure subscription. If you don't have an instance already, you can create one using the steps in [*How-to: Set up an instance and authentication*](how-to-set-up-instance-cli.md). Have the following values from setup handy to use later in this article:
+- You'll need an **Azure account** (you can set one up for free [here](https://azure.microsoft.com/free/?WT.mc_id=A261C142F))
+- You'll need an **Azure Digital Twins instance** in your Azure subscription. If you don't have an instance already, you can create one using the steps in [*How-to: Set up an instance and authentication*](how-to-set-up-instance-cli.md). Have the following values from setup handy to use later in this article:
  - Instance name
  - Resource group
+[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
## Create an endpoint for Azure Digital Twins
@@ -45,7 +47,7 @@ To link an endpoint to Azure Digital Twins, the event grid topic, event hub, or
### Create an Event Grid endpoint
-The following example shows how to create an event grid-type endpoint using the Azure CLI. You can use [Azure Cloud Shell](https://shell.azure.com), or [install the CLI locally](/cli/azure/install-azure-cli?preserve-view=true&view=azure-cli-latest).
+The following example shows how to create an event grid-type endpoint using the Azure CLI.
First, create an event grid topic. You can use the following command, or view the steps in more detail by visiting [the *Create a custom topic* section](../event-grid/custom-event-quickstart-portal.md#create-a-custom-topic) of the Event Grid *Custom events* quickstart.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-set-up-instance-scripted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-scripted.md
@@ -30,11 +30,13 @@ This version of this article completes these steps by running an [**automated de
## Prerequisites: Download the script
-The sample script is written in PowerShell. It is part of the [**Azure Digital Twins end-to-end samples**](/samples/azure-samples/digital-twins-samples/digital-twins-samples/), which you can download to your machine by navigating to that sample link and selecting the *Download ZIP* button underneath the title.
+The sample script is written in PowerShell. It is part of the [**Azure Digital Twins end-to-end samples**](/samples/azure-samples/digital-twins-samples/digital-twins-samples/), which you can download to your machine by navigating to that sample link and selecting the *Browse code* button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a *.ZIP* by selecting the *Code* button and *Download ZIP*.
-This will download the sample project to your machine as _**Azure_Digital_Twins_end_to_end_samples.zip**_. Navigate to the folder on your machine and unzip it to extract the files.
+:::image type="content" source="media/includes/download-repo-zip.png" alt-text="View of the digital-twins-samples repo on GitHub. The Code button is selected, producing a small dialog box where the Download ZIP button is highlighted." lightbox="media/includes/download-repo-zip.png":::
-In the unzipped folder, the deployment script is located at _Azure_Digital_Twins_end_to_end_samples > scripts > **deploy.ps1**_.
+This will download a *.ZIP* folder to your machine as **digital-twins-samples-master.zip**. Navigate to the folder on your machine and unzip it to extract the files.
+
+In the unzipped folder, the deployment script is located at _digital-twins-samples-master > scripts > **deploy.ps1**_.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
@@ -57,7 +59,7 @@ Here are the steps to run the deployment script in Cloud Shell.
:::image type="content" source="media/how-to-set-up-instance/cloud-shell/cloud-shell-upload.png" alt-text="Cloud Shell window showing selection of the Upload icon":::
- Navigate to the _**deploy.ps1**_ file on your machine (in _Azure_Digital_Twins_end_to_end_samples > scripts > **deploy.ps1**_) and hit "Open." This will upload the file to Cloud Shell so that you can run it in the Cloud Shell window.
+ Navigate to the _**deploy.ps1**_ file on your machine (in _digital-twins-samples-master > scripts > **deploy.ps1**_) and hit "Open." This will upload the file to Cloud Shell so that you can run it in the Cloud Shell window.
4. Run the script by sending the `./deploy.ps1` command in the Cloud Shell window. You can copy the command below (recall that to paste into Cloud Shell, you can use **Ctrl+Shift+V** on Windows and Linux, or **Cmd+Shift+V** on macOS. You can also use the right-click menu).
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-use-apis-sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-apis-sdks.md
@@ -33,7 +33,7 @@ To use the control plane APIs:
* You can call the APIs directly by referencing the latest Swagger folder in the [control plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/stable). This folder also includes a folder of examples that show the usage.
* You can currently access SDKs for control APIs in...
  - [**.NET (C#)**](https://www.nuget.org/packages/Microsoft.Azure.Management.DigitalTwins/) ([reference [auto-generated]](/dotnet/api/overview/azure/digitaltwins/management?view=azure-dotnet&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/digitaltwins/Microsoft.Azure.Management.DigitalTwins))
- - [**Java**](https://search.maven.org/artifact/com.microsoft.azure.digitaltwins.v2020_10_31/azure-mgmt-digitaltwins/1.0.0/jar) ([reference [auto-generated]](/java/api/overview/azure/digitaltwins?view=azure-java-stable&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/digitaltwins/mgmt-v2020_10_31))
+ - [**Java**](https://search.maven.org/search?q=a:azure-mgmt-digitaltwins) ([reference [auto-generated]](/java/api/overview/azure/digitaltwins?view=azure-java-stable&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/digitaltwins))
  - [**JavaScript**](https://www.npmjs.com/package/@azure/arm-digitaltwins) ([source](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/arm-digitaltwins))
  - [**Python**](https://pypi.org/project/azure-mgmt-digitaltwins/) ([source](https://github.com/Azure/azure-sdk-for-python/tree/release/v3/sdk/digitaltwins/azure-mgmt-digitaltwins))
  - [**Go**](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/digitaltwins/mgmt/2020-10-31/digitaltwins) ([source](https://github.com/Azure/azure-sdk-for-go/tree/master/services/digitaltwins/mgmt/2020-10-31/digitaltwins))
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/overview.md
@@ -35,6 +35,8 @@ In Azure Digital Twins, you define the digital entities that represent the peopl
You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define models such as "building", "floor", and "elevator". You can then create **digital twins** based on these models to represent your specific environment.
+[!INCLUDE [digital-twins-versus-device-twins](../../includes/digital-twins-versus-device-twins.md)]
Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins in terms of their state properties, telemetry events, commands, components, and relationships.

* Models define semantic **relationships** between your entities so that you can connect your twins into a knowledge graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs.
* You can also specialize twins using model inheritance. One model can inherit from another.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/tutorial-end-to-end https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
@@ -118,50 +118,9 @@ This will open the NuGet Package Manager. Select the *Updates* tab and if there
### Publish the app
-Back in your Visual Studio window where the _**AdtE2ESample**_ project is open, from the *Solution Explorer* pane, right-select the _**SampleFunctionsApp**_ project file and hit **Publish**.
+Back in your Visual Studio window where the _**AdtE2ESample**_ project is open, locate the _**SampleFunctionsApp**_ project in the *Solution Explorer* pane.
-:::image type="content" source="media/tutorial-end-to-end/publish-azure-function-1.png" alt-text="Visual Studio: publish project":::
-
-In the *Publish* page that follows, leave the default target selection of **Azure** and hit *Next*.
-
-For a specific target, choose **Azure Function App (Windows)** and hit *Next*.
-
-:::image type="content" source="media/tutorial-end-to-end/publish-azure-function-2.png" alt-text="Publish Azure function in Visual Studio: specific target":::
-
-On the *Functions instance* page, choose your subscription. This should populate a box with the *resource groups* in your subscription.
-
-Select your instance's resource group and hit *+* to create a new Azure Function.
-
-:::image type="content" source="media/tutorial-end-to-end/publish-azure-function-3.png" alt-text="Publish Azure function in Visual Studio: Functions instance (before function app)":::
-
-In the *Function App (Windows) - Create new* window, fill in the fields as follows:
-* **Name** is the name of the consumption plan that Azure will use to host your Azure Functions app. This will also become the name of the function app that holds your actual function. You can choose your own unique value or leave the default suggestion.
-* Make sure the **Subscription** matches the subscription you want to use
-* Make sure the **Resource group** is set to the resource group you want to use
-* Leave the **Plan type** as *Consumption*
-* Select the **Location** that matches the location of your resource group
-* Create a new **Azure Storage** resource using the *New...* link. Set the location to match your resource group, use the other default values, and hit "Ok".
-
-:::image type="content" source="media/tutorial-end-to-end/publish-azure-function-4.png" alt-text="Publish Azure function in Visual Studio: Function App (Windows) - Create new":::
-
-Then, select **Create**.
-
-This should bring you back to the *Functions instance* page, where your new function app is now visible underneath your resource group. Hit *Finish*.
-
-:::image type="content" source="media/tutorial-end-to-end/publish-azure-function-5.png" alt-text="Publish Azure function in Visual Studio: Functions instance (after function app)":::
-
-On the *Publish* pane that opens back in the main Visual Studio window, check that all the information looks correct and select **Publish**.
-
-:::image type="content" source="media/tutorial-end-to-end/publish-azure-function-6.png" alt-text="Publish Azure function in Visual Studio: publish":::
-
-> [!NOTE]
-> If you see a popup like this:
-> :::image type="content" source="media/tutorial-end-to-end/publish-azure-function-7.png" alt-text="Publish Azure function in Visual Studio: publish credentials" border="false":::
-> Select **Attempt to retrieve credentials from Azure** and **Save**.
->
-> If you see a warning to *Upgrade Functions version on Azure* or that *Your version of the functions runtime does not match the version running in Azure*:
->
-> Follow the prompts to upgrade to the latest Azure Functions runtime version. This issue might occur if you're using an older version of Visual Studio than the one recommended in the *Prerequisites* section at the start of this tutorial.
+[!INCLUDE [digital-twins-publish-azure-function.md](../../includes/digital-twins-publish-azure-function.md)]
### Assign permissions to the function app
@@ -169,12 +128,14 @@ To enable the function app to access Azure Digital Twins, the next step is to co
[!INCLUDE [digital-twins-role-rename-note.md](../../includes/digital-twins-role-rename-note.md)]
-In Azure Cloud Shell, use the following command to set an application setting which your function app will use to reference your Azure Digital Twins instance.
+In Azure Cloud Shell, use the following command to set an application setting which your function app will use to reference your Azure Digital Twins instance. Fill in the placeholders with the details of your resources (remember that your Azure Digital Twins instance URL is its host name preceded by *https://*).
```azurecli-interactive
az functionapp config appsettings set -g <your-resource-group> -n <your-App-Service-(function-app)-name> --settings "ADT_SERVICE_URL=<your-Azure-Digital-Twins-instance-URL>"
```
+The output is the list of settings for the Azure Function, which should now contain an entry called *ADT_SERVICE_URL*.
+ Use the following command to create the system-managed identity. Take note of the *principalId* field in the output. ```azurecli-interactive
event-grid https://docs.microsoft.com/en-us/azure/event-grid/concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/concepts.md
@@ -2,7 +2,7 @@
title: Azure Event Grid concepts
description: Describes Azure Event Grid and its concepts. Defines several key components of Event Grid.
ms.topic: conceptual
-ms.date: 10/29/2020
+ms.date: 01/21/2021
--- # Concepts in Azure Event Grid
@@ -13,10 +13,7 @@ This article describes the main concepts in Azure Event Grid.
An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information like: source of the event, time the event took place, and unique identifier. Every event also has specific information that is only relevant to the specific type of event. For example, an event about a new file being created in Azure Storage has details about the file, such as the `lastTimeModified` value. Or, an Event Hubs event has the URL of the Capture file.
-An event of size up to 64 KB is covered by General Availability (GA) Service Level Agreement (SLA). The support for an event of size up to 1 MB is currently in preview. Events over 64 KB are charged in 64-KB increments.
--
-For the properties that are sent in an event, see [Azure Event Grid event schema](event-schema.md).
+The maximum allowed size for an event is 1 MB. Events over 64 KB are charged in 64-KB increments. For the properties that are sent in an event, see [Azure Event Grid event schema](event-schema.md).
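The sizing rule in the updated paragraph can be sketched numerically: an event is accepted up to 1 MB, and anything over 64 KB is charged in additional 64-KB increments. A minimal illustration, assuming JSON-serialized events (the helper name and sample events are ours, not part of the service API):

```python
import json
import math

MAX_EVENT_BYTES = 1024 * 1024   # events larger than 1 MB are rejected
BILLING_INCREMENT = 64 * 1024   # events are charged in 64-KB increments

def billed_increments(event: dict) -> int:
    """Return how many 64-KB billing increments a single event consumes."""
    size = len(json.dumps(event).encode("utf-8"))
    if size > MAX_EVENT_BYTES:
        raise ValueError(f"event is {size} bytes; the maximum allowed size is 1 MB")
    return max(1, math.ceil(size / BILLING_INCREMENT))

small = {"id": "1", "eventType": "demo", "data": {"x": 1}}
big = {"id": "2", "eventType": "demo", "data": {"blob": "a" * (100 * 1024)}}

print(billed_increments(small))  # a tiny event fits in a single 64-KB increment
print(billed_increments(big))    # ~100 KB of payload spills into a second increment
```

The same arithmetic explains the pricing note: a 65-KB event costs the same as a 128-KB one.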
## Publishers
@@ -71,10 +68,7 @@ If Event Grid can't confirm that an event has been received by the subscriber's
## Batching
-When using a custom topic, events must always be published in an array. This can be a batch of one for low-throughput scenarios, however, for high volume use cases, it's recommended that you batch several events together per publish to achieve higher efficiency. Batches can be up to 1 MB. Each event should still not be greater than 64 KB (General Availability) or 1 MB (preview).
-
-> [!NOTE]
-> An event of size up to 64 KB is covered by General Availability (GA) Service Level Agreement (SLA). The support for an event of size up to 1 MB is currently in preview. Events over 64 KB are charged in 64 KB increments.
+When using a custom topic, events must always be published in an array. This can be a batch of one for low-throughput scenarios; however, for high-volume use cases, it's recommended that you batch several events together per publish to achieve higher efficiency. Batches can be up to 1 MB, and the maximum size of an event is 1 MB.
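The batching guidance above can be enforced client-side before publishing. A sketch, assuming JSON-serialized events and the documented 1-MB limits for both a single event and the whole batch (the validation helper is illustrative, not part of any SDK):

```python
import json

MAX_BYTES = 1024 * 1024  # 1-MB cap for both a single event and the full batch

def validate_batch(events: list) -> bytes:
    """Serialize a batch of events and enforce the documented size limits."""
    for event in events:
        size = len(json.dumps(event).encode("utf-8"))
        if size > MAX_BYTES:
            raise ValueError(f"event {event.get('id')} is {size} bytes (> 1 MB)")
    payload = json.dumps(events).encode("utf-8")
    if len(payload) > MAX_BYTES:
        raise ValueError(f"batch is {len(payload)} bytes (> 1 MB)")
    return payload

# Events are always published as an array, even a batch of one.
batch = [
    {"id": str(i), "eventType": "Contoso.Items.ItemReceived",
     "subject": f"items/{i}", "eventTime": "2021-01-21T00:00:00Z",
     "data": {"itemSku": f"sku-{i}"}, "dataVersion": "1.0"}
    for i in range(3)
]

payload = validate_batch(batch)  # POST this body to the topic endpoint
```

The event fields shown follow the Event Grid event schema; the topic endpoint and authentication are omitted here.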
## Next steps
event-grid https://docs.microsoft.com/en-us/azure/event-grid/create-view-manage-system-topics-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/create-view-manage-system-topics-cli.md
@@ -25,7 +25,7 @@ For a local installation:
## Create a system topic

- To create a system topic on an Azure source first and then create an event subscription for that topic, see the following reference topics:
- - [az eventgrid system-topic create](/cli/azure/ext/eventgrid/eventgrid/system-topic?view=azure-cli-latest#ext-eventgrid-az-eventgrid-system-topic-create)
+ - [az eventgrid system-topic create](/cli/azure/ext/eventgrid/eventgrid/system-topic#ext-eventgrid-az-eventgrid-system-topic-create)
```azurecli-interactive
# Get the ID of the Azure source (for example: Azure Storage account)
@@ -48,14 +48,14 @@ For a local installation:
```azurecli-interactive
az eventgrid topic-type list --output json | grep -w id
```
- - [az eventgrid system-topic event-subscription create](/cli/azure/ext/eventgrid/eventgrid/system-topic/event-subscription?view=azure-cli-latest#ext-eventgrid-az-eventgrid-system-topic-event-subscription-create)
+ - [az eventgrid system-topic event-subscription create](/cli/azure/ext/eventgrid/eventgrid/system-topic/event-subscription#ext-eventgrid-az-eventgrid-system-topic-event-subscription-create)
```azurecli-interactive
az eventgrid system-topic event-subscription create --name <SPECIFY EVENT SUBSCRIPTION NAME> \
    -g rg1 --system-topic-name <SYSTEM TOPIC NAME> \
    --endpoint <ENDPOINT URL>
```
-- To create a system topic (implicitly) when creating an event subscription for an Azure source, use the [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription?view=azure-cli-latest#ext-eventgrid-az-eventgrid-event-subscription-create) method. Here's an example:
+- To create a system topic (implicitly) when creating an event subscription for an Azure source, use the [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) method. Here's an example:
```azurecli-interactive
storageid=$(az storage account show --name <AZURE STORAGE ACCOUNT NAME> --resource-group <AZURE RESOURCE GROUP NAME> --query id --output tsv)
@@ -71,12 +71,12 @@ For a local installation:
## View all system topics

To view all system topics and details of a selected system topic, use the following commands:
-- [az eventgrid system-topic list](/cli/azure/ext/eventgrid/eventgrid/system-topic?view=azure-cli-latest#ext-eventgrid-az-eventgrid-system-topic-list)
+- [az eventgrid system-topic list](/cli/azure/ext/eventgrid/eventgrid/system-topic#ext-eventgrid-az-eventgrid-system-topic-list)
```azurecli-interactive
az eventgrid system-topic list
```
-- [az eventgrid system-topic show](/cli/azure/ext/eventgrid/eventgrid/system-topic?view=azure-cli-latest#ext-eventgrid-az-eventgrid-system-topic-show)
+- [az eventgrid system-topic show](/cli/azure/ext/eventgrid/eventgrid/system-topic#ext-eventgrid-az-eventgrid-system-topic-show)
```azurecli-interactive
az eventgrid system-topic show -g <AZURE RESOURCE GROUP NAME> -n <SYSTEM TOPIC NAME>
@@ -85,7 +85,7 @@ To view all system topics and details of a selected system topic, use the follow
## Delete a system topic

To delete a system topic, use the following command:
-- [az eventgrid system-topic delete](/cli/azure/ext/eventgrid/eventgrid/system-topic?view=azure-cli-latest#ext-eventgrid-az-eventgrid-system-topic-delete)
+- [az eventgrid system-topic delete](/cli/azure/ext/eventgrid/eventgrid/system-topic#ext-eventgrid-az-eventgrid-system-topic-delete)
```azurecli-interactive
az eventgrid system-topic delete -g <AZURE RESOURCE GROUP NAME> --name <SYSTEM TOPIC NAME>
event-grid https://docs.microsoft.com/en-us/azure/event-grid/edge/concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/edge/concepts.md
@@ -43,7 +43,7 @@ See [REST API documentation](api.md) on how to manage subscriptions in Event Gri
## Event handlers
-From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes further action to process the event. Event Grid supports several handler types. You can use a supported Azure service or your own web hook as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee the delivery of the event. If the destination event handler is an HTTP web hook, the event is retried when the handler returns a status code of `200 – OK`. For edge Hub, if the event is delivered without any exception, it is considered successful.
+From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes further action to process the event. Event Grid supports several handler types. You can use a supported Azure service or your own web hook as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee the delivery of the event. If the destination event handler is an HTTP web hook, the event is retried until the handler returns a status code of `200 – OK`. For edge Hub, if the event is delivered without any exception, it is considered successful.
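The status-code contract in the corrected sentence can be illustrated with a toy webhook: any response other than `200 OK` triggers a retry, so the handler should acknowledge only after it has processed the delivery. A minimal stdlib sketch (the handler and endpoint are hypothetical, and Event Grid's subscription-validation handshake is omitted):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    """Toy webhook: acknowledge a delivery by returning 200 OK."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        events = json.loads(body)  # deliveries arrive as a JSON array of events
        # ... process events here; any non-200 response would trigger a retry ...
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the handler on an ephemeral local port and post one sample delivery.
server = HTTPServer(("127.0.0.1", 0), EventHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps([{"id": "1", "eventType": "demo"}]).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
status = urllib.request.urlopen(req).status
print(status)  # 200 means the delivery is treated as successful
server.shutdown()
```

A production handler would also validate the subscription handshake and the event schema before acknowledging.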
## Security
event-grid https://docs.microsoft.com/en-us/azure/event-grid/get-access-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/get-access-keys.md
@@ -29,13 +29,13 @@ Get-AzEventGridDomainKey -ResourceGroup <RESOURCE GROUP NAME> -Name <DOMAIN NAME
```

## Azure CLI
-Use the [az eventgrid topic key list](/cli/azure/eventgrid/topic/key?view=azure-cli-latest#az-eventgrid-topic-key-list) to get access keys for topics.
+Use the [az eventgrid topic key list](/cli/azure/eventgrid/topic/key#az-eventgrid-topic-key-list) to get access keys for topics.
```azurecli-interactive
az eventgrid topic key list --resource-group <RESOURCE GROUP NAME> --name <TOPIC NAME>
```
-Use [az eventgrid domain key list](/cli/azure/eventgrid/domain/key?view=azure-cli-latest#az-eventgrid-domain-key-list) to get access keys for domains.
+Use [az eventgrid domain key list](/cli/azure/eventgrid/domain/key#az-eventgrid-domain-key-list) to get access keys for domains.
```azurecli-interactive
az eventgrid domain key list --resource-group <RESOURCE GROUP NAME> --name <DOMAIN NAME>
```
event-grid https://docs.microsoft.com/en-us/azure/event-grid/handler-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/handler-functions.md
@@ -70,7 +70,7 @@ You can update these values for an existing subscription on the **Features** tab
You can set **maxEventsPerBatch** and **preferredBatchSizeInKilobytes** in an Azure Resource Manager template. For more information, see [Microsoft.EventGrid eventSubscriptions template reference](/azure/templates/microsoft.eventgrid/eventsubscriptions). ### Azure CLI
-You can use the [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription?view=azure-cli-latest#az_eventgrid_event_subscription_create&preserve-view=true) or [az eventgrid event-subscription update](/cli/azure/eventgrid/event-subscription?view=azure-cli-latest#az_eventgrid_event_subscription_update&preserve-view=true) command to configure batch-related settings using the following parameters: `--max-events-per-batch` or `--preferred-batch-size-in-kilobytes`.
+You can use the [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create&preserve-view=true) or [az eventgrid event-subscription update](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_update&preserve-view=true) command to configure batch-related settings using the following parameters: `--max-events-per-batch` or `--preferred-batch-size-in-kilobytes`.
### Azure PowerShell

You can use the [New-AzEventGridSubscription](/powershell/module/az.eventgrid/new-azeventgridsubscription) or [Update-AzEventGridSubscription](/powershell/module/az.eventgrid/update-azeventgridsubscription) cmdlet to configure batch-related settings using the following parameters: `-MaxEventsPerBatch` or `-PreferredBatchSizeInKiloBytes`.
event-grid https://docs.microsoft.com/en-us/azure/event-grid/post-to-custom-topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/post-to-custom-topic.md
@@ -66,10 +66,7 @@ For custom topics, the top-level data contains the same fields as standard resou
] ```
-For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an event grid topic, the array can have a total size of up to 1 MB. Each event in the array is limited to 64 KB (General Availability) or 1 MB (preview).
-
-> [!NOTE]
-> An event of size up to 64 KB is covered by General Availability (GA) Service Level Agreement (SLA). The support for an event of size up to 1 MB is currently in preview. Events over 64 KB are charged in 64-KB increments.
+For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an event grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments.
For example, a valid event data schema is:
event-grid https://docs.microsoft.com/en-us/azure/event-grid/security-authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-authentication.md
@@ -30,7 +30,7 @@ You can secure the webhook endpoint that's used to receive events from Event Gri
### Using client secret as a query parameter

You can also secure your webhook endpoint by adding query parameters to the webhook destination URL specified as part of creating an Event Subscription. Set one of the query parameters to be a client secret such as an [access token](https://en.wikipedia.org/wiki/Access_token) or a shared secret. Event Grid service includes all the query parameters in every event delivery request to the webhook. The webhook service can retrieve and validate the secret. If the client secret is updated, the event subscription also needs to be updated. To avoid delivery failures during this secret rotation, make the webhook accept both old and new secrets for a limited duration before updating the event subscription with the new secret.
-As query parameters could contain client secrets, they are handled with extra care. They are stored as encrypted and are not accessible to service operators. They are not logged as part of the service logs/traces. When retrieving the Event Subscription properties, destination query parameters aren't returned by default. For example: [--include-full-endpoint-url](/cli/azure/eventgrid/event-subscription?view=azure-cli-latest#az-eventgrid-event-subscription-show) parameter is to be used in Azure [CLI](/cli/azure?view=azure-cli-latest).
+As query parameters could contain client secrets, they are handled with extra care. They are stored encrypted and are not accessible to service operators. They are not logged as part of the service logs/traces. When retrieving the Event Subscription properties, destination query parameters aren't returned by default. For example, use the [--include-full-endpoint-url](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-show) parameter in the Azure [CLI](/cli/azure).
For more information on delivering events to webhooks, see [Webhook event delivery](webhook-event-delivery.md)
event-grid https://docs.microsoft.com/en-us/azure/event-grid/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-baseline.md
@@ -309,9 +309,9 @@ Azure role-based access control (Azure RBAC) allows you to manage access to Azur
- [Authorizing access to Event Grid resources](security-authorization.md)

-- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole)
-- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember)
**Azure Security Center monitoring**: Yes
@@ -717,7 +717,7 @@ In addition, use the Azure Resource Graph to query/discover resources within the
Azure Resource Manager has the ability to export the template in JavaScript Object Notation (JSON), which should be reviewed to ensure that the configurations meet the security requirements for your organization before deployments.

-- [How to view available Azure Policy aliases](/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0)
+- [How to view available Azure Policy aliases](/powershell/module/az.resources/get-azpolicyalias)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
@@ -759,9 +759,9 @@ Azure Resource Manager has the ability to export the template in JavaScript Obje
**Guidance**: If using custom Azure Policy definitions for your Event Grid or related resources, use Azure Repos to securely store and manage your code.

-- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?view=azure-devops)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow)
-- [Azure Repos Documentation](/azure/devops/repos/index?view=azure-devops)
+- [Azure Repos Documentation](/azure/devops/repos/index)
**Azure Security Center monitoring**: Not Applicable
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-availability-and-consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-availability-and-consistency.md
@@ -25,10 +25,10 @@ Event Hubs is built on top of a partitioned data model. You can configure the nu
The simplest way to get started with Event Hubs is to use the default behavior.

#### [Azure.Messaging.EventHubs (5.0.0 or later)](#tab/latest)
-If you create a new **[EventHubProducerClient](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient?view=azure-dotnet)** object and use the **[SendAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.sendasync?view=azure-dotnet)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of up time.
+If you create a new **[EventHubProducerClient](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient)** object and use the **[SendAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.sendasync)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of uptime.
#### [Microsoft.Azure.EventHubs (4.1.0 or earlier)](#tab/old)
-If you create a new **[EventHubClient](/dotnet/api/microsoft.azure.eventhubs.eventhubclient)** object and use the **[Send](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.sendasync?view=azure-dotnet#Microsoft_Azure_EventHubs_EventHubClient_SendAsync_Microsoft_Azure_EventHubs_EventData_)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of up time.
+If you create a new **[EventHubClient](/dotnet/api/microsoft.azure.eventhubs.eventhubclient)** object and use the **[Send](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.sendasync#Microsoft_Azure_EventHubs_EventHubClient_SendAsync_Microsoft_Azure_EventHubs_EventData_)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of uptime.
---
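The diff above describes the default send behavior: with no partition key, events are spread across partitions automatically. Conceptually this resembles round-robin assignment — the following is a toy Python sketch of that idea, not the SDK's actual algorithm (the service balances load and can route around unavailable partitions):

```python
from itertools import cycle

def distribute(events, partition_count):
    """Toy model of keyless event distribution: assign events to
    partitions in round-robin order. Real Event Hubs distribution is
    service-side and not guaranteed to be strict round-robin."""
    assignment = {}
    partitions = cycle(range(partition_count))
    for event, pid in zip(events, partitions):
        assignment.setdefault(pid, []).append(event)
    return assignment

spread = distribute([f"event-{i}" for i in range(8)], partition_count=4)
print({pid: len(batch) for pid, batch in sorted(spread.items())})
```

The point of the sketch is only that sending without a partition key trades ordering guarantees for availability: any healthy partition can accept the next event.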
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-geo-dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
@@ -38,8 +38,10 @@ The following terms are used in this article:
- *Alias*: The name for a disaster recovery configuration that you set up. The alias provides a single stable Fully Qualified Domain Name (FQDN) connection string. Applications use this alias connection string to connect to a namespace. -- *Primary/secondary namespace*: The namespaces that correspond to the alias. The primary namespace is "active" and receives messages (can be an existing or new namespace). The secondary namespace is "passive" and doesn't receive messages. The metadata between both is in sync, so both can seamlessly accept messages without any application code or connection string changes. To ensure that only the active namespace receives messages, you must use the alias.
+- *Primary/secondary namespace*: The namespaces that correspond to the alias. The primary namespace is "active" and receives messages (can be an existing or new namespace). The secondary namespace is "passive" and doesn't receive messages. The metadata between both is in sync, so both can seamlessly accept messages without any application code or connection string changes. To ensure that only the active namespace receives messages, you must use the alias.
+ > [!IMPORTANT]
+ > The geo-disaster recovery feature requires the subscription and the resource group to be the same for primary and secondary namespaces.
- *Metadata*: Entities such as event hubs and consumer groups; and their properties of the service that are associated with the namespace. Only entities and their settings are replicated automatically. Messages and events aren't replicated. - *Failover*: The process of activating the secondary namespace.
@@ -68,12 +70,12 @@ The following section is an overview of the failover process, and explains how t
You first create or use an existing primary namespace, and a new secondary namespace, then pair the two. This pairing gives you an alias that you can use to connect. Because you use an alias, you don't have to change connection strings. Only new namespaces can be added to your failover pairing. 1. Create the primary namespace.
-1. Create the secondary namespace. This step is optional. You can create the secondary namespace while creating the pairing in the next step.
+1. Create the secondary namespace in the subscription and the resource group that has the primary namespace. This step is optional. You can create the secondary namespace while creating the pairing in the next step.
1. In the Azure portal, navigate to your primary namespace. 1. Select **Geo-recovery** on the left menu, and select **Initiate pairing** on the toolbar. :::image type="content" source="./media/event-hubs-geo-dr/primary-namspace-initiate-pairing-button.png" alt-text="Initiate pairing from the primary namespace":::
-1. On the **Initiate pairing** page, select an existing secondary namespace or create one, and then select **Create**. In the following example, an existing secondary namespace is selected.
+1. On the **Initiate pairing** page, select an existing secondary namespace or create one in the subscription and the resource group that has the primary namespace. Then, select **Create**. In the following example, an existing secondary namespace is selected.
:::image type="content" source="./media/event-hubs-geo-dr/initiate-pairing-page.png" alt-text="Select the secondary namespace"::: 1. Now, when you select **Geo-recovery** for the primary namespace, you should see the **Geo-DR Alias** page that looks like the following image:
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-baseline.md
@@ -290,9 +290,9 @@ How to onboard Azure Sentinel: https://docs.microsoft.com/azure/sentinel/quickst
**Guidance**: Azure Active Directory (AD) has built-in roles that must be explicitly assigned and are queryable. Use the Azure AD PowerShell module to perform ad hoc queries to discover accounts that are members of administrative groups.
-How to get a directory role in Azure AD with PowerShell: https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0
+How to get a directory role in Azure AD with PowerShell: https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole
-How to get members of a directory role in Azure AD with PowerShell: https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0
+How to get members of a directory role in Azure AD with PowerShell: https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember
**Azure Security Center monitoring**: Yes
@@ -630,7 +630,7 @@ How to create alerts for Azure Activity Log events: https://docs.microsoft.com/a
How to create queries with Azure Resource Graph: https://docs.microsoft.com/azure/governance/resource-graph/first-query-portal
-How to view your Azure Subscriptions: https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-3.0.0
+How to view your Azure Subscriptions: https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription
Understand Azure RBAC: https://docs.microsoft.com/azure/role-based-access-control/overview
@@ -776,7 +776,7 @@ How to configure Conditional Access to block access to Azure Resource Manager: h
Azure Built-in Policy for Event Hubs namespace: https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#event-hub
-How to view available Azure Policy aliases: https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0
+How to view available Azure Policy aliases: https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias
How to configure and manage Azure Policy: https://docs.microsoft.com/azure/governance/policy/tutorials/create-and-manage
@@ -817,9 +817,9 @@ For more information about the Azure Policy Effects: https://docs.microsoft.com
**Guidance**: If using custom Azure Policy definitions for your Event Hubs or related resources, use Azure Repos to securely store and manage your code.
-How to store code in Azure DevOps: https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops
+How to store code in Azure DevOps: https://docs.microsoft.com/azure/devops/repos/git/gitworkflow
-Azure Repos Documentation: https://docs.microsoft.com/azure/devops/repos/index?view=azure-devops
+Azure Repos Documentation: https://docs.microsoft.com/azure/devops/repos/index
**Azure Security Center monitoring**: Not applicable
@@ -983,7 +983,7 @@ How to backup Key Vault Secrets: https://docs.microsoft.com/powershell/module/az
-How to restore key vault keys in Azure: https://docs.microsoft.com/powershell/module/azurerm.keyvault/restore-azurekeyvaultkey?view=azurermps-6.13.0
+How to restore key vault keys in Azure: https://docs.microsoft.com/powershell/module/azurerm.keyvault/restore-azurekeyvaultkey
**Azure Security Center monitoring**: Not applicable
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample author: spelluru ms.author: spelluru
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-global-reach https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-global-reach.md
@@ -45,6 +45,7 @@ ExpressRoute Global Reach is supported in the following places.
* New Zealand * Norway * Singapore
+* South Africa (Johannesburg only)
* Sweden * Switzerland * United Kingdom
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations-providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
@@ -6,7 +6,7 @@ author: duongau
ms.service: expressroute ms.topic: conceptual
-ms.date: 01/05/2021
+ms.date: 01/21/2021
ms.author: duau --- # ExpressRoute partners and peering locations
@@ -95,7 +95,7 @@ The following table shows connectivity locations and the service providers for e
| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GEANT, InterCloud, Interxion, Megaport, Orange, Telia Carrier, T-Systems | | **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | | | **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Equinix, Megaport |
-| **Hong Kong** | [Equinix HK1](https://www.equinix.com/locations/asia-colocation/hong-kong-colocation/hong-kong-data-center/hk1/) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
+| **Hong Kong** | [Equinix HK1](https://www.equinix.com/locations/asia-colocation/hong-kong-colocation/hong-kong-data-center/hk1/) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
| **Hong Kong2** | [MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, PCCW Global Limited, SingTel | | **Jakarta** | Telin, Telkom Indonesia | 4 | n/a | 10G | Telin | | **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, Orange, Teraco |
@@ -108,7 +108,7 @@ The following table shows connectivity locations and the service providers for e
| **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect | | **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | 10G, 100G | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Telstra Corporation, TPG Telecom | | **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | 10G, 100G | Claro, C3ntro, Equinix, Megaport, Neutrona Networks |
-| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | 10G | Colt, Equinix, Fastweb, Retelit |
+| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | 10G | Colt, Equinix, Fastweb, IRIDEOS, Retelit |
| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) | 1 | n/a | 10G, 100G | Cologix | | **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | 10G, 100G | Bell Canada, Cologix, Fibrenoire, Megaport, Telus, Zayo | | **Mumbai** | Tata Communications | 2 | West India | 10G | DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon |
@@ -131,7 +131,7 @@ The following table shows connectivity locations and the service providers for e
| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, IX Reach, Packet, PacketFabric, Level 3 Communications, Megaport, Orange, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | 10G, 100G | Colt, Coresite | | **Singapore** | [Equinix SG1](https://www.equinix.com/locations/asia-colocation/singapore-colocation/singapore-data-center/sg1/) | 2 | Southeast Asia | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone |
-| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | 10G, 100G | China Unicom Global, Colt, Epsilon Global Communications, Megaport, PCCW Global Limited, SingTel |
+| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | 10G, 100G | China Unicom Global, Colt, Epsilon Global Communications, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |
| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | 10G, 100G |GlobalConnect, Megaport | | **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | n/a | 10G | Equinix, Telia Carrier | | **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | 10G, 100G | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ |
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
@@ -7,7 +7,7 @@ author: duongau
ms.service: expressroute ms.topic: conceptual ms.workload: infrastructure-services
-ms.date: 01/05/2021
+ms.date: 01/21/2021
ms.author: duau ---
@@ -89,7 +89,7 @@ The following table shows locations by service provider. If you want to view ava
| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported |Hong Kong, Taipei | | **China Mobile International** |Supported |Supported | Hong Kong, Hong Kong2, Singapore | | **China Telecom Global** |Supported |Supported |Hong Kong, Hong Kong2 |
-| **China Unicom Global** |Supported |Supported | Singapore2 |
+| **China Unicom Global** |Supported |Supported | Hong Kong, Singapore2 |
| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported |Taipei | | **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported |Miami | | **[Cologix](https://www.cologix.com/hyperscale/microsoft-azure/)** |Supported |Supported |Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
@@ -117,6 +117,7 @@ The following table shows locations by service provider. If you want to view ava
| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported |Osaka, Tokyo | | **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported |Cape Town, Johannesburg, London | | **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Copenhagen, Dublin, Frankfurt, London, Marseille, Paris, Zurich |
+| **[IRIDEOS](https://irideos.it/)** |Supported |Supported |Milan |
| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**|Supported |Supported | Amsterdam, London2, Silicon Valley, Toronto, Washington DC | | **Jaguar Network** |Supported |Supported |Marseille, Paris | | **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** |Supported |Supported |London, Newport(Wales) |
@@ -155,7 +156,7 @@ The following table shows locations by service provider. If you want to view ava
| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Zurich | | **[Tata Communications](https://www.tatacommunications.com/lp/izo/azure/azure_https://docsupdatetracker.net/index.html)** |Supported |Supported |Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Sao Paulo, Silicon Valley, Singapore, Washington DC | | **[Telefonica](https://www.business-solutions.telefonica.com/es/enterprise/solutions/efficient-infrastructure/managed-voice-data-connectivity/)** |Supported |Supported |Amsterdam, Sao Paulo |
-| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported |London, London2 |
+| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported |London, London2, Singapore2 |
| **Telenor** |Supported |Supported |Amsterdam, London, Oslo | | **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported |Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Silicon Valley, Stockholm, Washington DC | | **[Telin](https://www.telin.net/)** | Supported | Supported |Jakarta |
firewall https://docs.microsoft.com/en-us/azure/firewall/active-ftp-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/active-ftp-support.md new file mode 100644
@@ -0,0 +1,39 @@
+---
+title: Azure Firewall Active FTP support
+description: By default, Active FTP is disabled on Azure Firewall. You can enable it using PowerShell, the Azure CLI, or an ARM template.
+services: firewall
+author: vhorne
+ms.service: firewall
+ms.topic: conceptual
+ms.date: 01/21/2021
+ms.author: victorh
+---
+
+# Azure Firewall Active FTP support
+
+With Active FTP, the FTP server initiates the data connection to the designated FTP client data port. Firewalls on the client-side network normally block an outside connection request to an internal client port. For more information, see [Active FTP vs. Passive FTP, a Definitive Explanation](https://slacksite.com/other/ftp.html).
+
+By default, Active FTP support is disabled on Azure Firewall to protect against FTP bounce attacks using the FTP `PORT` command. However, you can enable Active FTP when you deploy using Azure PowerShell, the Azure CLI, or an Azure Resource Manager (ARM) template.
+
+## Azure PowerShell
+
+To deploy using Azure PowerShell, use the `AllowActiveFTP` parameter. For more information, see [Create a Firewall with Allow Active FTP](/powershell/module/az.network/new-azfirewall?view=azps-5.4.0#16---create-a-firewall-with-allow-active-ftp-).
+
+## Azure CLI
+
+To deploy using the Azure CLI, use the `--allow-active-ftp` parameter. For more information, see [az network firewall create](/cli/azure/ext/azure-firewall/network/firewall?view=azure-cli-latest#ext_azure_firewall_az_network_firewall_create-optional-parameters).
+
+## Azure Resource Manager (ARM) template
+
+To deploy using an ARM template, use the `AdditionalProperties` field:
+
+```json
+"additionalProperties": {
+ "Network.FTP.AllowActiveFTP": "True"
+ },
+```
+For more information, see [Microsoft.Network azureFirewalls](/azure/templates/microsoft.network/azurefirewalls).
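For context, the `additionalProperties` fragment above sits under the firewall resource's `properties` in a full ARM template. A small Python illustration (the resource name and `apiVersion` here are assumptions for the sketch, not from the article; only the `Network.FTP.AllowActiveFTP` key and value come from the documented fragment):

```python
import json

# Illustrative firewall resource showing where the Active FTP flag lives.
# "demo-firewall" and the apiVersion are hypothetical placeholders.
firewall = {
    "type": "Microsoft.Network/azureFirewalls",
    "apiVersion": "2020-11-01",
    "name": "demo-firewall",
    "properties": {
        "additionalProperties": {
            # Documented key/value enabling Active FTP
            "Network.FTP.AllowActiveFTP": "True"
        }
    },
}
print(json.dumps(firewall["properties"]["additionalProperties"]))
```

Note the value is the string `"True"`, not a JSON boolean, matching the fragment shown in the diff.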
+
+## Next steps
+
+To learn how to deploy an Azure Firewall, see [Deploy and configure Azure Firewall using Azure PowerShell](deploy-ps.md).
\ No newline at end of file
firewall https://docs.microsoft.com/en-us/azure/firewall/ip-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/ip-groups.md
@@ -5,7 +5,7 @@ services: firewall
author: vhorne ms.service: firewall ms.topic: conceptual
-ms.date: 07/30/2020
+ms.date: 01/21/2021
ms.author: victorh ---
@@ -22,6 +22,9 @@ An IP Group can have a single IP address, multiple IP addresses, or one or more
IP Groups can be reused in Azure Firewall DNAT, network, and application rules for multiple firewalls across regions and subscriptions in Azure. Group names must be unique. You can configure an IP Group in the Azure portal, Azure CLI, or REST API. A sample template is provided to help you get started.
+> [!NOTE]
+> IP Groups are not currently available in Azure national cloud environments.
+ ## Sample format The following IPv4 address format examples are valid to use in IP Groups:
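Since IP Group entries can be single IPv4 addresses or CIDR ranges, one quick way to sanity-check candidate entries before submitting them is Python's standard-library `ipaddress` module — a local sketch only; the service performs its own validation, and the example values are illustrative:

```python
import ipaddress

def is_valid_entry(entry):
    """Return True if the entry parses as an IPv4/IPv6 address or CIDR range."""
    try:
        # strict=False accepts host addresses with or without a prefix length
        ipaddress.ip_network(entry, strict=False)
        return True
    except ValueError:
        return False

# Single address, /24 range, small /26 range, and one malformed entry
entries = ["10.0.0.0", "10.0.0.0/24", "192.168.1.0/26", "10.0.0.256"]
print([is_valid_entry(e) for e in entries])
```

A pre-check like this catches typos (such as an out-of-range octet) before a deployment round-trip.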
firewall https://docs.microsoft.com/en-us/azure/firewall/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/overview.md
@@ -53,7 +53,6 @@ Network filtering rules for non-TCP/UDP protocols (for example ICMP) don't work
|SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This requirement exists today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall. |SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules. |Outbound traffic on TCP port 25 isn't allowed| Outbound SMTP connections that use TCP port 25 are blocked. Port 25 is primarily used for unauthenticated email delivery. This is the default platform behavior for virtual machines. For more information, see [Troubleshoot outbound SMTP connectivity issues in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). However, unlike virtual machines, it isn't currently possible to enable this functionality on Azure Firewall. Note: to allow authenticated SMTP (port 587) or SMTP over a port other than 25, please make sure you configure a network rule and not an application rule as SMTP inspection is not supported at this time.|Follow the recommended method to send email, as documented in the SMTP troubleshooting article.
Or, exclude the virtual machine that needs outbound SMTP access from your default route to the firewall. Instead, configure outbound access directly to the internet.
-|Active FTP isn't supported|Active FTP is disabled on Azure Firewall to protect against FTP bounce attacks using the FTP PORT command.|You can use Passive FTP instead. You must still explicitly open TCP ports 20 and 21 on the firewall.
|SNAT port utilization metric shows 0%|The Azure Firewall SNAT port utilization metric may show 0% usage even when SNAT ports are used. In this case, using the metric as part of the firewall health metric provides an incorrect result.|This issue has been fixed and rollout to production is targeted for May 2020. In some cases, firewall redeployment resolves the issue, but it's not consistent. As an intermediate workaround, only use the firewall health state to look for *status=degraded*, not for *status=unhealthy*. Port exhaustion will show as *degraded*. *Not healthy* is reserved for future use when there are more metrics that impact the firewall health. |DNAT isn't supported with Forced Tunneling enabled|Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing.|This is by design because of asymmetric routing. The return path for inbound connections goes via the on-premises firewall, which hasn't seen the connection established. |Outbound Passive FTP may not work for Firewalls with multiple public IP addresses, depending on your FTP server configuration.|Passive FTP establishes different connections for control and data channels. When a Firewall with multiple public IP addresses sends data outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|An explicit SNAT configuration is planned. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses (see [an example for IIS](/iis/configuration/system.applicationhost/sites/sitedefaults/ftpserver/security/datachannelsecurity)). Alternatively, consider using a single IP address in this situation.|
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/ism-protected/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/control-mapping.md
@@ -1,7 +1,7 @@
--- title: Australian Government ISM PROTECTED blueprint sample controls description: Control mapping of the Australian Government ISM PROTECTED blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 09/11/2020
+ms.date: 01/21/2021
ms.topic: sample --- # Control mapping of the Australian Government ISM PROTECTED blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/ism-protected/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/deploy.md
@@ -1,7 +1,7 @@
--- title: Deploy Australian Government ISM PROTECTED blueprint sample description: Deploy steps for the Australian Government ISM PROTECTED blueprint sample including blueprint artifact parameter details.
-ms.date: 09/11/2020
+ms.date: 01/21/2021
ms.topic: sample --- # Deploy the Australian Government ISM PROTECTED blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/ism-protected/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/index.md
@@ -1,7 +1,7 @@
--- title: Australian Government ISM PROTECTED blueprint sample overview description: Overview of the Australian Government ISM PROTECTED blueprint sample. This blueprint sample helps customers assess specific ISM PROTECTED controls.
-ms.date: 09/11/2020
+ms.date: 01/21/2021
ms.topic: sample --- # Overview of the Australian Government ISM PROTECTED blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/azure-security-benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
@@ -1,20 +1,20 @@
---
-title: Regulatory Compliance details for Azure Security Benchmark
-description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 01/08/2021
+title: Regulatory Compliance details for Azure Security Benchmark v1
+description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
+ms.date: 01/21/2021
ms.topic: sample ms.custom: generated ---
-# Details of the Azure Security Benchmark Regulatory Compliance built-in initiative
+# Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative
The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in Azure Security Benchmark.
+definition maps to **compliance domains** and **controls** in Azure Security Benchmark v1.
For more information about this compliance standard, see
-[Azure Security Benchmark](../../../security/benchmarks/overview.md). To understand
+[Azure Security Benchmark v1](../../../security/benchmarks/overview.md). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **Azure Security Benchmark** controls. Use the
+The following mappings are to the **Azure Security Benchmark v1** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
@@ -22,7 +22,7 @@ Then, find and select the **Azure Security Benchmark v1** Regulatory Compliance
initiative definition. This built-in initiative is deployed as part of the
-[Azure Security Benchmark blueprint sample](../../blueprints/samples/azure-security-benchmark.md).
+[Azure Security Benchmark v1 blueprint sample](../../blueprints/samples/azure-security-benchmark.md).
> [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/built-in-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
@@ -1,7 +1,7 @@
--- title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample ms.custom: generated ---
@@ -91,6 +91,10 @@ side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-data-box](../../../../includes/policy/reference/bycat/policies-data-box.md)]
+## Data Factory
+
+[!INCLUDE [azure-policy-reference-policies-data-factory](../../../../includes/policy/reference/bycat/policies-data-factory.md)]
+
## Data Lake

[!INCLUDE [azure-policy-reference-policies-data-lake](../../../../includes/policy/reference/bycat/policies-data-lake.md)]
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/cis-azure-1-1-0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-1-0.md
@@ -1,7 +1,7 @@
---
title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark
description: Details of the CIS Microsoft Azure Foundations Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample
ms.custom: generated
---
@@ -34,7 +34,7 @@ This built-in initiative is deployed as part of the
> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your
> overall compliance status. The associations between compliance domains, controls, and Azure Policy
> definitions for this compliance standard may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/CISv1_1_0_audit.json).
+> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/CISv1_1_0.json).
## Identity and Access Management
@@ -79,6 +79,15 @@ This built-in initiative is deployed as part of the
## Security Center
+### Ensure that standard pricing tier is selected
+
+**ID**: CIS Azure 2.1
+**Ownership**: Customer
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Security Center standard pricing tier should be selected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Standard_pricing_tier.json) |
+
### Ensure that 'Automatic provisioning of monitoring agent' is set to 'On'

**ID**: CIS Azure 2.2
@@ -235,6 +244,15 @@ This built-in initiative is deployed as part of the
|---|---|---|---|
|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
+### Ensure that 'Public access level' is set to Private for blob containers
+
+**ID**: CIS Azure 3.6
+**Ownership**: Customer
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, deny, disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+
### Ensure default network access rule for Storage Accounts is set to deny

**ID**: CIS Azure 3.7
@@ -412,6 +430,15 @@ This built-in initiative is deployed as part of the
|---|---|---|---|
|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+### Ensure the storage container storing the activity logs is not publicly accessible
+
+**ID**: CIS Azure 5.1.5
+**Ownership**: Customer
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, deny, disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+
### Ensure the storage account containing the container with activity logs is encrypted with BYOK (Use Your Own Key)

**ID**: CIS Azure 5.1.6
@@ -598,6 +625,24 @@ This built-in initiative is deployed as part of the
## Other Security Considerations
+### Ensure that the expiration date is set on all keys
+
+**ID**: CIS Azure 8.1
+**Ownership**: Customer
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. |Audit, Deny, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) |
+
+### Ensure that the expiration date is set on all Secrets
+
+**ID**: CIS Azure 8.2
+**Ownership**: Customer
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) |
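Both expiration controls above reduce to the same check: flag any key or secret whose expiry is unset. A minimal Python sketch over hypothetical metadata dicts (real enforcement happens through the linked Azure Policy definitions, not client code):

```python
def names_missing_expiry(items):
    """Return names of keys/secrets with no expiration date set.
    `items` is hypothetical metadata, not a Key Vault SDK type."""
    return [item["name"] for item in items if item.get("expires") is None]

# Hypothetical inventory: one compliant key, one permanent key.
inventory = [
    {"name": "signing-key", "expires": "2022-01-01T00:00:00Z"},
    {"name": "legacy-key", "expires": None},
]
print(names_missing_expiry(inventory))  # → ['legacy-key']
```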
+
### Ensure the key vault is recoverable

**ID**: CIS Azure 8.4
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/hipaa-hitrust-9-2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/hipaa-hitrust-9-2.md
@@ -1,7 +1,7 @@
---
title: Regulatory Compliance details for HIPAA HITRUST 9.2
description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample
ms.custom: generated
---
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/nist-sp-800-171-r2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-171-r2.md
@@ -1,7 +1,7 @@
---
title: Regulatory Compliance details for NIST SP 800-171 R2
description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample
ms.custom: generated
---
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/nist-sp-800-53-r4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-53-r4.md
@@ -1,7 +1,7 @@
---
title: Regulatory Compliance details for NIST SP 800-53 R4
description: Details of the NIST SP 800-53 R4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 01/08/2021
+ms.date: 01/21/2021
ms.topic: sample
ms.custom: generated
---
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/samples/starter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/starter.md
@@ -1,7 +1,7 @@
---
title: Starter query samples
description: Use Azure Resource Graph to run some starter queries, including counting resources, ordering resources, or by a specific tag.
-ms.date: 10/14/2020
+ms.date: 01/21/2021
ms.topic: sample
---
# Starter Resource Graph query samples
@@ -25,6 +25,7 @@ We'll walk through the following starter queries:
- [Count resources that have IP addresses configured by subscription](#count-resources-by-ip)
- [List resources with a specific tag value](#list-tag)
- [List all storage accounts with specific tag value](#list-specific-tag)
+- [List all tags and their values](#list-all-tag-values)
- [Show unassociated network security groups](#unassociated-nsgs)
- [Get cost savings summary from Azure Advisor](#advisor-savings)
- [Count machines in scope of Guest Configuration policies](#count-gcmachines)
@@ -488,6 +489,56 @@ Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Storage/storageAccou
> [!NOTE]
> This example uses `==` for matching instead of the `=~` conditional. `==` is a case-sensitive match.
+## <a name="list-all-tag-values"></a>List all tags and their values
+
+This query lists tags on management groups, subscriptions, and resources along with their values.
+The query first filters to resources where `isnotempty(tags)`, keeps only the _tags_ field with
+`project`, and uses `mvexpand` and `extend` to pull each key/value pair out of the property bag.
+It then uses `union` to combine the results from _ResourceContainers_ with the same results from
+_Resources_, giving broad coverage of which tags are fetched. Last, it limits the results to
+`distinct` key/value pairs and excludes system-hidden tags.
+
+```kusto
+ResourceContainers
+| where isnotempty(tags)
+| project tags
+| mvexpand tags
+| extend tagKey = tostring(bag_keys(tags)[0])
+| extend tagValue = tostring(tags[tagKey])
+| union (
+ resources
+ | where isnotempty(tags)
+ | project tags
+ | mvexpand tags
+ | extend tagKey = tostring(bag_keys(tags)[0])
+ | extend tagValue = tostring(tags[tagKey])
+)
+| distinct tagKey, tagValue
+| where tagKey !startswith "hidden-"
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az graph query -q "ResourceContainers | where isnotempty(tags) | project tags | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey]) | union (resources | where isnotempty(tags) | project tags | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey])) | distinct tagKey, tagValue | where tagKey !startswith 'hidden-'"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
Search-AzGraph -Query "ResourceContainers | where isnotempty(tags) | project tags | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey]) | union (resources | where isnotempty(tags) | project tags | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey])) | distinct tagKey, tagValue | where tagKey !startswith 'hidden-'"
+```
+
+# [Portal](#tab/azure-portal)
+
+:::image type="icon" source="../media/resource-graph-small.png"::: Try this query in Azure Resource Graph Explorer:
+
+- Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/ResourceContainers%20%0A%7C%20where%20isnotempty%28tags%29%0A%7C%20project%20tags%0A%7C%20mvexpand%20tags%0A%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%7C%20union%20%28%0A%20%20%20%20resources%0A%20%20%20%20%7C%20where%20isnotempty%28tags%29%0A%20%20%20%20%7C%20project%20tags%0A%20%20%20%20%7C%20mvexpand%20tags%0A%20%20%20%20%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%20%20%20%20%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%29%0A%7C%20distinct%20tagKey%2C%20tagValue%0A%7C%20where%20tagKey%20%21startswith%20%22hidden-%22" target="_blank">portal.azure.com <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+- Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/ResourceContainers%20%0A%7C%20where%20isnotempty%28tags%29%0A%7C%20project%20tags%0A%7C%20mvexpand%20tags%0A%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%7C%20union%20%28%0A%20%20%20%20resources%0A%20%20%20%20%7C%20where%20isnotempty%28tags%29%0A%20%20%20%20%7C%20project%20tags%0A%20%20%20%20%7C%20mvexpand%20tags%0A%20%20%20%20%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%20%20%20%20%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%29%0A%7C%20distinct%20tagKey%2C%20tagValue%0A%7C%20where%20tagKey%20%21startswith%20%22hidden-%22" target="_blank">portal.azure.us <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+- Azure China 21Vianet portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/ResourceContainers%20%0A%7C%20where%20isnotempty%28tags%29%0A%7C%20project%20tags%0A%7C%20mvexpand%20tags%0A%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%7C%20union%20%28%0A%20%20%20%20resources%0A%20%20%20%20%7C%20where%20isnotempty%28tags%29%0A%20%20%20%20%7C%20project%20tags%0A%20%20%20%20%7C%20mvexpand%20tags%0A%20%20%20%20%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%20%20%20%20%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%29%0A%7C%20distinct%20tagKey%2C%20tagValue%0A%7C%20where%20tagKey%20%21startswith%20%22hidden-%22" target="_blank">portal.azure.cn <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+
+---
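The flatten-and-dedupe pipeline in the query above can also be sketched outside Kusto. A rough Python analog (an illustrative assumption: plain dicts stand in for the Resource Graph property bags):

```python
# Rough Python analog of the Resource Graph tag query.
def distinct_tags(resource_containers, resources):
    pairs = set()
    for bag in list(resource_containers) + list(resources):  # union of both tables
        if not bag:                                          # where isnotempty(tags)
            continue
        for key, value in bag.items():                       # mvexpand + bag_keys/extend
            if not key.startswith("hidden-"):                # exclude system-hidden tags
                pairs.add((key, str(value)))
    return sorted(pairs)                                     # distinct tagKey, tagValue

containers = [{"env": "prod", "hidden-link": "x"}, {}]
resources = [{"env": "prod"}, {"owner": "team-a"}]
print(distinct_tags(containers, resources))  # → [('env', 'prod'), ('owner', 'team-a')]
```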
+
## <a name="unassociated-nsgs"></a>Show unassociated network security groups

This query returns Network Security Groups (NSGs) that aren't associated to a network interface or
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/troubleshoot-connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/troubleshoot-connection.md
@@ -32,11 +32,11 @@ This section helps you determine if your data is reaching IoT Central.
If you haven't already done so, install the `az cli` tool and `azure-iot` extension.
-To learn how to install the `az cli`, see [Install the Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest).
+To learn how to install the `az cli`, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-To [install](/cli/azure/azure-cli-reference-for-IoT?view=azure-cli-latest#extension-reference-installation) the `azure-iot` extension, run the following command:
+To [install](/cli/azure/azure-cli-reference-for-IoT#extension-reference-installation) the `azure-iot` extension, run the following command:
-```cmd/bash
+```azurecli
az extension add --name azure-iot
```
@@ -47,20 +47,20 @@ When you've installed the `azure-iot` extension, start your device to see if the
Use the following commands to sign in the subscription where you have your IoT Central application:
-```cmd/bash
+```azurecli
az login
az account set --subscription <your-subscription-id>
```

To monitor the telemetry your device is sending, use the following command:
-```cmd/bash
+```azurecli
az iot central diagnostics monitor-events --app-id <app-id> --device-id <device-name>
```

If the device has connected successfully to IoT Central, you see output similar to the following:
-```cmd/bash
+```output
Monitoring telemetry.
Filtering on device: device-001
{
@@ -79,13 +79,13 @@ Filtering on device: device-001
To monitor the property updates your device is exchanging with IoT Central, use the following preview command:
-```cmd/bash
+```azurecli
az iot central diagnostics monitor-properties --app-id <app-id> --device-id <device-name>
```

If the device successfully sends property updates, you see output similar to the following:
-```cmd/bash
+```output
Changes in reported properties: version : 32 {'state': 'true', 'name': {'value': {'value': 'Contoso'}, 'status': 'completed', 'desiredVersion': 7, 'ad': 'completed', 'av': 7, 'ac
@@ -103,7 +103,7 @@ If you're still not seeing any data appear on your terminal, it's likely that yo
If your data is not appearing on the monitor, check the provisioning status of your device by running the following command:
-```cmd/bash
+```azurecli
az iot central device registration-info --app-id <app-id> --device-id <device-name>
```
@@ -173,13 +173,13 @@ To detect which categories your issue is in, run the most appropriate command fo
- To validate telemetry, use the preview command:
- ```cmd/bash
+ ```azurecli
  az iot central diagnostics validate-messages --app-id <app-id> --device-id <device-name>
  ```

- To validate property updates, use the preview command:
- ```cmd/bash
+ ```azurecli
  az iot central diagnostics validate-properties --app-id <app-id> --device-id <device-name>
  ```
@@ -187,7 +187,7 @@ You may be prompted to install the `uamqp` library the first time you run a `val
The following output shows example error and warning messages from the validate command:
-```cmd/bash
+```output
Validating telemetry. Filtering on device: v22upeoqx6. Exiting after 300 second(s), or 10 message(s) have been parsed (whichever happens first).
iot-dps https://docs.microsoft.com/en-us/azure/iot-dps/how-to-manage-dps-with-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-manage-dps-with-cli.md
@@ -12,7 +12,7 @@ services: iot-dps
# How to use Azure CLI and the IoT extension to manage the IoT Hub Device Provisioning Service
-[Azure CLI](/cli/azure?view=azure-cli-latest) is an open-source cross platform command-line tool for managing Azure resources such as IoT Edge. Azure CLI is available on Windows, Linux, and macOS. Azure CLI enables you to manage Azure IoT Hub resources, Device Provisioning service instances, and linked-hubs out of the box.
+[Azure CLI](/cli/azure) is an open-source, cross-platform command-line tool for managing Azure resources such as IoT Edge. Azure CLI is available on Windows, Linux, and macOS. Azure CLI enables you to manage Azure IoT Hub resources, Device Provisioning Service instances, and linked hubs out of the box.
The IoT extension enriches Azure CLI with features such as device management and full IoT Edge capability.
@@ -20,20 +20,13 @@ In this tutorial, you first complete the steps to setup Azure CLI and the IoT ex
[!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
-## Installation
+## Prerequisites
-### Install Python
+- [Python 2.7x or Python 3.x](https://www.python.org/downloads/) is required.
-[Python 2.7x or Python 3.x](https://www.python.org/downloads/) is required.
-
-### Install the Azure CLI
-
-Follow the [installation instruction](/cli/azure/install-azure-cli?view=azure-cli-latest) to setup Azure CLI in your environment. At a minimum, your Azure CLI version must be 2.0.70 or above. Use `az --version` to validate. This version supports az extension commands and introduces the Knack command framework. One simple way to install on Windows is to download and install the [MSI](https://aka.ms/InstallAzureCliWindows).
-
-### Install IoT extension
-
-[The IoT extension readme](https://github.com/Azure/azure-iot-cli-extension) describes several ways to install the extension. The simplest way is to run `az extension add --name azure-iot`. After installation, you can use `az extension list` to validate the currently installed extensions or `az extension show --name azure-iot` to see details about the IoT extension. To remove the extension, you can use `az extension remove --name azure-iot`.
+[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+- This article requires version 2.0.70 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
## Basic Device Provisioning Service operations
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-authenticate-downstream-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-authenticate-downstream-device.md
@@ -66,7 +66,7 @@ When you create the new device identity, provide the following information:
You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same operation. The following example uses the [az iot hub device-identity](/cli/azure/ext/azure-iot/iot/hub/device-identity) command to create a new IoT device with symmetric key authentication and assign a parent device:
-```cli
+```azurecli
az iot hub device-identity create -n {iothub name} -d {new device ID} --pd {existing gateway device ID}
```
@@ -121,7 +121,7 @@ For X.509 self-signed authentication, sometimes referred to as thumbprint authen
You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/ext/azure-iot/iot/hub/device-identity) command to create a new IoT device with X.509 self-signed authentication and assigns a parent device:
-```cli
+```azurecli
az iot hub device-identity create -n {iothub name} -d {device ID} --pd {gateway device ID} --am x509_thumbprint --ptp {primary thumbprint} --stp {secondary thumbprint}
```
@@ -165,7 +165,7 @@ This section is based on the instructions detailed in the IoT Hub article [Set u
You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/ext/azure-iot/iot/hub/device-identity) command to create a new IoT device with X.509 CA signed authentication and assigns a parent device:
-```cli
+```azurecli
az iot hub device-identity create -n {iothub name} -d {device ID} --pd {gateway device ID} --am x509_ca
```
@@ -186,19 +186,19 @@ Connection strings for downstream devices need the following components:
All together, a complete connection string looks like:
-```
+```console
HostName=myiothub.azure-devices.net;DeviceId=myDownstreamDevice;SharedAccessKey=xxxyyyzzz;GatewayHostName=myGatewayDevice
```

Or:
-```
+```console
HostName=myiothub.azure-devices.net;DeviceId=myDownstreamDevice;x509=true;GatewayHostName=myGatewayDevice
```

Thanks to the parent/child relationship, you can simplify the connection string by calling the gateway directly as the connection host. For example:
-```
+```console
HostName=myGatewayDevice;DeviceId=myDownstreamDevice;SharedAccessKey=xxxyyyzzz
```
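The semicolon-delimited components described above can be pulled apart with a small parser. This is an illustrative sketch, not an SDK API; it splits each segment at the first `=` so that values containing `=` (such as base64 shared access keys) survive intact:

```python
def parse_connection_string(cs):
    """Split an IoT Hub-style connection string into its key/value parts."""
    parts = {}
    for segment in cs.split(";"):
        key, _, value = segment.partition("=")  # split at first '=' only
        parts[key] = value
    return parts

cs = ("HostName=myiothub.azure-devices.net;DeviceId=myDownstreamDevice;"
      "SharedAccessKey=xxxyyyzzz;GatewayHostName=myGatewayDevice")
parsed = parse_connection_string(cs)
print(parsed["GatewayHostName"])  # → myGatewayDevice
```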
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-connect-downstream-iot-edge-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
@@ -111,7 +111,7 @@ To enable gateway discovery, every IoT Edge gateway device needs to be configure
To enable secure connections, every IoT Edge device in a gateway scenario needs to be configured with an unique device CA certificate and a copy of the root CA certificate shared by all devices in the gateway hierarchy.
-You should already have IoT Edge installed on your device. If not, follow the steps to [Install the Azure IoT Edge runtime](how-to-install-iot-edge.md) and then provision your device with either [symmetric key authentication](how-to-manual-provision-symmetric-key.md) or [X.509 certificate authentication](how-to-manual-provision-x509.md).
+You should already have IoT Edge installed on your device. If not, follow the steps to [Register an IoT Edge device in IoT Hub](how-to-register-device.md) and then [Install the Azure IoT Edge runtime](how-to-install-iot-edge.md).
The steps in this section reference the **root CA certificate** and **device CA certificate and private key** that were discussed earlier in this article. If you created those certificates on a different device, have them available on this device. You can transfer the files physically, such as on a USB drive, with a service like [Azure Key Vault](../key-vault/general/overview.md), or with a utility like [Secure file copy](https://www.ssh.com/ssh/scp/).
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-deploy-cli-at-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-cli-at-scale.md
@@ -185,7 +185,7 @@ You deploy modules to your target devices by creating a deployment that consists
Use the [az iot edge deployment create](/cli/azure/ext/azure-iot/iot/edge/deployment#ext-azure-iot-az-iot-edge-deployment-create) command to create a deployment:
-```cli
+```azurecli
az iot edge deployment create --deployment-id [deployment id] --hub-name [hub name] --content [file path] --labels "[labels]" --target-condition "[target query]" --priority [int]
```
@@ -218,7 +218,7 @@ You cannot update the content of a deployment, which includes the modules and ro
Use the [az iot edge deployment update](/cli/azure/ext/azure-iot/iot/edge/deployment#ext-azure-iot-az-iot-edge-deployment-update) command to update a deployment:
-```cli
+```azurecli
az iot edge deployment update --deployment-id [deployment id] --hub-name [hub name] --set [property1.property2='value']
```
@@ -239,7 +239,7 @@ When you delete a deployment, any devices take on their next highest priority de
Use the [az iot edge deployment delete](/cli/azure/ext/azure-iot/iot/edge/deployment#ext-azure-iot-az-iot-edge-deployment-delete) command to delete a deployment:
-```cli
+```azurecli
az iot edge deployment delete --deployment-id [deployment id] --hub-name [hub name]
```
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-on-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md new file mode 100644