Updates from: 01/08/2022 02:07:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/network-considerations.md
Previously updated : 08/12/2021 Last updated : 01/06/2022
You can enable name resolution using conditional DNS forwarders on the DNS serve
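Conditional forwarders are configured on the DNS servers of the peered or on-premises network, not inside the managed domain. As a minimal sketch (the zone name and forwarder IP addresses below are placeholders, not values from this article), the Windows Server DnsServer module can create one:

```powershell
# Sketch only: forward queries for the managed domain's DNS zone to its domain controllers.
# Replace the zone name and IP addresses with the values from your own deployment.
Add-DnsServerConditionalForwarderZone -Name "aaddscontoso.com" -MasterServers 10.0.1.4, 10.0.1.5
```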
## Network resources used by Azure AD DS
-A managed domain creates some networking resources during deployment. These resources are needed for successful operation and management of the managed domain, and shouldn't be manually configured.
+A managed domain creates some networking resources during deployment. These resources are needed for successful operation and management of the managed domain, and shouldn't be manually configured.
+
+Don't lock the networking resources used by Azure AD DS. If networking resources get locked, they can't be deleted. In that case, when domain controllers need to be rebuilt, new networking resources with different IP addresses have to be created.
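To confirm that no locks have crept onto those resources, you can list them and remove any that shouldn't be there. A minimal sketch with the Az PowerShell module (the resource group and lock names are placeholders):

```powershell
# Sketch: list any locks on the resource group that holds the managed domain's networking resources,
# then remove a specific lock by name if it shouldn't be there.
Get-AzResourceLock -ResourceGroupName "aadds-rg"
Remove-AzResourceLock -LockName "example-lock" -ResourceGroupName "aadds-rg"
```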
| Azure resource | Description |
|:---|:---|
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 12/02/2021 Last updated : 01/07/2022
Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## December 2021
+
+### Updated articles
+
+- [How Application Provisioning works in Azure Active Directory](how-provisioning-works.md)
++ ## November 2021 ### New articles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/whats-new-docs.md
Title: "What's new in Azure Active Directory application proxy" description: "New and updated documentation for the Azure Active Directory application proxy." Previously updated : 12/02/2021 Last updated : 01/07/2022
Welcome to what's new in Azure Active Directory application proxy documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## December 2021
+
+### Updated articles
+
+- [Configure custom domains with Azure AD Application Proxy](application-proxy-configure-custom-domain.md)
+- [Active Directory (Azure AD) Application Proxy frequently asked questions](application-proxy-faq.yml)
++ ## November 2021 ### Updated articles
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 01/05/2022 Last updated : 01/07/2022
If the user attempts to upgrade multiple installations (5+) of the Microsoft Aut
Before you can create this new strong credential, there are prerequisites. One prerequisite is that the device on which the Microsoft Authenticator app is installed must be registered within the Azure AD tenant to an individual user.
-Currently, a device can only be registered in a single tenant. This limit means that only one work or school account in the Microsoft Authenticator app can be enabled for phone sign-in.
+Currently, a device can only be enabled for passwordless sign-in in a single tenant. This limit means that only one work or school account in the Microsoft Authenticator app can be enabled for phone sign-in.
> [!NOTE] > Device registration is not the same as device management or mobile device management (MDM). Device registration only associates a device ID and a user ID together, in the Azure AD directory.
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
-+ Previously updated : 05/17/2021 Last updated : 11/22/2021 #Customer intent: As an application developer, I want to download and run a demo ASP.NET Core web app that can sign in users with personal Microsoft accounts (MSA) and work/school accounts from any Azure Active Directory instance, then access their data in Microsoft Graph on their behalf.
In this quickstart, you download and run a code sample that demonstrates how an
See [How the sample works](#how-the-sample-works) for an illustration.
-> [!div renderon="docs"]
-> ## Prerequisites
->
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-> * [.NET Core SDK 3.1+](https://dotnet.microsoft.com/download)
->
->
-> ## Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `AspNetCoreWebAppCallsGraph-Quickstart`. Users of your app might see this name, and you can change it later.
-> 1. Enter a **Redirect URI** of `https://localhost:44321/signin-oidc`.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Enter a **Front-channel logout URL** of `https://localhost:44321/signout-oidc`.
-> 1. Select **Save**.
-> 1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
-> 1. Enter a **Description**, for example `clientsecret1`.
-> 1. Select **In 1 year** for the secret's expiration.
-> 1. Select **Add** and immediately record the secret's **Value** for use in a later step. The secret value is *never displayed again* and is irretrievable by any other means. Record it in a secure location as you would any password.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ## Step 1: Configure your application in the Azure portal
->
-> For the code sample in this quickstart to work, add a **Redirect URI** of `https://localhost:44321/signin-oidc` and **Front-channel logout URL** of `https://localhost:44321/signout-oidc` in the app registration.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
+## Step 1: Configure your application in the Azure portal
-## Step 2: Download the ASP.NET Core project
+For the code sample in this quickstart to work, add a **Redirect URI** of `https://localhost:44321/signin-oidc` and **Front-channel logout URL** of `https://localhost:44321/signout-oidc` in the app registration.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
-> [!div renderon="docs"]
-> [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip)
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
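If you prefer scripting this change instead of using the portal button, a rough, non-authoritative equivalent with Microsoft Graph PowerShell looks like the following. The display-name filter assumes the registration name suggested earlier in this quickstart; the cmdlets are from the Microsoft.Graph.Applications module.

```powershell
# Sketch: set the web redirect URI and front-channel logout URL on the app registration.
Connect-MgGraph -Scopes "Application.ReadWrite.All"
$app = Get-MgApplication -Filter "displayName eq 'AspNetCoreWebAppCallsGraph-Quickstart'"
Update-MgApplication -ApplicationId $app.Id -Web @{
    RedirectUris = @("https://localhost:44321/signin-oidc")
    LogoutUrl    = "https://localhost:44321/signout-oidc"
}
```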
+
+## Step 2: Download the ASP.NET Core project
-> [!div renderon="portal" class="sxs-lookup"]
-> Run the project.
+Run the project.
-> [!div renderon="portal" class="sxs-lookup" id="autoupdate" class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip) [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div class="sxs-lookup" renderon="portal"]
-> ## Step 3: Your app is configured and ready to run
->
-> We have configured your project with values of your app's properties and it's ready to run.
-> [!div class="sxs-lookup" renderon="portal"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
->
-> ## Step 3: Configure your ASP.NET Core project
-> 1. Extract the .zip archive into a local folder near the root of your drive. For example, into *C:\Azure-Samples*.
-> 1. Open the solution in Visual Studio 2019.
-> 1. Open the *appsettings.json* file and modify the following:
->
-> ```json
-> "ClientId": "Enter_the_Application_Id_here",
-> "TenantId": "common",
-> "clientSecret": "Enter_the_Client_Secret_Here"
-> ```
->
-> - Replace `Enter_the_Application_Id_here` with the **Application (client) ID** of the application you registered in the Azure portal. You can find **Application (client) ID** in the app's **Overview** page.
-> - Replace `common` with one of the following:
-> - If your application supports **Accounts in this organizational directory only**, replace this value with the **Directory (tenant) ID** (a GUID) or **tenant name** (for example, `contoso.onmicrosoft.com`). You can find the **Directory (tenant) ID** on the app's **Overview** page.
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`
-> - If your application supports **All Microsoft account users**, leave this value as `common`
-> - Replace `Enter_the_Client_Secret_Here` with the **Client secret** you created and recorded in an earlier step.
->
-> For this quickstart, don't change any other values in the *appsettings.json* file.
->
-> ## Step 4: Build and run the application
->
-> Build and run the app in Visual Studio by selecting the **Debug** menu > **Start Debugging**, or by pressing the `F5` key.
->
-> You're prompted for your credentials, and then asked to consent to the permissions your app requires. Select **Accept** on the consent prompt.
->
-> :::image type="content" source="media/quickstart-v2-aspnet-core-webapp-calls-graph/webapp-01-consent.png" alt-text="Consent dialog showing the permissions the app is requesting from the > user":::
->
-> After consenting to the requested permissions, the app displays that you've successfully logged in using your Azure Active Directory credentials, and you'll see your email address in the "Api result" section of the page. This was extracted using Microsoft Graph.
->
-> :::image type="content" source="media/quickstart-v2-aspnet-core-webapp-calls-graph/webapp-02-signed-in.png" alt-text="Web browser displaying the running web app and the user signed in":::
+
+## Step 3: Your app is configured and ready to run
+
+We have configured your project with values of your app's properties and it's ready to run.
+
+> [!NOTE]
+> `Enter_the_Supported_Account_Info_Here`
## About the code
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
-+ Previously updated : 09/11/2020 Last updated : 11/22/2021 #Customer intent: As an application developer, I want to know how to write an ASP.NET Core web app that can sign in personal accounts, as well as work and school accounts, from any Azure Active Directory instance.
In this quickstart, you download and run a code sample that demonstrates how an ASP.NET Core web app can sign in users from any Azure Active Directory (Azure AD) organization.
-> [!div renderon="docs"]
-> The following diagram shows how the sample app works:
->
-> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
->
-> ## Prerequisites
->
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-> * [.NET Core SDK 3.1+](https://dotnet.microsoft.com/download)
->
-> ## Register and download the app
-> You have two options to start building your application: automatic or manual configuration.
->
-> ### Automatic configuration
-> If you want to automatically configure your app and then download the code sample, follow these steps:
->
-> 1. Go to the <a href="https://aka.ms/aspnetcore2-1-aad-quickstart-v2/" target="_blank">Azure portal page for app registration</a>.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application in one click.
->
-> ### Manual configuration
-> If you want to manually configure your application and code sample, use the following procedures.
-> #### Step 1: Register your application
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. For **Name**, enter a name for your application. For example, enter **AspNetCore-Quickstart**. Users of your app will see this name, and you can change it later.
-> 1. For **Redirect URI**, enter **https://localhost:44321/signin-oidc**.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Authentication**.
-> 1. For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
-> 1. Under **Implicit grant and hybrid flows**, select **ID tokens**.
-> 1. Select **Save**.
+#### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work:
+- For **Redirect URI**, enter **https://localhost:44321/** and **https://localhost:44321/signin-oidc**.
+- For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure your application in the Azure portal
-> For the code sample in this quickstart to work:
-> - For **Redirect URI**, enter **https://localhost:44321/** and **https://localhost:44321/signin-oidc**.
-> - For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
->
-> The authorization endpoint will issue request ID tokens.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
+The authorization endpoint will be enabled to issue ID tokens.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
-#### Step 2: Download the ASP.NET Core project
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
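The portal button above adds the redirect URIs and enables ID token issuance in one step. As a non-authoritative sketch of the same configuration with Microsoft Graph PowerShell (`$app` is assumed to hold the registration created for this quickstart):

```powershell
# Sketch: add both redirect URIs, set the logout URL, and turn on ID token issuance for the hybrid flow.
Update-MgApplication -ApplicationId $app.Id -Web @{
    RedirectUris          = @("https://localhost:44321/", "https://localhost:44321/signin-oidc")
    LogoutUrl             = "https://localhost:44321/signout-oidc"
    ImplicitGrantSettings = @{ EnableIdTokenIssuance = $true }
}
```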
-> [!div renderon="docs"]
-> [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1.zip)
+#### Step 2: Download the ASP.NET Core project
-> [!div renderon="portal" class="sxs-lookup"]
-> Run the project.
+Run the project.
-> [!div renderon="portal" class="sxs-lookup" id="autoupdate" class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1.zip) [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Your app is configured and ready to run
-> We've configured your project with values of your app's properties, and it's ready to run.
-> [!div class="sxs-lookup" renderon="portal"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure your ASP.NET Core project
-> 1. Extract the .zip archive into a local folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
->
-> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
-> 1. Open the solution in Visual Studio 2019.
-> 1. Open the *appsettings.json* file and modify the following code:
->
-> ```json
-> "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
-> "ClientId": "Enter_the_Application_Id_here",
-> "TenantId": "common",
-> ```
->
-> - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the **Application (client) ID** value on the app's **Overview** page.
-> - Replace `common` with one of the following:
-> - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or the tenant name (for example, `contoso.onmicrosoft.com`). You can find the **Directory (tenant) ID** value on the app's **Overview** page.
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
-> - If your application supports **All Microsoft account users**, leave this value as `common`.
->
-> For this quickstart, don't change any other values in the *appsettings.json* file.
->
-> #### Step 4: Build and run the application
->
-> Build and run the app in Visual Studio by selecting the **Debug** menu > **Start Debugging**, or by pressing the F5 key.
->
-> You're prompted for your credentials, and then asked to consent to the permissions that your app requires. Select **Accept** on the consent prompt.
->
-> :::image type="content" source="media/quickstart-v2-aspnet-core-webapp/webapp-01-consent.png" alt-text="Screenshot of the consent dialog box, showing the permissions that the app is requesting from the user.":::
->
-> After you consent to the requested permissions, the app displays that you've successfully signed in with your Azure Active Directory credentials.
->
-> :::image type="content" source="media/quickstart-v2-aspnet-core-webapp/webapp-02-signed-in.png" alt-text="Screenshot of a web browser that shows the running web app and the signed-in user.":::
+
+#### Step 3: Your app is configured and ready to run
+We've configured your project with values of your app's properties, and it's ready to run.
+
+> [!NOTE]
+> `Enter_the_Supported_Account_Info_Here`
## More information
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
-+ Previously updated : 09/25/2020 Last updated : 11/22/2021 #Customer intent: As an application developer, I want to see a sample ASP.NET web app that can sign in Azure AD users.
In this quickstart, you download and run a code sample that demonstrates an ASP.NET web application that can sign in users with Azure Active Directory (Azure AD) accounts.
-> [!div renderon="docs"]
-> The following diagram shows how the sample app works:
->
-> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
->
-> ## Prerequisites
->
-> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
-> * [.NET Framework 4.7.2+](https://dotnet.microsoft.com/download/visual-studio-sdks)
->
-> ## Register and download the app
-> You have two options to start building your application: automatic or manual configuration.
->
-> ### Automatic configuration
-> If you want to automatically configure your app and then download the code sample, follow these steps:
->
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal page for app registration</a>.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application in one click.
->
-> ### Manual configuration
-> If you want to manually configure your application and code sample, use the following procedures.
->
-> #### Step 1: Register your application
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. For **Name**, enter a name for your application. For example, enter **ASPNET-Quickstart**. Users of your app will see this name, and you can change it later.
-> 1. Add **https://localhost:44368/** in **Redirect URI**, and select **Register**.
-> 1. Under **Manage**, select **Authentication**.
-> 1. In the **Implicit grant and hybrid flows** section, select **ID tokens**.
-> 1. Select **Save**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure your application in the Azure portal
-> For the code sample in this quickstart to work, enter **https://localhost:44368/** for **Redirect URI**.
->
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute.
+#### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work, enter **https://localhost:44368/** for **Redirect URI**.
-#### Step 2: Download the project
+> [!div class="nextstepaction"]
+> [Make this change for me]()
-> [!div renderon="docs"]
-> [Download the Visual Studio 2019 solution](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-DotNet/archive/master.zip)
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute.
+
+#### Step 2: Download the project
-> [!div renderon="portal" class="sxs-lookup"]
-> Run the project by using Visual Studio 2019.
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+Run the project by using Visual Studio 2019.
+> [!div class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-DotNet/archive/master.zip) [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Your app is configured and ready to run
-> We've configured your project with values of your app's properties.
-> [!div renderon="docs"]
-> #### Step 3: Run your Visual Studio project
+#### Step 3: Your app is configured and ready to run
+We've configured your project with values of your app's properties.
1. Extract the .zip file to a local folder that's close to the root folder. For example, extract to *C:\Azure-Samples*.
In this quickstart, you download and run a code sample that demonstrates an ASP.
3. Depending on the version of Visual Studio, you might need to right-click the project **AppModelv2-WebApp-OpenIDConnect-DotNet** and then select **Restore NuGet packages**. 4. Open the Package Manager Console by selecting **View** > **Other Windows** > **Package Manager Console**. Then run `Update-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -r`.
-> [!div renderon="docs"]
-> 5. Edit *Web.config* and replace the parameters `ClientId`, `Tenant`, and `redirectUri` with:
-> ```xml
-> <add key="ClientId" value="Enter_the_Application_Id_here" />
-> <add key="Tenant" value="Enter_the_Tenant_Info_Here" />
-> <add key="redirectUri" value="https://localhost:44368/" />
-> ```
-> In that code:
->
-> - `Enter_the_Application_Id_here` is the application (client) ID of the app registration that you created earlier. Find the application (client) ID on the app's **Overview** page in **App registrations** in the Azure portal.
-> - `Enter_the_Tenant_Info_Here` is one of the following options:
-> - If your application supports **My organization only**, replace this value with the directory (tenant) ID or tenant name (for example, `contoso.onmicrosoft.com`). Find the directory (tenant) ID on the app's **Overview** page in **App registrations** in the Azure portal.
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
-> - If your application supports **All Microsoft account users**, replace this value with `common`.
-> - `redirectUri` is the **Redirect URI** you entered earlier in **App registrations** in the Azure portal.
->
-
-> [!div class="sxs-lookup" renderon="portal"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
+> [!NOTE]
+> `Enter_the_Supported_Account_Info_Here`
## More information This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET application.
-> [!div class="sxs-lookup" renderon="portal"]
-> ### How the sample works
->
-> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
+
+### How the sample works
+
+![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
### OWIN middleware NuGet packages
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-webapp.md
-+ Previously updated : 10/09/2019 Last updated : 11/22/2021 - # Quickstart: Add sign-in with Microsoft to a Java web app In this quickstart, you download and run a code sample that demonstrates how a Java web application can sign in users and call the Microsoft Graph API. Users from any Azure Active Directory (Azure AD) organization can sign in to the application.
To run this sample, you need:
- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or later. - [Maven](https://maven.apache.org/).
-> [!div renderon="docs"]
-> ## Register and download your quickstart app
-> There are two ways to start your quickstart application: express (option 1) and manual (option 2).
->
-> ### Option 1: Register and automatically configure your app, and then download the code sample
->
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application, and then select **Register**.
-> 1. Follow the instructions in the portal's quickstart experience to download the automatically configured application code.
->
-> ### Option 2: Register and manually configure your application and code sample
->
-> #### Step 1: Register your application
->
-> To register your application and manually add the app's registration information to it, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations**.
-> 1. Select **New registration**.
-> 1. Enter a **Name** for your application, for example **java-webapp**. Users of your app might see this name. You can change it later.
-> 1. Select **Register**.
-> 1. On the **Overview** page, note the **Application (client) ID** and the **Directory (tenant) ID**. You'll need these values later.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Web**.
-> 1. In the **Redirect URIs** section, enter `https://localhost:8443/msal4jsample/secure/aad`.
-> 1. Select **Configure**.
-> 1. In the **Web** section, under **Redirect URIs**, enter `https://localhost:8443/msal4jsample/graph/me` as a second redirect URI.
-> 1. Under **Manage**, select **Certificates & secrets**. In the **Client secrets** section, select **New client secret**.
-> 1. Enter a key description (for example, *app secret*), leave the default expiration, and select **Add**.
-> 1. Note the **Value** of the client secret. You'll need it later.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure your application in the Azure portal
->
-> To use the code sample in this quickstart:
->
-> 1. Add reply URLs `https://localhost:8443/msal4jsample/secure/aad` and `https://localhost:8443/msal4jsample/graph/me`.
-> 1. Create a client secret.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
+
+#### Step 1: Configure your application in the Azure portal
+
+To use the code sample in this quickstart:
+
+1. Add reply URLs `https://localhost:8443/msal4jsample/secure/aad` and `https://localhost:8443/msal4jsample/graph/me`.
+1. Create a client secret.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
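Both changes can also be scripted. The sketch below uses Microsoft Graph PowerShell and assumes `$app` already holds the registration from step 1 and that you're connected with `Connect-MgGraph`; the secret's display name is arbitrary.

```powershell
# Sketch: add the two reply URLs, then create a client secret and capture its value immediately.
Update-MgApplication -ApplicationId $app.Id -Web @{
    RedirectUris = @(
        "https://localhost:8443/msal4jsample/secure/aad",
        "https://localhost:8443/msal4jsample/graph/me"
    )
}
$secret = Add-MgApplicationPassword -ApplicationId $app.Id -PasswordCredential @{ DisplayName = "app secret" }
$secret.SecretText   # shown only once; store it securely
```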
#### Step 2: Download the code sample
-> [!div renderon="docs"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-java-webapp/archive/master.zip)
-> [!div class="sxs-lookup" renderon="portal"]
-> Download the project and extract the .zip file into a folder near the root of your drive. For example, *C:\Azure-Samples*.
->
-> To use HTTPS with localhost, provide the `server.ssl.key` properties. To generate a self-signed certificate, use the keytool utility (included in JRE).
->
-> Here's an example:
-> ```
-> keytool -genkeypair -alias testCert -keyalg RSA -storetype PKCS12 -keystore keystore.p12 -storepass password
->
-> server.ssl.key-store-type=PKCS12
-> server.ssl.key-store=classpath:keystore.p12
-> server.ssl.key-store-password=password
-> server.ssl.key-alias=testCert
-> ```
-> Put the generated keystore file in the *resources* folder.
-
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+Download the project and extract the .zip file into a folder near the root of your drive. For example, *C:\Azure-Samples*.
+
+To use HTTPS with localhost, provide the `server.ssl.key` properties. To generate a self-signed certificate, use the keytool utility (included in JRE).
+
+Here's an example:
+```
+ keytool -genkeypair -alias testCert -keyalg RSA -storetype PKCS12 -keystore keystore.p12 -storepass password
+
+ server.ssl.key-store-type=PKCS12
+ server.ssl.key-store=classpath:keystore.p12
+ server.ssl.key-store-password=password
+ server.ssl.key-alias=testCert
+ ```
+ Put the generated keystore file in the *resources* folder.
+
+> [!div class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-java-webapp/archive/master.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE] > > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure the code sample
-> 1. Extract the zip file to a local folder.
-> 1. *Optional.* If you use an integrated development environment, open the sample in that environment.
-> 1. Open the *application.properties* file. You can find it in the *src/main/resources/* folder. Replace the values in the fields `aad.clientId`, `aad.authority`, and `aad.secretKey` with the application ID, tenant ID, and client secret values, respectively. Here's what it should look like:
->
-> ```file
-> aad.clientId=Enter_the_Application_Id_here
-> aad.authority=https://login.microsoftonline.com/Enter_the_Tenant_Info_Here/
-> aad.secretKey=Enter_the_Client_Secret_Here
-> aad.redirectUriSignin=https://localhost:8443/msal4jsample/secure/aad
-> aad.redirectUriGraph=https://localhost:8443/msal4jsample/graph/me
-> aad.msGraphEndpointHost="https://graph.microsoft.com/"
-> ```
-> In the previous code:
->
-> - `Enter_the_Application_Id_here` is the application ID for the application you registered.
-> - `Enter_the_Client_Secret_Here` is the **Client Secret** you created in **Certificates & secrets** for the application you registered.
-> - `Enter_the_Tenant_Info_Here` is the **Directory (tenant) ID** value of the application you registered.
-> 1. To use HTTPS with localhost, provide the `server.ssl.key` properties. To generate a self-signed certificate, use the keytool utility (included in JRE).
->
-> Here's an example:
->
-> ```
-> keytool -genkeypair -alias testCert -keyalg RSA -storetype PKCS12 -keystore keystore.p12 -storepass password
->
-> server.ssl.key-store-type=PKCS12
-> server.ssl.key-store=classpath:keystore.p12
-> server.ssl.key-store-password=password
-> server.ssl.key-alias=testCert
-> ```
-> 1. Put the generated keystore file in the *resources* folder.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Run the code sample
-> [!div renderon="docs"]
-> #### Step 4: Run the code sample
+> [!div class="sxs-lookup"]
+
+#### Step 3: Run the code sample
To run the project, take one of these steps:
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
-+ Previously updated : 10/22/2020 Last updated : 11/22/2021 #Customer intent: As an application developer, I want to know how to set up authentication in a web application built using Node.js and MSAL Node.
This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node
* [Node.js](https://nodejs.org/en/download/) * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-> [!div renderon="docs"]
-> ## Register and download your quickstart application
->
-> #### Step 1: Register your application
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
-> 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-> 1. Set the **Redirect URI** value to `http://localhost:3000/redirect`.
-> 1. Select **Register**.
-> 1. On the app **Overview** page, note the **Application (client) ID** value for later use.
-> 1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**. Leave the description blank and default expiration, and then select **Add**.
-> 1. Note the value of **Client secret** for later use.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure the application in Azure portal
-> For the code sample for this quickstart to work, you need to create a client secret and add the following reply URL: `http://localhost:3000/redirect`.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+#### Step 1: Configure the application in Azure portal
+For the code sample for this quickstart to work, you need to create a client secret and add the following reply URL: `http://localhost:3000/redirect`.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
-#### Step 2: Download the project
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
-> [!div renderon="docs"]
-> To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-node/archive/main.zip).
+#### Step 2: Download the project
-> [!div renderon="portal" class="sxs-lookup"]
-> Run the project with a web server by using Node.js.
+Run the project with a web server by using Node.js.
-> [!div renderon="portal" class="sxs-lookup" id="autoupdate" class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-node/archive/main.zip)
-> [!div renderon="docs"]
-> #### Step 3: Configure your Node app
->
-> Extract the project, open the *ms-identity-node-main* folder, and then open the *index.js* file.
->
-> Set the `clientID` value with the application (client) ID, and then set the `clientSecret` value with the client secret.
->
->```javascript
->const config = {
-> auth: {
-> clientId: "Enter_the_Application_Id_Here",
-> authority: "https://login.microsoftonline.com/common",
-> clientSecret: "Enter_the_Client_Secret_Here"
-> },
->    system: {
->        loggerOptions: {
->            loggerCallback(loglevel, message, containsPii) {
->                console.log(message);
->            },
->         piiLoggingEnabled: false,
->         logLevel: msal.LogLevel.Verbose,
->        }
->    }
->};
-> ```
-
-> [!div renderon="docs"]
->
-> Modify the values in the `config` section:
->
-> - `Enter_the_Application_Id_Here` is the application (client) ID for the application you registered.
->
-> To find the application (client) ID, go to the app registration's **Overview** page in the Azure portal.
-> - `Enter_the_Client_Secret_Here` is the client secret for the application you registered.
->
-> To retrieve or generate a new client secret, under **Manage**, select **Certificates & secrets**.
->
-> The default `authority` value represents the main (global) Azure cloud:
->
-> ```javascript
-> authority: "https://login.microsoftonline.com/common",
-> ```
->
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Your app is configured and ready to run
->
-> [!div renderon="docs"]
->
-> #### Step 4: Run the project
+#### Step 3: Your app is configured and ready to run
Run the project by using Node.js.
active-directory Quickstart V2 Nodejs Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
-+ Previously updated : 10/28/2019 Last updated : 11/22/2021 #Customer intent: As an application developer, I want to know how to set up OpenID Connect authentication in a web application built using Node.js with Express.
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-webapp.md
-+ Previously updated : 09/25/2019 Last updated : 11/22/2021
See [How the sample works](#how-the-sample-works) for an illustration.
- [Flask](http://flask.pocoo.org/), [Flask-Session](https://pypi.org/project/Flask-Session/), [requests](https://requests.kennethreitz.org/en/master/) - [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
-> [!div renderon="docs"]
->
-> ## Register and download your quickstart app
->
-> You have two options to start your quickstart application: express (Option 1), and manual (Option 2)
->
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application.
->
-> ### Option 2: Register and manually configure your application and code sample
->
-> #### Step 1: Register your application
->
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `python-webapp` . Users of your app might see this name, and you can change it later.
-> 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-> 1. Select **Register**.
-> 1. On the app **Overview** page, note the **Application (client) ID** value for later use.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Web**.
-> 1. Add `http://localhost:5000/getAToken` as **Redirect URIs**.
-> 1. Select **Configure**.
-> 1. Under **Manage**, select the **Certificates & secrets** and from the **Client secrets** section, select **New client secret**.
-> 1. Type a key description (for instance app secret), leave the default expiration, and select **Add**.
-> 1. Note the **Value** of the **Client Secret** for later use.
-> 1. Under **Manage**, select **API permissions** > **Add a permission**.
-> 1. Ensure that the **Microsoft APIs** tab is selected.
-> 1. From the *Commonly used Microsoft APIs* section, select **Microsoft Graph**.
-> 1. From the **Delegated permissions** section, ensure that the right permissions are checked: **User.ReadBasic.All**. Use the search box if necessary.
-> 1. Select the **Add permissions** button.
->
-> [!div class="sxs-lookup" renderon="portal"]
->
-> #### Step 1: Configure your application in Azure portal
->
-> For the code sample in this quickstart to work:
->
-> 1. Add a reply URL as `http://localhost:5000/getAToken`.
-> 1. Create a Client Secret.
-> 1. Add Microsoft Graph API's User.ReadBasic.All delegated permission.
->
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute
+#### Step 1: Configure your application in Azure portal
+
+For the code sample in this quickstart to work:
+
+1. Add a reply URL as `http://localhost:5000/getAToken`.
+1. Create a Client Secret.
+1. Add Microsoft Graph API's User.ReadBasic.All delegated permission.
+
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](./media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute.
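If you script the registration instead of using the portal, adding the delegated Microsoft Graph permission means looking up its scope ID first. A hedged sketch with Microsoft Graph PowerShell follows; it assumes `$app` holds the registration, and `00000003-0000-0000-c000-000000000000` is the well-known Microsoft Graph application ID. Granting consent still happens at sign-in or in the portal.

```powershell
# Sketch: look up the User.ReadBasic.All delegated scope on Microsoft Graph, then request it for the app.
$graphSp = Get-MgServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"
$scope   = $graphSp.Oauth2PermissionScopes | Where-Object { $_.Value -eq "User.ReadBasic.All" }
Update-MgApplication -ApplicationId $app.Id -RequiredResourceAccess @(
    @{
        ResourceAppId  = $graphSp.AppId
        ResourceAccess = @(@{ Id = $scope.Id; Type = "Scope" })
    }
)
```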
#### Step 2: Download your project
-> [!div renderon="docs"]
-> [Download the Code Sample](https://github.com/Azure-Samples/ms-identity-python-webapp/archive/master.zip)
-> [!div class="sxs-lookup" renderon="portal"]
-> Download the project and extract the zip file to a local folder closer to the root folder - for example, **C:\Azure-Samples**
-> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"]
+Download the project and extract the zip file to a local folder close to the root folder - for example, **C:\Azure-Samples**.
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-python-webapp/archive/master.zip)
-> [!div class="sxs-lookup" renderon="portal"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-> [!div renderon="docs"]
-> #### Step 3: Configure the Application
->
-> 1. Extract the zip file to a local folder closer to the root folder - for example, **C:\Azure-Samples**
-> 1. If you use an integrated development environment, open the sample in your favorite IDE (optional).
-> 1. Open the **app_config.py** file, which can be found in the root folder and replace with the following code snippet:
->
-> ```python
-> CLIENT_ID = "Enter_the_Application_Id_here"
-> CLIENT_SECRET = "Enter_the_Client_Secret_Here"
-> AUTHORITY = "https://login.microsoftonline.com/Enter_the_Tenant_Name_Here"
-> ```
-> Where:
->
-> - `Enter_the_Application_Id_here` - is the Application Id for the application you registered.
-> - `Enter_the_Client_Secret_Here` - is the **Client Secret** you created in **Certificates & Secrets** for the application you registered.
-> - `Enter_the_Tenant_Name_Here` - is the **Directory (tenant) ID** value of the application you registered.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Run the code sample
-
-> [!div renderon="docs"]
-> #### Step 4: Run the code sample
+> [!NOTE]
+> `Enter_the_Supported_Account_Info_Here`
+
+#### Step 3: Run the code sample
1. You will need to install the MSAL Python library, the Flask framework, Flask-Session for server-side session management, and the requests library by using pip as follows:
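The exact command isn't captured in this digest; a typical installation of the packages named above might look like this (package names per PyPI):

```
pip install msal Flask Flask-Session requests
```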
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/supported-accounts-validation.md
See the following table for the validation differences of various properties for
| Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key |
| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets |
| Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | |
-| API permissions (`requiredResourceAccess`) | No limit\* | No limit\* | Maximum of 50 resources per application and 30 permissions per resource (for example, Microsoft Graph). Total limit of 200 per application (resources x permissions). |
+| API permissions (`requiredResourceAccess`) | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | Maximum of 50 resources per application and 30 permissions per resource (for example, Microsoft Graph). Total limit of 200 per application (resources x permissions). |
| Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined |
| Authorized client applications (`preAuthorizedApplications`) | No limit\* | No limit\* | Total maximum of 500 <br><br> Maximum of 100 client apps defined <br><br> Maximum of 30 scopes defined per client |
| appRoles | Supported <br> No limit\* | Supported <br> No limit\* | Not supported |
active-directory Web App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/web-app-quickstart.md
+
+ Title: "Quickstart: Sign in users in web apps using the auth code flow"
+
+description: In this quickstart, learn how a web app can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow.
+ Last updated : 11/16/2021
+zone_pivot_groups: web-app-quickstart
+#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my web app can sign in users of personal accounts, work accounts, and school accounts.
++
+# Quickstart: Add sign-in with Microsoft to a web app
++++++
active-directory B2b Tutorial Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md
Previously updated : 11/08/2021 Last updated : 01/07/2022
# Tutorial: Enforce multi-factor authentication for B2B guest users
-When collaborating with external B2B guest users, it's a good idea to protect your apps with multi-factor authentication (MFA) policies. Then external users will need more than just a user name and password to access your resources. In Azure Active Directory (Azure AD), you can accomplish this goal with a Conditional Access policy that requires MFA for access. MFA policies can be enforced at the tenant, app, or individual guest user level, the same way that they are enabled for members of your own organization.
+When collaborating with external B2B guest users, it's a good idea to protect your apps with multi-factor authentication (MFA) policies. Then external users will need more than just a user name and password to access your resources. In Azure Active Directory (Azure AD), you can accomplish this goal with a Conditional Access policy that requires MFA for access. MFA policies can be enforced at the tenant, app, or individual guest user level, the same way that they are enabled for members of your own organization. The resource tenant is always responsible for Azure AD Multi-Factor Authentication for users, even if the guest user's organization has Multi-Factor Authentication capabilities.
Example:
Example:
1. The user is asked to complete an MFA challenge. 1. The user sets up MFA with Company A and chooses their MFA option. The user is allowed access to the application.
+>[!NOTE]
+>Azure AD Multi-Factor Authentication is done at resource tenancy to ensure predictability. When the guest user signs in, they'll see the resource tenant sign-in page displayed in the background, and their own home tenant sign-in page and company logo in the foreground.
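The tutorial walks through creating the policy in the portal; as a rough, non-authoritative sketch, a comparable policy can be created with Microsoft Graph PowerShell. The display name is arbitrary, and report-only state is used so nothing is enforced until you're ready.

```powershell
# Sketch: a Conditional Access policy that requires MFA for guest and external users on all apps.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"
New-MgIdentityConditionalAccessPolicy -DisplayName "Require MFA for B2B guests" `
    -State "enabledForReportingButNotEnforced" `
    -Conditions @{
        Users        = @{ IncludeUsers = @("GuestsOrExternalUsers") }
        Applications = @{ IncludeApplications = @("All") }
    } `
    -GrantControls @{ Operator = "OR"; BuiltInControls = @("mfa") }
```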
+ In this tutorial, you will: > [!div class="checklist"]
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 12/02/2021 Last updated : 01/07/2022
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## December 2021
+
+### Updated articles
+
+- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
++ ## November 2021 ### Updated articles
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-assignments.md
na Previously updated : 10/05/2021 Last updated : 01/05/2022
You can also directly assign a user to an access package using Microsoft Graph.
You can assign a user to an access package in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later. This cmdlet takes as parameters * the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet, * the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy`cmdlet,
-* the object ID of the target user.
+* the object ID of the target user, if the user is already present in your directory.
```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetId "a43ee6df-3cc5-491a-ad9d-ea964ef8e464" ```
-You can also assign multiple users to an access package in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.1 or later. This cmdlet takes as parameters
+You can also assign multiple users that are in your directory to an access package using PowerShell with the `New-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.1 or later. This cmdlet takes as parameters
* the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet, * the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy`cmdlet, * the object IDs of the target users, either as an array of strings, or as a list of user members returned from the `Get-MgGroupMember` cmdlet.
-For example, if you want to ensure all the users who are currently members of a group also have assignments to an access package, you can use this cmdlet to create requests for those users who don't currently have assignments. Note that this will cmdlet will only create assignments; it does not remove assignments.
+For example, if you want to ensure all the users who are currently members of a group also have assignments to an access package, you can use this cmdlet to create requests for those users who don't currently have assignments. Note that this cmdlet will only create assignments; it does not remove assignments for users who are no longer members of a group.
```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Directory.Read.All"
$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
$req = New-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -RequiredGroupMember $members ```
+If you wish to add an assignment for a user who is not yet in your directory, you can use the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.9.1 or later. This cmdlet takes as parameters
+* the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet,
+* the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy` cmdlet,
+* the email address of the target user.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
+Select-MgProfile -Name "beta"
+$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies"
+$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
+$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com"
+```
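
To confirm the request was created, a hedged follow-up sketch re-reads it (the `RequestState` property name is an assumption; inspect the returned object in your tenant to see the exact fields):

```powershell
# Hedged sketch: locate the request created above and check its state.
Get-MgEntitlementManagementAccessPackageAssignmentRequest -All |
    Where-Object Id -eq $req.Id |
    Format-List Id, RequestState
```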
+
## Remove an assignment

**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
active-directory Add Application Portal Assign Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-assign-users.md
It is recommended that you use a non-production environment to test the steps in
To create a user account and assign it to an enterprise application, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md).
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-configure.md
It is recommended that you use a non-production environment to test the steps in
To configure the properties of an enterprise application, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md).
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
It is recommended that you use a non-production environment to test the steps in
To configure SSO, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- Completion of the steps in [Quickstart: Create and assign a user account](add-application-portal-assign-users.md).
active-directory Add Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal.md
It is recommended that you use a non-production environment to test the steps in
To add an enterprise application to your Azure AD tenant, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.
## Add an enterprise application
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
It is recommended that you use a non-production environment to test the steps in
To delete an enterprise application, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md).
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Integrating F5 BIG-IP with Azure AD for SHA has the following prerequisites:
No previous experience or F5 BIG-IP knowledge is necessary to implement SHA, but we do recommend familiarizing yourself with F5 BIG-IP terminology. F5's rich [knowledge base](https://www.f5.com/services/resources/glossary) is also a good place to start building BIG-IP knowledge.
+## Deployment scenarios
+
+Configuring a BIG-IP for SHA can be achieved using any of several available methods, including template-based options or a manual configuration. The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD SHA using these methods.
active-directory F5 Big Ip Forms Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md
Title: Configure F5 BIG-IP's Access Policy Manager for form-based SSO description: Learn how to configure F5's BIG-IP Access Policy Manager and Azure Active Directory for secure hybrid access to form-based applications.-+ Last updated 10/20/2021-+
To learn about all the benefits, see [Integrate F5 BIG-IP with Azure Active Dire
## Scenario description
-For this scenario, we have an internal legacy application that's configured for form-based authentication (FBA).
+For this scenario, we have an internal legacy application that's configured for basic form-based authentication (FBA).
-The ideal scenario is to have the application managed and governed directly through Azure AD. But, because the app lacks modern protocol interoperability, it would take considerable effort and time to modernize, which introduces the inevitable costs and risks of potential downtime.
+Ideally, application access should be managed directly by Azure AD, but because the application is a legacy one, it lacks any form of modern authentication protocol. Modernization would take considerable effort and time, introducing inevitable costs and risk of potential downtime. Instead, a BIG-IP deployed between the public internet and the internal application will be used to gate inbound access to the application.
-Instead, a BIG-IP Virtual Edition (VE), deployed between the internet and the internal Azure virtual network that the app is connected to, is used to gate inbound access to Azure AD because of its extensive authentication and authorization capabilities.
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and forms-based SSO, significantly improving the overall security posture of the application.
-Having BIG-IP in front of the application lets you overlay the service with Azure AD pre-authentication and form-based SSO. This approach significantly improves the overall security posture of the application, allowing the business to continue operating at pace, without interruption.
## Scenario Architecture
-The secure hybrid access solution for this scenario is made up of the following elements:
+The secure hybrid access solution for this scenario is made up of:
-**Application**: A back-end service to be protected by Azure AD and BIG-IP secure hybrid access. This particular application validates user credentials against Active Directory, but it could be any directory, including Active Directory Lightweight Directory Services, open source, and so on.
+**Application**: BIG-IP published service to be protected by Azure AD SHA. This particular application validates user credentials against Active Directory, but it could be any directory, including Active Directory Lightweight Directory Services, open source, and so on.
-**Azure AD**: The Security Assertion Markup Language (SAML) Identity Provider (IdP), which is responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM.
+**Azure AD**: Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required attributes including a user identifier.
-**BIG-IP**: A reverse proxy and SAML service provider to the application, which delegates authentication to the SAML IdP before it performs form-based SSO to the back-end application. The cached user credentials are then available for SSO against other forms-based authentication applications.
+**BIG-IP**: Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing forms-based SSO to the backend application. The cached user credentials are then available for SSO against other forms-based authentication applications.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
![Screenshot of the flow diagram, from user to application.](./media/f5-big-ip-forms-advanced/flow-diagram.png)

| Step | Description|
|-: |:-|
-| 1 | A user connects to the application's SAML service provider endpoint (BIG-IP APM).|
-| 2 | The APM access policy redirects the user to the SAML IdP (Azure AD) for pre-authentication.|
-| 3 | Azure AD authenticates the user and applies any enforced Conditional Access policies.|
-| 4 | User is redirected back to the SAML service provider with the issued token and claims. |
+| 1 | User connects to application endpoint (BIG-IP).|
+| 2 | BIG-IP APM access policy redirects user to Azure AD (SAML IdP).|
+| 3 | Azure AD pre-authenticates user and applies any enforced CA policies.|
+| 4 | User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token. |
| 5 | BIG-IP prompts the user for an application password and stores it in the cache. |
| 6 | BIG-IP sends a request to the application and receives a logon form.|
| 7 | The APM scripting auto responds, filling in the username and password before it submits the form.|
The secure hybrid access solution for this scenario is made up of the following
Prior BIG-IP experience is not necessary, but you'll need: -- An Azure AD subscription. If you don't already have one, you can sign up for a free subscription.
+- An Azure AD free subscription or above
- An existing BIG-IP, or [deploy BIG-IP Virtual Edition (VE) in Azure](f5-bigip-deployment-guide.md).
Prior BIG-IP experience is not necessary, but you'll need:
- User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD. -- An account with Azure AD Application Administrator [permissions](../roles/permissions-reference.md#application-administrator).
+- An account with Azure AD Application Admin [permissions](../roles/permissions-reference.md#application-administrator).
- [An SSL certificate](f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default certificates during testing.
- An existing form-based authentication application, or [set up an IIS FBA app](/troubleshoot/aspnet/forms-based-authentication) for testing.
-## Deployment modes
+## BIG-IP deployment methods
-Several methods exist for configuring a BIG-IP for this scenario. This article covers the advanced approach, a more flexible way to implement secure hybrid access in which you manually create all BIG-IP configuration objects. You can use this approach for scenarios that aren't covered by the template-based guided configuration.
+There are many methods to configure BIG-IP for this scenario, including a template-driven guided configuration. This article covers the advanced approach, which provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for more complex scenarios that the guided configuration templates don't cover.
> [!NOTE]
> You should replace all example strings or values in this article with those for your actual environment.
-## Add F5 BIG-IP from the Azure AD gallery
+## Register F5 BIG-IP in Azure AD
+
+Before BIG-IP can hand off pre-authentication to Azure AD, it must be registered in your tenant. This is the first step in establishing SSO between both entities. It's no different from making any IdP aware of a SAML relying party. In this case, the app that you create from the F5 BIG-IP gallery template is the relying party that represents the SAML SP for the BIG-IP published application.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com) by using an account with Application Administrator permissions.
-Setting up a SAML federation trust between BIG-IP APM and Azure AD is one of the first steps in implementing secure hybrid access. It establishes the integration that's required for BIG-IP to hand off pre-authentication and [CA](../conditional-access/overview.md) to Azure AD, before it grants access to the published service.
+2. From the left pane, select the **Azure Active Directory** service.
-1. Sign in to the Azure portal by using an account with Application Administrator permissions.
+3. On the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
-1. On the left pane, select the **Azure Active Directory** service.
+4. On the **Enterprise applications** pane, select **New application**.
-1. Go to **Enterprise Applications** and then, in the ribbon, select **New application**.
+5. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons that indicate whether they support federated SSO and provisioning.
-1. Search for **F5** in the gallery, and then select **F5 BIG-IP APM Azure AD integration**.
+ Search for **F5** in the Azure gallery, and select **F5 BIG-IP APM Azure AD integration**.
-1. Provide a name for the application, and then select **Add/Create** to add it to your tenant. The name should reflect that specific service.
+6. Provide a name for the new application so that you can recognize this instance of it. Select **Add/Create** to add it to your tenant.
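
If you prefer to script this step, a hedged sketch using Microsoft Graph PowerShell follows. The cmdlets shown come from the Microsoft.Graph.Applications module, and the display name is only an example:

```powershell
# Hedged sketch: instantiate the F5 gallery application from its application template.
Connect-MgGraph -Scopes "Application.ReadWrite.All"
$template = Get-MgApplicationTemplate -Filter "displayName eq 'F5 BIG-IP APM Azure AD integration'"
Invoke-MgInstantiateApplicationTemplate -ApplicationTemplateId $template.Id -DisplayName "MyVacation SHA"
```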
-## Configure Azure AD SSO
+### Enable SSO to F5 BIG-IP
-1. With the new **F5** application properties in view, select **Manage** > **Single sign-on**.
+Next, configure the BIG-IP registration to fulfill SAML tokens that the BIG-IP APM requests:
-1. On the **Select a single sign-on method** page, select **SAML**. Skip the prompt to save the single sign-on settings by selecting **No, I'll save later**.
+1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing.
-1. On the **Set up single sign-on with SAML** pane, select the **Edit** button (pen icon) for **Basic SAML Configuration**, and then do the following:
+2. On the **Select a single sign-on method** page, select **SAML** followed by **No, I'll save later** to skip the prompt.
- a. Replace the pre-defined **Identifier** URL with the URL for your BIG-IP published service (for example, `https://myvacation.contoso.com`).
+3. On the **Set up single sign-on with SAML** pane, select the pen icon to edit **Basic SAML Configuration**. Make these edits:
- b. Replace the pre-defined **Reply URL** with the URL for your BIG-IP published service, but include the path for the APM SAML endpoint (for example, `https://myvacation.contoso.com/saml/sp/profile/post/acs`).
+ 1. Replace the predefined **Identifier** value with the full URL for the BIG-IP published application.
+
+ 2. Replace the **Reply URL** value but retain the path for the application's SAML SP endpoint.
+
+ In this configuration, the SAML flow would operate in IdP-initiated mode. In that mode, Azure AD issues a SAML assertion before the user is redirected to the BIG-IP endpoint for the application.
- >[!NOTE]
- >In this configuration, the SAML flow would operate in IdP-initiated mode, where Azure AD issues users a SAML assertion before they're redirected to the BIG-IP service endpoint for the application. The BIG-IP APM supports both IdP-initiated and service provider-initiated modes.
+ 3. To use SP-initiated mode, populate **Sign on URL** with the application URL.
- c. For the `Logout URI`, enter the BIG-IP APM single logout (SLO) endpoint, prepended by the host header of the service that's being published. Providing an SLO URI ensures that users' BIG-IP APM session is also terminated after they're signed out of Azure AD. An example URI might be `https://myvacation.contoso.com/saml/sp/profile/redirect/slr`.
+ 4. For **Logout Url**, enter the BIG-IP APM single logout (SLO) endpoint prepended by the host header of the service that's being published. This step ensures that the user's BIG-IP APM session ends after the user is signed out of Azure AD.
![Screenshot showing a basic SAML configuration.](./media/f5-big-ip-forms-advanced/basic-saml-configuration.png)
- >[!Note]
- > As of F5 Traffic Management Operating System (TMOS) v16, the SAML SLO endpoint is /saml/sp/profile/redirect/slo.
+ > [!NOTE]
+ > From TMOS v16, the SAML SLO endpoint has changed to **/saml/sp/profile/redirect/slo**.
-1. Select **Save** before you close the SAML configuration pane, and skip the SSO test prompt.
+4. Select **Save** before closing the SAML configuration pane and skip the SSO test prompt.
- Observe the properties of the **User Attributes & Claims** section. Azure AD will issue them to users for BIG-IP APM authentication and SSO to the back-end application.
+5. Note the properties of the **User Attributes & Claims** section. Azure AD will issue these properties to users for BIG-IP APM authentication and for SSO to the back-end application.
-1. In the **SAML Signing Certificate** section, select the **Download** link to save the *Federation Metadata XML* file to your computer.
+6. On the **SAML Signing Certificate** pane, select **Download** to save the **Federation Metadata XML** file to your computer.
![Screenshot of the 'Federation Metadata XML' download link.](./media/f5-big-ip-forms-advanced/saml-certificate.png)
- SAML signing certificates that are created by Azure AD have a lifespan of three years. To manage them, use the published guidance in [Manage certificates for federated single sign-on](manage-certificates-for-federated-single-sign-on.md).
+SAML signing certificates created by Azure AD have a lifespan of three years. For more information, see [Manage certificates for federated single sign-on](./manage-certificates-for-federated-single-sign-on.md).
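
To keep an eye on when the current signing certificate expires, here's a hedged Microsoft Graph PowerShell sketch (the display name is an example; `KeyCredentials` holds the certificate details on the application's service principal):

```powershell
# Hedged sketch: list the signing certificate expiry dates for the enterprise application.
Connect-MgGraph -Scopes "Application.Read.All"
$sp = Get-MgServicePrincipal -Filter "displayName eq 'MyVacation SHA'"
$sp.KeyCredentials | Select-Object DisplayName, Usage, EndDateTime
```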
-### Azure AD authorization
+### Assign users and groups
-By default, Azure AD issues tokens only to users that have been granted access to an application.
+By default, Azure AD will issue tokens only for users who have been granted access to an application. To grant specific users and groups access to the application:
-1. In the application's configuration view, select **Users and groups**.
+1. On the **F5 BIG-IP application's overview** pane, select **Assign Users and groups**.
-1. Select **Add user** and then, on the **Add Assignment** pane, select **Users and groups**.
+2. Select **+ Add user/group**.
-1. On the **Users and groups** pane, add the groups of users that are authorized to access the internal application, and then select **Select** > **Assign**.
+3. Select users and groups, and then select **Assign** to assign them to your application.
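
As an alternative to the portal steps above, a hedged Microsoft Graph PowerShell sketch for assigning a single user is shown below. The display name and UPN are examples, and the app role selection is an assumption; pick whichever role the gallery app defines for standard user access:

```powershell
# Hedged sketch: assign a user to the F5 BIG-IP enterprise application.
Connect-MgGraph -Scopes "Application.ReadWrite.All","AppRoleAssignment.ReadWrite.All"
$sp   = Get-MgServicePrincipal -Filter "displayName eq 'MyVacation SHA'"
$user = Get-MgUser -UserId "user@contoso.com"
New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id -PrincipalId $user.Id `
    -ResourceId $sp.Id -AppRoleId $sp.AppRoles[0].Id
```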
-This completes the Azure AD part of the SAML federation trust. You can now set up the BIG-IP APM to publish the internal web application and then configure it with a corresponding set of properties that complete the trust for SAML pre-authentication and SSO.
+## BIG-IP Advanced configuration
-## Advanced configuration
+Now you can proceed with setting up the BIG-IP configurations.
-### The SAML service provider
+### Configure SAML service provider settings
-These settings define the SAML service provider properties that the APM will use for overlaying the legacy application with SAML pre-authentication.
+SAML service provider settings define the SAML SP properties that the APM will use for overlaying the legacy application with SAML pre-authentication. To configure them:
1. Select **Access** > **Federation** > **SAML Service Provider** > **Local SP Services**, and then select **Create**.
These settings define the SAML service provider properties that the APM will use
The values in the **SP Name Settings** section are required only if the entity ID isn't an exact match of the hostname portion of the published URL or, equally, if the entity ID isn't in regular hostname-based URL format. Provide the external scheme and hostname of the application that's being published if the entity ID is *urn:myvacation:contosoonline*.
-### The external IdP connector
+### Configure an external IdP connector
-A SAML IdP connector defines the settings that are required for the BIG-IP APM to trust Azure AD as its SAML IdP. These settings map the SAML service provider to a SAML IdP, which establishes the federation trust between the APM and Azure AD.
+A SAML IdP connector defines the settings that are required for the BIG-IP APM to trust Azure AD as its SAML IdP. These settings map the SAML service provider to a SAML IdP, which establishes the federation trust between the APM and Azure AD. To configure the connector:
1. Select the new SAML service provider object, and then select **Bind/Unbind IdP Connectors**.
A SAML IdP connector defines the settings that are required for the BIG-IP APM t
![Screenshot of the 'Edit SAML IdPs that use this SP' pane.](./media/f5-big-ip-forms-advanced/edit-saml-idp-using-sp.png)
-### Form-based SSO
+### Configure Forms-based SSO
+
+In this section, you create an APM SSO object for performing FBA SSO to back-end applications.
You can perform FBA SSO in either client-initiated mode or by the BIG-IP itself. Both methods emulate a user logon by injecting credentials into the username and password tags before auto submitting the form. The flow is almost transparent, except that users have to provide their password once when they access an FBA application. The password is then cached for reuse across other FBA applications.
Select **Access** > **Single Sign-on** > **Forms Based**, select **Create**, and
For more information about configuring an APM for FBA SSO, go to the F5 [Single Sign-On Methods](https://techdocs.f5.com/en-us/bigip-14-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration-14-1-0/single-sign-on-methods.html#GUID-F8588DF4-F395-4E44-881B-8D16EED91449) site.
-### Access profile configuration
+### Configure an Access profile
An access profile binds many APM elements managing access to BIG-IP virtual servers, including access policies, SSO configuration, and UI settings.
An access profile binds many APM elements managing access to BIG-IP virtual serv
1. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**, and then select **Save**.
-**Attribute mapping**
+ **(Optional) Configure attribute mappings**
-(Optional) By adding a LogonID_Mapping configuration, you can enable the BIG-IP active sessions list to display the UPN of the logged-in user instead of a session number. This action is useful when you're analyzing logs or troubleshooting.
+ Although it's optional, adding a LogonID_Mapping configuration enables the BIG-IP active sessions list to display the UPN of the logged-in user instead of a session number. This information is useful when you're analyzing logs or troubleshooting.
-1. Select the plus sign (**+**) next to the SAML Auth **Successful** branch.
+1. Select the plus (**+**) symbol for the **SAML Auth Successful** branch.
-1. In the pop-up window, select the **Assignment** tab, select **Variable Assign**, and then select **Add Item**.
+1. In the pop-up dialog, select **Assignment** > **Variable Assign** > **Add Item**.
![Screenshot showing the 'Variable Assign' option and its description.](./media/f5-big-ip-forms-advanced/variable-assign.png)
An access profile binds many APM elements managing access to BIG-IP virtual serv
![Screenshot showing the 'Add new entry' field.](./media/f5-big-ip-forms-advanced/add-new-entry.png)
-1. Set both variables to use the following properties:
+1. Set both variables:
| Property | Description |
|:--|:-|
An access profile binds many APM elements managing access to BIG-IP virtual serv
| Session Variable | `session.saml.last.identity`|
| | |
-1. Select **Finished**, and then select **Save**.
+1. Select **Finished** > **Save**.
-1. Commit those settings by selecting **Apply Access Policy** at the upper left, and then close the Visual Policy Editor.
+1. Commit those settings by selecting **Apply Access Policy** and then close the Visual Policy Editor.
![Screenshot showing the 'Apply Access Policy' pane.](./media/f5-big-ip-forms-advanced/apply-access-policy.png)
-### Back-end pool configuration
+### Configure a back-end pool
For the BIG-IP to know where to forward client traffic, you need to create a BIG-IP node object that represents the back-end server that hosts your application. Then, place that node in a BIG-IP server pool.
-1. Select **Local Traffic** > **Pools** > **Pool List**, select **Create**, and then provide a name for a server pool object (for example, *MyApps_VMs*).
+1. Select **Local Traffic** > **Pools** > **Pool List** > **Create** and provide a name for a server pool object. For example, enter **MyApps_VMs**.
![Screenshot shows pool list](./media/f5-big-ip-forms-advanced/pool-list.png)
-1. Add a pool member object with the following properties:
+1. Add a pool member object with the following resource details:
| Property | Description |
|:--|:-|
- | Node Name | (Optional) The display name for the server that hosts the back-end web application |
- | Address | The IP address of the server that hosts the application |
- | Service Port | The HTTP or HTTPS port that the application is listening on |
+ | Node Name: | Optional display name for the server that hosts the back-end web application |
+ | Address: | IP address of the server that hosts the application |
+ | Service Port: | HTTP/S port that the application is listening on |
| | |

![Screenshot showing the pool member properties.](./media/f5-big-ip-forms-advanced/pool-member.png)

>[!NOTE]
->Health monitors require additional configuration that's not covered in this article. For more information, see the F5 article [K13397: Overview of HTTP health monitor request formatting for the BIG-IP DNS system](https://support.f5.com/csp/article/K13397).
+>Health monitors require [additional configuration](https://support.f5.com/csp/article/K13397) that this article doesn't cover.
+
+### Configure a virtual server
-## Virtual server configuration
+A *virtual server* is a BIG-IP data-plane object that's represented by a virtual IP address that listens for client requests to the application. Any received traffic is processed and evaluated against the APM access profile that's associated with the virtual server. The traffic is then directed according to the policy results and settings.
-A virtual server is a BIG-IP data-plane object that's represented by a virtual IP address that listens for client requests to the application. Any received traffic is processed and evaluated against the APM access profile that's associated with the virtual server. The traffic is then directed according to the policy results and settings.
+To configure a virtual server:
-1. Select **Local Traffic** > **Virtual Servers** > **Virtual Server List**, select **Create**, and then do the following:
+1. Select **Local Traffic** > **Virtual Servers** > **Virtual Server List** > **Create**.
- a. For **Name**, enter the virtual server name (for example, *MyVacation*).
+3. Provide the virtual server with a Name value and an IPv4/IPv6 address that isn't already allocated to an existing BIG-IP object or device on the connected network. The IP address will be dedicated to receiving client traffic for the published back-end application. Then set Service Port to 443.
![Screenshot showing the virtual server properties.](./media/f5-big-ip-forms-advanced/virtual-server.png)
+
+3. Set **HTTP Profile (Client)** to **http**.
- b. For **Destination Address/Mask**, enter an unused IP IPv4/IPv6 that can be assigned to the BIG-IP to receive client traffic.
- c. For **Service Port**, enter **443** and **HTTPS**.
-
-1. **SSL Profile (Client)**: Enables Transport Layer Security (TLS), enabling services to be published over HTTPS. Select the client SSL profile that you created as part of the prerequisites, or keep the default settings if you're testing.
+1. Enable a virtual server for Transport Layer Security to allow services to be published over HTTPS. For **SSL Profile (Client)**, select the profile that you created as part of the prerequisites. (Or leave the default if you're testing.)
![Screenshot showing an SSL profile.](./media/f5-big-ip-forms-advanced/ssl-profile.png)
A virtual server is a BIG-IP data-plane object that's represented by a virtual I
![Screenshot showing the 'Access Policy' pane.](./media/f5-big-ip-forms-advanced/access-policy.png)
-1. Finally, for **Default Pool**, select the back-end pool object that you created earlier.
+1. Set **Default Pool** to use the back-end pool objects that you created in the previous section. Then select **Finished**.
![Screenshot showing the 'Default Pool' setting on the 'Resources' pane.](./media/f5-big-ip-forms-advanced/default-pool.png)
-## Session management
+### Configure Session management settings
-You use BIG-IP session management settings to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. You can create your own policy by selecting **Access Policy** > **Access Profiles** and then selecting your application from the list.
+BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. You can create your own policy here. Go to **Access Policy** > **Access Profiles** > **Access Profile** and select your application from the list.
-As for SLO functionality, when you define a single logout URI in Azure AD, you help ensure that an IdP-initiated sign-out from the MyApps portal also terminates the session between the client and the BIG-IP APM.
+If you've defined a Single Logout URI value in Azure AD, it will ensure that an IdP-initiated sign-out from the MyApps portal also ends the session between the client and the BIG-IP APM. The imported application's federation metadata XML file provides the APM with the Azure AD SAML logout endpoint for SP-initiated sign-outs. But for this to be truly effective, the APM needs to know exactly when a user signs out.
-When you've imported the application's federation metadata.xml, you give the APM the Azure AD SAML SLO endpoint for service provider-initiated sign-outs. But for this to be truly effective, the APM needs to know exactly when users sign out.
+Consider a scenario where a BIG-IP web portal is not used. The user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, BIG-IP is technically oblivious to this, so the application session could easily be reinstated through SSO. For this reason, SP-initiated sign-out needs careful consideration to ensure that sessions are securely terminated when they're no longer required.
-In scenarios where a BIG-IP web portal isn't used, users have no way to instruct the APM to sign out. Even if users sign out of the application itself, the BIG-IP is technically unaware of this action, so the application session could easily be reinstated through SSO. For this reason, the service provider-initiated sign-out needs careful consideration to ensure that sessions are securely terminated when they're no longer required.
+One way to achieve this is by adding an SLO function to your application's sign-out button. This function can redirect your client to the Azure AD SAML sign-out endpoint. You can find this SAML sign-out endpoint at App Registrations > Endpoints.
-One way to achieve this would be to add an SLO function to your app's sign-out button, so that it can redirect your client to the Azure AD SAML sign-out endpoint. You can find the SAML sign-out endpoint for your tenant by selecting **App Registrations** > **Endpoints**.
+If you can't change the app, consider having BIG-IP listen for the app's sign-out call. When it detects the request, it should trigger SLO.
-If making a change to the app is a no-go, consider having the BIG-IP listen for the app's sign-out call. When it detects the request, have it trigger SLO. For more information about using BIG-IP iRules to achieve this, see the following F5 articles:
+For more information about using BIG-IP iRules to achieve this, see the following F5 articles:
* [K42052145: Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) * [K12056: Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056)
-## Troubleshoot access to the application
+
+## Summary
+
+Your application should now be published and accessible via secure hybrid access, either directly via the app's URL or through the Microsoft application portals.
+
+The application should also be visible as a target resource in Azure AD CA. For more information, see [Building a Conditional Access policy](../conditional-access/concept-conditional-access-policies.md).
+
+For increased security, organizations that use this pattern could also consider blocking all direct access to the application, which then forces a strict path through the BIG-IP.
+
+## Next steps
+
+From a browser, connect to the application's external URL or select the application's icon in the MyApps portal. After you authenticate to Azure AD, you're redirected to the BIG-IP endpoint for the application and prompted for a password. Notice that the APM pre-fills the username with the UPN from Azure AD. The username that's pre-populated by the APM is read-only to ensure session consistency between Azure AD and the back-end application. You can hide this field from view with an additional configuration, if necessary.
+
+![Screenshot showing secured SSO.](./media/f5-big-ip-forms-advanced/secured-sso.png)
+
+After the information is submitted, users should be automatically signed in to the application.
+
+![Screenshot showing a welcome message.](./media/f5-big-ip-forms-advanced/welcome-message.png)
+
+## Troubleshoot
Failure to access the secure hybrid access-protected application can result from any of several factors, including a misconfiguration. When you troubleshoot this issue, be aware of the following:
The **View Variables** link in this location might also help determine the root
For more information, see the F5 BIG-IP [Session Variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html).
-## Summary
+## Additional resources
-Your application should now be published and accessible via secure hybrid access, either directly via the app's URL or through the Microsoft application portals.
+* [Active Directory Authentication](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html) (F5 article about BIG-IP advanced configuration)
-The application should also be visible as a target resource in Azure AD CA. For more information, see [Building a Conditional Access policy](../conditional-access/concept-conditional-access-policies.md).
+* [Forget passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)
-For increased security, organizations that use this pattern could also consider blocking all direct access to the application, which then forces a strict path through the BIG-IP.
+* [What is Conditional Access?](../conditional-access/overview.md)
-## Next steps
+* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
-From a browser, connect to the application's external URL or select the application's icon in the MyApps portal. After you authenticate to Azure AD, youΓÇÖre redirected to the BIG-IP endpoint for the application and prompted for a password. Notice that the APM pre-fills the username with the UPN from Azure AD. The username that's pre-populated by the APM is read only to ensure session consistency between Azure AD and the back-end application. You can hide this field from view with an additional configuration, if necessary.
-![Screenshot showing secured SSO.](./media/f5-big-ip-forms-advanced/secured-sso.png)
-
-After the information is submitted, users should be automatically signed in to the application.
-
-![Screenshot showing a welcome message.](./media/f5-big-ip-forms-advanced/welcome-message.png)
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
Title: Configure F5 BIG-IP Access Policy Manager for header-based SSO description: Learn how to configure F5's BIG-IP Access Policy Manager (APM) and Azure Active Directory SSO for header-based authentication -+ Last updated 11/10/2021-+ # Tutorial: Configure F5 BIG-IP's Access Policy Manager for header-based SSO
-In this tutorial, you'll learn how to configure F5's BIG-IP Access Policy Manager (APM) and Azure Active Directory (Azure AD) for secure hybrid access to header-based applications.
+In this article, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to header-based applications using F5's BIG-IP advanced configuration.
Configuring BIG-IP published applications with Azure AD provides many benefits, including:
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
## Scenario description
-For this scenario, we have an internal application whose access relies on receiving HTTP authorization headers from a legacy broker system. This enables users to be directed to their respective areas of content.
+For this scenario, we have a legacy application using HTTP authorization headers to control access to protected content.
-The ideal scenario is to have the application managed and governed directly through Azure AD. However, as it lacks any form of modern protocol interop, it would take considerable effort and time to modernize, introducing inevitable costs and risks of potential downtime.
+Ideally, application access should be managed directly by Azure AD, but because the application is a legacy one, it lacks any form of modern authentication protocol. Modernization would take considerable effort and time, introducing inevitable costs and risk of potential downtime. Instead, a BIG-IP deployed between the public internet and the internal application will be used to gate inbound access to the application.
-Instead, a BIG-IP Virtual Edition (VE) deployed between the public internet and the internal Azure VNet the application is connected to will be used. It will enable to gate inbound access, with Azure AD for its extensive choice of authentication and authorization capabilities.
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
-Having a BIG-IP in front of the application enables to overlay the service with Azure AD pre-authentication and header-based SSO. It significantly improves the overall security posture of the application, allowing the business to continue operating at pace, without interruption.
-The secure hybrid access solution for this scenario is made up of the following components:
+## Scenario architecture
-- **Application**: Backend service to be protected by Azure AD and BIG-IP secure hybrid access
+The secure hybrid access solution for this scenario is made up of:
-- **Azure AD**: The SAML Identity Provider (IdP), responsible for
-verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM.
+- **Application**: BIG-IP published service to be protected by Azure AD SHA.
+
+- **Azure AD**: Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes including user identifiers.
- **BIG-IP**: Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP, before performing header-based SSO to the backend application.
performing header-based SSO to the backend application.

| Step | Description |
|:-|:--|
| Step | Description | |:-|:--|
-| 1. | User connects to application's SAML SP endpoint (BIG-IP APM). |
-| 2. | APM access policy redirects user to SAML IdP (Azure AD) for pre-authentication.|
-| 3. | SAML IdP authenticates user and applies any enforced CA policies. |
-| 4. | Azure AD redirects user back to SAML SP with issued token and claims. |
-| 5. | BIG-IP APM grants user access and injects headers in the request to the application. |
+| 1. | User connects to application's SAML SP endpoint (BIG-IP). |
+| 2. | BIG-IP APM access policy redirects user to Azure AD (SAML IdP).|
+| 3. | Azure AD pre-authenticates user and applies any enforced CA policies. |
+| 4. | User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token. |
+| 5. | BIG-IP injects Azure AD attributes as headers in request to the application. |
+| 6. | Application authorizes request and returns payload. |
-For increased security, organizations using this pattern could also consider blocking all direct access to the application, in that way, forcing a strict path through the BIG-IP.
## Prerequisites
This last step provides break down of all applied settings before they are commi
Your application is now published and accessible via Secure Hybrid Access, either directly via its URL or through Microsoft's application portals.
-For increased security, organizations using this pattern could also consider blocking all direct access to the application, in that way forcing a strict path through the BIG-IP.
## Next steps
The output of the injected headers displayed by our headers-based application is
![Screenshot shows the output](./media/f5-big-ip-header-advanced/mytravel-example.png)
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, in that way forcing a strict path through the BIG-IP.
+ ## Troubleshooting Failure to access the secure hybrid access protected application could be down to any number of potential factors, including a
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
# Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication
-In this article, you'll learn how to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration.
+In this tutorial, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration.
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
If the **web_svc_account** service runs in context of a computer account, use th
For more information, see [Kerberos Constrained Delegation across domains](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831477(v=ws.11)).
-## Make BIG-IP advanced configurations
+## BIG-IP advanced configuration
Now you can proceed with setting up the BIG-IP configurations.
An *access profile* binds many APM elements that manage access to BIG-IP virtual
![Screenshot that shows the list box for configuring an A A A server.](./media/f5-big-ip-kerberos-advanced/configure-aaa-server.png)
-6. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**.
+6. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**, and then select **Save**.
![Screenshot that shows changing the successful branch to Allow.](./media/f5-big-ip-kerberos-advanced/select-allow-successful-branch.png)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
# Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO
-In this tutorial, youΓÇÖll implement Secure Hybrid Access (SHA) with Single Sign-on (SSO) to Kerberos applications using F5ΓÇÖs BIG-IP Easy Button guided configuration.
+In this article, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications using F5's BIG-IP Easy Button guided configuration.
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
Having a BIG-IP in front of the application enables us to overlay the service wi
## Scenario architecture
-The secure hybrid access solution for this scenario is made up of the following:
+The SHA solution for this scenario is made up of the following:
**Application:** BIG-IP published service to be protected by Azure AD SHA. The application host is domain-joined and so is integrated with Active Directory (AD).
The secure hybrid access solution for this scenario is made up of the following:
**BIG-IP:** Reverse proxy functionality enables publishing backend applications. The APM then overlays published applications with SAML Service Provider (SP) and SSO functionality.
-Secure hybrid access for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
![Scenario architecture](./media/f5-big-ip-kerberos-easy-button/scenario-architecture.png)
For those scenarios, go ahead and deploy using the Guided Configuration. Then na
## Troubleshooting
-You can fail to access the secure hybrid access protected application due to any number of factors, including a misconfiguration.
+You can fail to access the SHA protected application due to any number of factors, including a misconfiguration.
Consider the following points while troubleshooting any issue.
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
-# Tutorial: Configure F5 BIG-IP Easy Button for Header-based and LDAP SSO
+# Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO
-In this tutorial, youΓÇÖll implement Secure Hybrid Access (SHA) with Single Sign-on (SSO) to header-based applications that also require session augmentation through Lightweight Directory Access Protocol (LDAP) sourced attributes using F5ΓÇÖs BIG-IP Easy Button guided configuration.
+In this article, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to header-based applications that also require session augmentation through Lightweight Directory Access Protocol (LDAP) sourced attributes using F5's BIG-IP Easy Button guided configuration.
Configuring BIG-IP published applications with Azure AD provides many benefits, including:
Enabling SSO allows users to access BIG-IP published services without having to
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-ldap/sso-headers.png)

>[!NOTE]
->The APM session variables defined within curly brackets are CASE sensitive. For example, if our queried LDAP attribute was returned as eventroles, then the above variable definition would fail to populate the eventrole header value. In case of any issues, troubleshoot using the session analysis steps to check how the APM has variables defined will avoid any issues.
+>The APM session variables defined within curly brackets are case-sensitive. For example, if our queried LDAP attribute was returned as eventroles, then the above variable definition would fail to populate the eventrole header value. In case of any issues, troubleshoot by using the session analysis steps to check how the APM has defined the variables.
### Session Management
If making a change to the app is a no go, then consider having the BIG-IP listen
Select **Deploy** to commit all settings and verify that the application has appeared in your tenant. This last step provides a breakdown of all applied settings before they're committed.
-Your application should now be published and accessible via Secure Hybrid Access, either directly via its URL or through MicrosoftΓÇÖs application portals. For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals. For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
## Next steps
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/view-applications-portal.md
It is recommended that you use a non-production environment to test the steps in
To view applications that have been registered in your Azure AD tenant, you need: -- An Azure account. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md).
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 12/02/2021 Last updated : 01/07/2022
reviewer: napuri
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## December 2021
+
+### New articles
+
+- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)
+- [Configure risk-based step-up consent using PowerShell](configure-risk-based-step-up-consent.md)
+- [Grant consent on behalf of a single user by using PowerShell](grant-consent-single-user.md)
+- [Overview of enterprise application ownership in Azure Active Directory](overview-assign-app-owners.md)
+- [Azure Active Directory admin consent workflow frequently asked questions](admin-consent-workflow-faq.md)
+- [Review and take action on admin consent requests](review-admin-consent-requests.md)
+- [Overview of the Azure Active Directory application gallery](overview-application-gallery.md)
+
+### Updated articles
+
+- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)
+- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
+- [Applications listed in Enterprise applications](application-list.md)
+- [Quickstart: View enterprise applications](view-applications-portal.md)
+- [Secure hybrid access: Secure legacy apps with Azure Active Directory](secure-hybrid-access.md)
+- [Secure hybrid access with Azure Active Directory partner integrations](secure-hybrid-access-integrations.md)
+- [Create collections on the My Apps portal](access-panel-collections.md)
+- [Restrict access to a tenant](tenant-restrictions.md)
+- [Reasons why applications appear in my all applications list](application-types.md)
+- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
+- [Quickstart: Enable single sign-on for an enterprise application](add-application-portal-setup-sso.md)
+- [What is single sign-on in Azure Active Directory?](what-is-single-sign-on.md)
+- [Configure how users consent to applications](configure-user-consent.md)
+- [Consent and permissions overview](consent-and-permissions-overview.md)
+- [Manage consent to applications and evaluate consent requests](manage-consent-requests.md)
+- [Remove user access to applications](methods-for-removing-user-access.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
+- [Assign enterprise application owners](assign-app-owners.md)
+- [Integrate Azure AD with F5 BIG-IP for form-based authentication single sign-on](f5-big-ip-forms-advanced.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
+
## November 2021

### New articles
active-directory Custom Enterprise App Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-enterprise-app-permissions.md
To delegate the update and read of basic SAML Configurations for SAML based sing
To delegate the management of signing certificates for SAML-based single sign-on applications, the following permissions are required:
-microsoft.directory/applications/credentials/update
+microsoft.directory/servicePrincipals/credentials/update
#### Update expiring sign-in cert notification email address
active-directory Workplace By Facebook Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workplace-by-facebook-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![authorize](./media/workplace-by-facebook-provisioning-tutorial/workplace-login.png)
+> [!NOTE]
+> Failure to change the URL to https://scim.workplace.com/ will result in a failure when trying to save the configuration
+
6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.

   ![Notification Email](common/provisioning-notification-email.png)
In December 2021, Facebook released a SCIM 2.0 connector. Completing the steps b
* Scoping filters
* Custom attribute mappings
-Be sure to note any changes that have been made to the settings listed above before completing the steps below. Failure to do so will result in the loss of customized settings.
+> [!NOTE]
+> Be sure to note any changes that have been made to the settings listed above before completing the steps below. Failure to do so will result in the loss of customized settings.
1. Sign into the Azure portal at https://portal.azure.com
2. Navigate to your current Workplace by Facebook app under Azure Active Directory > Enterprise Applications
POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronizat
11. Restore any previous changes you made to the application (Authentication details, Scoping filters, Custom attribute mappings) and re-enable provisioning.
+> [!NOTE]
+> Failure to restore the previous settings may result in attributes (for example, name.formatted) being updated in Workplace unexpectedly. Be sure to check the configuration before enabling provisioning.
+ ## Change log * 09/10/2020 - Added support for enterprise attributes "division", "organization", "costCenter" and "employeeNumber". Added support for custom attributes "startDate", "auth_method" and "frontline"
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-configuration.md
az aks create -n aks -g myResourceGroup --enable-oidc-issuer
To update a cluster to use OIDC Issuer: ```azurecli-interactive
-az aks upgrade -n aks -g myResourceGroup --enable-oidc-issuer
+az aks update -n aks -g myResourceGroup --enable-oidc-issuer
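+
+# A hypothetical follow-up (sketch only): retrieve the issuer URL once the feature is enabled.
+# The oidcIssuerProfile.issuerUrl query path is an assumption, not taken from this article.
+az aks show -n aks -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -o tsv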
``` ## Next steps
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-kubenet-dual-stack.md
+
+ Title: Configure dual-stack kubenet networking in Azure Kubernetes Service (AKS)
+description: Learn how to configure dual-stack kubenet networking in Azure Kubernetes Service (AKS)
++ Last updated : 12/15/2021++
+# Use dual-stack kubenet networking in Azure Kubernetes Service (AKS) (Preview)
+
+AKS clusters can now be deployed in a dual-stack (using both IPv4 and IPv6 addresses) mode when using [kubenet][kubenet] networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and an IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and an IPv6 address from a logically different address space than the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).
+
+This article shows you how to use dual-stack networking with an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts].
++
+## Limitations
+> [!NOTE]
+> Dual-stack kubenet networking is currently not available in sovereign clouds. This note will be removed when rollout is complete.
+* Azure Route Tables have a hard limit of 400 routes per table. Because each node in a dual-stack cluster requires two routes, one for each IP address family, dual-stack clusters are limited to 200 nodes.
+* During preview, service objects are only supported with `externalTrafficPolicy: Local`.
+* Dual-stack networking is required for the Azure Virtual Network and the pod CIDR - single stack IPv6-only isn't supported for node or pod IP addresses. Services can be provisioned on IPv4 or IPv6.
+* Features **not supported on dual-stack kubenet** include:
+ * [Azure network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy)
+ * [Calico network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy)
+ * [NAT Gateway][nat-gateway]
+ * [Virtual nodes add-on](virtual-nodes.md#network-requirements)
+ * [Windows node pools](./windows-faq.md)
+
+## Prerequisites
+
+* All prerequisites from [configure kubenet networking](configure-kubenet.md) apply.
+* AKS dual-stack clusters require Kubernetes version v1.21.2 or greater. v1.22.2 or greater is recommended to take advantage of the [out-of-tree cloud controller manager][aks-out-of-tree], which is the default on v1.22 and up.
+* Azure CLI with the `aks-preview` extension 0.5.48 or newer.
+* If using Azure Resource Manager templates, schema version 2021-10-01 is required.
+
+### Register the `AKS-EnableDualStack` preview feature
+
+To create an AKS dual-stack cluster, you must enable the `AKS-EnableDualStack` feature flag on your subscription.
+
+Register the `AKS-EnableDualStack` feature flag by using the `az feature register` command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-EnableDualStack"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-EnableDualStack')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+## Overview of dual-stack networking in Kubernetes
+
+Kubernetes v1.23 brings stable upstream support for [IPv4/IPv6 dual-stack][kubernetes-dual-stack] clusters, including pod and service networking. Nodes and pods are always assigned both an IPv4 and an IPv6 address, while services can be single-stack on either address family or dual-stack.
+
+AKS configures the required supporting services for dual-stack networking. This configuration includes:
+
+* Dual-stack virtual network configuration (if managed Virtual Network is used)
+* IPv4 and IPv6 node and pod addresses
+* Outbound rules for both IPv4 and IPv6 traffic
+* Load balancer setup for IPv4 and IPv6 services
+
+## Deploying a dual-stack cluster
+
+Three new attributes are provided to support dual-stack clusters (a combined example using all three follows this list):
+* `--ip-families` - takes a comma-separated list of IP families to enable on the cluster.
+ * Currently only `ipv4` or `ipv4,ipv6` are supported.
+* `--pod-cidrs` - takes a comma-separated list of CIDR notation IP ranges to assign pod IPs from.
+ * The count and order of ranges in this list must match the value provided to `--ip-families`.
+ * If no values are supplied, the default values of `10.244.0.0/16,fd12:3456:789a::/64` will be used.
+* `--service-cidrs` - takes a comma-separated list of CIDR notation IP ranges to assign service IPs from.
+ * The count and order of ranges in this list must match the value provided to `--ip-families`.
+ * If no values are supplied, the default values of `10.0.0.0/16,fd12:3456:789a:1::/108` will be used.
+ * The IPv6 subnet assigned to `--service-cidrs` can be no larger than a /108.
+
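+For example, the three parameters can be combined in a single `az aks create` call. The following is a minimal sketch rather than part of this article's own walkthrough: the region, resource group, and cluster names are placeholders, and the CIDR values simply restate the defaults listed above.
+
+```azurecli-interactive
+# Hypothetical example: dual-stack cluster with explicit pod and service CIDRs.
+# Replace <Region>, <ResourceGroupName>, and <ClusterName> with your own values.
+az aks create -l <Region> -g <ResourceGroupName> -n <ClusterName> \
+    --ip-families ipv4,ipv6 \
+    --pod-cidrs 10.244.0.0/16,fd12:3456:789a::/64 \
+    --service-cidrs 10.0.0.0/16,fd12:3456:789a:1::/108
+```
+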
+### Deploy the cluster
+
+# [Azure CLI](#tab/azure-cli)
+
+Deploying a dual-stack cluster requires passing the `--ip-families` parameter with the parameter value of `ipv4,ipv6` to indicate that a dual-stack cluster should be created.
+
+1. First, create a resource group to create the cluster in:
+ ```azurecli-interactive
+ az group create -l <Region> -n <ResourceGroupName>
+ ```
+
+1. Then create the cluster itself:
+ ```azurecli-interactive
+ az aks create -l <Region> -g <ResourceGroupName> -n <ClusterName> --ip-families ipv4,ipv6
+ ```
+
+# [Azure Resource Manager](#tab/azure-resource-manager)
+
+When using an Azure Resource Manager template to deploy, pass `["IPv4", "IPv6"]` to the `ipFamilies` parameter to the `networkProfile` object. See the [Azure Resource Manager template documentation][deploy-arm-template] for help with deploying this template, if needed.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "defaultValue": "aksdualstack"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "kubernetesVersion": {
+ "type": "string",
+ "defaultValue": "1.22.2"
+ },
+ "nodeCount": {
+ "type": "int",
+ "defaultValue": 3
+ },
+ "nodeSize": {
+ "type": "string",
+ "defaultValue": "Standard_B2ms"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.ContainerService/managedClusters",
+ "apiVersion": "2021-10-01",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "agentPoolProfiles": [
+ {
+ "name": "nodepool1",
+ "count": "[parameters('nodeCount')]",
+ "mode": "System",
+ "vmSize": "[parameters('nodeSize')]"
+ }
+ ],
+ "dnsPrefix": "[parameters('clusterName')]",
+ "kubernetesVersion": "[parameters('kubernetesVersion')]",
+ "networkProfile": {
+ "ipFamilies": [
+ "IPv4",
+ "IPv6"
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+# [Bicep](#tab/bicep)
+
+When using a Bicep template to deploy, pass `["IPv4", "IPv6"]` to the `ipFamilies` parameter to the `networkProfile` object. See the [Bicep template documentation][deploy-bicep-template] for help with deploying this template, if needed.
+
+```bicep
+param clusterName string = 'aksdualstack'
+param location string = resourceGroup().location
+param kubernetesVersion string = '1.22.2'
+param nodeCount int = 3
+param nodeSize string = 'Standard_B2ms'
+
+resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-10-01' = {
+ name: clusterName
+ location: location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ agentPoolProfiles: [
+ {
+ name: 'nodepool1'
+ count: nodeCount
+ mode: 'System'
+ vmSize: nodeSize
+ }
+ ]
+ dnsPrefix: clusterName
+ kubernetesVersion: kubernetesVersion
+ networkProfile: {
+ ipFamilies: [
+ 'IPv4'
+ 'IPv6'
+ ]
+ }
+ }
+}
+```
+++
+Finally, after the cluster has been created, get the admin credentials:
+
+```azurecli-interactive
+az aks get-credentials -g <ResourceGroupName> -n <ClusterName> -a
+```
+
+### Inspect the nodes to see both IP families
+
+Once the cluster is provisioned, confirm that the nodes are provisioned with dual-stack networking:
+
+```bash-interactive
+kubectl get nodes -o=custom-columns="NAME:.metadata.name,ADDRESSES:.status.addresses[?(@.type=='InternalIP')].address,PODCIDRS:.spec.podCIDRs[*]"
+```
+
+The output from the `kubectl get nodes` command will show that the nodes have addresses and pod IP assignment space from both IPv4 and IPv6.
+
+```
+NAME ADDRESSES PODCIDRS
+aks-nodepool1-14508455-vmss000000 10.240.0.4,2001:1234:5678:9abc::4 10.244.0.0/24,fd12:3456:789a::/80
+aks-nodepool1-14508455-vmss000001 10.240.0.5,2001:1234:5678:9abc::5 10.244.1.0/24,fd12:3456:789a:0:1::/80
+aks-nodepool1-14508455-vmss000002 10.240.0.6,2001:1234:5678:9abc::6 10.244.2.0/24,fd12:3456:789a:0:2::/80
+```
+
+## Create an example workload
+
+### Deploy an nginx web server
+
+Once the cluster has been created, workloads can be deployed as usual. A simple example webserver can be created using the following command:
+
+# [`kubectl create`](#tab/kubectl)
+
+```bash-interactive
+kubectl create deployment nginx --image=nginx:latest --replicas=3
+```
+
+# [YAML](#tab/yaml)
+
+```yml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ app: nginx
+ name: nginx
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx:latest
+ name: nginx
+```
+++
+The following `kubectl get pods` command shows that the pods have both IPv4 and IPv6 addresses (note that the pods will not show IP addresses until they are ready):
+
+```bash-interactive
+kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
+```
+
+```
+NAME IPs NODE READY
+nginx-55649fd747-9cr7h 10.244.2.2,fd12:3456:789a:0:2::2 aks-nodepool1-14508455-vmss000002 True
+nginx-55649fd747-p5lr9 10.244.0.7,fd12:3456:789a::7 aks-nodepool1-14508455-vmss000000 True
+nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-14508455-vmss000001 True
+```
+
+### Expose the workload via a `LoadBalancer`-type service
+
+> [!IMPORTANT]
+> There are currently two limitations pertaining to IPv6 services in AKS. These are both preview limitations and work is underway to remove them.
+> * Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. This traffic cannot be routed to a pod and thus traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` will fail. During preview, IPv6 services MUST be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node, in order to function.
+> * Only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service will only receive a public IP for its first listed IP family. In order to provide a dual-stack service for a single deployment, please create two services targeting the same selector, one for IPv4 and one for IPv6.
+
+IPv6 services in Kubernetes can be exposed publicly similarly to an IPv4 service.
+
+# [`kubectl expose`](#tab/kubectl)
+
+```bash-interactive
+kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer --overrides='{"spec":{"externalTrafficPolicy":"Local"}}'
+kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"externalTrafficPolicy":"Local", "ipFamilies": ["IPv6"]}}'
+```
+
+```
+service/nginx-ipv4 exposed
+service/nginx-ipv6 exposed
+```
+
+# [YAML](#tab/yaml)
+
+```yml
+
+apiVersion: v1
+kind: Service
+metadata:
+ labels:
+ app: nginx
+ name: nginx-ipv4
+spec:
+ externalTrafficPolicy: Local
+ ports:
+ - port: 80
+ protocol: TCP
+ targetPort: 80
+ selector:
+ app: nginx
+ type: LoadBalancer
+---
+apiVersion: v1
+kind: Service
+metadata:
+ labels:
+ app: nginx
+ name: nginx-ipv6
+spec:
+ externalTrafficPolicy: Local
+ ipFamilies:
+ - IPv6
+ ports:
+ - port: 80
+ protocol: TCP
+ targetPort: 80
+ selector:
+ app: nginx
+ type: LoadBalancer
+
+```
+++
+Once the deployment has been exposed and the `LoadBalancer` services have been fully provisioned, `kubectl get services` will show the IP addresses of the services:
+
+```bash-interactive
+kubectl get services
+```
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+nginx-ipv4 LoadBalancer 10.0.88.78 20.46.24.24 80:30652/TCP 97s
+nginx-ipv6 LoadBalancer fd12:3456:789a:1::981a 2603:1030:8:5::2d 80:32002/TCP 63s
+```
+
+Next, we can verify functionality via a command-line web request from an IPv6 capable host (note that Azure Cloud Shell is not IPv6 capable):
+
+```bash-interactive
+SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+curl -s "http://[${SERVICE_IP}]" | head -n5
+```
+
+```
+<!DOCTYPE html>
+<html>
+<head>
+<title>Welcome to nginx!</title>
+<style>
+```
++
+<!-- LINKS - External -->
+[kubernetes-dual-stack]: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
+
+<!-- LINKS - Internal -->
+[deploy-arm-template]: /azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal
+[deploy-bicep-template]: /azure/azure-resource-manager/bicep/deploy-cli
+[kubenet]: /azure/aks/configure-kubenet
+[aks-out-of-tree]: /azure/aks/out-of-tree
+[nat-gateway]: /azure/virtual-network/nat-gateway/nat-overview
+[install-azure-cli]: /cli/azure/install-azure-cli
+[aks-network-concepts]: concepts-network.md
+[aks-network-nsg]: concepts-network.md#network-security-groups
+[az-group-create]: /cli/azure/group#az_group_create
+[az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create
+[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
+[az-network-vnet-show]: /cli/azure/network/vnet#az_network_vnet_show
+[az-network-vnet-subnet-show]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_show
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[byo-subnet-route-table]: #bring-your-own-subnet-and-route-table-with-kubenet
+[develop-helm]: quickstart-helm.md
+[use-helm]: kubernetes-helm.md
+[virtual-nodes]: virtual-nodes-cli.md
+[vnet-peering]: ../virtual-network/virtual-network-peering-overview.md
+[express-route]: ../expressroute/expressroute-introduction.md
+[network-comparisons]: concepts-network.md#compare-network-models
+[custom-route-table]: ../virtual-network/manage-route-table.md
+[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/faq.md
The issue has been resolved by Kubernetes v1.20, refer [Kubernetes 1.20: Granula
## Can I use FIPS cryptographic libraries with deployments on AKS?
-FIPS-enabled nodes are currently available in preview on Linux-based node pools. For more details, see [Add a FIPS-enabled node pool (preview)](use-multiple-node-pools.md#add-a-fips-enabled-node-pool-preview).
+FIPS-enabled nodes are now generally available on Linux-based node pools. For more details, see [Add a FIPS-enabled node pool](use-multiple-node-pools.md#add-a-fips-enabled-node-pool).
## Can I configure NSGs with AKS?
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Tags : {}
+## View the upgrade events
+
+When you upgrade your cluster, the following Kubernetes events may occur on each node:
+
+* Surge – Create surge node.
+* Drain – Pods are being evicted from the node. Each pod has a 30-minute timeout to complete the eviction.
+* Update – Update of a node has succeeded or failed.
+* Delete – Deleted a surge node.
+
+Use `kubectl get events` to show events in the default namespaces while running an upgrade. For example:
+
+```azurecli-interactive
+kubectl get events
+```
+
+The following example output shows some of the above events listed during an upgrade.
+
+```output
+...
+default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
+...
+default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
+...
+```
+ ## Validate an upgrade ### [Azure CLI](#tab/azure-cli)
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/upgrade-cluster.md
Name Location ResourceGroup KubernetesVersion ProvisioningStat
myAKSCluster eastus myResourceGroup 1.18.10 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io ```
+## View the upgrade events
+
+When you upgrade your cluster, the following Kubernetes events may occur on each node:
+
+- Surge – Create surge node.
+- Drain – Pods are being evicted from the node. Each pod has a 30-minute timeout to complete the eviction.
+- Update – Update of a node has succeeded or failed.
+- Delete – Deleted a surge node.
+
+Use `kubectl get events` to show events in the default namespaces while running an upgrade. For example:
+
+```azurecli-interactive
+kubectl get events
+```
+
+The following example output shows some of the above events listed during an upgrade.
+
+```output
+...
+default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
+...
+default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
+...
+```
+ ## Set auto-upgrade channel In addition to manually upgrading a cluster, you can set an auto-upgrade channel on your cluster. The following upgrade channels are available:
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
] ```
-## Add a FIPS-enabled node pool (preview)
+## Add a FIPS-enabled node pool
The Federal Information Processing Standard (FIPS) 140-2 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. AKS allows you to create Linux-based node pools with FIPS 140-2 enabled. Deployments running on FIPS-enabled node pools can use those cryptographic modules to provide increased security and help meet security controls as part of FedRAMP compliance. For more details on FIPS 140-2, see [Federal Information Processing Standard (FIPS) 140-2][fips].
FIPS-enabled node pools are now generally available.
You will need the *aks-preview* Azure CLI extension version *0.5.11* or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command. ```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-To use the feature, you must also enable the `FIPSPreview` feature flag on your subscription.
-
-Register the `FIPSPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "FIPSPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/FIPSPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+You need the Azure CLI version 2.32.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
FIPS-enabled node pools have the following limitations: * Currently, you can only have FIPS-enabled Linux-based node pools running on Ubuntu 18.04. * FIPS-enabled node pools require Kubernetes version 1.19 and greater. * To update the underlying packages or modules used for FIPS, you must use [Node Image Upgrade][node-image-upgrade].
+* Container Images on the FIPS nodes have not been assessed for FIPS compliance.
> [!IMPORTANT] > The FIPS-enabled Linux image is a different image than the default Linux image used for Linux-based node pools. To enable FIPS on a node pool, you must create a new Linux-based node pool. You can't enable FIPS on existing node pools.
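As a quick illustration of creating such a new node pool, the following is a minimal sketch rather than the article's own procedure: it assumes the `--enable-fips-image` flag on `az aks nodepool add`, and the resource group, cluster, and node pool names are placeholders.

```azurecli-interactive
# Hypothetical example: add a new FIPS-enabled Linux node pool to an existing cluster.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name fipsnp \
    --enable-fips-image
```
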
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[public-ip-prefix-benefits]: ../virtual-network/ip-services/public-ip-address-prefix.md [az-public-ip-prefix-create]: /cli/azure/network/public-ip/prefix#az_network_public_ip_prefix_create [node-image-upgrade]: node-image-upgrade.md
-[fips]: /azure/compliance/offerings/offering-fips-140-2
+[fips]: /azure/compliance/offerings/offering-fips-140-2
api-management Api Management Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-key-concepts.md
Title: Azure API Management overview and key concepts | Microsoft Docs
-description: Learn about APIs, products, roles, groups, and other API Management key concepts.
+description: Learn about key scenarios, capabilities, and concepts of the Azure API Management service.
documentationcenter: '' - editor: '' - Previously updated : 11/15/2017 Last updated : 01/07/2022 # About API Management
-API Management (APIM) is a way to create consistent and modern API gateways for existing back-end services.
+Azure API Management is a hybrid, multicloud management platform for APIs across all environments. This article provides an overview of common scenarios and key components of API Management.
-API Management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. Businesses everywhere are looking to extend their operations as a digital platform, creating new channels, finding new customers and driving deeper engagement with existing ones. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. You can use Azure API Management to take any backend and launch a full-fledged API program based on it.
+## Scenarios
-This article provides an overview of common scenarios that involve APIM. It also gives a brief overview of the APIM system's main components. The article, then, gives a more detailed overview of each component.
+APIs enable digital experiences, simplify application integration, underpin new digital products, and make data and services reusable and universally accessible. With the proliferation of and increasing dependency on APIs, organizations need to manage them as first-class assets throughout their lifecycle.
-## Overview
+Azure API Management helps customers meet these challenges:
-To use API Management, administrators create APIs. Each API consists of one or more operations, and each API can be added to one or more products. To use an API, developers subscribe to a product that contains that API, and then they can call the API's operation, subject to any usage policies that may be in effect. Common scenarios include:
+* Abstract backend architecture diversity and complexity from API consumers
+* Securely expose services hosted on and outside of Azure as APIs
+* Protect, accelerate, and observe APIs
+* Enable API discovery and consumption by internal and external users
-* **Securing mobile infrastructure** by gating access with API keys, preventing DOS attacks by using throttling, or using advanced security policies like JWT token validation.
-* **Enabling ISV partner ecosystems** by offering fast partner onboarding through the developer portal and building an API facade to decouple from internal implementations that are not ripe for partner consumption.
-* **Running an internal API program** by offering a centralized location for the organization to communicate about the availability and latest changes to APIs, gating access based on organizational accounts, all based on a secured channel between the API gateway and the backend.
+Common scenarios include:
-The system is made up of the following components:
+* **Unlocking legacy assets** - APIs are used to abstract and modernize legacy backends and make them accessible from new cloud services and modern applications. APIs allow innovation without the risk, cost, and delays of migration.
+* **API-centric app integration** - APIs are easily consumable, standards-based, and self-describing mechanisms for exposing and accessing data, applications, and processes. They simplify and reduce the cost of app integration.
+* **Multi-channel user experiences** - APIs are frequently used to enable user experiences such as web, mobile, wearable, or Internet of Things applications. Reuse APIs to accelerate development and ROI.
+* **B2B integration** - APIs exposed to partners and customers lower the barrier to integrate business processes and exchange data between business entities. APIs eliminate the overhead inherent in point-to-point integration. Especially with self-service discovery and onboarding enabled, APIs are the primary tools for scaling B2B integration.
-* The **API gateway** is the endpoint that:
-
- * Accepts API calls and routes them to your backends.
- * Verifies API keys, JWT tokens, certificates, and other credentials.
- * Enforces usage quotas and rate limits.
- * Transforms your API on the fly without code modifications.
- * Caches backend responses where set up.
- * Logs call metadata for analytics purposes.
-* The **Azure portal** is the administrative interface where you set up your API program. Use it to:
-
- * Define or import API schema.
- * Package APIs into products.
- * Set up policies like quotas or transformations on the APIs.
- * Get insights from analytics.
- * Manage users.
-* The **Developer portal** serves as the main web presence for developers, where they can:
+## API Management components
+
+Azure API Management is made up of an API *gateway*, a *management plane*, and a *developer portal*. These components are Azure-hosted and fully managed by default. API Management is available in various [tiers](api-management-features.md) differing in capacity and features.
++
+### API gateway
+
+All requests from client applications first reach the API gateway, which then forwards them to respective backend services. The API gateway acts as a façade to the backend services, allowing API providers to abstract API implementations and evolve backend architecture without impacting API consumers. The gateway enables consistent configuration of routing, security, throttling, caching, and observability.
+
+The API gateway:
- * Read API documentation.
- * Try out an API via the interactive console.
- * Create an account and subscribe to get API keys.
- * Access analytics on their own usage.
-
-## <a name="apis"> </a>APIs and operations
-APIs are the foundation of an API Management service instance. Each API represents a set of operations available to developers. Each API contains a reference to the back-end service that implements the API, and its operations map to the operations implemented by the back-end service. Operations in API Management are highly configurable, with control over URL mapping, query and path parameters, request and response content, and operation response caching. Rate limit, quotas, and IP restriction policies can also be implemented at the API or individual operation level.
+ * Accepts API calls and routes them to configured backends
+ * Verifies API keys, JWT tokens, certificates, and other credentials
+ * Enforces usage quotas and rate limits
+ * Optionally transforms requests and responses as specified in [policy statements](#policies)
+ * If configured, caches responses to improve response latency and minimize the load on backend services
+ * Emits logs, metrics, and traces for monitoring, reporting, and troubleshooting
+
+With the [self-hosted gateway](self-hosted-gateway-overview.md), customers can deploy the API gateway to the same environments where they host their APIs, to optimize API traffic and ensure compliance with local regulations and guidelines. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
+
+The self-hosted gateway is packaged as a Linux-based Docker container and is commonly deployed to Kubernetes, including to Azure Kubernetes Service and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md).
+
+### Management plane
+
+API providers interact with the service through the management plane, which provides full access to the API Management service capabilities.
+
+Customers interact with the management plane through Azure tools including the Azure portal, Azure PowerShell, Azure CLI, a [Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-apimanagement&ssr=false#overview), or client SDKs in several popular programming languages.
+
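+As one example of using these tools, a service instance can be provisioned from the Azure CLI. The following is a sketch, not part of this overview's own walkthrough: the instance, resource group, and publisher details are placeholders, and provisioning a new instance can take a significant amount of time to complete.
+
+```azurecli-interactive
+# Hypothetical example: create an API Management instance from the CLI.
+az apim create \
+    --name my-apim-instance \
+    --resource-group myResourceGroup \
+    --publisher-name Contoso \
+    --publisher-email admin@contoso.com \
+    --sku-name Developer
+```
+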
+Use the management plane to:
+
+ * Provision and configure API Management service settings
+ * Define or import API schemas from a wide range of sources, including OpenAPI specifications, Azure compute services, or WebSocket or GraphQL backends
+ * Package APIs into products
+ * Set up [policies](#policies) like quotas or transformations on the APIs
+ * Get insights from analytics
+ * Manage users
++
+### Developer portal
+
+The open-source [developer portal][Developer portal] is an automatically generated, fully customizable website with the documentation of your APIs.
+
+API providers can customize the look and feel of the developer portal by adding custom content, customizing styles, and adding their branding. Extend the developer portal further by [self-hosting](developer-portal-self-host.md).
+
+App developers use the open-source developer portal to discover the APIs, onboard to use them, and learn how to consume them in applications. (APIs can also be exported to the [Power Platform](export-api-power-platform.md) for discovery and use by citizen developers.)
+
+Using the developer portal, developers can:
+
+ * Read API documentation
+ * Call an API via the interactive console
+ * Create an account and subscribe to get API keys
+ * Access analytics on their own usage
+ * Download API definitions
+ * Manage API keys
+
+## Integration with Azure services
+
+API Management integrates with many complementary Azure services, including:
+
+* [Azure Key Vault](../key-vault/general/overview.md) for secure safekeeping and management of [client certificates](api-management-howto-mutual-certificates.md) and [secrets](api-management-howto-properties.md)
+* [Azure Monitor](api-management-howto-use-azure-monitor.md) for logging, reporting, and alerting on management operations, system events, and API requests
+* [Application Insights](api-management-howto-app-insights.md) for live metrics, end-to-end tracing, and troubleshooting
+* [Virtual networks](virtual-network-concepts.md) and [Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md) for network-level protection
+* Azure Active Directory for [developer authentication](api-management-howto-aad.md) and [request authorization](api-management-howto-protect-backend-with-aad.md)
+* [Event Hubs](api-management-howto-log-event-hubs.md) for streaming events
+* Several Azure compute offerings commonly used to build and host APIs on Azure, including [Functions](import-function-app-as-api.md), [Logic Apps](import-logic-app-as-api.md), [Web Apps](import-app-service-as-api.md), [Service Fabric](how-to-configure-service-fabric-backend.md), and others.
+
+## Key concepts
+
+### APIs
+
+APIs are the foundation of an API Management service instance. Each API represents a set of *operations* available to app developers. Each API contains a reference to the backend service that implements the API, and its operations map to backend operations.
+
+Operations in API Management are highly configurable, with control over URL mapping, query and path parameters, request and response content, and operation response caching.
+
+More information:
+* [Import and publish your first API][How to create APIs]
+* [Mock API responses][How to add operations to an API]
+
+### Products
+
+Products are how APIs are surfaced to developers. Products in API Management have one or more APIs, and can be *open* or *protected*. Protected products require a subscription key, while open products can be consumed freely.
+
+When a product is ready for use by developers, it can be published. Once published, it can be viewed or subscribed to by developers. Subscription approval is configured at the product level and can either require an administrator's approval or be automatic.
+
+More information:
+* [Create and publish a product][How to create and publish a product]
+* [Subscriptions in API Management](api-management-subscriptions.md)
-For more information, see [How to create APIs][How to create APIs] and [How to add operations to an API][How to add operations to an API].
+### Groups
-## <a name="products"> </a> Products
-Products are how APIs are surfaced to developers. Products in API Management have one or more APIs, and are configured with a title, description, and terms of use. Products can be **Open** or **Protected**. Protected products must be subscribed to before they can be used, while open products can be used without a subscription. When a product is ready for use by developers, it can be published. Once it is published, it can be viewed (and in the case of protected products subscribed to) by developers. Subscription approval is configured at the product level and can either require administrator approval, or be auto-approved.
+Groups are used to manage the visibility of products to developers. API Management has the following built-in groups:
-Groups are used to manage the visibility of products to developers. Products grant visibility to groups, and developers can view and subscribe to the products that are visible to the groups in which they belong.
+* **Administrators** - Manage API Management service instances and create the APIs, operations, and products that are used by developers.
-## <a name="groups"> </a> Groups
-Groups are used to manage the visibility of products to developers. API Management has the following immutable system groups:
+ Azure subscription administrators are members of this group.
-* **Administrators** - Azure subscription administrators are members of this group. Administrators manage API Management service instances, creating the APIs, operations, and products that are used by developers.
-* **Developers** - Authenticated developer portal users fall into this group. Developers are the customers that build applications using your APIs. Developers are granted access to the developer portal and build applications that call the operations of an API.
-* **Guests** - Unauthenticated developer portal users, such as prospective customers visiting the developer portal of an API Management instance fall into this group. They can be granted certain read-only access, such as the ability to view APIs but not call them.
+* **Developers** - Authenticated developer portal users that build applications using your APIs. Developers are granted access to the developer portal and build applications that call the operations of an API.
-In addition to these system groups, administrators can create custom groups or [leverage external groups in associated Azure Active Directory tenants](api-management-howto-aad.md). Custom and external groups can be used alongside system groups in giving developers visibility and access to API products. For example, you could create one custom group for developers affiliated with a specific partner organization and allow them access to the APIs from a product containing relevant APIs only. A user can be a member of more than one group.
+* **Guests** - Unauthenticated developer portal users, such as prospective customers visiting the developer portal. They can be granted certain read-only access, such as the ability to view APIs but not call them.
-For more information, see [How to create and use groups][How to create and use groups].
+Administrators can also create custom groups or use external groups in an [associated Azure Active Directory tenant](api-management-howto-aad.md) to give developers visibility and access to API products. For example, create a custom group for developers in a partner organization to access a specific subset of APIs in a product. A user can belong to more than one group.
-## <a name="developers"> </a> Developers
-Developers represent the user accounts in an API Management service instance. Developers can be created or invited to join by administrators, or they can sign up from the [Developer portal][Developer portal]. Each developer is a member of one or more groups, and can subscribe to the products that grant visibility to those groups.
+More information:
+* [How to create and use groups][How to create and use groups]
-When developers subscribe to a product, they are granted the primary and secondary key for the product. This key is used when making calls into the product's APIs.
+### Developers
-For more information, see [How to create or invite developers][How to create or invite developers] and [How to associate groups with developers][How to associate groups with developers].
+Developers represent the user accounts in an API Management service instance. Developers can be created or invited to join by administrators, or they can sign up from the [developer portal][Developer portal]. Each developer is a member of one or more groups, and can subscribe to the products that grant visibility to those groups.
-## <a name="policies"> </a> Policies
-Policies are a powerful capability of API Management that allow the Azure portal to change the behavior of the API through configuration. Policies are a collection of statements that are executed sequentially on the request or response of an API. Popular statements include format conversion from XML to JSON and call rate limiting to restrict the number of incoming calls from a developer, and many other policies are available.
+When developers subscribe to a product, they are granted the primary and secondary key for the product for use when calling the product's APIs.
-Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow](./api-management-advanced-policies.md#choose) and [Set variable](./api-management-advanced-policies.md#set-variable) policies are based on policy expressions. For more information, see [Advanced policies](./api-management-advanced-policies.md#AdvancedPolicies) and [Policy expressions](./api-management-policy-expressions.md).
+More information:
+* [How to manage user accounts][How to create or invite developers]
+### Policies
-For a complete list of API Management policies, see [Policy reference][Policy reference]. For more information on using and configuring policies, see [API Management policies][API Management policies]. For a tutorial on creating a product with rate limit and quota policies, see [How to create and configure advanced product settings][How to create and configure advanced product settings].
+With [policies][API Management policies], an API publisher can change the behavior of an API through configuration. Policies are a collection of statements that are executed sequentially on the request or response of an API. Popular statements include format conversion from XML to JSON and call-rate limiting to restrict the number of incoming calls from a developer. For a complete list, see [API Management policies][Policy reference].
+Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow](./api-management-advanced-policies.md#choose) and [Set variable](./api-management-advanced-policies.md#set-variable) policies are based on policy expressions.
-## <a name="developer-portal"> </a> Developer portal
-The developer portal is where developers can learn about your APIs, view and call operations, and subscribe to products. Prospective customers can visit the developer portal, view APIs and operations, and sign up. The URL for your developer portal is located on the dashboard in the Azure portal for your API Management service instance.
+Policies can be applied at different scopes, depending on your needs: global (all APIs), a product, a specific API, or an API operation.
-You can customize the look and feel of your developer portal by adding custom content, customizing styles, and adding your branding.
+More information:
+* [Transform and protect your API][How to create and configure advanced product settings].
+* [Policy expressions](./api-management-policy-expressions.md)
## Next steps Complete the following quickstart and start using Azure API Management: > [!div class="nextstepaction"]
-> [Create an Azure API Management instance](get-started-create-service-instance.md)
+> [Create an Azure API Management instance by using the Azure portal](get-started-create-service-instance.md)
[APIs and operations]: #apis [Products]: #products
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-managed-identity.md
To learn more about configuring AzureServiceTokenProvider and the operations it
For Java applications and functions, the simplest way to work with a managed identity is through the [Azure SDK for Java](https://github.com/Azure/azure-sdk-for-java). This section shows you how to get started with the library in your code.
-1. Add a reference to the [Azure SDK library](https://mvnrepository.com/artifact/com.microsoft.azure/azure). For Maven projects, you might add this snippet to the `dependencies` section of the project's POM file:
+1. Add a reference to the [Azure SDK library](https://mvnrepository.com/artifact/com.azure.resourcemanager/azure-resourcemanager). For Maven projects, you might add this snippet to the `dependencies` section of the project's POM file:
```xml <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure</artifactId>
- <version>1.23.0</version>
+ <groupId>com.azure.resourcemanager</groupId>
+ <artifactId>azure-resourcemanager</artifactId>
+ <version>2.10.0</version>
</dependency> ```
-2. Use the `AppServiceMSICredentials` object for authentication. This example shows how this mechanism may be used for working with Azure Key Vault:
+2. Use the `ManagedIdentityCredential` object for authentication. This example shows how this mechanism may be used for working with Azure Key Vault:
```java
- import com.microsoft.azure.AzureEnvironment;
- import com.microsoft.azure.management.Azure;
- import com.microsoft.azure.management.keyvault.Vault
+ import com.azure.core.management.AzureEnvironment;
+ import com.azure.core.management.profile.AzureProfile;
+ import com.azure.identity.ManagedIdentityCredential;
+ import com.azure.identity.ManagedIdentityCredentialBuilder;
+ import com.azure.resourcemanager.AzureResourceManager;
+ import com.azure.resourcemanager.keyvault.models.Vault;
//...
- Azure azure = Azure.authenticate(new AppServiceMSICredentials(AzureEnvironment.AZURE))
- .withSubscription(subscriptionId);
- Vault myKeyVault = azure.vaults().getByResourceGroup(resourceGroup, keyvaultName);
+ AzureProfile azureProfile = new AzureProfile(AzureEnvironment.AZURE);
+ ManagedIdentityCredential managedIdentityCredential = new ManagedIdentityCredentialBuilder().build();
+ AzureResourceManager azure = AzureResourceManager.authenticate(managedIdentityCredential, azureProfile).withSubscription("subscription");
- ```
+ Vault vault = azure.vaults().getByResourceGroup("resourceGroup", "keyVaultName");
+ ```
+For more information on how to use the Azure SDK for Java, see the [quickstart guide](https://aka.ms/azsdk/java/mgmt). To learn more about Azure Identity, authentication, and managed identities in general, see [this guide](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples#authenticating-a-user-assigned-managed-identity-with-defaultazurecredential).
## <a name="remove"></a>Remove an identity
Update-AzFunctionApp -Name $functionAppName -ResourceGroupName $resourceGroupNam
- [Call Microsoft Graph securely using a managed identity](scenario-secure-app-access-microsoft-graph-as-app.md) - [Connect securely to services with Key Vault secrets](tutorial-connect-msi-key-vault.md)
-[Microsoft.Azure.Services.AppAuthentication reference]: /dotnet/api/overview/azure/service-to-service-authentication
+[Microsoft.Azure.Services.AppAuthentication reference]: /dotnet/api/overview/azure/service-to-service-authentication
app-service Webjobs Sdk Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-sdk-get-started.md
In this section, you create a function triggered by messages in an Azure Storage
Starting with version 3 of the WebJobs SDK, to connect to Azure Storage services you must install a separate Storage binding extension package.
+>[!NOTE]
+> Beginning with 5.x, Microsoft.Azure.WebJobs.Extensions.Storage has been [split by storage service](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage/CHANGELOG.md#major-changes-and-features), and the `AddAzureStorage()` extension method has been split into service-specific registration methods.
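+
+For instance, with the 5.x line you would reference only the storage service you need. This is a sketch under the assumption that the service-specific 5.x packages (for example, Microsoft.Azure.WebJobs.Extensions.Storage.Queues) are what you're targeting; the steps below continue to use the 3.x package.
+
+```bash
+# Hypothetical example: add only the queue-specific storage extension from the 5.x line.
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Queues
+```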
+ 1. Get the latest stable version of the [Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage) NuGet package, version 3.x. 1. In the following command, replace `<3_X_VERSION>` with the current version number you found in step 1. Each type of NuGet Package has a unique version number.
application-gateway Ingress Controller Add Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/ingress-controller-add-health-probes.md
spec:
``` Kubernetes API Reference:
-* [Container Probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
-* [HttpGet Action](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#httpgetaction-v1-core)
+* [Container Probes](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#httpgetaction-v1-core)
> [!NOTE] > * `readinessProbe` and `livenessProbe` are supported when configured with `httpGet`.
For any property that can not be inferred by the readiness/liveness probe, Defau
| `Protocol` | HTTP | | `Timeout` | 30 | | `Interval` | 30 |
-| `UnhealthyThreshold` | 3 |
+| `UnhealthyThreshold` | 3 |
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
+
+ Title: How to create a Form Recognizer resource
+
+description: Create a Form Recognizer resource in the Azure portal
+++++ Last updated : 01/06/2022+
+recommendations: false
+#Customer intent: I want to learn how to use create a Form Recognizer service in the Azure portal.
++
+# Create a Form Recognizer resource
+
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract and analyze form fields, text, and tables from your documents. Here, you'll learn how to create a Form Recognizer resource in the Azure portal.
+
+## Visit the Azure portal
+
+The Azure portal is a single platform you can use to create and manage Azure services.
+
+Let's get started:
+
+1. Navigate to the Azure portal home page: [Azure home page](https://ms.portal.azure.com/#home).
+
+1. Select **Create a resource** from the Azure home page.
+
+1. Search for and choose **Form Recognizer** from the search bar.
+
+1. Select the **Create** button.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-one.gif" alt-text="Gif showing how to create a Form Recognizer resource.":::
+
+## Create a resource
+
+1. Next, you're going to fill out the **Create Form Recognizer** fields with the following values:
+
+ * **Subscription**. Select your current subscription.
+ * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. You can create a new group or add it to a pre-existing group.
+ * **Region**. Select your local region.
+ * **Name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameFormRecognizer*.
+ * **Pricing tier**. The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+1. Select **Review + Create**.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-two.png" alt-text="Still image showing the correct values for creating Form Recognizer resource.":::
+
+1. Azure will run a quick validation check. After a few seconds, you should see a green banner that says **Validation Passed**.
+
+1. Once the validation banner appears, select the **Create** button from the bottom-left corner.
+
+1. After you select **Create**, you'll be redirected to a new page that says **Deployment in progress**. After a few seconds, you'll see a message that says **Your deployment is complete**.
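+
+If you prefer scripting to the portal, an equivalent resource can be created with the Azure CLI. This is a sketch, not part of the portal walkthrough above: the resource name, resource group, and region are placeholders, and it assumes the same free (F0) tier and the `FormRecognizer` kind used in this article.
+
+```azurecli-interactive
+# Hypothetical example: create a Form Recognizer resource from the CLI.
+az cognitiveservices account create \
+    --name YourNameFormRecognizer \
+    --resource-group myResourceGroup \
+    --kind FormRecognizer \
+    --sku F0 \
+    --location westus2
+```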
+
+## Get Endpoint URL and API keys
+
+1. Once you receive the *deployment is complete* message, select the **Go to resource** button.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-three.gif" alt-text="Gif showing the validation process of creating Form Recognizer resource.":::
+
+1. Copy the key and endpoint values from your Form Recognizer resource and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
+
+1. If your overview page does not have the keys and endpoint visible, you can select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
+
+ :::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL":::
+
+That's it! You're now ready to start automating data extraction using Azure Form Recognizer.
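+
+As a quick sanity check of your key and endpoint, you can call the REST API directly. The following is a sketch under the assumption that the v2.1 prebuilt invoice endpoint is used; the endpoint and key values are placeholders, and the input document is the sample invoice from the companion Logic Apps tutorial. The analyze call is asynchronous, so a successful request returns an `Operation-Location` header that you poll for results.
+
+```bash
+# Hypothetical example: submit a document to the v2.1 prebuilt invoice model.
+# Replace <your-endpoint> and <your-key> with the values copied above.
+curl -X POST "https://<your-endpoint>.cognitiveservices.azure.com/formrecognizer/v2.1/prebuilt/invoice/analyze" \
+    -H "Ocp-Apim-Subscription-Key: <your-key>" \
+    -H "Content-Type: application/json" \
+    -d '{"source": "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice-logic-apps-tutorial.pdf"}'
+```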
+
+## Next steps
+
+* Try the [Form Recognizer Studio](concept-form-recognizer-studio.md), an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications.
+
+* Complete a Form Recognizer [C#](quickstarts/try-v3-csharp-sdk.md), [Python](quickstarts/try-v3-python-sdk.md), [Java](quickstarts/try-v3-java-sdk.md), or [JavaScript](quickstarts/try-v3-javascript-sdk.md) quickstart and get started creating a form processing app in the development language of your choice.
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
+
+ Title: Use Azure Logic Apps with Form Recognizer
+
+description: A tutorial outlining how to use Form Recognizer with Logic Apps.
+++++ Last updated : 01/06/2022+
+recommendations: false
+#Customer intent: As a form-processing software developer, I want to learn how to use the Form Recognizer service with Logic Apps.
++
+# Tutorial: Use Azure Logic Apps with Form Recognizer
+
+Azure Logic Apps is a cloud-based platform that can be used to automate workflows without writing a single line of code. The platform enables you to easily integrate Microsoft and third-party applications with your apps, data, services, and systems. A Logic App is the Azure resource you create when you want to develop a workflow. Here are a few examples of what you can do with a Logic App:
+
+* Create business processes and workflows visually.
+* Integrate workflows with software as a service (SaaS) and enterprise applications.
+* Automate enterprise application integration (EAI), business-to-business (B2B), and electronic data interchange (EDI) tasks.
+
+For more information, *see* [Logic Apps Overview](/azure/logic-apps/logic-apps-overview).
+
+ In this tutorial, you'll learn how to build a Logic App connector flow to automate the following tasks:
+
+> [!div class="checklist"]
+>
+> * Detect when an invoice has been added to a OneDrive folder.
+> * Process the invoice using the Form Recognizer prebuilt-invoice model.
+> * Send the extracted information from the invoice to a pre-specified email address.
+
+## Prerequisites
+
+To complete this tutorial, you'll need the following:
+
+* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/)
+
+* **A Form Recognizer resource**. Once you have your Azure subscription, [create a Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+ 1. After the resource deploys, select **Go to resource**.
+
+ 1. Copy the **Keys and Endpoint** values from the resource you created and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
+
+ :::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL.":::
+
+ > [!TIP]
+ > For further guidance, *see* [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
+
+* A free [**OneDrive**](https://onedrive.live.com/signup) or [**OneDrive for Business**](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business) cloud storage account.
+
+ > [!NOTE]
+ >
+ > * OneDrive is intended for personal storage.
+ > * OneDrive for Business is part of Office 365 and is designed for organizations. It provides cloud storage where you can store, share, and sync all work files.
+ >
+
+* A free [**Outlook online**](https://signup.live.com/signup.aspx?lic=1&mkt=en-ca) or [**Office 365**](https://www.microsoft.com/microsoft-365/outlook/email-and-calendar-software-microsoft-outlook) email account.
+
+* **A sample invoice to test your Logic App**. You can download and use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice-logic-apps-tutorial.pdf) for this tutorial.
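+
+If you prefer the command line, you can also look up the Form Recognizer key and endpoint values mentioned in the prerequisites with the Azure CLI. The following is a minimal sketch; the resource and resource group names are placeholders for your own values:
+
+```azurecli
+# Show the endpoint URL of an existing Form Recognizer (Cognitive Services) resource
+az cognitiveservices account show --name <your-form-recognizer-resource> --resource-group <your-resource-group> --query "properties.endpoint" --output tsv
+
+# List the keys for the same resource; either key can be used for authentication
+az cognitiveservices account keys list --name <your-form-recognizer-resource> --resource-group <your-resource-group>
+```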
+
+## Create a OneDrive folder
+
+Before we jump into creating the Logic App, we have to set up a OneDrive folder.
+
+1. Go to your [OneDrive](https://onedrive.live.com/) or [OneDrive for Business](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business) home page.
+
+1. Select the **+ New** drop-down menu in the upper-left corner and select **Folder**.
+
+1. Enter a name for your new folder and select **Create**.
+
+1. You should see the new folder in your files. For now, we're done with OneDrive, but you'll need to access this folder later.
++
+### Create a Logic App resource
+
+At this point, you should have a Form Recognizer resource and a OneDrive folder all set. Now, it's time to create a Logic App resource.
+
+1. Select **Create a resource** from the Azure home page.
+
+1. Search for and choose **Logic App** from the search bar.
+
+1. Select the **Create** button.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-five.gif" alt-text="GIF showing how to create a Logic App resource.":::
+
+1. Next, you're going to fill out the **Create Logic App** fields with the following values:
+
+ * **Subscription**. Select your current subscription.
+ * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. Choose the same resource group you have for your Form Recognizer resource.
+ * **Type**. Select **Consumption**. The Consumption resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](/azure/logic-apps/logic-apps-pricing#consumption-pricing).
+ * **Logic App name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameLogicApp*.
+ * **Region**. Select your local region.
+ * **Enable log analytics**. For this project, select **No**.
+
+1. When you're done, you should have something similar to the image below (Resource group, Logic App name, and Region may be different). After checking these values, select **Review + create** in the bottom-left corner.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-six.png" alt-text="Image showing correct values to create a Logic App resource.":::
+
+1. A short validation check should run. After it completes successfully, select **Create** in the bottom-left corner.
+
+1. You will be redirected to a screen that says **Deployment in progress**. Give Azure some time to deploy; it can take a few minutes. After the deployment is complete, you should see a banner that says, **Your deployment is complete**. When you reach this screen, select **Go to resource**.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-seven.gif" alt-text="GIF showing how to get to newly created Logic App resource.":::
+
+1. You'll be redirected to the **Logic Apps Designer** page. There is a short video for a quick introduction to Logic Apps available on the home screen. When you're ready to begin designing your Logic App, select the **Blank Logic App** button.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-eight.png" alt-text="Image showing how to enter the Logic App Designer.":::
+
+1. You should see a screen that looks like the one below. Now, you're ready to start designing and implementing your Logic App.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-nine.png" alt-text="Image of the Logic App Designer.":::
+
+### Create automation flow
+
+Now that you have the Logic App connector resource set up and configured, the only thing left to do is to create the automation flow and test it out!
+
+1. Search for and select **OneDrive** or **OneDrive for Business** in the search bar.
+
+1. Select the **When a file is created** trigger.
+
+1. You'll see a OneDrive pop-up window and be prompted to log into your OneDrive account. Select **Sign in** and follow the prompts to connect your account.
+
+ > [!TIP]
+ > If you try to sign into the OneDrive connector using an Office 365 account, you may receive the following error: ***Sorry, we can't sign you in here with your @MICROSOFT.COM account.***
+ >
+ > * This error happens because OneDrive is cloud-based storage for personal use that can be accessed with an Outlook.com or Microsoft Live account, not with an Office 365 account.
+ > * You can use the OneDrive for Business connector if you want to use an Office 365 account. Make sure that you have [created a OneDrive Folder](#create-a-onedrive-folder) for this project in your OneDrive for Business account.
+
+1. After your account is connected, select the folder you created earlier in your OneDrive or OneDrive for Business account. Leave the other default values in place. Your window should look similar to the one below.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-ten.gif" alt-text="GIF showing how to add the first node to workflow.":::
+
+1. Next, we're going to add a new step to the workflow. Select the plus button underneath the newly created OneDrive node.
+
+1. A new node should be added to the Logic App designer view. Search for "Form Recognizer" in the search bar and select **Analyze invoice (preview)** from the list.
+
+1. Now, you should see a window where you will create your connection. Specifically, you're going to connect your Form Recognizer resource to the Logic Apps Designer Studio:
+
+ * Enter a **Connection name**. It should be something easy to remember.
+ * Enter the Form Recognizer resource **Endpoint URL** and **Account Key** that you copied previously. If you skipped this step earlier or lost the strings, you can navigate back to your Form Recognizer resource and copy them again. When you're done, select **Create**.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-eleven.gif" alt-text="GIF showing how to add second node to workflow.":::
+
+1. You should see the parameters tab for the **Analyze Invoice** connector.
+
+1. Select the **Document/Image File Content** field. A dynamic content pop-up should appear. If it doesn't, select the **Add dynamic content** button below the field.
+
+1. Select **File content** from the pop-up list. This step sends the file(s) to be analyzed to the Form Recognizer prebuilt invoice model. Once you see the **File content** badge appear in the **Document/Image File Content** field, you've completed this step correctly.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-twelve.gif" alt-text="GIF showing how to add dynamic content to second node.":::
+
+1. We need to add the last step. Once again, select the **+ New step** button to add another action.
+
+1. In the search bar, enter *Outlook* and select **Outlook.com** (personal) or **Office 365 Outlook** (work).
+
+1. In the actions bar, scroll down until you find **Send an email (V2)** and select this action.
+
+1. Just like with OneDrive, you'll be asked to sign into your Outlook or Office 365 Outlook account. After you sign in, you should see a window like the one pictured below. In this window, we're going to format the email to be sent with the dynamic content that Form Recognizer will extract from the invoice.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-thirteen.gif" alt-text="GIF showing how to add final step to workflow.":::
+
+1. We're almost done! Make the following changes to the following fields:
+
+ * **To**. Enter your personal or business email address or any other email address you have access to.
+
+ * **Subject**. Enter ***Invoice received from:*** and then append dynamic content **Vendor name field Vendor name**.
+
+ * **Body**. We're going to add specific information about the invoice:
+
+ 1. Type ***Invoice ID:*** and append the dynamic content **Invoice ID field Invoice ID**.
+
+ 1. On a new line type ***Invoice due date:*** and append the dynamic content **Invoice date field invoice date (date)**.
+
+ 1. Type ***Amount due:*** and append the dynamic content **Amount due field Amount due (number)**.
+
+ 1. Lastly, because the amount due is an important number we also want to send the confidence score for this extraction in the email. To do this type ***Amount due (confidence):*** and add the dynamic content **Amount due field confidence of amount due**. When you're done, the window should look similar to the screen below.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-fifteen.png" alt-text="Image of completed Outlook node.":::
+
+1. Select **Save** in the upper-left corner.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-sixteen.png" alt-text="Image of completed connector flow.":::
+
+> [!NOTE]
+>
+> * The Logic App designer will automatically add a "for each loop" around the send email action. This is normal because the output format may return more than one invoice per PDF in the future.
+> * The current version only returns a single invoice per PDF.
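+
+If you'd like to test the same prebuilt invoice analysis outside of Logic Apps, you can call the Form Recognizer REST API directly. The following curl sketch assumes the v2.1 prebuilt invoice endpoint; substitute your own endpoint and key, and verify the API version against your resource:
+
+```bash
+# Submit the invoice for analysis; a successful request returns 202 with an Operation-Location header
+curl -i -X POST "<your-endpoint>/formrecognizer/v2.1/prebuilt/invoice/analyze" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/pdf" \
+  --data-binary "@invoice-logic-apps-tutorial.pdf"
+
+# Poll the Operation-Location URL returned above until the analysis status is "succeeded"
+curl "<operation-location-url>" -H "Ocp-Apim-Subscription-Key: <your-key>"
+```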
+
+### Test automation flow
+
+Let's quickly review what we've done before we test our flow:
+
+> [!div class="checklist"]
+>
+> * We created a trigger. In this scenario, the trigger fires when a file is created in a pre-specified folder in our OneDrive account.
+> * We added a Form Recognizer action to our flow. In this scenario, we decided to use the invoice API to automatically analyze the invoices from the OneDrive folder.
+> * We added an Outlook.com action to our flow. For this scenario, we sent some of the analyzed invoice data to a pre-determined email address.
+
+Now that we've created the flow, the last thing to do is to test it and make sure that we're getting the expected behavior.
+
+1. To test the Logic App, open a new tab and navigate to the OneDrive folder you set up at the beginning of this tutorial. Add the [sample invoice](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice-logic-apps-tutorial.pdf) to that folder.
+
+1. Return to the Logic App designer tab and select the **Run trigger** button and select **Run** from the drop-down menu.
+
+1. You should see a sample run of your Logic App. If all the steps have green check marks, the run was successful.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-seventeen.gif" alt-text="GIF of sample run of Logic App.":::
+
+1. Check your email and you should see a new email with the information we pre-specified.
+
+1. Be sure to [disable or delete](/azure/logic-apps/manage-logic-apps-with-azure-portal#disable-or-enable-a-single-logic-app) your Logic App after you're done so that usage stops.
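+
+If you'd rather clean up from the command line, the following sketch uses the generic Azure CLI resource command; the resource group and Logic App names are placeholders for your own values:
+
+```azurecli
+# Delete the Logic App (Microsoft.Logic/workflows) resource when you no longer need it
+az resource delete --resource-group <your-resource-group> --name <your-logic-app-name> --resource-type "Microsoft.Logic/workflows"
+```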
+
+Congratulations! You've officially completed this tutorial.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Use the invoice processing prebuilt model in Power Automate](/ai-builder/flow-invoice-processing?toc=/azure/applied-ai-services/form-recognizer/toc.json&bc=/azure/applied-ai-services/form-recognizer/breadcrumb/toc.json)
applied-ai-services How To Configure Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-configure-translation.md
Previously updated : 06/29/2020 Last updated : 01/06/2022
-# How to configure translation
+# How to configure Translation
-This article demonstrates how to configure the various options for translation in the Immersive Reader.
+This article demonstrates how to configure the various options for Translation in the Immersive Reader.
-## Configure translation language
+## Configure Translation language
-The `options` parameter contains all of the flags that can be used to configure translation. Set the `language` parameter to the language you wish to translate to. See the [Language Support](./language-support.md) for the full list of supported languages.
+The `options` parameter contains all of the flags that can be used to configure Translation. Set the `language` parameter to the language you wish to translate to. See the [Language Support](./language-support.md) for the full list of supported languages.
```typescript const options = {
applied-ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-create-immersive-reader.md
Previously updated : 07/22/2019 Last updated : 11/11/2021
The script is designed to be flexible. It will first look for existing Immersive
Start-Sleep -Seconds 5 Write-Host "Granting service principal access to the newly created Immersive Reader resource"
- $accessResult = az role assignment create --assignee $principalId --scope $resourceId --role "Cognitive Services User"
+ $accessResult = az role assignment create --assignee $principalId --scope $resourceId --role "Cognitive Services Immersive Reader User"
if (-not $accessResult) { throw "Error: Failed to grant service principal access" }
applied-ai-services Security How To Update Role Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/security-how-to-update-role-assignment.md
+
+ Title: "Security Advisory: Update Role Assignment for Azure Active Directory authentication permissions"
+
+description: This article will show you how to update the role assignment on existing Immersive Reader resources due to a security bug discovered in November 2021
+++++++ Last updated : 01/06/2022+++
+# Security Advisory: Update Role Assignment for Azure Active Directory authentication permissions
+
+A security bug has been discovered with Immersive Reader Azure Active Directory (Azure AD) authentication configuration. We are advising that you change the permissions on your Immersive Reader resources as described below.
+
+## Background
+
+A security bug was discovered that relates to Azure AD authentication for Immersive Reader. When initially creating your Immersive Reader resources and configuring them for Azure AD authentication, it is necessary to grant permissions for the Azure AD application identity to access your Immersive Reader resource. This is known as a Role Assignment. The Azure role that was previously used for permissions was the [Cognitive Services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role.
+
+During a security audit, it was discovered that this `Cognitive Services User` role has permissions to [List Keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Azure AD access token in client web apps and browsers, and if the access token were to be stolen by a bad actor or attacker, there is a concern that this access token could be used to `list keys` of your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill.
+
+In practice, however, this attack or exploit is not likely to occur or may not even be possible. For Immersive Reader scenarios, customers obtain Azure AD access tokens with an audience of `https://cognitiveservices.azure.com`. In order to successfully `list keys` for your resource, the Azure AD access token would need to have an audience of `https://management.azure.com`. Generally speaking, this is not too much of a concern, since the access tokens used for Immersive Reader scenarios would not work to `list keys`, as they do not have the required audience. In order to change the audience on the access token, an attacker would have to hijack the token acquisition code and change the audience before the call is made to Azure AD to acquire the token. Again, this is not likely to be exploited because, as an Immersive Reader authentication best practice, we advise that customers create Azure AD access tokens on the web application backend, not in the client or browser. In those cases, since the token acquisition happens on the backend service, it's not as likely or perhaps even possible that an attacker could compromise that process and change the audience.
+
+The real concern comes when or if any customer were to acquire tokens from Azure AD directly in client code. We strongly advise against this, but since customers are free to implement as they see fit, it is possible that some customers are doing this.
+
+To mitigate the concerns about any possibility of using the Azure AD access token to `list keys`, we have created a new built-in Azure role called `Cognitive Services Immersive Reader User` that does not have the permissions to `list keys`. This new role is not a shared role for the Cognitive Services platform like the `Cognitive Services User` role is. This new role is specific to Immersive Reader and will only allow calls to Immersive Reader APIs.
+
+We are advising that ALL customers migrate to using the new `Cognitive Services Immersive Reader User` role instead of the original `Cognitive Services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions.
+
+This recommendation applies to ALL customers, to ensure that this vulnerability is patched for everyone, no matter what the implementation scenario or likelihood of attack.
+
+If you do NOT do this, nothing will break. The old role will continue to function. The security impact for most customers is minimal. However, it is advised that you migrate to the new role to mitigate the security concerns discussed above. Applying this update is a security advisory recommendation; it is not a mandate.
+
+Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) will automatically use the new role.
++
+## Call to Action
+
+If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) prior to February 2022, it is advised that you perform the operation below to update the role assignment permissions on ALL of your Immersive Reader resources. The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource.
+
+### Set up PowerShell environment
+
+1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`.
+
+1. Copy and paste the following code snippet into the shell.
+
+ ```azurepowershell-interactive
+ function Update-ImmersiveReaderRoleAssignment(
+ [Parameter(Mandatory=$true, Position=0)] [String] $SubscriptionName,
+ [Parameter(Mandatory=$true)] [String] $ResourceGroupName,
+ [Parameter(Mandatory=$true)] [String] $ResourceName,
+ [Parameter(Mandatory=$true)] [String] $AADAppIdentifierUri
+ )
+ {
+ $unused = ''
+ if (-not [System.Uri]::TryCreate($AADAppIdentifierUri, [System.UriKind]::Absolute, [ref] $unused)) {
+ throw "Error: AADAppIdentifierUri must be a valid URI"
+ }
+
+ Write-Host "Setting the active subscription to '$SubscriptionName'"
+ $subscriptionExists = Get-AzSubscription -SubscriptionName $SubscriptionName
+ if (-not $subscriptionExists) {
+ throw "Error: Subscription does not exist"
+ }
+ az account set --subscription $SubscriptionName
+
+ # Get the Immersive Reader resource
+ $resourceId = az cognitiveservices account show --resource-group $ResourceGroupName --name $ResourceName --query "id" -o tsv
+ if (-not $resourceId) {
+ throw "Error: Failed to find Immersive Reader resource"
+ }
+
+ # Get the Azure AD application service principal
+ $principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv
+ if (-not $principalId) {
+ throw "Error: Failed to find Azure AD application service principal"
+ }
+
+ $newRoleName = "Cognitive Services Immersive Reader User"
+ $newRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $newRoleName --query "[].id" -o tsv
+ if ($newRoleExists) {
+ Write-Host "New role assignment for '$newRoleName' role already exists on resource"
+ }
+ else {
+ Write-Host "Creating new role assignment for '$newRoleName' role"
+ $roleCreateResult = az role assignment create --assignee $principalId --scope $resourceId --role $newRoleName
+ if (-not $roleCreateResult) {
+ throw "Error: Failed to add new role assignment"
+ }
+ Write-Host "New role assignment created successfully"
+ }
+
+ $oldRoleName = "Cognitive Services User"
+ $oldRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $oldRoleName --query "[].id" -o tsv
+ if (-not $oldRoleExists) {
+ Write-Host "Old role assignment for '$oldRoleName' role does not exist on resource"
+ }
+ else {
+ Write-Host "Deleting old role assignment for '$oldRoleName' role"
+ az role assignment delete --assignee $principalId --scope $resourceId --role $oldRoleName
+ $oldRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $oldRoleName --query "[].id" -o tsv
+ if ($oldRoleExists) {
+ throw "Error: Failed to delete old role assignment"
+ }
+ Write-Host "Old role assignment deleted successfully"
+ }
+ }
+ ```
+
+1. Run the function `Update-ImmersiveReaderRoleAssignment`, replacing the '<PARAMETER_VALUES>' placeholders below with your own values as appropriate.
+
+ ```azurepowershell-interactive
+ Update-ImmersiveReaderRoleAssignment -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceName '<RESOURCE_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>'
+ ```
+
+ The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. Do not copy or use this command as-is. Copy and use the command above with your own values. This example has dummy values for the '<PARAMETER_VALUES>' above. Yours will be different, as you will come up with your own names for these values.
+
+ ```
+ Update-ImmersiveReaderRoleAssignment
+ -SubscriptionName 'MyOrganizationSubscriptionName'
+ -ResourceGroupName 'MyResourceGroupName'
+ -ResourceName 'MyOrganizationImmersiveReader'
+ -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'
+ ```
+
+ | Parameter | Comments |
+ | | |
+ | SubscriptionName |The name of your Azure subscription. |
+ | ResourceGroupName |The name of the Resource Group that contains your Immersive Reader resource. |
+ | ResourceName |The name of your Immersive Reader resource. |
+ | AADAppIdentifierUri |The URI for your Azure AD app. |
+
+ ```
+
+## Next steps
+
+* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
+* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
+* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
+* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
az arcdata dc upgrade --resource-group <resource group> --name <data controller
The output for the preceding command is: ```output
-Preparing to upgrade dc arcdc in namespace arc to version 20211024.1.
-Preparing to upgrade dc arcdc in namespace arc to version 20211024.1.
+Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
+Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
****Dry Run****
-Arcdata Control Plane would be upgraded to: 20211024.1
+Arcdata Control Plane would be upgraded to: <version-tag>
```
-To upgrade the data controller, run the `az arcdata dc upgrade` command. If you don't specify a target image, the data controller will be upgraded to the latest version. The following example uses a local variable (`$version`) to use the version you selected previously ([View available images and chose a version](#view-available-images-and-chose-a-version)).
+To upgrade the data controller, run the `az arcdata dc upgrade` command. If you don't specify a target image, the data controller will be upgraded to the latest version.
```azurecli
-az arcdata dc upgrade --resource-group <resource group> --name <data controller name> --desired-version <version> [--no-wait]
+az arcdata dc upgrade --resource-group <resource group> --name <data controller name> [--no-wait]
```
+In the example above, you can include `--desired-version <version>` to specify a version if you don't want to upgrade to the latest version.
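+
+For example, an upgrade pinned to a specific version might look like the following; the placeholder values are yours to fill in:
+
+```azurecli
+az arcdata dc upgrade --resource-group <resource group> --name <data controller name> --desired-version <version> --no-wait
+```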
+ ## Monitor the upgrade status You can monitor the progress of the upgrade with CLI.
azure-arc Upgrade Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-indirect-cli.md
You will need an indirect mode data controller with the imageTag v1.0.0_2021-07-
To check the version, run: ```console
-kubectl get datacontrollers -n -o custom-columns=BUILD:.spec.docker.imageTag
+kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.imageTag
``` ## Install tools
Before you can proceed with the tasks in this article you need to install:
Pull the list of available images for the data controller with the following command: ```azurecli
- az arcdata dc list-upgrades --k8s-namespace <namespace> –-use-k8s
+ az arcdata dc list-upgrades --k8s-namespace <namespace>
``` The command above returns output like the following example:
az arcdata dc upgrade --desired-version <version> --k8s-namespace <namespace> --
The output for the preceding command is: ```output
-Preparing to upgrade dc arcdc in namespace arc to version 20211024.1.
-Preparing to upgrade dc arcdc in namespace arc to version 20211024.1.
+Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
+Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
****Dry Run****
-Arcdata Control Plane would be upgraded to: 20211024.1
+Arcdata Control Plane would be upgraded to: <version-tag>
```
-To upgrade the data controller, run the `az arcdata dc upgrade` command. If you don't specify a target image, the data controller will be upgraded to the latest version. The following example uses a local variable (`$version`) to use the version you selected previously ([View available images and chose a version](#view-available-images-and-chose-a-version)).
+To upgrade the data controller, run the `az arcdata dc upgrade` command. If you don't specify a target image, the data controller will be upgraded to the latest version.
```azurecli
-az arcdata dc upgrade --desired-version $version --k8s-namespace <namespace> --use-k8s
+az arcdata dc upgrade --k8s-namespace <namespace> --use-k8s
``` The output for the preceding command shows the status of the steps: ```output
-Preparing to upgrade dc arcdc in namespace arc to version 20211024.1.
-Preparing to upgrade dc arcdc in namespace arc to version 20211024.1.
+Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
+Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
Creating service account: arc:cr-upgrade-worker Creating cluster role: arc:cr-upgrade-worker Creating cluster role binding: arc:crb-upgrade-worker
Service account arc:cr-upgrade-worker has been created successfully.
Creating privileged job arc-elevated-bootstrapper-job ```
+In the example above, you can include `--desired-version <version>` to specify a version if you don't want to upgrade to the latest version.
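+
+For example, an upgrade pinned to a specific version in indirect mode might look like the following; the placeholder values are yours to fill in:
+
+```azurecli
+az arcdata dc upgrade --desired-version <version> --k8s-namespace <namespace> --use-k8s
+```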
+ ## Monitor the upgrade status You can monitor the progress of the upgrade with kubectl or CLI.
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
such as the Kubernetes dashboard, oc, or helm if you are familiar with those too
Pull the list of available images for the data controller with the following command: ```azurecli
-az arcdata dc list-upgrades --k8s-namespace <namespace> –-use-k8s
+az arcdata dc list-upgrades --k8s-namespace <namespace>
``` The command above returns output like the following example: ```output
-Found 2 valid versions. The current datacontroller version is v1.0.0_2021-07-30.
-v1.1.0_2021-11-02
-v1.0.0_2021-07-30
+Found 2 valid versions. The current datacontroller version is <current-version>.
+<available-version>
+...
``` ## Create or download .yaml file
azure-arc Upgrade Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md
The output will be:
```output Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
-****Dry Run****1 instance(s) would be upgraded by this commandsqlmi-1 would be upgraded to 20211024.1.
+****Dry Run****1 instance(s) would be upgraded by this commandsqlmi-1 would be upgraded to <version-tag>.
``` ### General Purpose
Status:
Observed Generation: 2 Primary Endpoint: 30.76.129.38,1433 Ready Replicas: 1/1
- Running Version: 20211024.1
+ Running Version: <version-tag>
State: Ready ```
azure-arc Upgrade Sql Managed Instance Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-direct-cli.md
The output will be:
```output Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
-****Dry Run****1 instance(s) would be upgraded by this commandsqlmi-1 would be upgraded to 20211024.1.
+****Dry Run****1 instance(s) would be upgraded by this commandsqlmi-1 would be upgraded to <version-tag>.
``` ### General Purpose
Status:
Observed Generation: 2 Primary Endpoint: 30.76.129.38,1433 Ready Replicas: 1/1
- Running Version: 20211024.1
+ Running Version: <version-tag>
State: Ready ```
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
Status:
Observed Generation: 2 Primary Endpoint: 30.76.129.38,1433 Ready Replicas: 1/1
- Running Version: 20211024.1
+ Running Version: <version-tag>
State: Ready ```
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/resource-bridge/overview.md
URLS:
## Next steps
-To learn more about how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure, see the following [Overview](/vmware-vsphere/overview.md) article.
+To learn more about how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure, see the following [Overview](/azure/azure-arc/vmware-vsphere/overview) article.
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
No, your cache name and keys are unchanged during a scaling operation.
- When you scale a **Standard** cache to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes. - When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards. - When you scale in a clustered cache, data is first resharded and then cluster size is reduced to required shards.
+- In some cases, such as scaling or migrating your cache to a different cluster, the underlying IP address of the cache can change. The DNS record for the cache changes, which is transparent to most applications. However, if you use an IP address to configure the connection to your cache, or to configure NSGs or firewalls that allow traffic to the cache, your application might have trouble connecting for some time after the DNS record updates.
### Will I lose data from my cache during scaling?
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
The Azure Cache for Redis Enterprise tiers provide fully integrated and managed
* Enterprise, which uses volatile memory (DRAM) on a virtual machine to store data * Enterprise Flash, which uses both volatile and non-volatile memory (NVMe or SSD) to store data.
+Both Enterprise and Enterprise Flash support open-source Redis 6 and some new features that aren't yet available in the Basic, Standard, or Premium tiers. The supported features include some Redis modules that enable capabilities such as search, bloom filters, and time series.
+ ## Prerequisites You'll need an Azure subscription before you begin. If you don't have one, create an [account](https://azure.microsoft.com/). For more information, see [special considerations for Enterprise tiers](cache-overview.md#special-considerations-for-enterprise-tiers).
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-bindings.md
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```
+# [PowerShell](#tab/powershell)
+
+```powershell
+param($Context)
+
+$input = $Context.Input
+
+# Do some work
+
+$output
+```
Most orchestrator functions call activity functions, so here is a "Hello World" example that demonstrates how to call an activity function:
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```
+# [PowerShell](#tab/powershell)
+
+```powershell
+param($Context)
+
+$name = $Context.Input.Name
+
+$output = Invoke-DurableActivity -FunctionName 'SayHello' -Input $name
+
+$output
+```
+ ## Activity trigger
def main(name: str) -> str:
return f"Hello {name}!" ```
+# [PowerShell](#tab/powershell)
+```powershell
+param($name)
+
+"Hello $name!"
+```
### Using input and output bindings
async def main(msg: func.QueueMessage, starter: str) -> None:
instance_id = await client.start_new("HelloWorld", client_input=payload) ```
+# [PowerShell](#tab/powershell)
+
+**function.json**
+```json
+{
+ "bindings": [
+ {
+ "name": "input",
+ "type": "queueTrigger",
+ "queueName": "durable-function-trigger",
+ "direction": "in"
+ },
+ {
+ "name": "starter",
+ "type": "durableClient",
+ "direction": "in"
+ }
+ ]
+}
+```
+
+**run.ps1**
+```powershell
+param([string] $input, $TriggerMetadata)
+
+# Name of the orchestrator function to start, for example 'HelloWorld'
+$FunctionName = 'HelloWorld'
+$InstanceId = Start-DurableOrchestration -FunctionName $FunctionName -Input $input
+```
+ More details on starting instances can be found in [Instance management](durable-functions-instance-management.md).
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-linux.md
The default cache size is 10 MB but can be modified in the [omsagent.conf file](
- Review [Managing and maintaining the Log Analytics agent for Windows and Linux](agent-manage.md) to learn about how to reconfigure, upgrade, or remove the agent from the virtual machine. - Review [Troubleshooting the Linux agent](agent-linux-troubleshoot.md) if you encounter issues while installing or managing the agent.+
+- Review [Agent Data Sources](https://docs.microsoft.com/azure/azure-monitor/agents/agent-data-sources) to learn about data source configuration.
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/app-insights-overview.md
In addition, you can pull in telemetry from the host environments such as perfor
All these telemetry streams are integrated into Azure Monitor. In the Azure portal, you can apply powerful analytic and search tools to the raw data.
-### What's the overhead?
+### What's the performance overhead?
The impact on your app's performance is small. Tracking calls are non-blocking, and are batched and sent in a separate thread. ## What does Application Insights monitor?
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis-troubleshoot.md
description: Learn how to troubleshoot problems in Application Change Analysis.
Previously updated : 02/11/2021 Last updated : 01/07/2022 # Troubleshoot Application Change Analysis (preview)
-## Having trouble registering Microsoft. Change Analysis resource provider from Change history tab
+## Trouble registering Microsoft.ChangeAnalysis resource provider from Change history tab.
-If it's the first time you view Change history after its integration with Application Change Analysis, you will see it automatically registering a resource provider **Microsoft.ChangeAnalysis**. In rare cases it might fail for the following reasons:
+If you're viewing Change history after its first integration with Application Change Analysis, you will see it automatically registering the **Microsoft.ChangeAnalysis** resource provider. In rare cases, the registration may fail with one of the following error messages:
-- **You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider**. This error message means your role in the current subscription does not have the **Microsoft.Support/register/action** scope associated with it. This might happen if you are not the owner of a subscription and got shared access permissions through a coworker (that is, view access to a resource group). To fix this, you can contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. This can be done in Azure portal through **Subscriptions | Resource providers** and search for ```Microsoft.ChangeAnalysis``` and register in the UI, or through Azure PowerShell or Azure CLI.
+### You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider.
+You're receiving this error message because your role in the current subscription is not associated with the **Microsoft.Support/register/action** scope. For example, you are not the owner of your subscription and instead received shared access permissions through a coworker (like view access to a resource group).
- Register resource provider through PowerShell:
+To resolve the issue, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider.
+1. In the Azure portal, search for **Subscriptions**.
+1. Select your subscription.
+1. Navigate to **Resource providers** under **Settings** in the side menu.
+1. Search for **Microsoft.ChangeAnalysis** and register via the UI, Azure PowerShell, or Azure CLI.
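+
+ If you prefer the Azure CLI, the equivalent registration is a single command, run from a shell where you're signed in to the right subscription:
+
+ ```azurecli
+ # Register the Change Analysis resource provider on the current subscription
+ az provider register --namespace "Microsoft.ChangeAnalysis"
+
+ # Optionally, check the registration state
+ az provider show --namespace "Microsoft.ChangeAnalysis" --query "registrationState"
+ ```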
+
+ Example for registering the resource provider through PowerShell:
```PowerShell
# Register resource provider
Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
```
-- **Failed to register Microsoft.ChangeAnalysis resource provider**. This message means something failed immediately as the UI sent request to register the resource provider, and it's not related to permission issue. Likely it might be a temporary internet connectivity issue. Try refreshing the page and checking your internet connection. If the error persists, contact changeanalysishelp@microsoft.com
+### Failed to register Microsoft.ChangeAnalysis resource provider.
+This error message is likely a temporary internet connectivity issue, since:
+* The UI sent the resource provider registration request.
+* You've resolved your [permissions issue](#you-dont-have-enough-permissions-to-register-microsoftchangeanalysis-resource-provider).
-- **This is taking longer than expected**. This message means the registration is taking longer than 2 minutes. This is unusual but does not necessarily mean something went wrong. You can go to **Subscriptions | Resource provider** to check for **Microsoft.ChangeAnalysis** resource provider registration status. You can try to use the UI to unregister, re-register, or refresh to see if it helps. If issue persists, contact changeanalysishelp@microsoft.com for support.
- ![Troubleshoot RP registration taking too long](./media/change-analysis/troubleshoot-registration-taking-too-long.png)
+Try refreshing the page and checking your internet connection. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
-![Screenshot of the Diagnose and Solve Problems tool for a Virtual Machine with Troubleshooting tools selected.](./media/change-analysis/vm-dnsp-troubleshootingtools.png)
+### This is taking longer than expected.
+You'll receive this error message when the registration takes longer than 2 minutes. While unusual, it doesn't mean something went wrong. Restart your web app to see your registration changes. Changes should show up within a few hours of app restart.
-![Screenshot of the tile for the Analyze recent changes troubleshooting tool for a Virtual Machine.](./media/change-analysis/analyze-recent-changes.png)
+If your changes still don't show after 6 hours, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+## Azure Lighthouse subscription is not supported.
+
+### Failed to query Microsoft.ChangeAnalysis resource provider.
+Often, this message includes: `Azure Lighthouse subscription is not supported, the changes are only available in the subscription's home tenant`.
-## Azure Lighthouse subscription is not supported
+Currently, the Change Analysis resource provider can't be registered through an Azure Lighthouse subscription for users outside of the home tenant. We are working on addressing this limitation.
-- **Failed to query Microsoft.ChangeAnalysis resource provider** with message *Azure lighthouse subscription is not supported, the changes are only available in the subscription's home tenant*. There is a limitation right now for Change Analysis resource provider to be registered through Azure Lighthouse subscription for users not in home tenant. We are working on addressing this limitation. If this is a blocking issue for you, there is a workaround that involves creating a service principal and explicitly assigning the role to allow the access. Contact changeanalysishelp@microsoft.com to learn more about it.
+If this is a blocking issue for you, we can provide a workaround that involves creating a service principal and explicitly assigning the role to allow the access. Contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com) to learn more about it.
-## An error occurred while getting changes. Please refresh this page or come back later to view changes
+## An error occurred while getting changes. Please refresh this page or come back later to view changes.
-This is the general error message presented by Application Change Analysis service when changes could not be loaded. A few known causes are:
+When changes can't be loaded, Application Change Analysis service presents this general error message. A few known causes are:
-- Internet connectivity error from the client device-- Change Analysis service being temporarily unavailable
-Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact changeanalysishelp@micorosoft.com
+- Internet connectivity error from the client device.
+- Change Analysis service being temporarily unavailable.
-## You don't have enough permissions to view some changes. Contact your Azure subscription administrator
+Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
-This is the general unauthorized error message, explaining the current user does not have sufficient permissions to view the change. At least reader access is required on the resource to view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager. For web app in-guest file changes and configuration changes, at least contributor role is required.
+## You don't have enough permissions to view some changes. Contact your Azure subscription administrator.
-## Failed to register Microsoft.ChangeAnalysis resource provider
+This general unauthorized error message occurs when the current user does not have sufficient permissions to view the change. At minimum,
+* To view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager, reader access is required.
+* For web app in-guest file changes and configuration changes, contributor role is required.
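+
+If you need to grant the missing access, a minimal role-assignment sketch with the Azure CLI is shown below; the assignee, role, and scope values are placeholders for your own environment:
+
+```azurecli
+# Grant Reader (or Contributor, for web app in-guest changes) on the resource to the affected user
+az role assignment create --assignee <user-or-principal-id> --role "Reader" --scope <resource-id>
+```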
-This message means something failed immediately as the UI sent request to register the resource provider, and it's not related to permission issue. Likely it might be a temporary internet connectivity issue. Try refreshing the page and checking your internet connection. If the error persists, contact changeanalysishelp@microsoft.com
+You may not immediately see web app in-guest file changes and configuration changes. While we work on providing the option to restart the app in the Azure portal, the current procedure is:
-## You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider. Contact your Azure subscription administrator
+1. User adds the hidden tracking tag, notifying the scheduled worker.
+2. Scheduled worker scans the web app within a few hours.
+3. While scanning, scheduled worker creates a handshake file via AST.
+4. The Web App team checks that handshake file when it restarts.
-This error message means your role in the current subscription does not have the **Microsoft.Support/register/action** scope associated with it. This might happen if you are not the owner of a subscription and got shared access permissions through a coworker (that is, view access to a resource group). To fix this, you can contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. This can be done in Azure portal through **Subscriptions | Resource providers** and search for ```Microsoft.ChangeAnalysis``` and register in the UI, or through Azure PowerShell or Azure CLI.
+## Diagnose and solve problems tool for virtual machines
+
+To troubleshoot virtual machine issues using the troubleshooting tool in the Azure portal:
+1. Navigate to your virtual machine.
+1. Select **Diagnose and solve problems** from the side menu.
+1. Browse and select the troubleshooting tool that fits your issue.
+
+![Screenshot of the Diagnose and Solve Problems tool for a Virtual Machine with Troubleshooting tools selected.](./media/change-analysis/vm-dnsp-troubleshootingtools.png)
+
+![Screenshot of the tile for the Analyze recent changes troubleshooting tool for a Virtual Machine.](./media/change-analysis/analyze-recent-changes.png)
-Register resource provider through PowerShell:
-```PowerShell
-# Register resource provider
-Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
-```
## Next steps -- Learn more about [Azure Resource Graph](../../governance/resource-graph/overview.md), which helps power Change Analysis.
+Learn more about [Azure Resource Graph](../../governance/resource-graph/overview.md), which helps power Change Analysis.
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors.md
To configure this option, under `exclude`, specify the `matchType` one or more `
} ] ```
+### Default metrics captured by Java agent
+
+| Metric name | Metric type | Description | Filterable |
+|||||
+| `Current Thread Count` | custom metrics | See [ThreadMXBean.getThreadCount()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/ThreadMXBean.html#getThreadCount--). | yes |
+| `Loaded Class Count` | custom metrics | See [ClassLoadingMXBean.getLoadedClassCount()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/ClassLoadingMXBean.html#getLoadedClassCount--). | yes |
+| `GC Total Count` | custom metrics | Sum of counts across all GC MXBeans (diff since last reported). See [GarbageCollectorMXBean.getCollectionCount()](https://docs.oracle.com/javase/7/docs/api/java/lang/management/GarbageCollectorMXBean.html). | yes |
+| `GC Total Time` | custom metrics | Sum of time across all GC MXBeans (diff since last reported). See [GarbageCollectorMXBean.getCollectionTime()](https://docs.oracle.com/javase/7/docs/api/java/lang/management/GarbageCollectorMXBean.html).| yes |
+| `Heap Memory Used (MB)` | custom metrics | See [MemoryMXBean.getHeapMemoryUsage().getUsed()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getHeapMemoryUsage--). | yes |
+| `% Of Max Heap Memory Used` | custom metrics | java.lang:type=Memory / maximum amount of memory in bytes. See [MemoryUsage](https://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryUsage.html)| yes |
+| `\Processor(_Total)\% Processor Time` | default metrics | Difference in [system wide CPU load tick counters](https://oshi.github.io/oshi/apidocs/oshi/hardware/CentralProcessor.html#getProcessorCpuLoadTicks()) (User and System only), divided by the [logical processor count](https://oshi.github.io/oshi/apidocs/oshi/hardware/CentralProcessor.html#getLogicalProcessors()), over a given interval of time | no |
+| `\Process(??APP_WIN32_PROC??)\% Processor Time` | default metrics | See [OperatingSystemMXBean.getProcessCpuTime()](https://docs.oracle.com/javase/8/docs/jre/api/management/extension/com/sun/management/OperatingSystemMXBean.html#getProcessCpuTime--) (diff since last reported, normalized by time and number of CPUs). | no |
+| `\Process(??APP_WIN32_PROC??)\Private Bytes` | default metrics | Sum of [MemoryMXBean.getHeapMemoryUsage()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getHeapMemoryUsage--) and [MemoryMXBean.getNonHeapMemoryUsage()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getNonHeapMemoryUsage--). | no |
+| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | default metrics | `/proc/[pid]/io` Sum of bytes read and written by the process (diff since last reported). See [proc(5)](https://man7.org/linux/man-pages/man5/proc.5.html). | no |
+| `\Memory\Available Bytes` | default metrics | See [OperatingSystemMXBean.getFreePhysicalMemorySize()](https://docs.oracle.com/javase/7/docs/jre/api/management/extension/com/sun/management/OperatingSystemMXBean.html#getFreePhysicalMemorySize()). | no |
+
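+For example, to drop one of the filterable default metrics listed above, you could add a metric-filter processor to `applicationinsights.json`. The following is a sketch that assumes the Java 3.x agent's preview processor configuration; verify the property names against the agent version you run:
+
+```json
+{
+  "preview": {
+    "processors": [
+      {
+        "type": "metric-filter",
+        "exclude": {
+          "matchType": "strict",
+          "metricNames": [
+            "Loaded Class Count",
+            "Current Thread Count"
+          ]
+        }
+      }
+    ]
+  }
+}
+```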
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-design.md
The simplest and most secure approach would be:
2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that AMPLS. 3. Block network egress traffic as much as possible.
-If you can't add all Azure Monitor resources to your AMPLS, you can still your Private Link to some resources, as explained in [Control how Private Links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks). While useful, this approach is less recommended since it doesn't prevent data exfiltration.
-
+If you can't add all Azure Monitor resources to your AMPLS, you can still apply your Private Link to some resources, as explained in [Control how Private Links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks). While useful, this approach is less recommended since it doesn't prevent data exfiltration.
## Plan by network topology
azure-netapp-files Azure Netapp Files Mount Unmount Volumes For Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md
Previously updated : 05/17/2021 Last updated : 01/07/2022 # Mount a volume for Windows or Linux VMs
-You can mount an Azure NetApp Files file for Windows or Linux virtual machines (VMs). The mount instructions for Linux virtual machines are available on Azure NetApxp Files.
+You can mount an Azure NetApp Files file for Windows or Linux virtual machines (VMs). The mount instructions for Linux virtual machines are available on Azure NetApp Files.
## Requirements
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 12/13/2021 Last updated : 01/07/2022 # Solution architectures using Azure NetApp Files
This section provides references for Windows applications and SQL Server solutio
### SQL Server * [SQL Server on Azure Virtual Machines with Azure NetApp Files - Azure Example Scenarios](/azure/architecture/example-scenario/file-storage/sql-server-azure-netapp-files)
-* [SQL Server on Azure Deployment Guide Using Azure NetApp Files](https://www.netapp.com/pdf.html?item=/media/27154-tr-4888.pdf)
+* [SQL Server on Azure Deployment Guide Using Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-architecture-blog/deploying-sql-server-on-azure-using-azure-netapp-files/ba-p/3023143)
* [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md) * [Deploy SQL Server Over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=x7udfcYbibs) * [Deploy SQL Server Always-On Failover Cluster over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=zuNJ5E07e8Q)
azure-netapp-files Configure Kerberos Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-kerberos-encryption.md
na Previously updated : 07/15/2021 Last updated : 01/07/2022 # Configure NFSv4.1 Kerberos encryption for Azure NetApp Files
You should understand the security options available for NFSv4.1 volumes, the te
* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) * [Create an Active Directory connection](create-active-directory-connections.md) * [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md)
+* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-ldap-extended-groups.md
na Previously updated : 11/11/2021 Last updated : 01/07/2022 # Configure ADDS LDAP with extended groups for NFS volume access
This article explains the considerations and steps for enabling LDAP with extend
| Unix groups | 24-hour TTL, 1-minute negative TTL | | Unix users | 24-hour TTL, 1-minute negative TTL |
- Caches have a specific timeout period called *Time to Live*. After the timeout period, entries age out so that stale entries do not linger. The *negative TTL* value is where a lookup that has failed resides to help avoid performance issues due to LDAP queries for objects that might not exist.ΓÇ¥
+ Caches have a specific timeout period called *Time to Live*. After the timeout period, entries age out so that stale entries do not linger. The *negative TTL* value is where a lookup that has failed resides to help avoid performance issues due to LDAP queries for objects that might not exist.
+
+* The **Allow local NFS users with LDAP** option in Active Directory connections intends to provide occasional and temporary access to local users. When this option is enabled, user authentication and lookup from the LDAP server stop working, and the number of group memberships that Azure NetApp Files will support will be limited to 16. As such, you should keep this option *disabled* on Active Directory connections, except for the occasion when a local user needs to access LDAP-enabled volumes. In that case, you should disable this option as soon as local user access is no longer required for the volume. See [Allow local NFS users with LDAP to access a dual-protocol volume](create-volumes-dual-protocol.md#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume) about managing local user access.
## Steps
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 11/02/2021 Last updated : 01/07/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
| Unix groups | 24-hour TTL, 1-minute negative TTL | | Unix users | 24-hour TTL, 1-minute negative TTL |
- Caches have a specific timeout period called *Time to Live*. After the timeout period, entries age out so that stale entries do not linger. The *negative TTL* value is where a lookup that has failed resides to help avoid performance issues due to LDAP queries for objects that might not exist.ΓÇ¥
+ Caches have a specific timeout period called *Time to Live*. After the timeout period, entries age out so that stale entries do not linger. The *negative TTL* value is where a lookup that has failed resides to help avoid performance issues due to LDAP queries for objects that might not exist.
+
+* Azure NetApp Files does not support the use of Active Directory Domain Services Read-Only Domain Controllers (RODC). To ensure that Azure NetApp Files does not try to use an RODC domain controller, configure the **AD Site** field of the Azure NetApp Files Active Directory connection with an Active Directory site that does not contain any RODC domain controllers.
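If you create Active Directory connections with the Azure CLI instead of the portal, the site can be supplied when the connection is added. The sketch below is illustrative only: the values are placeholders and the parameter names (notably `--site`) are assumptions to verify against `az netappfiles account ad add --help` before use.

```azurecli
# Illustrative sketch: create an AD connection that targets a site with no RODCs.
# Placeholder values; confirm parameter names with `az netappfiles account ad add --help`.
az netappfiles account ad add \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --domain contoso.com \
  --dns 10.0.0.4 \
  --smb-server-name anfsmb \
  --username aduser \
  --password "<password>" \
  --site SiteWithoutRodc
```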
## Decide which Domain Services to use
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 12/09/2021 Last updated : 01/07/2022 # Create a dual-protocol volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* Ensure that you meet the [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). * Create a `pcuser` account in your Active Directory (AD) and ensure that the account is enabled. This account will serve as the default user. It will be used for mapping UNIX users for accessing a dual-protocol volume configured with NTFS security style. The `pcuser` account is used only when there is no user present in the AD. If a user has an account in the AD with the POSIX attributes set, then that account will be the one used for authentication, and it will not map to the `pcuser` account. * Create a reverse lookup zone on the DNS server and then add a pointer (PTR) record of the AD host machine in that reverse lookup zone. Otherwise, the dual-protocol volume creation will fail.
-* The **Allow local NFS users with LDAP** option in Active Directory connections intends to provide occasional and temporary access to local users. When this option is enabled, user authentication and lookup from the LDAP server stop working. As such, you should keep this option *disabled* on Active Directory connections, except for the occasion when a local user needs to access LDAP-enabled volumes. In that case, you should disable this option as soon as local user access is no longer required for the volume. See [Allow local NFS users with LDAP to access a dual-protocol volume](#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume) about managing local user access.
+* The **Allow local NFS users with LDAP** option in Active Directory connections intends to provide occasional and temporary access to local users. When this option is enabled, user authentication and lookup from the LDAP server stop working, and the number of group memberships that Azure NetApp Files will support will be limited to 16. As such, you should keep this option *disabled* on Active Directory connections, except for the occasion when a local user needs to access LDAP-enabled volumes. In that case, you should disable this option as soon as local user access is no longer required for the volume. See [Allow local NFS users with LDAP to access a dual-protocol volume](#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume) about managing local user access.
* Ensure that the NFS client is up to date and running the latest updates for the operating system. * Dual-protocol volumes support both Active Directory Domain Services (ADDS) and Azure Active Directory Domain Services (AADDS). * Dual-protocol volumes do not support the use of LDAP over TLS with AADDS. See [LDAP over TLS considerations](configure-ldap-over-tls.md#considerations).
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
This article describes requirements and considerations about [using the volume c
* [Volume replication metrics](azure-netapp-files-metrics.md#replication) * [Delete volume replications or volumes](cross-region-replication-delete.md) * [Troubleshoot cross-region replication](troubleshoot-cross-region-replication.md)--
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-script-template.md
Title: Use deployment scripts in templates | Microsoft Docs
-description: use deployment scripts in Azure Resource Manager templates.
+description: Use deployment scripts in Azure Resource Manager templates.
Previously updated : 08/25/2021 Last updated : 01/07/2022
Learn how to use deployment scripts in Azure Resource templates (ARM templates). With a new resource type called `Microsoft.Resources/deploymentScripts`, users can execute scripts in template deployments and review execution results. These scripts can be used for performing custom steps such as: -- add users to a directory-- perform data plane operations, for example, copy blobs or seed database-- look up and validate a license key-- create a self-signed certificate-- create an object in Azure AD-- look up IP Address blocks from custom system
+- Add users to a directory.
+- Perform data plane operations, for example, copy blobs or seed database.
+- Look up and validate a license key.
+- Create a self-signed certificate.
+- Create an object in Azure AD.
+- Look up IP address blocks from a custom system.
The benefits of deployment script:
For deployment script API version 2020-10-01 or later, there are two principals
- **Deployment script principal**: This principal is only required if the deployment script needs to authenticate to Azure and call Azure CLI/PowerShell. There are two ways to specify the deployment script principal:
- - Specify a user-assigned managed identity in the `identity` property (see [Sample templates](#sample-templates)). When specified, the script service calls `Connect-AzAccount -Identity` before invoking the deployment script. The managed identity must have the required access to complete the operation in the script. Currently, only user-assigned managed identity is supported for the `identity` property. To login with a different identity, use the second method in this list.
+ - Specify a user-assigned managed identity in the `identity` property (see [Sample templates](#sample-templates)). When specified, the script service calls `Connect-AzAccount -Identity` before invoking the deployment script. The managed identity must have the required access to complete the operation in the script. Currently, only user-assigned managed identity is supported for the `identity` property. To log in with a different identity, use the second method in this list.
- Pass the service principal credentials as secure environment variables, and then can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) or [az login](/cli/azure/reference-index#az_login) in the deployment script. If a managed identity is used, the deployment principal needs the **Managed Identity Operator** role (a built-in role) assigned to the managed identity resource.
The following JSON is an example. For more information, see the latest [template
Property value details: -- `identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To login with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
+- `identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To log in with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
- `kind`: Specify the type of script. Currently, Azure PowerShell and Azure CLI scripts are supported. The values are **AzurePowerShell** and **AzureCLI**. - `forceUpdateTag`: Changing this value between template deployments forces the deployment script to re-execute. If you use the `newGuid()` or the `utcNow()` functions, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once). - `containerSettings`: Specify the settings to customize Azure Container Instance. Deployment script requires a new Azure Container Instance. You can't specify an existing Azure Container Instance. However, you can customize the container group name by using `containerGroupName`. If not specified, the group name is automatically generated.
Property value details:
- [Sample 1](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault.json): create a key vault and use deployment script to assign a certificate to the key vault. - [Sample 2](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-subscription.json): create a resource group at the subscription level, create a key vault in the resource group, and then use deployment script to assign a certificate to the key vault. - [Sample 3](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-mi.json): create a user-assigned managed identity, assign the contributor role to the identity at the resource group level, create a key vault, and then use deployment script to assign a certificate to the key vault.
+- [Sample 4](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-lock-sub.json): the same scenario as Sample 1 in this list, except that a new resource group is created to run the deployment script. This template is a subscription-level template.
+- [Sample 5](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-lock-group.json): the same scenario as Sample 4, but this template is a resource-group-level template.
## Use inline scripts
The life cycle of these resources is controlled by the following properties in t
The container instance and storage account are deleted according to the `cleanupPreference`. However, if the script fails and `cleanupPreference` isn't set to **Always**, the deployment process automatically keeps the container running for one hour. You can use this hour to troubleshoot the script. If you want to keep the container running after successful deployments, add a sleep step to your script. For example, add [Start-Sleep](/powershell/module/microsoft.powershell.utility/start-sleep) to the end of your script. If you don't add the sleep step, the container is set to a terminal state and can't be accessed even if it hasn't been deleted yet.
+The automatically created storage account and container instance can't be deleted if the deployment script is deployed to a resource group with a [CanNotDelete lock](../management/lock-resources.md). To solve this problem, you can deploy the deployment script to another resource group without locks. See Sample 4 and Sample 5 in [Sample templates](#sample-templates).
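You can also retrieve a deployment script's execution logs with the Azure CLI without connecting to the container. A minimal sketch, assuming a deployment script resource named `runDemoScript` in `myResourceGroup`:

```azurecli
# Show the log produced by a deployment script execution
az deployment-scripts show-log \
  --resource-group myResourceGroup \
  --name runDemoScript

# List deployment script resources in the resource group
az deployment-scripts list --resource-group myResourceGroup --output table
```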
+ ## Run script more than once Deployment script execution is an idempotent operation. If none of the `deploymentScripts` resource properties (including the inline script) are changed, the script doesn't execute when you redeploy the template. The deployment script service compares the resource names in the template with the existing resources in the same resource group. There are two options if you want to execute the same deployment script multiple times:
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs description: Describes how to create template specs and share them with other users in your organization. Previously updated : 10/05/2021- Last updated : 01/07/2022 - + # Azure Resource Manager template specs A template spec is a resource type for storing an Azure Resource Manager template (ARM template) in Azure for later deployment. This resource type enables you to share ARM templates with other users in your organization. Just like any other Azure resource, you can use Azure role-based access control (Azure RBAC) to share the template spec.
az ts create \
+You can also create template specs by using ARM templates. The following template creates a template spec to deploy a storage account:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "templateSpecName": {
+ "type": "string",
+ "defaultValue": "CreateStorageAccount"
+ },
+ "templateSpecVersionName": {
+ "type": "string",
+ "defaultValue": "0.1"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/templateSpecs",
+ "apiVersion": "2021-05-01",
+ "name": "[parameters('templateSpecName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "description": "A basic templateSpec - creates a storage account.",
+ "displayName": "Storage account (Standard_LRS)"
+ }
+ },
+ {
+ "type": "Microsoft.Resources/templateSpecs/versions",
+ "apiVersion": "2021-05-01",
+ "name": "[format('{0}/{1}', parameters('templateSpecName'), parameters('templateSpecVersionName'))]",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Resources/templateSpecs', parameters('templateSpecName'))]"
+ ],
+ "properties": {
+ "mainTemplate": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageAccountType": {
+ "type": "string",
+ "defaultValue": "Standard_LRS",
+ "allowedValues": [
+ "Standard_LRS",
+ "Standard_GRS",
+ "Standard_ZRS",
+ "Premium_LRS"
+ ]
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-06-01",
+ "name": "[concat('store', uniquestring(resourceGroup().id))]",
+ "location": "[resourceGroup().location]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "[[parameters('storageAccountType')]"
+ }
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+ You can view all template specs in your subscription by using: # [PowerShell](#tab/azure-powershell)
az ts update \
When creating or modifying a template spec with the version parameter specified, but without the tag/tags parameter: -- If the template spec exists and has tags, but the version doesn't exist, the new version inherits the same tags as the existing template spec.
+* If the template spec exists and has tags, but the version doesn't exist, the new version inherits the same tags as the existing template spec.
When creating or modifying a template spec with both the tag/tags parameter and the version parameter specified: -- If both the template spec and the version don't exist, the tags are added to both the new template spec and the new version.-- If the template spec exists, but the version doesn't exist, the tags are only added to the new version.-- If both the template spec and the version exist, the tags only apply to the version.
+* If both the template spec and the version don't exist, the tags are added to both the new template spec and the new version.
+* If the template spec exists, but the version doesn't exist, the tags are only added to the new version.
+* If both the template spec and the version exist, the tags only apply to the version.
When modifying a template with the tag/tags parameter specified but without the version parameter specified, the tags are only added to the template spec.
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/recovery-using-backups.md
Previously updated : 11/13/2020 Last updated : 01/07/2021 # Recover using automated database backups - Azure SQL Database & SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
The following options are available for database recovery by using [automated da
- Create a new database on the same server, recovered to a specified point in time within the retention period. - Create a database on the same server, recovered to the deletion time for a deleted database. - Create a new database on any server in the same region, recovered to the point of the most recent backups.-- Create a new database on any server in any other region, recovered to the point of the most recent replicated backups.
+- Create a new database on any server in any other region, recovered to the point of the most recent replicated backups. Cross-region and cross-subscription restore for SQL Managed Instance isn't currently supported.
If you configured [backup long-term retention](long-term-retention-overview.md), you can also create a new database from any long-term retention backup on any server.
azure-video-analyzer Get Started Livepipelines Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/cloud/get-started-livepipelines-portal.md
Once the Video Analyzer account is created, you can go ahead with next steps t
- Select **Create** and you will see a pipeline is created in the pipeline grid on the portal. - Select the live pipeline created in the grid, select **Activate** option available towards the right of the pane to activate the live pipeline. This will start your live pipeline and start recording the video 1. Now you would be able to see the video resource under Video Analyzer account-> **Videos** pane in the portal. Its status will indicate **Recording** as pipeline is active and recording the live video stream.
-1. After a few seconds, select the video and you will be able to see the [low latency stream](../playback-recordings-how-to.md).
+1. After a few seconds, select the video and you will be able to see the [low latency stream](../viewing-videos-how-to.md).
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/camera-1800s-mkv.png" alt-text="Diagram of the recorded video captured by live pipeline on the cloud.":::
In this tab, learn how to deploy a live pipeline using Video Analyzer's [C
| ResourceGroup | Provide resource group name | | AccountName | Provide Video Analyzer account name | | TenantId | Provide tenant ID |
-| ClientId | Provide app registration client id |
+| ClientId | Provide app registration client ID |
| Secret | Provide app registration client secret | | AuthenticationEndpoint | Provide authentication end point (example: https://login.microsoftonline.com) | | ArmEndPoint | Provide ARM end point (example: https://management.azure.com) |
azure-video-analyzer Continuous Video Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/continuous-video-recording.md
Continuous video recording (CVR) refers to the process of continuously recording
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/continuous-video-recording/continuous-video-recording-overview.svg" alt-text="Continuous video recording":::
-An instance of the pipeline topology depicted above can be run on an edge device in the Video Analyzer service, with the video sink recording to a [video resource](terminology.md#video). The video will be recorded for as long as the pipeline stays in the activated state. Recorded video can be played back using the streaming capabilities of Video Analyzer. See [Playback of video recordings](playback-recordings-how-to.md) for more details.
+An instance of the pipeline topology depicted above can be run on an edge device in the Video Analyzer service, with the video sink recording to a [video resource](terminology.md#video). The video will be recorded for as long as the pipeline stays in the activated state. Recorded video can be played back using the streaming capabilities of Video Analyzer. See [Recorded and live videos](viewing-videos-how-to.md) for more details.
## Suggested pre-reading
The `segmentLength` property, shown above, will help you control the write trans
The `segmentLength` property ensures that video is written to the storage account at most once per `segmentLength` seconds. This property has a minimum value of 30 seconds (also the default), and can be increased by 30-second increments to a maximum of 5 minutes.
-This property applies to both the Video Analyzer edge module and the Video Analyzer service. See the [Playback of video recordings](playback-recordings-how-to.md) article for the effect that `segmentLength` has on playback.
+This property applies to both the Video Analyzer edge module and the Video Analyzer service. See the [Recorded and live videos](viewing-videos-how-to.md) article for the effect that `segmentLength` has on playback.
## See also * [Event-based video recording](event-based-video-recording-concept.md)
-* [Playback of video recordings](playback-recordings-how-to.md)
+* [Recorded and live videos](viewing-videos-how-to.md)
## Next steps
azure-video-analyzer Detect Motion Record Video Clips Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-record-video-clips-cloud.md
You can examine the Video Analyzer video resource that was created by the live p
## Next steps
-* Learn how to [play back video recordings](../playback-recordings-how-to.md)
+* Check out [Recorded and live videos](../viewing-videos-how-to.md)
* Try [Quickstart: Analyze a live video feed from a (simulated) IP camera using your own HTTP model](analyze-live-video-use-your-model-http.md)
azure-video-analyzer Enable Video Preview Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/edge/enable-video-preview-images.md
If you record video using the Video Analyzer edge module, you can enable the vid
> [!NOTE] > The preview images will preserve the aspect ratio of video from the camera.
-The preview images are generated periodically, the frequency being determined by [`segmentLength`](../playback-recordings-how-to.md#recording-and-playback-latencies). If you are using [event-based recording](record-event-based-live-video.md), you should note that images are generated only when the live pipeline is active and video is being recorded. Each time a set of preview images are generated, they will overwrite the previous set.
+The preview images are generated periodically, the frequency being determined by [`segmentLength`](../viewing-videos-how-to.md#recording-and-playback-latencies). If you are using [event-based recording](record-event-based-live-video.md), you should note that images are generated only when the live pipeline is active and video is being recorded. Each time a set of preview images are generated, they will overwrite the previous set.
> [!NOTE] > This functionality is currently only available with Video Analyzer Edge module. Further, enabling this has an impact on your Azure storage costs, driven by the frequent transactions to write the images or view them, and the size of the images.
Example:
] ```
-## Accessing preview images
+## Access preview images
-To acquire the static URLs to the available preview images, a GET request must be called on the video resource with an [authorized bearer token](../playback-recordings-how-to.md#accessing-videos). You will see the URLs listed under **contentUrls** in the response as shown below.
+To acquire the static URLs of the available preview images, send a GET request to the video resource with an [authorized bearer token](../viewing-videos-how-to.md#accessing-videos). You will see the URLs listed under **contentUrls** in the response, as shown below.
``` "contentUrls": {
azure-video-analyzer Event Based Video Recording Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/event-based-video-recording-concept.md
Event-based video recording (EVR) refers to the process of recording video triggered by an event. The event in question could originate due to processing of the video signal itself (for example, when motion is detected) or could be from an independent source (for example, a door sensor signals that the door has been opened). A few use cases related to EVR are described in this article.
-The timestamps for the recordings are stored in UTC. Recorded video can be played back using the streaming capabilities of Video Analyzer. See [Playback of video recordings](playback-recordings-how-to.md) for more details.
+The timestamps for the recordings are stored in UTC. Recorded video can be played back using the streaming capabilities of Video Analyzer. See [Recorded and live videos](viewing-videos-how-to.md) for more details.
## Suggested pre-reading
See the [note on resilient recording](continuous-video-recording.md#resilient-re
## Next steps * [Tutorial: event-based video recording](record-event-based-live-video.md)
-* [Playback of recorded content](playback-recordings-how-to.md)
+* [Recorded and live videos](viewing-videos-how-to.md)
azure-video-analyzer Manage Retention Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/manage-retention-policy.md
You can also set or update the `retentionPeriod` property of a video resource, u
## Next steps
-[Playback of recordings](playback-recordings-how-to.md)
+[Recorded and live videos](viewing-videos-how-to.md)
azure-video-analyzer Quotas Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/quotas-limitations.md
At most 50 live pipelines per topology are supported.
### Concurrent low latency streaming sessions
-For each active live pipeline, there can be at most one client application viewing the [low latency stream](playback-recordings-how-to.md#low-latency-streaming). If another client attempts to connect, the request will be refused.ΓÇ»
+For each active live pipeline, there can be at most one client application viewing the [low latency stream](viewing-videos-how-to.md#low-latency-streaming). If another client attempts to connect, the request will be refused.
### Limitations on designing pipeline topologies
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/release-notes.md
The ARM API version of the Video Analyzer service is:
* When using Video Analyzer with [Computer Vision for spatial analysis](edge/computer-vision-for-spatial-analysis.md) AI service from Cognitive Services, you can generate and view new insights such as the speed, orientation, trail of persons in the live video. * You can [discover ONVIF-capable devices](edge/camera-discovery.md) in the local subnet of your edge device. * You can [capture and record live video directly in the cloud](cloud/connect-cameras-to-cloud.md).
- * You can use [low latency streaming](playback-recordings-how-to.md#low-latency-streaming) to view the live video from the RTSP camera with end-to-end latencies of around 2 seconds
+ * You can use [low latency streaming](viewing-videos-how-to.md#low-latency-streaming) to view the live video from the RTSP camera with end-to-end latencies of around 2 seconds.
* You can implement the [Video Analyzer IoT PnP contract](cloud/connect-devices.md) on your RTSP camera to enable video capture from your device to the Video Analyzer service. * You can [export the desired portion of your recorded video](cloud/export-portion-of-video-as-mp4.md) to an MP4 file. * You can specify a retention policy for any of your recorded videos, where the service would periodically trim content older than the specified number of days.
azure-video-analyzer Viewing Videos How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/viewing-videos-how-to.md
+
+ Title: Viewing of videos
+description: You can use Azure Video Analyzer for continuous video recording, whereby you can record video into the cloud for weeks or months. You can also limit your recording to clips that are of interest, via event-based recording. In addition, when using Video Analyzer service to capture videos from cameras, you can stream the video as it is being captured. This article talks about how to view such videos.
++ Last updated : 11/04/2021++
+# Viewing of videos
+
+## Suggested pre-reading
+
+* [Video Analyzer video resource](terminology.md#video)
+* [Continuous video recording](continuous-video-recording.md)
+* [Event-based video recording](event-based-video-recording-concept.md)
+
+## Background
+
+You can create [video resources](terminology.md#video) in your Azure Video Analyzer account by either recording from an RTSP camera, or exporting a portion of such a recording. If you are building a [VMS](terminology.md#vms) using Video Analyzer APIs, then this article will help you understand how you can view videos. After reading this article, you should proceed to review the article on [access policies](access-policies.md) and on the [Video Analyzer player widget](player-widget.md).
+
+If you are evaluating the capabilities of Video Analyzer, then you can go through [Quickstart: Detect motion in a (simulated) live video, record the video to the Video Analyzer account](edge/detect-motion-record-video-clips-cloud.md) or [Tutorial: Continuous video recording and playback](edge/use-continuous-video-recording.md). Make use of the Azure portal to view the videos.
+<!-- TODO - add a section here about 1P/3P SaaS and how to use widgets to allow end users to view videos without talking to ARM APIs -->
+
+## Creating videos
+
+Following are some of the ways to create videos using the Video Analyzer edge module:
+
+* Record [continuously](continuous-video-recording.md) (CVR) from an RTSP camera, for weeks or months or more.
+* Only record portions that are of interest, via [event-based video recording](event-based-video-recording-concept.md) (EVR).
+
+You can also use the Video Analyzer service to create videos using CVR, or to create a video by exporting a portion of a video recording - such videos will contain a downloadable file (in MP4 file format).
+
+## Accessing videos
+
+You can query the ARM API [`Videos`](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Medi) to list and retrieve the video resources in your account.
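If you need to enumerate video resources from a script, `az rest` can call the ARM endpoint directly. The route and API version in the sketch below (`Microsoft.Media/videoAnalyzers/{accountName}/videos`, `2021-11-01-preview`) are assumptions; confirm them against the REST specification linked above before relying on them.

```azurecli
# List video resources in a Video Analyzer account (route and api-version are assumptions)
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Media/videoAnalyzers/<account-name>/videos?api-version=2021-11-01-preview"
```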
+
+## Determining that a video recording is ready for viewing
+
+If your video resource represents the recording from an RTSP camera, you can [stream that content](terminology.md#streaming) either after the recording is complete, or while the recording is ongoing. This is indicated via the `canStream` flag that will be set to `true` for the video resource. Note that such videos will have `type` set to `archive`, and the URL for playback or streaming is returned in `archiveBaseUrl`.
+
+When you export a portion of a video recording to an MP4 file, the resulting video resource will have `type` set to `file` - and it will be available for playback or download once the video exporting job completes. The URL for playback or download of such files is returned in `downloadUrl`.
+ > [!NOTE]
+ > The above URLs require a [bearer token](./access-policies.md#creating-a-token). See the [Video Analyzer player widget](player-widget.md) documentation for more details.
+
+## Recording and playback latencies
+
+When using Video Analyzer edge module to record to a video resource, you will specify a [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.0.0/AzureVideoAnalyzer.json) in your pipeline topology which tells the module to aggregate a minimum duration of video (in seconds) before it is written to the cloud. For example, if `segmentLength` is set to 300, then the module will accumulate 5 minutes' worth of video before uploading one 5-minute "chunk", then go into accumulation mode for the next 5 minutes, and upload again. Increasing the `segmentLength` has the benefit of lowering your Azure Storage transaction costs, as the number of reads and writes will be no more frequent than once every `segmentLength` seconds. If you are using Video Analyzer service, the pipeline topology has the same [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/PipelineTopologies.json).
+
+Consequently, streaming of the video from your Video Analyzer account will be delayed by at least that much time.
+
+Another factor that determines end-to-end latency (the delay between the time an event occurs in front of the camera, to the time it is viewed on a playback device) is the group-of-pictures [GOP](https://en.wikipedia.org/wiki/Group_of_pictures) duration. As [reducing the delay of live streams by using 3 simple techniques](https://medium.com/vrt-digital-studio/reducing-the-delay-of-live-streams-by-using-3-simple-techniques-e8e028b0a641) explains, the longer the GOP duration, the longer the latency. It is common to have IP cameras used in surveillance and security scenarios configured to use GOPs longer than 30 seconds. This has a large impact on the overall latency.
+
+## Low latency streaming
+
+When using the Video Analyzer service to capture and record videos from RTSP cameras, you can view the video with latencies of around 2 seconds using [low latency streaming](terminology.md#low-latency-streaming). The service makes available a websocket tunnel through which an RTSP-capable player such as the [Video Analyzer player widget](player-widget.md) can receive video using [RTSP protocol](https://datatracker.ietf.org/doc/html/rfc7826.html). Note that overall latency depends on the network bandwidth between the camera and the cloud, the bandwidth between the cloud and the playback device, and the processing power of the playback device. The URL for low latency streaming is returned in `rtspTunnelUrl`.
+
+ > [!NOTE]
+ > The above URL requires a [bearer token](./access-policies.md#creating-a-token). See the [Video Analyzer player widget](player-widget.md) documentation for more details.
+
+## Video Analyzer player widget
+Video Analyzer provides you with the necessary capabilities to deliver streams via HLS or MPEG-DASH or RTSP protocols to playback devices (clients). You would use the [Video Analyzer player widget](player-widget.md) to obtain the relevant URLs and the content authorization token, and use these in client apps to play back the video and inference metadata.
+
+You can install the Video Analyzer player widget to view videos. The widget can be installed using `npm` or `yarn` and this will allow you to include it in your own client-side application. Run one of the following commands to include the widget in your own application:
+
+NPM:
+```
+npm install --save @azure/video-analyzer-widgets
+```
+YARN:
+```
+yarn add @azure/video-analyzer-widgets
+```
+Alternatively, you can embed an existing pre-built script by adding type="module" to the script element referencing the pre-built location, as shown in the following example:
+
+```
+<script async type="module" src="https://unpkg.com/@azure/video-analyzer-widgets"></script>
+```
+
+## Viewing video with inference results
+When recording video using the Video Analyzer edge module, if your [pipeline](pipeline.md) is using AI to generate inference results, you can record these results along with the video. When viewing the video, the Video Analyzer player widget can overlay the results on the video. See [this tutorial](edge/record-stream-inference-data-with-video.md) for more details.
+
+## Next steps
+
+* [Understand access policies](access-policies.md)
+* [Use the Video Analyzer player widget](player-widget.md)
+* [Continuous video recording on the edge](edge/use-continuous-video-recording.md)
+* [Continuous video recording in the cloud](cloud/get-started-livepipelines-portal.md)
azure-video-analyzer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/audio-effects-detection.md
Last updated 01/04/2022
-# Audio effects detection
+# Audio effects detection (preview)
**Audio effects detection** is one of the Azure Video Analyzer for Media AI capabilities. It can detect various acoustic events and classify them into different acoustic categories (such as dog barking, crowd reactions, laughter, and more).
azure-video-analyzer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md
The following list shows the insights you can retrieve from your videos using Vi
* **Audio effects** (preview): Detects the following audio effects in the non-speech segments of the content: Gunshot, Glass shatter, Alarm, Siren, Explosion, Dog Bark, Screaming, Laughter, Crowd reactions (cheering, clapping, and booing) and Silence. Note: the full set of events is available only when choosing 'Advanced Audio Analysis' in upload preset, otherwise only 'Silence' and 'Crowd reaction' will be available. * **Emotion detection**: Identifies emotions based on speech (what's being said) and voice tonality (how it's being said). The emotion could be joy, sadness, anger, or fear. * **Translation**: Creates translations of the audio transcript to 54 different languages.
-* **Audio effects detection**: Detects the following audio effects in the non-speech segments of the content: alarm or siren, dog barking, crowd reactions (cheering, clapping, and booing), gunshot or explosion, laughter, breaking glass, and silence.
+* **Audio effects detection** (preview): Detects the following audio effects in the non-speech segments of the content: alarm or siren, dog barking, crowd reactions (cheering, clapping, and booing), gunshot or explosion, laughter, breaking glass, and silence.
The detected acoustic events are in the closed captions file. The file can be downloaded from the Video Analyzer for Media portal. For more information, see [Audio effects detection](audio-effects-detection.md).
azure-vmware Concepts Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-security-recommendations.md
+
+ Title: Concepts - Security recommendations for Azure VMware Solution
+Description: Learn about tips and best practices to help protect Azure VMware Solution deployments from vulnerabilities and malicious actors.
+ Last updated : 01/10/2022+++
+# Security recommendations for Azure VMware Solution
+
+It's important that proper measures are taken to secure your Azure VMware Solution deployments. Use this information as a high-level guide to achieve your security goals.
+
+## General
+
+Use the following guidelines and links for general security recommendations for both Azure VMware Solution and VMware best practices.
+
+| **Recommendation** | **Comments** |
+| :-- | :-- |
+| Review and follow VMware Security Best Practices | It's important to stay updated on Azure security practices and [VMware Security Best Practices](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-412EF981-D4F1-430B-9D09-A4679C2D04E7.html). |
+| Keep up to date on VMware Security Advisories | Subscribe to VMware notifications in my.vmware.com and regularly review and remediate any [VMware Security Advisories](https://www.vmware.com/security/advisories.html). |
+| Enable Microsoft Defender for Cloud | [Microsoft Defender for Cloud](https://docs.microsoft.com/azure/defender-for-cloud/) provides unified security management and advanced threat protection across hybrid cloud workloads. |
+| Follow the Microsoft Security Response Center blog | [Microsoft Security Response Center](https://msrc-blog.microsoft.com/) |
+| Review and implement recommendations within the Azure Security Baseline for Azure VMware Solution | [Azure security baseline for VMware Solution](https://docs.microsoft.com/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
++
+## Network
+
+The following are network-related security recommendations for Azure VMware Solution.
+
+| **Recommendation** | **Comments** |
+| :-- | :-- |
+| Only allow trusted networks | Only allow access to your environments over ExpressRoute or other secured networks. Avoid exposing your management services like vCenter, for example, on the internet. |
+| Use Azure Firewall Premium | If you must expose management services on the internet, use [Azure Firewall Premium](https://docs.microsoft.com/azure/firewall/premium-migrate/) with both IDPS Alert and Deny mode along with TLS inspection for proactive threat detection. |
+| Deploy and configure Network Security Groups on VNET | Ensure any VNET deployed has [Network Security Groups](https://docs.microsoft.com/azure/virtual-network/network-security-groups-overview/) configured to control ingress and egress to your environment. |
+| Review and implement recommendations within the Azure security baseline for Azure VMware Solution | [Azure security baseline for Azure VMware Solution](https://docs.microsoft.com/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
+
+## HCX
+
+See the following information for recommendations to secure your HCX deployment.
+
+| **Recommendation** | **Comments** |
+| :-- | :-- |
+| Stay current with HCX service updates | HCX service updates can include new features, software fixes, and security patches. Apply service updates during a maintenance window where no new HCX operations are queued up by following these [steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
+
azure-web-pubsub Howto Develop Eventhandler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/howto-develop-eventhandler.md
The data sending from the service to the server is always in CloudEvents `binary
## Upstream and Validation
-When configuring the webhook endpoint, the URL can use `{event}` parameter to define a URL template. The service calculates the value of the webhook URL dynamically when the client request comes in. For example, when a request `/client/hubs/chat` comes in, with a configured event handler URL pattern `http://host.com/api/{event}` for hub `chat`, when the client connects, it will first POST to this URL: `http://host.com/api/connect`. This can be useful when a PubSub WebSocket client sends custom events, that the event handler helps dispatch different events to different upstream. Note that the `{event}` parameter is not allowed in the URL domain name.
+When configuring the webhook endpoint, the URL can use the `{event}` parameter to define a URL template. The service calculates the value of the webhook URL dynamically when the client request comes in. For example, when a request `/client/hubs/chat` comes in, with a configured event handler URL pattern `http://host.com/api/{event}` for hub `chat`, when the client connects, it will first POST to this URL: `http://host.com/api/connect`. The parameter can be useful when a PubSub WebSocket client sends custom events, so that the event handler can dispatch different events to different upstream endpoints. Note that the `{event}` parameter is not allowed in the URL domain name.
-When setting up the event handler upstream through Azure portal or CLI, the service follows the [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) to validate the upstream webhook. Every registered upstream webhook URL will be validated by this mechanism. The `WebHook-Request-Origin` request header is set to the service domain name `xxx.webpubsub.azure.com`, and it expects the response having header `WebHook-Allowed-Origin` to contain this domain name or `*`.
+When setting up the event handler upstream through Azure portal or CLI, the service follows the [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) to validate the upstream webhook. Every registered upstream webhook URL will be validated by this mechanism. The `WebHook-Request-Origin` request header is set to the service domain name `xxx.webpubsub.azure.com`, and the response is expected to have a `WebHook-Allowed-Origin` header that contains this domain name or `*`.
-When doing the validation, the `{event}` parameter is resolved to `validate`. For example, when trying to set the URL to `http://host.com/api/{event}`, the service tries to **OPTIONS** a request to `http://host.com/api/validate` and only when the response is valid the configure can be set successfully.
+When doing the validation, the `{event}` parameter is resolved to `validate`. For example, when trying to set the URL to `http://host.com/api/{event}`, the service will send an **OPTIONS** request to `http://host.com/api/validate`. Only when the response is valid can the configuration be set successfully.
For now, we do not support [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback).
For now, we do not support [WebHook-Request-Rate](https://github.com/cloudevents
- Anonymous mode - Simple Auth with `?code=<code>` is provided through the configured Webhook URL as query parameter.-- Use AAD Auth, check [here](howto-use-managed-identity.md) for details.
+- Use Azure Active Directory (Azure AD) authentication. Check [here](howto-use-managed-identity.md) for details.
- Step1: Enable Identity for the Web PubSub service - Step2: Select from existing AAD application that stands for your webhook web app
For now, we do not support [WebHook-Request-Rate](https://github.com/cloudevents
### Configure through Azure portal
-Find your Azure Web PubSub service from **Azure portal**. Navigate to **Settings** and enter your hub-name. Then click **Add** to configure your server side webhook URL. Don't forget to click **Save** when finish.
+Find your Azure Web PubSub service in the **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your server-side webhook URL. For an existing hub configuration, selecting **...** on the right side navigates to the same editing page.
:::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler.":::
+Then, on the editing page, configure the hub name and the server webhook URL, and select the `user` and `system` events you'd like to subscribe to. Finally, select **Save** when everything is done.
++ ### Configure through Azure CLI Use the Azure CLI [**az webpubsub hub**](/cli/azure/webpubsub/hub) group commands to configure the event handler settings.
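A minimal sketch of such a command is shown below. The service name, resource group, and upstream host are placeholders, and the `az webpubsub` commands require the `webpubsub` CLI extension.

```azurecli
# Add the Web PubSub CLI extension if it isn't installed yet
az extension add --name webpubsub

# Configure an event handler for the "chat" hub using an {event} URL template (placeholder values)
az webpubsub hub create \
  --resource-group myResourceGroup \
  --name myWebPubSubService \
  --hub-name chat \
  --event-handler url-template="https://host.com/api/{event}" user-event-pattern="*" system-event="connected"
```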
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-serverless.md
In this tutorial, you learn how to:
- Update `index.cs` and replace `Run` function with following codes. ```c# [FunctionName("index")]
- public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req)
+ public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ILogger log)
{ string indexFile = "https://docsupdatetracker.net/index.html"; if (Environment.GetEnvironmentVariable("HOME") != null)
In this tutorial, you learn how to:
```c# [FunctionName("negotiate")] public static WebPubSubConnection Run(
- [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection, ILogger log) {
In this tutorial, you learn how to:
return connection; } ```
+ - Add the following `using` statement at the top of the file to resolve required dependencies.
+ ```c#
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ ```
5. Create a `message` function to broadcast client messages through the service. ```bash
In this tutorial, you learn how to:
```c# [FunctionName("message")] public static async Task<UserEventResponse> Run(
- [WebPubSubTrigger(WebPubSubEventType.User, "message")] UserEventRequest request,
+ [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request,
BinaryData data, WebPubSubDataType dataType, [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions) { await actions.AddAsync(WebPubSubAction.CreateSendToAllAction(
- BinaryData.FromString($"[{request.ConnectionContext.UserId}] {message.ToString()}"),
- dataType
- );
+ BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"),
+ dataType));
return new UserEventResponse { Data = BinaryData.FromString("[SYSTEM] ack"),
In this tutorial, you learn how to:
}; } ```
+ - Add the following `using` statements at the top of the file to resolve required dependencies.
+ ```c#
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ using Microsoft.Azure.WebPubSub.Common;
+ ```
6. Add the client single page `https://docsupdatetracker.net/index.html` in the project root folder and copy content as below. ```html
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
In the above code, we simply print a message to console when a client is connect
### Expose localhost
-Then we need to set the Webhook URL in the service so it can know where to call when there is a new event. But there is a problem that our server is running on localhost so does not have an internet accessible endpoint. Here we use [ngrok](https://ngrok.com/) to expose our localhost to internet.
+Then we need to set the Webhook URL in the service so it knows where to call when there is a new event. But there is a problem: our server is running on localhost, so it does not have an internet-accessible endpoint. There are several tools available to expose localhost to the internet, for example, [ngrok](https://ngrok.com), [loophole](https://loophole.cloud/docs/), or [TunnelRelay](https://github.com/OfficeDev/microsoft-teams-tunnelrelay). Here we use [ngrok](https://ngrok.com/).
1. First download ngrok from https://ngrok.com/download, extract the executable to your local folder or your system bin folder. 2. Start ngrok
Then we need to set the Webhook URL in the service so it can know where to call
ngrok http 8080 ```
-ngrok will print a URL (`https://<domain-name>.ngrok.io`) that can be accessed from internet.
+ngrok will print a URL (`https://<domain-name>.ngrok.io`) that can be accessed from the internet. In the step above, we listen on the `/eventhandler` path, so next we'd like the service to send events to `https://<domain-name>.ngrok.io/eventhandler`.
### Set event handler
-Then we update the service event handler and set the Webhook URL.
+Then we update the service event handler and set the Webhook URL to `https://<domain-name>.ngrok.io/eventhandler`. Event handlers can be set from either the portal or the CLI as [described in this article](howto-develop-eventhandler.md#configure-event-handler); here we set it through the CLI.
Use the Azure CLI [az webpubsub hub create](/cli/azure/webpubsub/hub#az_webpubsub_hub_update) command to create the event handler settings for the chat hub
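As a sketch, the command would look roughly like the following; the service name and resource group are placeholders, and the `webpubsub` CLI extension must be installed (`az extension add --name webpubsub`).

```azurecli
az webpubsub hub create \
  --resource-group myResourceGroup \
  --name myWebPubSubService \
  --hub-name chat \
  --event-handler url-template="https://<domain-name>.ngrok.io/eventhandler" user-event-pattern="*" system-event="connected" system-event="disconnected"
```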
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-serverless-notification.md
In this tutorial, you learn how to:
``` b. Run command to install specific function extension package. ```bash
- func extensions install --package Microsoft.Azure.WebJobs.Extensions.WebPubSub --version 1.0.0
+ func extensions install --package Microsoft.Azure.WebJobs.Extensions.WebPubSub --version 1.1.0
``` 3. Create an `index` function to read and host a static web page for clients.
In this tutorial, you learn how to:
return connection; } ```
+ - Add the following `using` statement at the top of the file to resolve required dependencies.
+ ```c#
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ ```
5. Create a `notification` function to generate notifications with `TimerTrigger`. ```bash
In this tutorial, you learn how to:
return value.ToString("0.000"); } ```
+ - Add the following `using` statements at the top of the file to resolve required dependencies.
+ ```c#
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ using Microsoft.Azure.WebPubSub.Common;
+ ```
6. Add the client single page `https://docsupdatetracker.net/index.html` in the project root folder and copy content as below. ```html
In this tutorial, you learn how to:
:::image type="content" source="media/quickstart-serverless/copy-connection-string.png" alt-text="Screenshot of copying the Web PubSub connection string.":::
- Run command below in the function folder to set the service connection string. Replace `<connection-string`> with your value as needed.
+ Run the command below in the function folder to set the service connection string. Replace `<connection-string>` with your value as needed.
```bash func settings add WebPubSubConnectionString "<connection-string>" ``` > [!NOTE]
- > `TimerTrigger` used in the sample has dependency on Azure Storage, but you can use local storage emulator when the Function is running locally. If you got some error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.` You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md).
+ > `TimerTrigger` used in the sample has dependency on Azure Storage, but you can use local storage emulator when the Function is running locally. If you got some error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you'll need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md).
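As a hedged alternative to installing the full Storage Emulator, you can point the Functions host at a locally running emulator such as Azurite by using the standard development connection string. This is a sketch and assumes the emulator is already installed and started:

```bash
# Point AzureWebJobsStorage at the local storage emulator (for local runs only).
func settings add AzureWebJobsStorage "UseDevelopmentStorage=true"
```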
Now you're able to run your local function with the command below.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
As one of the [restore options](#restore-options), Cross Region Restore (CRR) al
To begin using the feature, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
-To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#configure-cross-region-restore).
+To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore).
### View backup items in secondary region
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-monitoring-built-in-monitor.md
Title: Monitor Azure Backup protected workloads description: In this article, learn about the monitoring and notification capabilities for Azure Backup workloads using the Azure portal. Previously updated : 11/02/2021 Last updated : 01/07/2022 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81
The following scenarios are defined by the service as alertable scenarios.
- Backup succeeded with warnings for Microsoft Azure Recovery Services (MARS) agent
- Stop protection with retain data/Stop protection with delete data
- Soft-delete functionality disabled for vault
-- Unsupported backup type for database workloads
+- [Unsupported backup type for database workloads](/azure/backup/backup-sql-server-azure-troubleshoot#backup-type-unsupported)
### Alerts from the following Azure Backup solutions are shown here
backup Backup Create Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-create-rs-vault.md
Title: Create and configure Recovery Services vaults
-description: In this article, learn how to create and configure Recovery Services vaults that store the backups and recovery points. Learn how to use Cross Region Restore to restore in a secondary region.
+description: Learn how to create and configure Recovery Services vaults, and how to restore in a secondary region by using Cross Region Restore.
Last updated 08/06/2021
# Create and configure a Recovery Services vault
+In this article, you'll create and configure an Azure Backup Recovery Services vault that stores backups and recovery points. You'll also use Cross Region Restore to restore in a secondary region.
+
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]

## Set storage redundancy
Azure Backup automatically handles storage for the vault. You need to specify how that storage is replicated.

> [!NOTE]
-> Changing **Storage Replication type** (Locally redundant/ Geo-redundant) for a Recovery Services vault has to be done before configuring backups in the vault. Once you configure backup, the option to modify is disabled.
+> Be sure to change the storage replication type for a Recovery Services vault before you configure a backup in the vault. After you configure a backup, the option to modify is disabled.
>
->- If you haven't yet configured the backup, complete the following steps to review and modify the settings.
->- If you've already configured the backup and must move from GRS to LRS, then [review these workarounds](#how-to-change-from-grs-to-lrs-after-configuring-backup).
+> If you haven't yet configured the backup, complete the following steps to review and modify the settings. If you've already configured the backup and must change the storage replication type, [review these workarounds](#modify-default-settings).
-1. From the **Recovery Services vaults** pane, select the new vault. Under the **Settings** section, select **Properties**.
+1. From the **Recovery Services vaults** pane, select the new vault. In the **Settings** section, select **Properties**.
1. In **Properties**, under **Backup Configuration**, select **Update**.
-1. Select the storage replication type, and select **Save**.
+1. For **Storage replication type**, select **Geo-redundant**, **Locally-redundant**, or **Zone-redundant**. Then select **Save**.
- ![Set the storage configuration for new vault](./media/backup-create-rs-vault/recovery-services-vault-backup-configuration.png)
+ ![Set the storage configuration for new vault](./media/backup-create-rs-vault/recovery-services-vault-backup-configuration.png)
- - We recommend that if you're using Azure as a primary backup storage endpoint, continue to use the default **Geo-redundant** setting.
- - If you don't use Azure as a primary backup storage endpoint, then choose **Locally redundant**, which reduces the Azure storage costs.
- - Learn more about [geo](../storage/common/storage-redundancy.md#geo-redundant-storage) and [local](../storage/common/storage-redundancy.md#locally-redundant-storage) redundancy.
- - If you need data availability without downtime in a region, guaranteeing data residency, then choose [zone-redundant storage](../storage/common/storage-redundancy.md#zone-redundant-storage).
+ Here are our recommendations for choosing a storage replication type:
+
+ - If you're using Azure as a primary backup storage endpoint, continue to use the default [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage).
+
+ - If you don't use Azure as a primary backup storage endpoint, choose [locally redundant storage (LRS)](../storage/common/storage-redundancy.md#locally-redundant-storage) to reduce storage costs.
+
+ - If you need data availability without downtime in a region, guaranteeing data residency, choose [zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage).
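If you prefer scripting over the portal, the storage replication type can also be set with the Azure CLI before any backup is configured. This is a minimal sketch with placeholder resource names; the allowed values are `GeoRedundant`, `LocallyRedundant`, and `ZoneRedundant`:

```azurecli
az backup vault backup-properties set \
  --resource-group MyResourceGroup \
  --name MyRecoveryServicesVault \
  --backup-storage-redundancy GeoRedundant
```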
>[!NOTE]
->The Storage Replication settings for the vault aren't relevant for Azure file share backup as the current solution is snapshot based and there's no data transferred to the vault. Snapshots are stored in the same storage account as the backed up file share.
+>The storage replication settings for the vault aren't relevant for Azure file share backup, because the current solution is snapshot based and no data is transferred to the vault. Snapshots are stored in the same storage account as the backed-up file share.
## Set Cross Region Restore
-The restore option Cross Region Restore (CRR) allows you to restore data in a secondary, [Azure paired region](../availability-zones/cross-region-replication-azure.md). You can use it to conduct drills when there is an audit or compliance requirement (or) restore the data if there's a disaster in the primary region.
+The Cross Region Restore option allows you to restore data in a secondary, [Azure paired region](../availability-zones/cross-region-replication-azure.md). You can use Cross Region Restore to conduct drills when there's an audit or compliance requirement. You can also use it to restore the data if there's a disaster in the primary region.
-Before you begin:
-- CRR is supported:
- - Only for Recovery Services Vault with [GRS replication type](#set-storage-redundancy).
  - Azure VMs (you can restore the VM or its disk) that are ARM based Azure VMs and encrypted Azure VMs. Classic VMs won't be supported.
- - SQL/SAP HANA databases hosted on Azure VMs (you can restore databases or their files)
- - Review the [support matrix](backup-support-matrix.md#cross-region-restore) for a list of supported managed types and regions
-- Using CRR will incur additional charges, [learn more](https://azure.microsoft.com/pricing/details/backup/)
-- After opting-in, it might **take up to 48 hours for the backup items to be available in secondary regions**.
-- CRR currently can't be reverted back to GRS or LRS once the protection is initiated for the first time.
-- Currently, secondary region RPO is up to 12 hours from the primary region, even though [read-access geo-redundant storage (RA-GRS)](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) replication is 15 minutes.
+Before you begin, consider the following information:
-### Configure Cross Region Restore
+- Cross Region Restore is supported only for a Recovery Services vault that uses the [GRS replication type](#set-storage-redundancy).
+- Virtual machines (VMs) created through Azure Resource Manager and encrypted Azure VMs are supported. VMs created through the classic deployment model aren't supported. You can restore the VM or its disk.
+- SQL Server or SAP HANA databases hosted on Azure VMs are supported. You can restore databases or their files.
+- Review the [support matrix](backup-support-matrix.md#cross-region-restore) for a list of supported managed types and regions.
+- Using Cross Region Restore will incur additional charges. [Learn more](https://azure.microsoft.com/pricing/details/backup/).
+- After you opt in, it might take up to 48 hours for the backup items to be available in secondary regions.
+- Cross Region Restore currently can't be reverted to GRS or LRS after the protection starts for the first time.
+- Currently, the recovery point objective for a secondary region is up to 12 hours from the primary region, even though [read-access geo-redundant storage (RA-GRS)](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) replication is 15 minutes.
-A vault created with GRS redundancy includes the option to configure the Cross Region Restore feature. Every GRS vault will have a banner, which will link to the documentation. To configure CRR for the vault, go to the Backup Configuration pane, which contains the option to enable this feature.
+A vault created with GRS redundancy includes the option to configure the Cross Region Restore feature. Every GRS vault has a banner that links to the documentation.
- ![Backup Configuration banner](./media/backup-azure-arm-restore-vms/banner.png)
+![Screenshot that shows the banner about backup configuration.](./media/backup-azure-arm-restore-vms/banner.png)
->[!Note]
->If you've access to restricted paired regions and still unable to view Cross Region Restore settings in **Backup Configuration** blade, then re-register the recovery services resource provider. <br><br> To re-register the provider, go to your subscription in the Azure portal, navigate to **Resource provider** on the left navigation bar, then select **Microsoft.RecoveryServices** and select **Re-register**.
+To configure Cross Region Restore for the vault:
-1. From the portal, go to your Recovery Services vault > **Properties** (under **Settings**).
+1. From the portal, go to your Recovery Services vault, and then select **Properties** (under **Settings**).
1. Under **Backup Configuration**, select **Update**.
-1. Select **Enable Cross Region Restore in this vault** to enable the functionality.
+1. Under **Cross Region Restore**, select **Enable**.
+
+ ![Screenshot that shows the Backup Configuration pane and the toggle for enabling Cross Region Restore.](./media/backup-azure-arm-restore-vms/backup-configuration.png)
- ![Enable Cross Region restore](./media/backup-azure-arm-restore-vms/backup-configuration.png)
+> [!NOTE]
+> If you have access to restricted paired regions and still can't view Cross Region Restore settings on the **Backup Configuration** pane, re-register the Recovery Services resource provider. To re-register the provider, go to your subscription in the Azure portal, go to **Resource provider** on the left pane, and then select **Microsoft.RecoveryServices** > **Re-register**.
-See these articles for more information about backup and restore with CRR:
+For more information about backup and restore with Cross Region Restore, see these articles:
- [Cross Region Restore for Azure VMs](backup-azure-arm-restore-vms.md#cross-region-restore)
-- [Cross Region Restore for SQL databases](restore-sql-database-azure-vm.md#cross-region-restore)
+- [Cross Region Restore for SQL Server databases](restore-sql-database-azure-vm.md#cross-region-restore)
- [Cross Region Restore for SAP HANA databases](sap-hana-db-restore.md#cross-region-restore)

## Set encryption settings
-By default, the data in the Recovery Services vault is encrypted using platform-managed keys. No explicit actions are required from your end to enable this encryption, and it applies to all workloads being backed up to your Recovery Services vault. You may choose to bring your own key to encrypt the backup data in this vault. This is referred to as customer-managed keys. If you wish to encrypt backup data using your own key, the encryption key must be specified before any item is protected to this vault. Once you enable encryption with your key, it can't be reversed.
-
-### Configuring a vault to encrypt using customer-managed keys
-
-To configure your vault to encrypt with customer-managed keys, these steps must be followed in this order:
-
-1. Enable managed identity for your Recovery Services vault
-
-1. Assign permissions to the vault to access the encryption key in the Azure Key Vault
-
-1. Enable soft-delete and purge protection on the Azure Key Vault
-
-1. Assign the encryption key to the Recovery Services vault
+By default, the data in the Recovery Services vault is encrypted through platform-managed keys. You don't need to take any explicit actions to enable this encryption. It applies to all workloads that are backed up to your Recovery Services vault.
-Instructions for each of these steps can be found [in this article](encryption-at-rest-with-cmk.md#configure-a-vault-to-encrypt-using-customer-managed-keys).
+You can choose to bring your own key (a *customer-managed key*) to encrypt the backup data in this vault. If you want to encrypt backup data by using your own key, you must specify the encryption key before any item is added to this vault. After you enable encryption with your key, it can't be reversed.
-## Modifying default settings
+To configure your vault to encrypt with customer-managed keys:
-We highly recommend you review the default settings for **Storage Replication type** and **Security settings** before configuring backups in the vault.
+1. Enable managed identity for your Recovery Services vault.
+1. Assign permissions to the vault to access the encryption key in Azure Key Vault.
+1. Enable soft delete and purge protection in Azure Key Vault.
+1. Assign the encryption key to the Recovery Services vault.
-- **Storage Replication type** by default is set to **Geo-redundant** (GRS). Once you configure the backup, the option to modify is disabled.
- - If you haven't yet configured the backup, then [follow these steps](#set-storage-redundancy) to review and modify the settings.
- - If you've already configured the backup and must move from GRS to LRS, then [review these workarounds](#how-to-change-from-grs-to-lrs-after-configuring-backup).
+You can find instructions for each of these steps in [this article](encryption-at-rest-with-cmk.md#configure-a-vault-to-encrypt-using-customer-managed-keys).
-- **Soft delete** by default is **Enabled** on newly created vaults to protect backup data from accidental or malicious deletes. [Follow these steps](./backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete) to review and modify the settings.
+## Modify default settings
-### How to change from GRS to LRS after configuring backup
+We highly recommend that you review the default settings for storage replication type and security before you configure backups in the vault.
-Before deciding to move from GRS to locally redundant storage (LRS), review the trade-offs between lower cost and higher data durability that fit your scenario. If you must move from GRS to LRS, then you have two choices. They depend on your business requirements to retain the backup data:
+By default, **Soft delete** is set to **Enabled** on newly created vaults to help protect backup data from accidental or malicious deletions. To review and modify the settings, [follow these steps](./backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete).
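If you manage vaults from the command line, the same soft-delete setting can be changed with the Azure CLI. This is a sketch with placeholder names; the supported states are `Enable` and `Disable`, and you should disable soft delete only when you understand the security trade-off:

```azurecli
az backup vault backup-properties set \
  --resource-group MyResourceGroup \
  --name MyRecoveryServicesVault \
  --soft-delete-feature-state Disable
```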
- [Don't need to preserve previous backed-up data](#dont-need-to-preserve-previous-backed-up-data)
-- [Must preserve previous backed-up data](#must-preserve-previous-backed-up-data)
+Before you decide to move from GRS to LRS, review the trade-offs between lower cost and higher data durability that fit your scenario. If you must move from GRS to LRS after you configure backup, you have the following two choices. Your choice will depend on your business requirements to retain the backup data.
-#### Don't need to preserve previous backed-up data
+### Don't need to preserve previous backed-up data
-To protect workloads in a new LRS vault, the current protection and data will need to be deleted in the GRS vault and backups configured again.
+To help protect workloads in a new LRS vault, you need to delete the current protection and data in the GRS vault and reconfigure backups.
->[!WARNING]
->The following operation is destructive and can't be undone. All backup data and backup items associated with the protected server will be permanently deleted. Proceed with caution.
+> [!WARNING]
+> The following operation is destructive and can't be undone. All backup data and backup items associated with the protected server will be permanently deleted. Proceed with caution.
-Stop and delete current protection on the GRS vault:
+To stop and delete current protection on the GRS vault:
-1. Disable soft delete in the GRS vault properties. Follow [these steps](backup-azure-security-feature-cloud.md#disabling-soft-delete-using-azure-portal) to disable soft delete.
+1. Follow [these steps](backup-azure-security-feature-cloud.md#disabling-soft-delete-using-azure-portal) to disable soft delete in the GRS vault's properties.
-1. Stop protection and delete backups from the existing GRS vault. In the Vault dashboard menu, select **Backup Items**. Items listed here that need to be moved to the LRS vault must be removed along with their backup data. See how to [delete protected items in the cloud](backup-azure-delete-vault.md#delete-protected-items-in-the-cloud) and [delete protected items on premises](backup-azure-delete-vault.md#delete-protected-items-on-premises).
+1. Stop protection and delete backups from the existing GRS vault. On the vault dashboard menu, select **Backup Items**. If you need to move items that are listed here to the LRS vault, you must remove them and their backup data. See [Delete protected items in the cloud](backup-azure-delete-vault.md#delete-protected-items-in-the-cloud) and [Delete protected items on-premises](backup-azure-delete-vault.md#delete-protected-items-on-premises).
-1. If you're planning to move AFS (Azure file shares), SQL servers or SAP HANA servers, then you'll need also to unregister them. In the vault dashboard menu, select **Backup Infrastructure**. See how to [unregister the SQL server](manage-monitor-sql-database-backup.md#unregister-a-sql-server-instance), [unregister a storage account associated with Azure file shares](manage-afs-backup.md#unregister-a-storage-account), and [unregister an SAP HANA instance](sap-hana-db-manage.md#unregister-an-sap-hana-instance).
+1. If you're planning to move Azure file shares, SQL Server instances, or SAP HANA servers, you'll also need to unregister them. On the vault dashboard menu, select **Backup Infrastructure**. For steps beyond that, see [Unregister a storage account associated with Azure file shares](manage-afs-backup.md#unregister-a-storage-account), [Unregister a SQL Server instance](manage-monitor-sql-database-backup.md#unregister-a-sql-server-instance), or [Unregister an SAP HANA instance](sap-hana-db-manage.md#unregister-an-sap-hana-instance).
-1. Once they're removed from the GRS vault, continue to configure the backups for your workload in the new LRS vault.
+1. After you remove Azure file shares, SQL Server instances, or SAP HANA servers from the GRS vault, continue to configure the backups for your workload in the new LRS vault.
-#### Must preserve previous backed-up data
+### Must preserve previous backed-up data
If you need to keep the current protected data in the GRS vault and continue the protection in a new LRS vault, there are limited options for some of the workloads:

-- For MARS, you can [stop protection with retain data](backup-azure-manage-mars.md#stop-protecting-files-and-folder-backup) and register the agent in the new LRS vault.
+- For Microsoft Azure Recovery Services (MARS), you can [stop protection with retained data](backup-azure-manage-mars.md#stop-protecting-files-and-folder-backup) and register the agent in the new LRS vault. Be aware that:
- - Azure Backup service will continue to retain all the existing recovery points of the GRS vault.
+ - The Azure Backup service will continue to retain all the existing recovery points of the GRS vault.
  - You'll need to pay to keep the recovery points in the GRS vault.
  - You'll be able to restore the backed-up data only for unexpired recovery points in the GRS vault.
- - A new initial replica of the data will need to be created on the LRS vault.
+ - You'll need to create an initial replica of the data on the LRS vault.
-- For an Azure VM, you can [stop protection with retain data](backup-azure-manage-vms.md#stop-protecting-a-vm) for the VM in the GRS vault, move the VM to another resource group, and then protect the VM in the LRS vault. See [guidance and limitations](../azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md) for moving a VM to another resource group.
+- For an Azure VM, you can [stop protection with retained data](backup-azure-manage-vms.md#stop-protecting-a-vm) for the VM in the GRS vault, move the VM to another resource group, and then help protect the VM in the LRS vault. For information about moving a VM to another resource group, see the [guidance and limitations](../azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md).
- A VM can be protected in only one vault at a time. However, the VM in the new resource group can be protected on the LRS vault as it's considered a different VM.
+ You can add a VM to only one vault at a time. However, the VM in the new resource group can be added to the LRS vault because it's considered a different VM. Be aware that:
- - Azure Backup service will retain the recovery points that have been backed up on the GRS vault.
- - You'll need to pay to keep the recovery points in the GRS vault (see [Azure Backup pricing](azure-backup-pricing.md) for details).
+ - The Azure Backup service will retain the recovery points that have been backed up on the GRS vault.
+ - You'll need to pay to keep the recovery points in the GRS vault. See [Azure Backup pricing](azure-backup-pricing.md) for details.
  - You'll be able to restore the VM, if needed, from the GRS vault.
  - The first backup on the LRS vault of the VM in the new resource group will be an initial replica.

## Next steps
-[Learn about](backup-azure-recovery-services-vault-overview.md) Recovery Services vaults.
-[Learn about](backup-azure-delete-vault.md) Delete Recovery Services vaults.
+- [Learn more about Recovery Services vaults](backup-azure-recovery-services-vault-overview.md)
+- [Delete Recovery Services vaults](backup-azure-delete-vault.md)
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/encryption-at-rest-with-cmk.md
Example:
```AzurePowerShell
$vault=Get-AzRecoveryServicesVault -ResourceGroupName "testrg" -Name "testvault"
-Update-AzRecoveryServicesVault -IdentityType SystemAssigned -VaultId $vault.ID
+Update-AzRecoveryServicesVault -IdentityType SystemAssigned -ResourceGroupName TestRG -Name TestVault
$vault.Identity | fl
```
backup Restore Azure Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-azure-encrypted-virtual-machines.md
Reinstall the ADE extension so the data disks are open and mounted.
## Cross Region Restore for an encrypted Azure VM
-Azure Backup supports Cross Region Restore of encrypted Azure VMs to the [Azure paired regions](../availability-zones/cross-region-replication-azure.md). Learn how to [enable Cross Region Restore](backup-create-rs-vault.md#configure-cross-region-restore) for an encrypted VM.
+Azure Backup supports Cross Region Restore of encrypted Azure VMs to the [Azure paired regions](../availability-zones/cross-region-replication-azure.md). Learn how to [enable Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore) for an encrypted VM.
## Move an encrypted Azure VM
backup Restore Sql Database Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-sql-database-azure-vm.md
As one of the restore options, Cross Region Restore (CRR) allows you to restore
To onboard to the feature, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
-To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#configure-cross-region-restore)
+To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore).
### View backup items in secondary region
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-db-restore.md
As one of the restore options, Cross Region Restore (CRR) allows you to restore
To onboard to the feature, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
-To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#configure-cross-region-restore)
+To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore).
### View backup items in secondary region
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/selective-disk-backup-restore.md
az backup protection enable-for-vm --resource-group {ResourceGroup} --vault-nam
### Modify protection for already backed up VMs with Azure CLI

```azurecli
-az backup protection update-for-vm --resource-group {resourcegroup} --vault-name {vaultname} -c {vmname} -i {vmname} --backup-management-type AzureIaasVM --disk-list-setting exclude --diskslist {LUN number(s) separated by space}
+az backup protection update-for-vm --resource-group {resourcegroup} --vault-name {vaultname} -c {vmname} -i {vmname} --disk-list-setting exclude --diskslist {LUN number(s) separated by space}
```

### Backup only OS disk during configure backup with Azure CLI
backup Tutorial Sql Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-sql-backup.md
Title: Tutorial - Back up SQL Server databases to Azure description: In this tutorial, learn how to back up a SQL Server database running on an Azure VM to an Azure Backup Recovery Services vault. Previously updated : 06/18/2019 Last updated : 01/07/2022+++ # Back up a SQL Server database in an Azure VM
We do have aliasing for Azure table unsupported characters, but we recommend avo
Discover databases running on the VM.
-1. In the [Azure portal](https://portal.azure.com), open the Recovery Services vault you use to back up the database.
+1. In the [Azure portal](https://portal.azure.com), go to **Backup center** and click **+Backup**.
-2. On the **Recovery Services vault** dashboard, select **Backup**.
+1. Select **SQL in Azure VM** as the datasource type, select the Recovery Services vault you have created, and then click **Continue**.
- ![Select Backup to open the Backup Goal menu](./media/backup-azure-sql-database/open-backup-menu.png)
+ :::image type="content" source="./media/backup-azure-sql-database/configure-sql-backup.png" alt-text="Screenshot showing to select Backup to view the databases running in a VM.":::
-3. In **Backup Goal**, set **Where is your workload running** to **Azure** (the default).
-
-4. In **What do you want to backup**, select **SQL Server in Azure VM**.
-
- ![Select SQL Server in Azure VM for the backup](./media/backup-azure-sql-database/choose-sql-database-backup-goal.png)
-
-5. In **Backup Goal** > **Discover DBs in VMs**, select **Start Discovery** to search for unprotected VMs in the subscription. It can take a while, depending on the number of unprotected virtual machines in the subscription.
+1. In **Backup Goal** > **Discover DBs in VMs**, select **Start Discovery** to search for unprotected VMs in the subscription. It can take a while, depending on the number of unprotected virtual machines in the subscription.
   * Unprotected VMs should appear in the list after discovery, listed by name and resource group.
   * If a VM isn't listed as you expect, check whether it's already backed up in a vault.
Discover databases running on the VM.
![Backup is pending during search for DBs in VMs](./media/backup-azure-sql-database/discovering-sql-databases.png)
-6. In the VM list, select the VM running the SQL Server database > **Discover DBs**.
+1. In the VM list, select the VM running the SQL Server database > **Discover DBs**.
-7. Track database discovery in the **Notifications** area. It can take a while for the job to complete, depending on how many databases are on the VM. When the selected databases are discovered, a success message appears.
+1. Track database discovery in the **Notifications** area. It can take a while for the job to complete, depending on how many databases are on the VM. When the selected databases are discovered, a success message appears.
![Deployment success message](./media/backup-azure-sql-database/notifications-db-discovered.png)
-8. Azure Backup discovers all SQL Server databases on the VM. During discovery, the following occurs in the background:
+1. Azure Backup discovers all SQL Server databases on the VM. During discovery, the following occurs in the background:
   * Azure Backup registers the VM with the vault for workload backup. All databases on the registered VM can be backed up only to this vault.
   * Azure Backup installs the **AzureBackupWindowsWorkload** extension on the VM. No agent is installed on the SQL database.
Discover databases running on the VM.
Configure backup as follows:
-1. In **Backup Goal**, select **Configure Backup**.
+1. In **Backup Goal** > **Step 2: Configure Backup**, select **Configure Backup**.
![Select Configure Backup](./media/backup-azure-sql-database/backup-goal-configure-backup.png)
-2. Select **Configure Backup**, the **Select items to backup** pane appears. This lists all the registered availability groups and standalone SQL Servers. Expand the chevron to the left of the row to see all the unprotected databases in that instance or Always on AG.
+1. Select **Add Resources** to see all the registered availability groups and standalone SQL Server instances.
- ![Displaying all SQL Server instances with standalone databases](./media/backup-azure-sql-database/list-of-sql-databases.png)
+ ![Select add resources](./media/backup-azure-sql-database/add-resources.png)
-3. Select all the databases you want to protect > **OK**.
+1. In the **Select items to backup** screen, select the arrow to the left of a row to expand the list of all the unprotected databases in that instance or Always On availability group.
+
+ ![Select items to backup](./media/backup-azure-sql-database/select-items-to-backup.png)
+
+1. Choose all the databases you want to protect, and then select **OK**.
   ![Protecting the database](./media/backup-azure-sql-database/select-database-to-protect.png)

   To optimize backup loads, Azure Backup sets a maximum number of databases in one backup job to 50.
- * Alternatively, you can enable auto-protection on the entire instance or Always On Availability group by selecting the **ON** option in the corresponding dropdown in the **AUTOPROTECT** column. The auto-protection feature not only enables protection on all the existing databases in one go but also automatically protects any new databases that will be added to that instance or the availability group in future.
-
-4. Select **OK** to open the **Backup policy** pane.
+ * To protect more than 50 databases, configure multiple backups.
+ * To [enable](/azure/backup/backup-sql-server-database-azure-vms#enable-auto-protection) the entire instance or the Always On availability group, in the **AUTOPROTECT** drop-down list, select **ON**, and then select **OK**.
- ![Enable auto-protection on the Always On availability group](./media/backup-azure-sql-database/enable-auto-protection.png)
+ > [!NOTE]
+ > The [auto-protection](/azure/backup/backup-sql-server-database-azure-vms#enable-auto-protection) feature not only enables protection on all the existing databases at once, but also automatically protects any new databases added to that instance or the availability group.
-5. In **Choose backup policy**, select a policy, then select **OK**.
+1. Define the **Backup policy**. You can do one of the following:
- * Select the default policy: HourlyLogBackup.
+ * Select the default policy as *HourlyLogBackup*.
   * Choose an existing backup policy previously created for SQL.
   * Define a new policy based on your RPO and retention range.

   ![Select Backup policy](./media/backup-azure-sql-database/select-backup-policy.png)
-6. On **Backup** menu, select **Enable backup**.
+1. Select **Enable Backup** to submit the **Configure Protection** operation and track the configuration progress in the **Notifications** area of the portal.
- ![Enable the chosen backup policy](./media/backup-azure-sql-database/enable-backup-button.png)
-
-7. Track the configuration progress in the **Notifications** area of the portal.
-
- ![Notification area](./media/backup-azure-sql-database/notifications-area.png)
+ ![Track configuration progress](./media/backup-azure-sql-database/track-configuration-progress.png)
### Create a backup policy
A backup policy defines when backups are taken and how long they're retained.
* Multiple vaults can use the same backup policy, but you must apply the backup policy to each vault.
* When you create a backup policy, a daily full backup is the default.
* You can add a differential backup, but only if you configure full backups to occur weekly.
-* [Learn about](backup-architecture.md#sql-server-backup-types) different types of backup policies.
+* Learn about [different types of backup policies](backup-architecture.md#sql-server-backup-types).
To create a backup policy:
-1. In the vault, select **Backup policies** > **Add**.
-2. In **Add** menu, select **SQL Server in Azure VM** to define the policy type.
+1. Go to **Backup center** and click **+Policy**.
+
+1. Select **SQL Server in Azure VM** as the datasource type, select the vault under which the policy should be created, and then click **Continue**.
+
+ :::image type="content" source="./media/backup-azure-sql-database/create-sql-policy.png" alt-text="Screenshot showing to choose a policy type for the new backup policy.":::
+
+1. In **Policy name**, enter a name for the new policy.
- ![Choose a policy type for the new backup policy](./media/backup-azure-sql-database/policy-type-details.png)
+ :::image type="content" source="./media/backup-azure-sql-database/sql-policy-summary.png" alt-text="Screenshot showing where to enter the policy name.":::
-3. In **Policy name**, enter a name for the new policy.
-4. In **Full Backup policy**, select a **Backup Frequency**, choose **Daily** or **Weekly**.
+1. Select the **Edit** link corresponding to **Full backup** to modify the default settings.
- * For **Daily**, select the hour and time zone when the backup job begins.
- * You must run a full backup as you can't turn off the **Full Backup** option.
- * Select **Full Backup** to view the policy.
- * You can't create differential backups for daily full backups.
- * For **Weekly**, select the day of the week, hour, and time zone when the backup job begins.
+ * Select a **Backup Frequency**. Choose either **Daily** or **Weekly**.
+ * For **Daily**, select the hour and time zone when the backup job begins. You can't create differential backups for daily full backups.
- ![New backup policy fields](./media/backup-azure-sql-database/full-backup-policy.png)
+ :::image type="content" source="./media/backup-azure-sql-database/sql-backup-schedule-inline.png" alt-text="Screenshot showing new backup policy fields." lightbox="./media/backup-azure-sql-database/sql-backup-schedule-expanded.png":::
-5. For **Retention Range**, by default all options are selected. Clear any undesired retention range limits you don't want to use, and set the intervals to use.
+1. In **RETENTION RANGE**, all options are selected by default. Clear any retention range limits that you don't want, and then set the intervals to use.
- * Minimum retention period for any type of backup (full/differential/log) is seven days.
+ * Minimum retention period for any type of backup (full, differential, and log) is seven days.
* Recovery points are tagged for retention based on their retention range. For example, if you select a daily full backup, only one full backup is triggered each day.
- * The backup for a specific day is tagged and retained based on the weekly retention range and your weekly retention setting.
- * The monthly and yearly retention ranges behave in a similar way.
+ * The backup for a specific day is tagged and retained based on the weekly retention range and the weekly retention setting.
+ * Monthly and yearly retention ranges behave in a similar way.
- ![Retention range interval settings](./media/backup-azure-sql-database/retention-range-interval.png)
+ :::image type="content" source="./media/backup-azure-sql-database/sql-retention-range-inline.png" alt-text="Screenshot showing the retention range interval settings." lightbox="./media/backup-azure-sql-database/sql-retention-range-expanded.png":::
-6. In the **Full Backup policy** menu, select **OK** to accept the settings.
-7. To add a differential backup policy, select **Differential Backup**.
+1. Select **OK** to accept the setting for full backups.
+1. Select the **Edit** link corresponding to **Differential backup** to modify the default settings.
- ![Retention range interval settings](./media/backup-azure-sql-database/retention-range-interval.png)
- ![Open the differential backup policy menu](./media/backup-azure-sql-database/backup-policy-menu-choices.png)
+ * In **Differential Backup policy**, select **Enable** to open the frequency and retention controls.
+ * You can trigger only one differential backup per day. A differential backup can't be triggered on the same day as a full backup.
+ * Differential backups can be retained for a maximum of 180 days.
+ * The differential backup retention period can't be greater than that of the full backup (as the differential backups are dependent on the full backups for recovery).
+ * Differential Backup isn't supported for the master database.
-8. In **Differential Backup policy**, select **Enable** to open the frequency and retention controls.
+ :::image type="content" source="./media/backup-azure-sql-database/sql-differential-backup-inline.png" alt-text="Screenshot showing the differential Backup policy." lightbox="./media/backup-azure-sql-database/sql-differential-backup-expanded.png":::
- * At most, you can trigger one differential backup per day.
- * Differential backups can be retained for a maximum of 180 days. If you need longer retention, you must use full backups.
+1. Select the **Edit** link corresponding to **Log backup** to modify the default settings:
-9. Select **OK** to save the policy and return to the main **Backup policy** menu.
+ * In **Log Backup**, select **Enable**, and then set the frequency and retention controls.
+ * Log backups can occur as often as every 15 minutes and can be retained for up to 35 days.
+ * If the database is in the [simple recovery model](/sql/relational-databases/backup-restore/recovery-models-sql-server), the log backup schedule for that database will be paused and so no log backups will be triggered.
+ * If the recovery model of the database changes from **Full** to **Simple**, log backups will be paused within 24 hours of the change. Similarly, if the recovery model changes from **Simple** to a model that supports log backups, the log backup schedule will be enabled within 24 hours of the change.
-10. To add a transactional log backup policy, select **Log Backup**.
-11. In **Log Backup**, select **Enable**, and then set the frequency and retention controls. Log backups can occur as often as every 15 minutes, and can be retained for up to 35 days.
-12. Select **OK** to save the policy and return to the main **Backup policy** menu.
+ :::image type="content" source="./media/backup-azure-sql-database/sql-log-backup-inline.png" alt-text="Screenshot showing the log Backup policy." lightbox="./media/backup-azure-sql-database/sql-log-backup-expanded.png":::
- ![Edit the log backup policy](./media/backup-azure-sql-database/log-backup-policy-editor.png)
+1. On the **Backup policy** menu, choose whether to enable **SQL Backup Compression**. This option is disabled by default. If enabled, SQL Server sends a compressed backup stream to the VDI. Azure Backup overrides instance-level defaults with the COMPRESSION / NO_COMPRESSION clause, depending on the value of this control.
-13. On the **Backup policy** menu, choose whether to enable **SQL Backup Compression**.
- * Compression is disabled by default.
- * On the back end, Azure Backup uses SQL native backup compression.
+1. After you complete the edits to the backup policy, select **OK**.
-14. After you complete the edits to the backup policy, select **OK**.
+> [!NOTE]
+> Each log backup is chained to the previous full backup to form a recovery chain. This full backup will be retained until the retention of the last log backup has expired. This might mean that the full backup is retained for an extra period to make sure all the logs can be recovered. Let's assume you have a weekly full backup, daily differentials, and 2-hour logs, all retained for 30 days. The weekly full can really be cleaned up or deleted only after the next full backup is available, that is, after 30 + 7 days. For example, a weekly full backup happens on Nov 16th. According to the retention policy, it should be retained until Dec 16th. The last log backup for this full happens before the next scheduled full, on Nov 22nd. Because that log is retained until Dec 22nd, the Nov 16th full can't be deleted before then. So, the Nov 16th full is retained until Dec 22nd.
## Run an on-demand backup
cognitive-services Bing Image Search Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/bing-image-search-resource-faq.md
Previously updated : 03/04/2019 Last updated : 01/05/2022 # Frequently asked questions (FAQ) about the Bing Image Search API
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API for Azure Cognitive Services on Azure.
cognitive-services Bing Image Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/bing-image-upgrade-guide-v5-to-v7.md
ms.assetid: 7F78B91F-F13B-40A4-B8A7-770FDB793F0F
Previously updated : 02/12/2019 Last updated : 01/05/2022 # Bing Image Search API v7 upgrade guide
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
This upgrade guide identifies the changes between version 5 and version 7 of the Bing Image Search API. Use this guide to help you identify the parts of your application that you need to update to use version 7.
cognitive-services Bing Image Search Get Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/concepts/bing-image-search-get-images.md
ms.assetid: AB1B9898-C94A-4B59-91A1-8623C94BA3D4
Previously updated : 03/04/2019 Last updated : 01/05/2022 # Get images from the web with the Bing Image Search API
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
When you use the Bing Image Search REST API, you can get images from the web that are related to your search term by sending the following GET request:
cognitive-services Bing Image Search Sending Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/concepts/bing-image-search-sending-queries.md
ms.assetid: C2862E98-8BCC-423B-9C4A-AC79A287BE38
Previously updated : 06/27/2019 Last updated : 01/05/2022 # Customize and suggest image search queries
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Use this article to learn how to customize queries and suggest search terms to send to the Bing Image Search API.
cognitive-services Gif Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/gif-images.md
# Search for GIF images
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
The Bing Image Search API also enables you to search across the entire web for the most relevant .gif images. Developers can integrate engaging GIFs in various conversation scenarios.
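For example, a request for animated GIFs can filter on the `imageType` query parameter. The following curl sketch assumes a Bing Search Services key and endpoint; the host and key header may differ for resources provisioned under Cognitive Services:

```bash
# Search for animated GIFs; <your-key> is a placeholder subscription key.
curl -H "Ocp-Apim-Subscription-Key: <your-key>" \
  "https://api.bing.microsoft.com/v7.0/images/search?q=funny+cats&imageType=AnimatedGif&mkt=en-us"
```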
cognitive-services Image Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/image-insights.md
ms.assetid: 0BCD936E-D4C0-472D-AE40-F4B2AB6912D5
Previously updated : 03/04/2019 Last updated : 01/05/2022 # Get image insights with the Bing Image Search API
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
> [!IMPORTANT] > Instead of using the /images/details endpoint to get image insights, you should use [Visual Search](../bing-visual-search/overview.md) since it provides more comprehensive insights.
cognitive-services Image Search Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/image-search-endpoint.md
Previously updated : 03/04/2019 Last updated : 01/05/2022 # Endpoints for the Bing Image Search API
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
The **Image Search API** includes three endpoints. Endpoint 1 returns images from the Web based on a query. Endpoint 2 returns [ImageInsights](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#imageinsightsresponse). Endpoint 3 returns trending images.
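As a quick illustration of the first and third endpoints, the following curl sketches assume a Bing Search Services key and endpoint; the host differs for resources provisioned under Cognitive Services:

```bash
# Endpoint 1: image search by query (<your-key> is a placeholder subscription key).
curl -H "Ocp-Apim-Subscription-Key: <your-key>" \
  "https://api.bing.microsoft.com/v7.0/images/search?q=sailing+dinghies&mkt=en-us"

# Endpoint 3: trending images.
curl -H "Ocp-Apim-Subscription-Key: <your-key>" \
  "https://api.bing.microsoft.com/v7.0/images/trending?mkt=en-us"
```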
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/language-support.md
Previously updated : 03/04/2019 Last updated : 01/05/2022 # Language and region support for the Bing Image Search API
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
The Bing Image Search API supports more than three dozen countries/regions, many with more than one language. Specifying a country/region with a query serves primarily to refine search results based on interests in that country/region. Additionally, the results may contain links to Bing, and these links may localize the Bing user experience according to the specified country/regions or language.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/overview.md
ms.assetid: 1446AD8B-A685-4F5F-B4AA-74C8E9A40BE9
Previously updated : 12/18/2019 Last updated : 01/05/2022 #Customer intent: As a developer, I want to integrate Bing's image search capabilities into my app, so that I can provide relevant, engaging images to my users.
# What is the Bing Image Search API?
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
The Bing Image Search API enables you to use Bing's image search capabilities in your application. By sending search queries to the API, you can get high-quality images similar to [bing.com/images](https://www.bing.com/images).
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/quickstarts/client-libraries.md
zone_pivot_groups: programming-languages-set-ten
Previously updated : 10/21/2020 Last updated : 01/05/2022 ms.devlang: csharp, java, javascript, python # Quickstart: Use the Bing Image Search client library
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
::: zone pivot="programming-language-csharp"
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/quickstarts/csharp.md
Previously updated : 05/08/2020 Last updated : 01/05/2022 ms.devlang: csharp # Quickstart: Search for images using the Bing Image Search REST API and C#
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Use this quickstart to learn how to send search requests to the Bing Image Search API. This C# application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in C#, the API is a RESTful web service compatible with most programming languages.
-The source code for this sample is available [on GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Search/BingImageSearchv7Quickstart.cs) with additional error handling and annotations.
-
## Prerequisites

* Any edition of [Visual Studio 2017 or later](https://www.visualstudio.com/downloads/).
* The [Json.NET](https://www.newtonsoft.com/json) framework, available as a NuGet package.
* If you're using Linux/MacOS, this application can be run using [Mono](https://www.mono-project.com/).
-
## Create and initialize a project

1. Create a new console solution named `BingSearchApisQuickStart` in Visual Studio. Then, add the following namespaces to the main code file:
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/quickstarts/java.md
Previously updated : 05/08/2020 Last updated : 01/05/2022 ms.devlang: java # Quickstart: Search for images with the Bing Image Search API and Java
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Use this quickstart to learn how to send search requests to the Bing Image Search API in Azure Cognitive Services. This Java application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in Java, the API is a RESTful web service compatible with most programming languages.
-The source code for this sample is available [on GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/java/Search/BingImageSearchv7Quickstart.java) with additional error handling and annotations.
- ## Prerequisites * The [Java Development Kit(JDK)](/azure/developer/java/fundamentals/java-support-on-azure) * The [Gson library](https://github.com/google/gson) - ## Create and initialize a project 1. Create a new Java project in your favorite IDE or editor, and import the following libraries:
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/quickstarts/nodejs.md
Previously updated : 05/08/2020 Last updated : 01/05/2022 ms.devlang: javascript
# Quickstart: Search for images using the Bing Image Search REST API and Node.js
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Use this quickstart to learn how to send search requests to the Bing Image Search API. This JavaScript application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in JavaScript, the API is a RESTful web service compatible with most programming languages.
-The source code for this sample is available [on GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/nodejs/Search/BingImageSearchv7Quickstart.js) with additional error handling and annotations.
- ## Prerequisites * The latest version of [Node.js](https://nodejs.org/en/download/). * The [JavaScript Request Library](https://github.com/request/request). - For more information, see [Cognitive Services Pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). ## Create and initialize the application
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/quickstarts/php.md
# Quickstart: Search for images using the Bing Image Search REST API and PHP
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Use this quickstart to make your first call to the Bing Image Search API and receive a JSON response. The simple application in this article sends a search query and displays the raw results. Although this application is written in PHP, the API is a RESTful Web service compatible with any programming language that can make HTTP requests and parse JSON.
-The source code for this sample is available [on GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/php/Search/BingWebSearchv7.php).
- ## Prerequisites * [PHP 5.6.x or later](https://php.net/downloads.php) - For more information, see [Cognitive Services Pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). ## Create and initialize the application
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/quickstarts/python.md
Previously updated : 05/08/2020 Last updated : 01/05/2022 ms.devlang: python
# Quickstart: Search for images using the Bing Image Search REST API and Python
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Use this quickstart to learn how to send search requests to the Bing Image Search API. This Python application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in Python, the API is a RESTful web service compatible with most programming languages.
-To run this example as a Jupyter notebook on [MyBinder](https://mybinder.org), select the **launch binder** badge:
-
-[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=BingImageSearchAPI.ipynb)
--
-The source code for this sample is available [on GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/python/Search/BingImageSearchv7.py) with additional error handling and annotations.
-- ## Prerequisites * [Python 2.x or 3.x](https://www.python.org/) * The [Python Imaging Library (PIL)](https://pillow.readthedocs.io/en/stable/https://docsupdatetracker.net/index.html) * [matplotlib](https://matplotlib.org/) - ## Create and initialize the application 1. Create a new Python file in your favorite IDE or editor, and import the following modules. Create a variable for your subscription key, search endpoint, and search term. For `search_url`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/quickstarts/ruby.md
Previously updated : 05/08/2020 Last updated : 01/05/2022 ms.devlang: ruby # Quickstart: Search for images using the Bing Image Search REST API and Ruby
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Use this quickstart to make your first call to the Bing Image Search API and receive a JSON response. This simple Ruby application sends a search query to the API and displays the raw results. Although this application is written in Ruby, the API is a RESTful Web service compatible with most programming languages.
-The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/ruby/Search/BingImageSearchv7.rb).
- ## Prerequisites * [The latest version of Ruby](https://www.ruby-lang.org/en/downloads/). - For more information, see [Cognitive Services Pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). ## Create and initialize the application
cognitive-services Trending Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/trending-images.md
# Get trending images from the web
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
To get today's trending images, send the following GET request:
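The request itself isn't shown in this excerpt. For illustration only, a call to the images/trending endpoint could be sketched in Python as follows; the endpoint and key are placeholders, and the category handling assumes the trending answer groups results under a `categories` field:

```python
# Illustrative only: fetch trending images from the images/trending endpoint.
# The endpoint and key below are placeholders; use the values for your resource.
import requests

subscription_key = "YOUR_SUBSCRIPTION_KEY"
trending_url = "https://api.cognitive.microsoft.com/bing/v7.0/images/trending"

response = requests.get(
    trending_url,
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
)
response.raise_for_status()

# The trending answer typically groups images into named categories.
for category in response.json().get("categories", []):
    print(category.get("title"))
```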
cognitive-services Tutorial Bing Image Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/tutorial-bing-image-search-single-page-app.md
# Tutorial: Create a single-page app using the Bing Image Search API
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
The Bing Image Search API enables you to search the web for high-quality, relevant images. Use this tutorial to build a single-page web application that can send search queries to the API, and display the results within the webpage. This tutorial is similar to the [corresponding tutorial](../Bing-Web-Search/tutorial-bing-web-search-single-page-app.md) for Bing Web Search.
The tutorial app illustrates how to:
> * Display and page through search results > * Request and handle an API subscription key, and Bing client ID.
-The full source code for this tutorial is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/Bing-Image-Search).
- ## Prerequisites * The latest version of [Node.js](https://nodejs.org/). * The [Express.js](https://expressjs.com/) framework for Node.js. Installation instructions for the source code are available in the GitHub sample readme file. - ## Manage and store user subscription keys This application uses web browsers' persistent storage to store API subscription keys. If no key is stored, the webpage will prompt the user for their key and store it for later use. If the key is later rejected by the API, the app will remove it from storage. This sample uses the global endpoint. You can also use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
cognitive-services Tutorial Image Post https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/tutorial-image-post.md
# Tutorial: Extract image details using the Bing Image Search API and C#
-> [!WARNING]
-> Bing Search APIs are moving from Cognitive Services to Bing Search Services. Starting **October 30, 2020**, any new instances of Bing Search need to be provisioned following the process documented [here](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
-> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
There are multiple [endpoints](./image-search-endpoint.md) available through the Bing Image Search API. The `/details` endpoint accepts a POST request with an image, and can return a variety of details about the image. This C# application sends an image using this API, and displays the details returned by Bing, which are JSON objects, such as the following:
This tutorial explains how to:
> * Upload the image data and send the `POST` request > * Print the JSON results to the console
-The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/Tutorials/BingGetSimilarImages.cs).
- ## Prerequisites * Any edition of [Visual Studio 2017 or later](https://visualstudio.microsoft.com/downloads/). - ## Construct an image details search request The following is the `/details` endpoint, which accepts POST requests with image data in the body of the request. You can use the global endpoint below, or the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
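Although the tutorial is written in C#, the `/details` call is ordinary REST. As an illustrative sketch only, a similar request could be made in Python; the endpoint, the `modules` value, the multipart form field name, and the local image path are assumptions rather than the tutorial's exact values:

```python
# Illustrative sketch only: POST a local image to the /details endpoint and print
# the JSON insights that come back. The endpoint, "modules" value, form field name,
# and image path are assumptions, not the tutorial's exact values.
import requests

subscription_key = "YOUR_SUBSCRIPTION_KEY"  # placeholder
details_url = "https://api.cognitive.microsoft.com/bing/v7.0/images/details"

with open("my_image.jpg", "rb") as image_file:            # hypothetical local image
    response = requests.post(
        details_url,
        headers={"Ocp-Apim-Subscription-Key": subscription_key},
        params={"modules": "similarimages"},               # assumed module name
        files={"image": image_file},                       # sent as multipart form data
    )

response.raise_for_status()
print(response.json())
```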
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
After you've reviewed your audio output and are satisfied with your tuning and a
**Supported audio formats**
-| Format | 16 kHz sample rate | 24 kHz sample rate |
-|--|--|--|
-| wav | riff-16khz-16bit-mono-pcm | riff-24khz-16bit-mono-pcm |
-| mp3 | audio-16khz-128kbitrate-mono-mp3 | audio-24khz-160kbitrate-mono-mp3 |
+| Format | 8 kHz sample rate | 16 kHz sample rate | 24 kHz sample rate | 48 kHz sample rate |
+|--|--|--|--|--|
+| wav | riff-8khz-16bit-mono-pcm | riff-16khz-16bit-mono-pcm | riff-24khz-16bit-mono-pcm |riff-48khz-16bit-mono-pcm |
+| mp3 | N/A | audio-16khz-128kbitrate-mono-mp3 | audio-24khz-160kbitrate-mono-mp3 |audio-48khz-192kbitrate-mono-mp3 |
## How to add/remove Audio Content Creation users?
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Title: "Prepare data for Custom Speech - Speech service"
-description: "When testing the accuracy of Microsoft speech recognition or training your custom models, you'll need audio and text data. On this page, we cover the types of data, how to use, and manage them."
+description: Learn about types of data for a Custom Speech model, along with how to use and manage that data.
# Prepare data for Custom Speech
-When testing the accuracy of Microsoft speech recognition or training your custom models, you'll need audio and text data. On this page, we cover the types of data a custom speech model needs.
+When you're testing the accuracy of Microsoft speech recognition or training your custom models, you need audio and text data. This article covers the types of data that a Custom Speech model needs.
## Data diversity
-Text and audio used to test and train a custom model need to include samples from a diverse set of speakers and scenarios you need your model to recognize.
-Consider these factors when gathering data for custom model testing and training:
+Text and audio that you use to test and train a custom model need to include samples from a diverse set of speakers and scenarios that you want your model to recognize. Consider these factors when you're gathering data for custom model testing and training:
-* Your text and speech audio data need to cover the kinds of verbal statements your users will make when interacting with your model. For example, a model that raises and lowers the temperature needs training on statements people might make to request such changes.
-* Your data need to include all speech variances your model will need to recognize. Many factors can vary speech, including accents, dialects, language-mixing, age, gender, voice pitch, stress level, and time of day.
+* Your text and speech audio data needs to cover the kinds of verbal statements that your users will make when they're interacting with your model. For example, a model that raises and lowers the temperature needs training on statements that people might make to request such changes.
+* Your data needs to include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, language-mixing, age, gender, voice pitch, stress level, and time of day.
* You must include samples from different environments (indoor, outdoor, road noise) where your model will be used.
-* Audio must be gathered using hardware devices the production system will use. If your model needs to identify speech recorded on recording devices of varying quality, the audio data you provide to train your model must also represent these diverse scenarios.
+* You must gather audio by using hardware devices that the production system will use. If your model needs to identify speech recorded on recording devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
* You can add more data to your model later, but take care to keep the dataset diverse and representative of your project needs.
-* Including data that is *not* within your custom model recognition needs can harm recognition quality overall, so do not include data that your model does not need to transcribe.
+* Including data that's *not* within your custom model's recognition needs can harm recognition quality overall. Include only data that your model needs to transcribe.
-A model trained on a subset of scenarios can only perform well in those scenarios. Carefully choose data that represents the full scope of scenarios you need your custom model to recognize.
+A model that's trained on a subset of scenarios can perform well in only those scenarios. Carefully choose data that represents the full scope of scenarios that you need your custom model to recognize.
> [!TIP]
-> Start with small sets of sample data that match the language and acoustics your model will encounter.
-> For example, record a small but representative sample of audio on the same hardware and in the same acoustic environment your model will find in production scenarios.
-> Small datasets of representative data can expose problems before you have invested in gathering a much larger datasets for training.
+> Start with small sets of sample data that match the language and acoustics that your model will encounter. For example, record a small but representative sample of audio on the same hardware and in the same acoustic environment that your model will find in production scenarios. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training.
>
-> To quickly get started, consider using sample data. See this GitHub repository for <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">sample Custom Speech data </a>
+> To quickly get started, consider using sample data. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
## Data types
-This table lists accepted data types, when each data type should be used, and the recommended quantity. Not every data type is required to create a model. Data requirements will vary depending on whether you're creating a test or training a model.
+The following table lists accepted data types, when each data type should be used, and the recommended quantity. Not every data type is required to create a model. Data requirements will vary depending on whether you're creating a test or training a model.
| Data type | Used for testing | Recommended quantity | Used for training | Recommended quantity | |--|--|-|-|-|
-| [Audio only](#audio-data-for-testing) | Yes<br>Used for visual inspection | 5+ audio files | No | N/A |
-| [Audio + Human-labeled transcripts](#audio--human-labeled-transcript-data-for-trainingtesting) | Yes<br>Used to evaluate accuracy | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
-| [Plain text](#plain-text-data-for-training) | No | N/a | Yes | 1-200 MB of related text |
-| [Structured text](#structured-text-data-for-training-public-preview) (Public Preview) | No | N/a | Yes | Up to 10 classes with up to 4000 items and up to 50,000 training sentences |
-| [Pronunciation](#pronunciation-data-for-training) | No | N/a | Yes | 1 KB - 1 MB of pronunciation text |
+| [Audio only](#audio-data-for-testing) | Yes (visual inspection) | 5+ audio files | No | Not applicable |
+| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
+| [Plain text](#plain-text-data-for-training) | No | Not applicable | Yes | 1-200 MB of related text |
+| [Structured text](#structured-text-data-for-training-public-preview) (public preview) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
+| [Pronunciation](#pronunciation-data-for-training) | No | Not applicable | Yes | 1 KB to 1 MB of pronunciation text |
-Files should be grouped by type into a dataset and uploaded as a .zip file. Each dataset can only contain a single data type.
+Files should be grouped by type into a dataset and uploaded as a .zip file. Each dataset can contain only a single data type.
> [!TIP]
-> When you train a new model, start with plain text data or structured text data. This data will improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes vs. days).
+> When you train a new model, start with plain-text data or structured-text data. This data will improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes versus days).
-> [!NOTE]
-> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data. Even if a base model supports training with audio data, the service might use only part of the audio. Still it will use all the transcripts.
->
-> In cases when you change the base model used for training, and you have audio in the training dataset, *always* check whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will **drastically** increase, and may easily go from several hours to several days and more. This is especially true if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
->
-> If you face the issue described in the paragraph above, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. The latter option is highly recommended if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
->
-> In regions with dedicated hardware for training, the Speech service will use up to 20 hours of audio for training. In other regions, it will only use up to 8 hours of audio.
+### Training with audio data
-> [!NOTE]
-> Training with structured text is only supported for these locales: en-US, en-UK, en-IN, de-DE, fr-FR, fr-CA, es-ES, es-MX and you must use the latest base model for these locales.
->
-> For locales that don't support training with structured text the service will take any training sentences that don't reference any classes as part of training with plain text data.
+Not all base models support [training with audio data](language-support.md#speech-to-text). If a base model doesn't support it, the Speech service will use only the text from the transcripts and ignore the audio. For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text).
+
+Even if a base model supports training with audio data, the service might use only part of the audio. But it will use all the transcripts.
+
+If you change the base model that's used for training, and you have audio in the training dataset, *always* check whether the newly selected base model supports training with audio data. If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will drastically increase. It could easily go from several hours to several days or more. This is especially true if your Speech service subscription is *not* in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+
+If you face the problem described in the previous paragraph, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text.
+
+In regions with dedicated hardware for training, the Speech service will use up to 20 hours of audio for training. In other regions, it will only use up to 8 hours of audio.
+
+### Supported locales
+
+Training with structured text is supported only for these locales:
+
+* en-US
+* en-UK
+* en-IN
+* de-DE
+* fr-FR
+* fr-CA
+* es-ES
+* es-MX
+
+You must use the latest base model for these locales.
+
+For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
## Upload data
-To upload your data, navigate to [Speech Studio](https://aka.ms/speechstudio/customspeech). After creating a project, navigate to **Speech datasets** tab, and click **Upload data** to launch the wizard and create your first dataset. Select a speech data type for your dataset, and upload your data.
+To upload your data:
-> [!NOTE]
-> If your dataset file size exceeds 128 MB, you can only upload it using *Azure Blob or shared location* option. You can also use [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) to upload a dataset of [any allowed size](speech-services-quotas-and-limits.md#model-customization). See [the next section](#upload-data-using-speech-to-text-rest-api-v30) for details.
+1. Go to [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. After you create a project, go to the **Speech datasets** tab. Select **Upload data** to start the wizard and create your first dataset.
+1. Select a speech data type for your dataset, and upload your data.
-First, you need to specify whether the dataset is to be used for **Training** or **Testing**. There are many types of data that can be uploaded and used for **Training** or **Testing**. Each dataset you upload must be correctly formatted before uploading, and must meet the requirements for the data type that you choose. Requirements are listed in the following sections.
+ > [!NOTE]
+ > If your dataset file size exceeds 128 MB, you can upload it by using the **Azure Blob or shared location** option. You can also use [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) to upload a dataset of [any allowed size](speech-services-quotas-and-limits.md#model-customization). See [the next section](#upload-data-by-using-speech-to-text-rest-api-v30) for details.
-After your dataset is uploaded, you have a few options:
+1. Specify whether the dataset will be used for **Training** or **Testing**.
-* You can navigate to the **Train custom models** tab to train a custom model.
-* You can navigate to the **Test models** tab to visually inspect quality with audio only data or evaluate accuracy with audio + human-labeled transcription data.
+ There are many types of data that can be uploaded and used for **Training** or **Testing**. Each dataset that you upload must be correctly formatted before uploading, and it must meet the requirements for the data type that you choose. Requirements are listed in the following sections.
-### Upload data using Speech-to-text REST API v3.0
+1. After your dataset is uploaded, you can either:
-You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) to automate any operations related to your custom models. In particular, you can use it to upload a dataset. This is particularly useful when your dataset file exceeds 128 MB, because files that large cannot be uploaded using *Local file* option in Speech Studio. (You can also use *Azure Blob or shared location* option in Speech Studio for the same purpose as described in the previous section.)
+ * Go to the **Train custom models** tab to train a custom model.
+ * Go to the **Test models** tab to visually inspect quality with audio-only data or evaluate accuracy with audio + human-labeled transcription data.
-To create and upload a dataset use [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
+### Upload data by using Speech-to-text REST API v3.0
-**REST API created datasets and Speech Studio projects**
+You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) to automate any operations related to your custom models. In particular, you can use the REST API to upload a dataset. This is particularly useful when your dataset file exceeds 128 MB, because you can't upload files that large by using the **Local file** option in Speech Studio. (You can also use the **Azure Blob or shared location** option in Speech Studio for the same purpose, as described in the previous section.)
-A dataset created with the Speech-to-text REST API v3.0 will *not* be connected to any of the Speech Studio projects, unless a special parameter is specified in the request body (see below). Connection with a Speech Studio project is *not* required for any model customization operations, if they are performed via the REST API.
+To create and upload a dataset, use a [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
-When you log on to the Speech Studio, its user interface will notify you when any unconnected object is found (like datasets uploaded through the REST API without any project reference) and offer to connect such objects to an existing project.
+A dataset that you create by using the Speech-to-text REST API v3.0 will *not* be connected to any of the Speech Studio projects, unless you specify a special parameter in the request body (see the code block later in this section). Connection with a Speech Studio project is *not* required for any model customization operations, if you perform them by using the REST API.
+
+When you log on to Speech Studio, its user interface will notify you when any unconnected object is found (like datasets uploaded through the REST API without any project reference). The interface will also offer to connect such objects to an existing project.
+
+To connect the new dataset to an existing project in Speech Studio during its upload, use [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) and fill out the request body according to the following format:
-To connect the new dataset to an existing project in the Speech Studio during its upload, use [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) and fill out the request body according to the following format:
```json { "kind": "Acoustic",
To connect the new dataset to an existing project in the Speech Studio during it
} ```
-The Project URL required for the `project` element can be obtained with the [Get Projects](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request.
+You can obtain the project URL that's required for the `project` element by using the [Get Projects](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request.
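For illustration, such a request could be sent from Python as follows. The region, key, blob URL, and project ID are placeholders, and the payload should follow the Create Dataset reference linked above:

```python
# Illustrative sketch: create a dataset with Speech-to-text REST API v3.0 and
# attach it to an existing Speech Studio project. Host, key, and URLs are placeholders.
import requests

region = "westus2"                        # placeholder region
subscription_key = "YOUR_SPEECH_KEY"      # placeholder key
endpoint = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/datasets"

body = {
    "kind": "Acoustic",
    "displayName": "My acoustic dataset",
    "locale": "en-US",
    # Publicly readable (or SAS) URL of the .zip file that contains the data.
    "contentUrl": "https://contoso.blob.core.windows.net/datasets/training.zip",
    # Project URL obtained from the Get Projects operation.
    "project": {
        "self": f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/projects/YOUR_PROJECT_ID"
    },
}

response = requests.post(
    endpoint,
    json=body,
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
)
response.raise_for_status()
print(response.json().get("self"))  # URL of the newly created dataset
```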
+
+## Audio + human-labeled transcript data for training or testing
-## Audio + human-labeled transcript data for training/testing
+You can use audio + human-labeled transcript data for both training and testing purposes. You must provide human-labeled transcriptions (word by word) for comparison:
-Audio + human-labeled transcript data can be used for both training and testing purposes. To improve the acoustic aspects like slight accents, speaking styles, background noises, or to measure the accuracy of Microsoft's speech-to-text accuracy when processing your audio files, you must provide human-labeled transcriptions (word-by-word) for comparison. While human-labeled transcription is often time consuming, it's necessary to evaluate accuracy and to train the model for your use cases. Keep in mind, the improvements in recognition will only be as good as the data provided. For that reason, it's important that only high-quality transcripts are uploaded.
+- To improve the acoustic aspects like slight accents, speaking styles, and background noises.
+- To measure Microsoft's speech-to-text accuracy when it's processing your audio files.
-Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. While audio with low recording volume or disruptive background noise is not helpful, it should not hurt your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
+Although human-labeled transcription is often time consuming, it's necessary to evaluate accuracy and to train the model for your use cases. Keep in mind that the improvements in recognition will only be as good as the data that you provide. For that reason, it's important to upload only high-quality transcripts.
+
+Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. Although audio with low recording volume or disruptive background noise is not helpful, it shouldn't hurt your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
| Property | Value | |--|-|
Audio files can have silence at the beginning and end of the recording. If possi
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)] > [!NOTE]
-> When uploading training and testing data, the .zip file size cannot exceed 2 GB. You can only test from a *single* dataset, be sure to keep it within the appropriate file size. Additionally, each training file cannot exceed 60 seconds otherwise it will error out.
+> When you're uploading training and testing data, the .zip file size can't exceed 2 GB. You can test from only a *single* dataset, so be sure to keep it within the appropriate file size. Additionally, each training file can't exceed 60 seconds, or it will error out.
+
+To address problems like word deletion or substitution, a significant amount of data is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help improve recognition results.
-To address issues like word deletion or substitution, a significant amount of data is required to improve recognition. Generally, it's recommended to provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help to improve recognition results. The transcriptions for all WAV files should be contained in a single plain-text file. Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t).
+The transcriptions for all WAV files are contained in a single plain-text file. Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`).
For example:
speech03.wav the lazy dog was not amused
> [!IMPORTANT] > Transcription should be encoded as UTF-8 byte order mark (BOM).
-The transcriptions are text-normalized so they can be processed by the system. However, there are some important normalizations that must be done before uploading the data to the Speech Studio. For the appropriate language to use when you prepare your transcriptions, see [How to create a human-labeled transcription](how-to-custom-speech-human-labeled-transcriptions.md)
+The transcriptions are text-normalized so the system can process them. However, you must do some important normalizations before you upload the data to <a href="https://speech.microsoft.com/customspeech" target="_blank">Speech Studio</a>. For the appropriate language to use when you prepare your transcriptions, see [How to create human-labeled transcriptions](how-to-custom-speech-human-labeled-transcriptions.md).
-After you've gathered your audio files and corresponding transcriptions, package them as a single .zip file before uploading to the <a href="https://speech.microsoft.com/customspeech" target="_blank">Speech Studio </a>. Below is an example dataset with three audio files and a human-labeled transcription file:
+After you've gathered your audio files and corresponding transcriptions, package them as a single .zip file before uploading to Speech Studio. The following example dataset has three audio files and a human-labeled transcription file:
> [!div class="mx-imgBorder"]
-> ![Select audio from the Speech Portal](./media/custom-speech/custom-speech-audio-transcript-pairs.png)
-
-See [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-account) for a list of recommended regions for your Speech service subscriptions. Setting up the Speech subscriptions in one of these regions will reduce the time it takes to train the model. In these regions, training can process about 10 hours of audio per day compared to just 1 hour per day in other regions. If model training cannot be completed within a week, the model will be marked as failed.
+> ![Screenshot that shows audio files and a transcription file in Speech Studio.](./media/custom-speech/custom-speech-audio-transcript-pairs.png)
-Not all base models support training with audio data. If the base model does not support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
+For a list of recommended regions for your Speech service subscriptions, see [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-account). Setting up the Speech subscriptions in one of these regions will reduce the time it takes to train the model. In these regions, training can process about 10 hours of audio per day, compared to just 1 hour per day in other regions. If model training can't be completed within a week, the model will be marked as failed.
-## Plain text data for training
+Not all base models support training with audio data. If the base model doesn't support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text. For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text).
-You can use domain related sentences to improve accuracy when recognizing product names, or industry-specific jargon. Provide sentences in a single text file. To improve accuracy, use text data that is closer to the expected spoken utterances.
+## Plain-text data for training
-Training with plain text usually completes within a few minutes.
+You can use domain-related sentences to improve accuracy in recognizing product names or industry-specific jargon. Provide sentences in a single text file. To improve accuracy, use text data that's closer to the expected spoken utterances. Training with plain text usually finishes within a few minutes.
-To create a custom model using sentences, you'll need to provide a list of sample utterances. Utterances _do not_ need to be complete or grammatically correct, but they must accurately reflect the spoken input you expect in production. If you want certain terms to have increased weight, add several sentences that include these specific terms.
+To create a custom model by using sentences, you'll need to provide a list of sample utterances. Utterances _do not_ need to be complete or grammatically correct, but they must accurately reflect the spoken input that you expect in production. If you want certain terms to have increased weight, add several sentences that include these specific terms.
-As general guidance, model adaptation is most effective when the training text is as close as possible to the real text expected in production. Domain-specific jargon and phrases that you're targeting to enhance, should be included in training text. When possible, try to have one sentence or keyword controlled on a separate line. For keywords and phrases that are important to you (for example, product names), you can copy them a few times. But keep in mind, don't copy too much - it could affect the overall recognition rate.
+As general guidance, model adaptation is most effective when the training text is as close as possible to the real text expected in production. Domain-specific jargon and phrases that you're targeting to enhance should be included in training text. When possible, try to keep each sentence or keyword on a separate line. For keywords and phrases that are important to you (for example, product names), you can copy them a few times. But don't copy too much; it could affect the overall recognition rate.
Use this table to ensure that your related data file for utterances is formatted correctly: | Property | Value | |-|-| | Text encoding | UTF-8 BOM |
-| # of utterances per line | 1 |
+| Number of utterances per line | 1 |
| Maximum file size | 200 MB |
-Additionally, you'll want to account for the following restrictions:
+You'll also want to account for the following restrictions:
-* Avoid repeating characters, words, or groups of words more than three times. For example: "aaaa", "yeah yeah yeah yeah", or "that's it that's it that's it that's it". The Speech service might drop lines with too many repetitions.
+* Avoid repeating characters, words, or groups of words more than three times, as in "aaaa," "yeah yeah yeah yeah," or "that's it that's it that's it that's it." The Speech service might drop lines with too many repetitions.
* Don't use special characters or UTF-8 characters above `U+00A1`. * URIs will be rejected.
-* For some languages (for example Japanese or Korean), importing large amounts of text data can take very long or time out. Please consider to divide the uploaded data into text files of up to 20.000 lines each.
+* For some languages (for example, Japanese or Korean), importing large amounts of text data can take a long time or can time out. Consider dividing the uploaded data into text files of up to 20,000 lines each.
+
+## Structured-text data for training (public preview)
+
+Expected utterances often follow a certain pattern. One common pattern is that utterances differ only by words or phrases from a list. Examples of this pattern could be:
+
+* "I have a question about `product`," where `product` is a list of possible products.
+* "Make that `object` `color`," where `object` is a list of geometric shapes and `color` is a list of colors.
-## Structured text data for training (Public Preview)
+To simplify the creation of training data and to enable better modeling inside the Custom Language model, you can use structured text in Markdown format to define lists of items. You can then reference these lists inside your training utterances. The Markdown format also supports specifying the phonetic pronunciation of words.
-Often the expected utterances follow a certain pattern. One common pattern is that utterances only differ by words or phrases from a list. Examples of this could be "I have a question about `product`," where `product` is a list of possible products. Or, "Make that `object` `color`," where `object` is a list of geometric shapes and `color` is a list of colors. To simplify the creation of training data and to enable better modeling inside the Custom Language Model, you can use a structured text in markdown format to define lists of items and then reference these inside your training utterances. Additionally, the markdown format also supports specifying the phonetic pronunciation of words. The markdown file should have a `.md` extension. The syntax of the markdown is the same as that from the Language Understanding models, in particular list entities and example utterances. For more information about the complete markdown syntax, see the <a href="/azure/bot-service/file-format/bot-builder-lu-file-format" target="_blank"> Language Understanding markdown</a>.
+The Markdown file should have an .md extension. The syntax of the Markdown is the same as that of the Language Understanding models, in particular list entities and example utterances. For more information about the complete Markdown syntax, see the <a href="/azure/bot-service/file-format/bot-builder-lu-file-format" target="_blank">Language Understanding Markdown</a>.
-Here is an example of the markdown format:
+Here's an example of the Markdown format:
```markdown // This is a comment
-// Here are three separate lists of items that can be referenced in an example sentence. You can have up to 10 of these
+// Here are three separate lists of items that can be referenced in an example sentence. You can have up to 10 of these.
@ list food = - pizza - burger
Here is an example of the markdown format:
- football // This is a list of phonetic pronunciations.
-// This adjusts the pronunciation of every instance of these word in both a list or example training sentences
+// This adjusts the pronunciation of every instance of these words in a list or example training sentences.
@ speech:phoneticlexicon - cat/k ae t - cat/f i l ai n
-// Here are example training sentences. They are grouped into two sections to help organize the example training sentences.
-// You can refer to one of the lists we declared above by using {@listname} and you can refer to multiple lists in the same training sentence
+// Here are example training sentences. They are grouped into two sections to help organize them.
+// You can refer to one of the lists we declared earlier by using {@listname}. You can refer to multiple lists in the same training sentence.
// A training sentence does not have to refer to a list. # SomeTrainingSentence - you can include sentences without a class reference
Here is an example of the markdown format:
- or more sentences that have a class reference like {@pet} ```
-Like plain text, training with structured text typically takes a few minutes. Also, your example sentences and lists should reflect the type of spoken input you expect in production.
-For pronunciation entries see the description of the [Universal Phone Set](phone-sets.md).
+Like plain text, training with structured text typically takes a few minutes. Also, your example sentences and lists should reflect the type of spoken input that you expect in production. For pronunciation entries, see the description of the [Universal Phone Set](phone-sets.md).
-The table below specifies the limits and other properties for the markdown format:
+The following table specifies the limits and other properties for the Markdown format:
| Property | Value | |-|-|
The table below specifies the limits and other properties for the markdown forma
| Maximum number of example sentences | 50,000 | | Maximum number of list classes | 10 | | Maximum number of items in a list class | 4,000 |
-| Maximum number of speech:phoneticlexicon entries | 15000 |
+| Maximum number of `speech:phoneticlexicon` entries | 15,000 |
| Maximum number of pronunciations per word | 2 | ## Pronunciation data for training
-If there are uncommon terms without standard pronunciations that your users will encounter or use, you can provide a custom pronunciation file to improve recognition. For a list of languages that support custom pronunciation,
-see **Pronunciation** in the **Customizations** column in [the Speech-to-text table](language-support.md#speech-to-text).
+If there are uncommon terms without standard pronunciations that your users will encounter or use, you can provide a custom pronunciation file to improve recognition. For a list of languages that support custom pronunciation, see **Pronunciation** in the **Customizations** column in [the Speech-to-text table](language-support.md#speech-to-text).
> [!IMPORTANT]
-> It is not recommended to use custom pronunciation files to alter the pronunciation of common words.
+> We don't recommend that you use custom pronunciation files to alter the pronunciation of common words.
> [!NOTE]
-> You cannot combine this type of pronunciation file with structured text training data. For structured text data use the phonetic pronunciation capability that is included in the structured text markdown format.
+> You can't combine this type of pronunciation file with structured-text training data. For structured-text data, use the phonetic pronunciation capability that's included in the structured-text Markdown format.
-Provide pronunciations in a single text file. This includes examples of a spoken utterance, and a custom pronunciation for each:
+Provide pronunciations in a single text file. This file includes examples of a spoken utterance and a custom pronunciation for each:
| Recognized/displayed form | Spoken form | |--|--|
Provide pronunciations in a single text file. This includes examples of a spoken
| CNTK | c n t k | | IEEE | i triple e |
-The spoken form is the phonetic sequence spelled out. It can be composed of letter, words, syllables, or a combination of all three.
+The spoken form is the phonetic sequence spelled out. It can be composed of letters, words, syllables, or a combination of all three.
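For illustration only, the following Python sketch writes such a file, assuming one tab-separated `displayed form` / `spoken form` pair per line and the UTF-8 BOM encoding listed in the next table; confirm the exact layout against the formatting requirements below:

```python
# Illustrative only: write a pronunciation file with one "displayed form<TAB>spoken form"
# entry per line. The tab-separated layout is an assumption based on the examples above.
pronunciations = {
    "CNTK": "c n t k",
    "IEEE": "i triple e",
}

# "utf-8-sig" writes a UTF-8 byte order mark (BOM), as the formatting table requires.
with open("pronunciation.txt", "w", encoding="utf-8-sig") as f:
    for displayed, spoken in pronunciations.items():
        f.write(f"{displayed}\t{spoken}\n")
```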
-Use the following table to ensure that your related data file for pronunciations is correctly formatted. Pronunciation files are small, and should only be a few kilobytes in size.
+Use the following table to ensure that your related data file for pronunciations is correctly formatted. Pronunciation files are small and should be only a few kilobytes in size.
| Property | Value | |-|-| | Text encoding | UTF-8 BOM (ANSI is also supported for English) |
-| # of pronunciations per line | 1 |
+| Number of pronunciations per line | 1 |
| Maximum file size | 1 MB (1 KB for free tier) | ## Audio data for testing
-Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind, audio data is used to inspect the accuracy of speech with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-trainingtesting).
+Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind that audio data is used to inspect the accuracy of speech with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
Custom Speech requires audio files with these properties:
Custom Speech requires audio files with these properties:
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)] > [!NOTE]
-> When uploading training and testing data, the .zip file size cannot exceed 2 GB. If you require more data for training, divide it into several .zip files and upload them separately. Later, you can choose to train from *multiple* datasets. However, you can only test from a *single* dataset.
+> When you're uploading training and testing data, the .zip file size can't exceed 2 GB. If you require more data for training, divide it into several .zip files and upload them separately. Later, you can choose to train from *multiple* datasets. However, you can test from only a *single* dataset.
-Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a> to verify audio properties or convert existing audio to the appropriate formats. Below are some example SoX commands:
+Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a> to verify audio properties or convert existing audio to the appropriate formats. Here are some example SoX commands:
| Activity | SoX command | ||-|
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
* [Inspect your data](how-to-custom-speech-inspect-data.md) * [Evaluate your data](how-to-custom-speech-evaluate-data.md)
-* [Train custom model](how-to-custom-speech-train-model.md)
-* [Deploy model](./how-to-custom-speech-train-model.md)
+* [Train a custom model](how-to-custom-speech-train-model.md)
+* [Deploy a model](./how-to-custom-speech-train-model.md)
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Data files are automatically validated once you hit the **Submit** button. Data
Once the data is uploaded, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data files. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
-A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 50+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice.
+A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 35+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice.
Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your data.
The second type of errors listed in the table below will be automatically fixed,
| Category | Name | Description | | | -- | |
-| Audio | Stereo audio auto fixed | Use mono in your audio sample recordings. Stereo audio channels are automatically merged into a mono channel, which can cause content loss. Download the normalized dataset and review it.|
-| Volume | Volume peak auto fixed |The volume peak should be within the range of -3 dB (70% of max volume) to -6 dB (50%). Control the volume peak during the sample recording or data preparation. This audio is linearly scaled to fit the peak range automatically (-4 dB or 65%). Download the normalized dataset and review it.|
-|Mismatch | Silence auto fixed| The start silence is detected to be longer than 200 ms, and has been trimmed to 200 ms automatically. Download the normalized dataset and review it. |
-| Mismatch |Silence auto fixed | The end silence is detected to be longer than 200 ms, and has been trimmed to 200 ms automatically. Download the normalized dataset and review it. |
| Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. | | Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
cognitive-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md
# Migrate from custom voice to custom neural voice
> [!IMPORTANT]
-> We are retiring the standard/non-neural training tier of custom voice on **February 29, 2024**. During the retiring period (3/1/2021 - 2/29/2024), existing standard tier users can continue to use their non-neural models created, but all new users/new speech resources should move to the neural tier/custom neural voice. After 2/29/2024, all standard/non-neural custom voices will no longer be supported.
+> We are retiring the standard/non-neural training tier of custom voice on February 29, 2024. During the retiring period (3/1/2021 - 2/29/2024), existing standard tier users can continue to use their non-neural models created, but all new users who sign up for speech resources from **3/1/2021** should move to the neural tier/custom neural voice. After 2/29/2024, all standard/non-neural custom voices will no longer be supported.
The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users will benefit from the latest Text-to-Speech technology, in a responsible way.
cognitive-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md
# Migrate from prebuilt standard voice to prebuilt neural voice
> [!IMPORTANT]
-> We are retiring the standard voices on **August 31, 2024** and they will no longer be supported after that date. During the retiring period (8/31/2021 - 8/31/2024), existing standard voice users can continue to use standard voices, but all new users/new speech resources must choose neural voices.
+> We are retiring the standard voices on August 31, 2024. During the retiring period (9/1/2021 - 8/31/2024), existing standard voice users can continue to use standard voices, but all new users who sign up for speech resources from **9/1/2021** should choose [neural voice names](language-support.md#prebuilt-neural-voices) in their speech synthesis requests. After 8/31/2024, the standard voices will no longer be supported in speech synthesis requests.
The prebuilt neural voice provides more natural sounding speech output, and thus, a better end-user experience.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
The table below lists the prebuilt neural voices supported in each language. You
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyouNeural` | Child voice, optimized for story narrating | | Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunxiNeural` | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyangNeural` | Optimized for news reading,<br /> multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyeNeural` | Optimized for story narrating,<br /> multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyeNeural` | Optimized for story narrating, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoChenNeural` | General | | Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoYuNeural` | General | | Chinese (Taiwanese Mandarin) | `zh-TW` | Male | `zh-TW-YunJheNeural` | General |
With the cross-lingual feature (preview), you can transfer your custom neural voi
| Language | Locale | Cross-lingual (preview) | |--|--|--| | Arabic (Egypt) | `ar-EG` | No |
+| Arabic (Saudi Arabia) | `ar-SA` | No |
| Bulgarian (Bulgaria) | `bg-BG` | No |
+| Catalan (Spain) | `ca-ES` | No |
+| Chinese (Cantonese, Traditional) | `zh-HK` | No |
| Chinese (Mandarin, Simplified) | `zh-CN` | Yes | | Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes | | Chinese (Taiwanese Mandarin) | `zh-TW` | No |
+| Croatian (Croatia) | `hr-HR` | No |
| Czech (Czech) | `cs-CZ` | No |
+| Danish (Denmark) | `da-DK` | No |
| Dutch (Netherlands) | `nl-NL` | No | | English (Australia) | `en-AU` | Yes | | English (Canada) | `en-CA` | No |
With the cross-lingual feature (preview), you can transfer your custom neural voi
| English (Ireland) | `en-IE` | No | | English (United Kingdom) | `en-GB` | Yes | | English (United States) | `en-US` | Yes |
+| Finnish (Finland) | `fi-FI` | No |
| French (Canada) | `fr-CA` | Yes | | French (France) | `fr-FR` | Yes |
+| French (Switzerland) | `fr-CH` | No |
| German (Austria) | `de-AT` | No | | German (Germany) | `de-DE` | Yes |
+| German (Switzerland) | `de-CH` | No |
+| Greek (Greece) | `el-GR` | No |
+| Hebrew (Israel) | `he-IL` | No |
+| Hindi (India) | `hi-IN` | No |
| Hungarian (Hungary) | `hu-HU` | No |
+| Indonesian (Indonesia) | `id-ID` | No |
| Italian (Italy) | `it-IT` | Yes | | Japanese (Japan) | `ja-JP` | Yes | | Korean (Korea) | `ko-KR` | Yes |
+| Malay (Malaysia) | `ms-MY` | No |
| Norwegian (Bokmål, Norway) | `nb-NO` | No |
+| Polish (Poland) | `pl-PL` | No |
| Portuguese (Brazil) | `pt-BR` | Yes | | Portuguese (Portugal) | `pt-PT` | No |
+| Romanian (Romania) | `ro-RO` | No |
| Russian (Russia) | `ru-RU` | Yes | | Slovak (Slovakia) | `sk-SK` | No |
+| Slovenian (Slovenia) | `sl-SI` | No |
| Spanish (Mexico) | `es-MX` | Yes | | Spanish (Spain) | `es-ES` | Yes |
+| Swedish (Sweden) | `sv-SE` | No |
+| Tamil (India) | `ta-IN` | No |
+| Telugu (India) | `te-IN` | No |
+| Thai (Thailand) | `th-TH` | No |
| Turkish (Turkey) | `tr-TR` | No |
| Vietnamese (Vietnam) | `vi-VN` | No |

## Language identification

With language identification, you set and get one of the supported locales below. But we only compare at the language level, such as English and German. If you include multiple locales of the same language (for example, `en-IN` and `en-US`), we'll only compare English (`en`) with the other candidate languages.
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/long-audio-api.md
# Long Audio API
-The Long Audio API provides asynchronous synthesis of long-form text to speech (for example: audio books, news articles and documents). This API doesn't return synthesized audio in real time. Instead, you poll for the response(s) and consume the output(s) as the service makes them available. Unlike the Text-to-speech API used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes. This makes it ideal for publishers and audio content platforms to create long audio content like audio books in a batch.
-
-More benefits of the Long Audio API:
-
-* Synthesized speech returned by the service uses the best neural voices.
-* There's no need to deploy a voice endpoint.
-
-> [!NOTE]
-> The Long Audio API supports both [Public Neural Voices](./language-support.md#prebuilt-neural-voices) and [Custom Neural Voices](./how-to-custom-voice.md).
+The Long Audio API provides asynchronous synthesis of long-form text to speech. For example: audio books, news articles and documents. There's no need to deploy a custom voice endpoint. Unlike the Text-to-speech API used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes. This makes it ideal for publishers and audio content platforms to create long audio content like audio books in a batch.
## Workflow
-When using the Long Audio API, you'll typically submit a text file or files to be synthesized, poll for the status, and download the audio output when the status indicates success.
+The Long Audio API doesn't return synthesized audio in real time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success.
This diagram provides a high-level overview of the workflow.
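A minimal polling sketch of that workflow follows, assuming a synthesis has already been submitted; the status values and the `/files` suffix follow the v3.0 API as we understand it, and the key, region, and IDs are placeholders:

```python
import time
import requests

subscription_key = "<your-speech-key>"
synthesis_url = (
    "https://<region>.customvoice.api.speech.microsoft.com"
    "/api/texttospeech/v3.0/longaudiosynthesis/<synthesis-id>"
)
headers = {"Ocp-Apim-Subscription-Key": subscription_key}

# Poll until the synthesis reaches a terminal status.
while True:
    status = requests.get(synthesis_url, headers=headers).json()["status"]
    if status in ("Succeeded", "Failed"):
        break
    time.sleep(30)

# On success, list the output files so the generated audio can be downloaded.
if status == "Succeeded":
    files = requests.get(f"{synthesis_url}/files", headers=headers).json()
    print(files)
```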
These libraries are used to construct the HTTP request, and call the text-to-spe
### Get a list of supported voices
+The Long Audio API supports a subset of [Public Neural Voices](./language-support.md#prebuilt-neural-voices) and [Custom Neural Voices](./language-support.md#custom-neural-voice).
+ To get a list of supported voices, send a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`. This code gets a full list of voices you can use at a specific region/endpoint.
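For example, here's a minimal sketch of that request using the `requests` library; the endpoint comes from the region table below, the key is a placeholder, and the voice list is assumed to be returned under a `values` field:

```python
import requests

endpoint = "https://eastus.customvoice.api.speech.microsoft.com"
url = f"{endpoint}/api/texttospeech/v3.0/longaudiosynthesis/voices"
headers = {"Ocp-Apim-Subscription-Key": "<your-speech-key>"}

response = requests.get(url, headers=headers)
response.raise_for_status()

# Print each voice description returned by the service.
for voice in response.json().get("values", []):
    print(voice)
```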
The Long audio API is available in multiple regions with unique endpoints.
| Region | Endpoint | |--|-|
+| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
| East US | `https://eastus.customvoice.api.speech.microsoft.com` | | India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
+| South Central US | `https://southcentralus.customvoice.api.speech.microsoft.com` |
| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` | | UK South | `https://uksouth.customvoice.api.speech.microsoft.com` | | West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
cognitive-services Migration Overview Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/migration-overview-neural-voice.md
We're retiring two features from [Text-to-Speech](index-text-to-speech.yml) capa
## Custom voice (non-neural training) > [!IMPORTANT]
-> We are retiring the standard/non-neural training tier of custom voice on **February 29, 2024**. During the retiring period (3/1/2021 - 2/29/2024), existing standard tier users can continue to use their non-neural models created, but all new users/new speech resources should move to the neural tier/custom neural voice. After 2/29/2024, all standard/non-neural custom voices will no longer be supported.
+> We are retiring the standard/non-neural training tier of custom voice on February 29, 2024. During the retiring period (3/1/2021 - 2/29/2024), existing standard tier users can continue to use their non-neural models created, but all new users who sign up for speech resources from **3/1/2021** should move to the neural tier/custom neural voice. After 2/29/2024, all standard/non-neural custom voices will no longer be supported.
Go to [this article](how-to-migrate-to-custom-neural-voice.md) to learn how to migrate to custom neural voice.
Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-s
## Prebuilt standard voice > [!IMPORTANT]
-> We are retiring the standard voices on **August 31, 2024** and they will no longer be supported after that date. During the retiring period (8/31/2021 - 8/31/2024), existing standard voice users can continue to use standard voices, but all new users/new speech resources must choose neural voices.
+> We are retiring the standard voices on August 31, 2024. During the retiring period (9/1/2021 - 8/31/2024), existing standard voice users can continue to use standard voices, but all new users who sign up for speech resources from **9/1/2021** should choose [neural voice names](language-support.md#prebuilt-neural-voices) in their speech synthesis requests. After 8/31/2024, the standard voices will no longer be supported in speech synthesis requests.
Go to [this article](how-to-migrate-to-prebuilt-neural-voice.md) to learn how to migrate to prebuilt neural voice.
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
A persona might have, for example, a naturally upbeat personality. So "their" vo
## Create a script
-The starting point of any custom neural voice recording session is the script, which contains the utterances to be spoken by your voice talent. (The term "utterances" encompasses both full sentences and shorter phrases.)
+The starting point of any custom neural voice recording session is the script, which contains the utterances to be spoken by your voice talent. The term "utterances" encompasses both full sentences and shorter phrases. Building a custom neural voice requires at least 300 recorded utterances as training data.
The utterances in your script can come from anywhere: fiction, non-fiction, transcripts of speeches, news reports, and anything else available in printed form. If you want to make sure your voice does well on specific kinds of words (such as medical terminology or programming jargon), you might want to include sentences from scholarly papers or technical documents. For a brief discussion of potential legal issues, see the ["Legalities"](#legalities) section. You can also write your own text.
Your utterances don't need to come from the same source, or the same kind of sou
We recommend the recording scripts include both general sentences and your domain-specific sentences. For example, if you plan to record 2,000 sentences, 1,000 of them could be general sentences, another 1,000 of them could be sentences from your target domain or the use case of your application.
-We provide [sample scripts in the 'General', 'Chat' and 'Customer Service' domains for each language](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) to help you prepare your recording scripts. You can use these Microsoft shared scripts for your recordings directly or use them as a reference to create your own. Building a custom neural voice requires at least 300 recorded sentences as training data.
+We provide [sample scripts in the 'General', 'Chat' and 'Customer Service' domains for each language](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) to help you prepare your recording scripts. You can use these Microsoft shared scripts for your recordings directly or use them as a reference to create your own.
You can select your domain-specific scripts from the sentences that your custom neural voice will be used to read.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
# Speech Service release notes +
+## OnPrem Speech 2022-Jan release
+
+### Speech-to-text Container v2.18.0
+- Regular monthly updates (including security upgrades and vulnerability fixes).
+
+### Neural-text-to-speech Container v1.12.0
+- Support for new locales and voices: `am-et-amehaneural`, `am-et-mekdesneural`, `so-so-muuseneural`, and `so-so-ubaxneural`.
+- Regular monthly updates (including security upgrades and vulnerability fixes).
++ ## Speech SDK 1.19.0: 2021-Nov release
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
Speech containers enable customers to build a speech application architecture th
| Container | Features | Latest | Release status | |--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 2.17.0 | Generally Available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 2.18.0 | Generally Available |
| Custom Speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 2.17.0 | Generally Available | | Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.15.0 | Generally Available | | Speech Language Identification | Detect the language spoken in audio files. | 1.5.0 | preview |
-| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.11.0 | Generally Available |
+| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.12.0 | Generally Available |
## Prerequisites
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The intensity of speaking style can be further changed to better fit your use ca
Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice will imitate a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed. Currently, role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices: * `zh-CN-XiaomoNeural` * `zh-CN-XiaoxuanNeural`
+* `zh-CN-YunxiNeural`
+* `zh-CN-YunyeNeural`
Above changes are applied at the sentence level, and styles and role-plays vary by voice. If a style or role-play isn't supported, the service will return speech in the default neutral speaking way. You can see what styles and roles are supported for each voice through the [voice list API](rest-text-to-speech.md#get-a-list-of-voices) or through the code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) platform.
Above changes are applied at the sentence level, and styles and role-plays vary
<mstts:express-as role="string" style="string"></mstts:express-as> ``` > [!NOTE]
-> At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices. `role` only supports zh-CN-XiaomoNeural and zh-CN-XiaoxuanNeural.
+> At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices. `role` only supports zh-CN-XiaomoNeural, zh-CN-XiaoxuanNeural, zh-CN-YunxiNeural, and zh-CN-YunyeNeural.
**Attributes**
Above changes are applied at the sentence level, and styles and role-plays vary
|--|-|| | `style` | Specifies the speaking style. Currently, speaking styles are voice-specific. | Required if adjusting the speaking style for a neural voice. If using `mstts:express-as`, then style must be provided. If an invalid value is provided, this element will be ignored. | | `styledegree` | Specifies the intensity of speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional (At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices.)|
-| `role` | Specifies the speaking role-play. The voice will act as a different age and gender, but the voice name won't be changed. | Optional (At the moment, `role` only supports zh-CN-XiaomoNeural and zh-CN-XiaoxuanNeural.)|
+| `role` | Specifies the speaking role-play. The voice will act as a different age and gender, but the voice name won't be changed. | Optional (At the moment, `role` only supports zh-CN-XiaomoNeural, zh-CN-XiaoxuanNeural, zh-CN-YunxiNeural, and zh-CN-YunyeNeural.)|
Use this table to determine which speaking styles are supported for each neural voice.
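To make the attributes concrete, here's a minimal sketch that assembles an SSML payload using `express-as` with a role, style, and style degree. The specific role and style values are illustrative, so confirm what a given voice supports through the voice list API or Audio Content Creation before using them:

```python
# Build an SSML document that applies a speaking style, style intensity,
# and role-play to a supported zh-CN neural voice.
ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
  <voice name="zh-CN-XiaomoNeural">
    <mstts:express-as role="Girl" style="calm" styledegree="1.5">
      你好，欢迎使用语音服务。
    </mstts:express-as>
  </voice>
</speak>"""

# The payload can then be submitted to the service, for example with the
# Speech SDK's speak_ssml_async method or the text-to-speech REST endpoint.
print(ssml)
```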
Use the `break` element to insert pauses (or breaks) between words, or prevent p
| Attribute | Description | Required / Optional | |--|-|| | `strength` | Specifies the relative duration of a pause using one of the following values:<ul><li>none</li><li>x-weak</li><li>weak</li><li>medium (default)</li><li>strong</li><li>x-strong</li></ul> | Optional |
-| `time` | Specifies the absolute duration of a pause in seconds or milliseconds,this value should be set less than 5000 ms. Examples of valid values are `2s` and `500ms` | Optional |
+| `time` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set less than 5000 ms. Examples of valid values are `2s` and `500ms` | Optional |
| Strength | Description | |-|-|
Use the `mstts:silence` element to insert pauses before or after text, or betwee
| Attribute | Description | Required / Optional | |--|-|| | `type` | Specifies where silence is added: <ul><li>`Leading` – at the beginning of text </li><li>`Tailing` – at the end of text </li><li>`Sentenceboundary` – between adjacent sentences </li></ul> | Required |
-| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds,this value should be set less than 5000 ms. Examples of valid values are `2s` and `500ms` | Required |
+| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set less than 5000 ms. Examples of valid values are `2s` and `500ms` | Required |
**Example** In this example, `mstts:silence` is used to add 200 ms of silence between two sentences.
The `lexicon` element contains at least one `lexeme` element. Each `lexeme` elem
The lexicon contains the necessary `xml:lang` attribute to indicate which locale it should be applied for. One custom lexicon is limited to one locale by design, so applying it to a different locale won't work.
-It's important to note, that you cannot directly set the pronunciation of a phrase using the custom lexicon. If you need to set the pronunciation for an acronym or an abbreviated term, first provide an `alias`, then associate the `phoneme` with that `alias`. For example:
+It's important to note that you can't directly set the pronunciation of a phrase using the custom lexicon. If you need to set the pronunciation for an acronym or an abbreviated term, first provide an `alias`, then associate the `phoneme` with that `alias`. For example:
```xml <lexeme>
Since it's easy to make mistakes in custom lexicon, Microsoft has provided [vali
**Speech service phonetic sets**
-In the sample above, we're using the International Phonetic Alphabet, also known as the IPA phone set. We suggest developers use the IPA, because it is the international standard. For some IPA characters, they have the 'precomposed' and 'decomposed' version when being represented with Unicode. Custom lexicon only supports the decomposed Unicode.
+In the sample above, we're using the International Phonetic Alphabet, also known as the IPA phone set. We suggest developers use the IPA, because it's the international standard. Some IPA characters have 'precomposed' and 'decomposed' versions when they're represented in Unicode. Custom lexicon only supports the decomposed Unicode.
Considering that the IPA isn't easy to remember, the Speech service defines a phonetic set for seven languages (`en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`).
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
-Release note for `2.17.0-amd64`:
+Release note for `2.18.0-amd64`:
-**Features**
-* Upgrade to latest `media`, `text` component.
-
-**Fixes**
-* Security patches
+Regular monthly upgrade
Note that due to the phrase lists feature, the size of this container image has increased. | Image Tags | Notes | Digest | |-|:|:-|
-| `latest` | | `sha256:46634652014527f11fb0ba2edf60ac7151c10a2fd05d167ab9e0b81862601c36`|
-| `2.17.0-amd64` | | `sha256:46634652014527f11fb0ba2edf60ac7151c10a2fd05d167ab9e0b81862601c36`|
+| `latest` | | `sha256:c9ef9b95effe2be170d245c1b380262076224a21e859cd648e9dbd4146ddbdaf`|
+| `2.18.0-amd64` | | `sha256:c9ef9b95effe2be170d245c1b380262076224a21e859cd648e9dbd4146ddbdaf`|
# [Previous version](#tab/previous)
+Release note for `2.17.0-amd64`:
+
+**Features**
+* Upgrade to latest `media`, `text` component.
+
+**Fixes**
+* Security patches
+ Release note for `2.16.0-amd64`: Regular monthly upgrade
Release note for `2.5.0-amd64`:
| Image Tags | Notes | |-|:--|
+| `2.17.0-amd64` | |
| `2.16.0-amd64` | | | `2.15.0-amd64` | | | `2.14.0-amd64` | |
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current)
-Release note for `2.17.0-amd64-<locale>`:
-
-**Features**
-* Upgrade to latest `media`, `text` components.
-* Support `de-AT`, `de-CH` locales.
-
-**Fixes**
-* Upgrade security patches.
+Release note for `2.18.0-amd64-<locale>`:
+Regular monthly release
Note that due to the phrase lists feature, the size of this container image has increased. | Image Tags | Notes | |-|:--| | `latest` | Container image with the `en-US` locale. |
-| `2.17.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.17.0-amd64-en-us`.|
+| `2.18.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.18.0-amd64-en-us`.|
This container has the following locales available.
-| Locale for v2.17.0 | Notes | Digest |
+| Locale for v2.18.0 | Notes | Digest |
|--|:--|:--|
-| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
-| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:da8805f4f64844f140e7a72b2adf367d45b2435e2dc1cd579a1adb2ec77a8df2` |
-| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:d5a8652c680097c54668e6b16b01248be523d756ad859c9449931adee95df9d7` |
-| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:19a19894bb9a1c1b28e8bb7234e19757a1f870f4032ad50f44a477fc2b452ada` |
-| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:2279655c98bbf09f221212fbe6876bad5662ccdc55be069975a23512f4a3d55c` |
-| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
-| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:f1b0e5083e71f5c2f56f841b6db399f50f44c154126e3316e8e6e73b6b895c97` |
-| `ar-om` | Container image with the `ar-OM` locale. | `sha256:8af7ce49be6d3839ac0e1ce4f1f45d4361fbbcbffa66081b0e7c6824dfa7c1a0` |
-| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
-| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
-| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:de0521c0728468540699387e4152887c2a0a43ba37e9c144256a5c541a2f1d7e` |
-| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:533272cf3c8920b002461e8cdb51fea9a6318aed41c9b77d0cbcfce3bfd7d077` |
-| `ca-es` | Container image with the `ca-ES` locale. | `sha256:10af2a3eb4f8cfe88925a512165c3fb555681b9a89d3db9d67fed02a33809063` |
-| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:95cf93202ae922318862ba585b38a989b4fc83e4d335f2c3916be4564df0d215` |
-| `da-dk` | Container image with the `da-DK` locale. | `sha256:fa66a446e0575fa1c89f52b06e16279fee0fe4f0d61b1e18a0dcebc8a866ddf6` |
-| `de-at` | Container image with the `de-AT` locale. | `sha256:e0d8a74ebf48981999306e6cc9f99dfb9fa3fa16cc12aa5086e9720639ce9f52` |
-| `de-ch` | Container image with the `de-CH` locale. | `sha256:ab58cb7bbe5a5a78a7459b690c95f036d1b4703610f563f5854334f7332d5fca` |
-| `de-de` | Container image with the `de-DE` locale. | `sha256:abbbf003661da23eb6bc2133d3585ffe58af3a9d3167b7eece656d0007bc65d2` |
-| `el-gr` | Container image with the `el-GR` locale. | `sha256:01311455b2425e41031368691de73e28c3c08de0486e50f4801ade584af27c2d` |
-| `en-au` | Container image with the `en-AU` locale. | `sha256:86c84a560a23b5bfcbadae8dee62805f897520b7d3ac6969d80e3eb88141d7ef` |
-| `en-ca` | Container image with the `en-CA` locale. | `sha256:0f5912fa924212aca1522f6a27970778b0c22d943a8b2c9e9410df243ad62ff7` |
-| `en-gb` | Container image with the `en-GB` locale. | `sha256:a5f3efff449bb9e43fafc9feafe0b31f11070c8f2bb9c60e34892b0765fbf0c5` |
-| `en-hk` | Container image with the `en-HK` locale. | `sha256:57274ea44bd9dd34afc845e4dcdadf61b33681b0e4e5dba1f3c0e13511b40fe8` |
-| `en-ie` | Container image with the `en-IE` locale. | `sha256:f4406c366940ef5185aedf42bfdacc1246ef636aebb8ad5b5a6bc521589f528c` |
-| `en-in` | Container image with the `en-IN` locale. | `sha256:9b6529181e7fe12ca00222c6164342b32ff637e4f394240ff2194489c56408df` |
-| `en-nz` | Container image with the `en-NZ` locale. | `sha256:10978a40cc3b7101517f35215c663ddec69679d5650ba827476060da8b84812d` |
-| `en-ph` | Container image with the `en-PH` locale. | `sha256:a76b360f883ee746f77a650f285244d0424be9d6b3990a0c8075ec475f6e77d3` |
-| `en-sg` | Container image with the `en-SG` locale. | `sha256:336f939f5db41312e6bfeda5419570db456df05c09d01fc98d7bc1967e4a8c3f` |
-| `en-us` | Container image with the `en-US` locale. | `sha256:e8cec9044fd1f7958b2188dfe818311932fe0560d0f1d56aab684bec42a05723` |
-| `en-za` | Container image with the `en-ZA` locale. | `sha256:b80e9e6349e057dae1c60a0b718cc2bb9e6db01681b1910afb5c48525aaf99a2` |
-| `es-ar` | Container image with the `es-AR` locale. | `sha256:b9b9bc3cd87c5524b9630c29ab790a5e10e725237c4a482ba09673b3e98cb7f6` |
-| `es-bo` | Container image with the `es-BO` locale. | `sha256:849f9f3a4ad8b1266b07837afcd9cbd5815bef2473bb8b3726b1cfaec75c8a62` |
-| `es-cl` | Container image with the `es-CL` locale. | `sha256:dc09280f0d7df607e363f137ddc6ad333d0b7628492296eece9d3717d60ea339` |
-| `es-co` | Container image with the `es-CO` locale. | `sha256:270de8bfbae05c984f164d0c4e15c096459f41bf8f1aeb5cb18c1b7d20419bf3` |
-| `es-cr` | Container image with the `es-CR` locale. | `sha256:e6c7e8ded3c75c19ce0f94db2d468b1d48548eb9b9827a67a9995e4820b6ae58` |
-| `es-cu` | Container image with the `es-CU` locale. | `sha256:56510d176425bc3ba329ac0cf9520ee5a041370777320faf8081288c89c83c14` |
-| `es-do` | Container image with the `es-DO` locale. | `sha256:af3c853b766eb01a7ec51660ceb179040ac007c7b85f07c68a32adcdc3b280f1` |
-| `es-ec` | Container image with the `es-EC` locale. | `sha256:cf05048a4762dabc23dc44bbb5c59d26cef5946658d653da4e2965d5331ea497` |
-| `es-es` | Container image with the `es-ES` locale. | `sha256:9575b02e64c47e4b4253a90dbc8cc3ebc393fd1a3a1b5680d3eff6efd76b1f3f` |
-| `es-gt` | Container image with the `es-GT` locale. | `sha256:86775303621fc1a80761cefba4aae5aa769a998d96cf61e54490d4aa59edae6c` |
-| `es-hn` | Container image with the `es-HN` locale. | `sha256:a8246e041c1a10338397c8ce9ba1389b0ee517bb8c0ec8e6fd1579c10704529b` |
-| `es-mx` | Container image with the `es-MX` locale. | `sha256:d709a021dd398fd2bb77f0fa5478646642774b5b91f25699001ee2d7ee7c9e4b` |
-| `es-ni` | Container image with the `es-NI` locale. | `sha256:e532978cf0d7d696016d3b5353543b6f9f0f4bfbd41669403a5e69e144c67259` |
-| `es-pa` | Container image with the `es-PA` locale. | `sha256:3f45b4169cb131bfa81820d0c08a80f27ecd54e7a56755a2d9db7da358fb9f27` |
-| `es-pe` | Container image with the `es-PE` locale. | `sha256:2b9eddd484a3262002dc92e07603cb161e254bc5460ecfdccb6f0db91c48c5a9` |
-| `es-pr` | Container image with the `es-PR` locale. | `sha256:2b95ff684b8c60e91531bf7c19f4ca71a69ede37d1d06262cf90368cc6b1ff9a` |
-| `es-py` | Container image with the `es-PY` locale. | `sha256:b03ac0b66f5af771cf3d4e3bb61efd674df0ef2fd0934f77467d163472308805` |
-| `es-sv` | Container image with the `es-SV` locale. | `sha256:e5bc6399ef63b07e6e154e438b25a58622beb1a2f90e31e043ae2720dfe1daaa` |
-| `es-us` | Container image with the `es-US` locale. | `sha256:21de294ee17c097d7624ad679c24715ec93aa46e0b982afcbf2d6defd4177ff6` |
-| `es-uy` | Container image with the `es-UY` locale. | `sha256:d4827464db58661c57f7ba981b03e493302d9b51a89f41cf7ca633a4e7f69f6a` |
-| `es-ve` | Container image with the `es-VE` locale. | `sha256:b56baf94cedc5d50e2cf3846d995f63f36473c4146008f50b31b9b747c8e4c45` |
-| `et-ee` | Container image with the `et-EE` locale. | `sha256:b2f45988b0d077f4f7279a58353d3179fac181ad5cdc3848667fa25d7d96e4f0` |
-| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:c13210ddfc885f359dfc020f7a1c39773ee62db15617ef472ac10d62f8829904` |
-| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:2a6f9e5afcac65f9030dbffbd22da428a6633f7dde8386eff674961dc61fd040` |
-| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:11f4ac68104d3853558bf7a7d08871cbec4ab1194281eda80906b256e8b82f18` |
-| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:c07dfc10ed61e3a35142d0ea26c7b87adeeecfb95a5a3a16ac44023b3df03e0a` |
-| `gu-in` | Container image with the `gu-IN` locale. | `sha256:76b9c92a1e77513681249e71c7194e94067ff3af97ad35ca2ac0d476dfb2b744` |
-| `hi-in` | Container image with the `hi-IN` locale. | `sha256:f45c3de121f2b922c413468bfa9214a5dedd3e34049fbe2d77f50cba7ef8fcbd` |
-| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:b9f01a38f65884b9f690394c5e06f7d4ce02067c18d41dfbe9bcdc7c36b85373` |
-| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:f5da4f23d87d41438220822cd26be08376cd2c2920560fcee4e892673227c90c` |
-| `it-it` | Container image with the `it-IT` locale. | `sha256:93cda63ee7583bee565fd8a49d2bd331ac9033c5111858185aec916d50af988c` |
-| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:d9a3e69997cbbd490e471f09048bf78aea828323eb49702ed51c60d5af98b40e` |
-| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:a8978adf5b51b0daeac13e605d1930bc06be5cfb44159ec759d1fa460ebac9cd` |
-| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:a69cc1b076321b951d2746e401bf17b18b720e0a7e27004f64e15bfa7ec4f68a` |
-| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:27e2d8eb315ff43406332479e20780576efb4319ac153129f695a09b092987d8` |
-| `mr-in` | Container image with the `mr-IN` locale. | `sha256:1e89166b7851bd7ed58804e4bf5cd23ab9bd522ea4ec178ddc46b5c47fd85fda` |
-| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:df02ebd3291d24b9f46ae9ef109c4c4713a6494dcf980581c70855f9a08abdbd` |
-| `nb-no` | Container image with the `nb-NO` locale. | `sha256:3aa31830f15fb90165169bac9bd23ffbdc5d3cca2eda6dc80bdefefeb9567fcf` |
-| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:532ccc97086b6afd99c83277d9c09ac6f94873f8f7556f407085e0aa2d50bc30` |
-| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:fc1d7af8419904e98ec41b789eb17a406f8097ed01b4a95776f2a208ff4636b1` |
-| `pt-br` | Container image with the `pt-BR` locale. | `sha256:37b45b5a8cbd8cdccde0005276a09fe4b3b8e04922499644e67fad1b480507a7` |
-| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:b5d9a3d6f343e6b20919f00dbe51b41e55034bf2a29407b344ba2352c7014741` |
-| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:0526a3e3bb31e0a929492fdb886f3f5ef6be61bc81dd58c64b773ca13ed27731` |
-| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:1db419aa78fdcfd78078361d2c547d962d22677e6aab4d44856334ea90e1dffa` |
-| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:baf36c0b1a4e50c586a71d9b266d1e528e194d6fbe7ab630521bda6fa4191974` |
-| `sl-si` | Container image with the `sl-SI` locale. | `sha256:e5a119c4f7fae31d6dfb988d1930bfcea0d0c5d44b0fd8486f777041c6051f97` |
-| `sv-se` | Container image with the `sv-SE` locale. | `sha256:1dce2cac02b6100f87173dbad9128bf2a7c9bd214033b01b9e50fb377a837ea5` |
-| `ta-in` | Container image with the `ta-IN` locale. | `sha256:091876dbf3a8b40d6737f3d6d8717868e9fcd1bafdd2a590598769a53dc25d5a` |
-| `te-in` | Container image with the `te-IN` locale. | `sha256:b44f16adb5124ea16cabc640079ffaf26d654a86cc25b5d98663b6e139e9e653` |
-| `th-th` | Container image with the `th-TH` locale. | `sha256:e315b48f2f76c11a9fa0f59a6ed5fc346014be7adc23fbf0a2719eea6a9abe9d` |
-| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:77a345f63a3c357b072052cdd38604a0f3bfa885e3b3058b77f37e46c80109d9` |
-| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:91400eb20a9d4a49cfa587e5d9ac15f3714239d8189bf6ec55f04caa1d0fadd9` |
-| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:3f1a782e85ba536d7761a4addc315915464bd949a9298fc7c93ee17ff5b15994` |
-| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:f0088423fbaaac0cf20e1f747b1bb17adc58214e45cda4f571985d86a93565ad` |
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:16e6f169cf2ea025fc7d21c805a4a452e12b8d7b9530c8e9fc54ae68ee4f08dd` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:05dd5bc85de5567809259339aa213fc802b38924d025dc1786600e663bfd4996` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:94973685069d212c19d67d9c0c8eb3f0124e08ff82807e976b59578f1bd67e97` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:0dd7f1985b8544136bb1049d1b40d7c5858551f81721181a2e34fd1f9cb68e5b` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:9879fce4158fb8af2457eb6503607f78b7aade76eb4146c1ee7c142e7f9a21d4` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:0b1cd0c810cabad4217833d44b91479cd416d375e7ea43f2d14645f7bf859aa6` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:7b206ca47a9004866857ad8b9c9ea824bd128089a8bdb374e6da565b0ea30f05` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:a560c4e58476dcd9e5044f81da766a350b3b3464faaa6c93741a094c4afb621c` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:405cb4f74d10d5ff50efe9161b5cf21204d51c74b83766ea31ec2b8a878de495` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:87bde59f8fc441165f638a8665c480d259a3107b0edae5f022cb1b8f7e02a837` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:ee6773b88378e9a01a35804f965bec0531b01327630174b927320553f023b7e9` |
+| `de-at` | Container image with the `de-AT` locale. | `sha256:f66bee7e43c05c1e434e0218d57ad094d47ec7be39e90ede3eb48fc9398fb873` |
+| `de-ch` | Container image with the `de-CH` locale. | `sha256:adb77da42c2637c072850fb2b5b2b2e508dff79e1ccdc5111b8f635167e35cc1` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:7143c59231017104bab633a108b5605166355f78e9dde2e3a4ebe6ffe71faafb` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:4ce2fdeeaf53edc6811c079365e2aab56be75ea9abe3d94a6a96ca8dc0368573` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:e02827b1dcef490f792b04e7cd39eb7d46df4dbe57d340549b11193753136e76` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:f5411eccf7659b1cc2303e118ef1ef002a700dd1a7363688a224763a6d19b7fe` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:a87007b86fb1ca31b9a0368d01d7bfc4337b4262afb3356a88c752a29480f364` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:a6014d4cbfafd2d49453f3ff12ea82fe8abc1e14bae639a2e9361de85a095f34` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:aa6202c44028d4a8c608f04d6b66f473566d945012372182053d94dfc78eaa93` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:7ec9eaef19a2545e0a1afd70cb9707cf48029031e9f6b50cb6833045cbe66b29` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:48a95d03dc1200bfb56b1e3416dd1f94a0ad0227c0cf6c3c1730d862f2e99c15` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:ab220ea3063af44c0ee7f7b9805289302faea578a50f4da5790b587ea49d31bc` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:0f9cadefbe4d8236ef8e9c57b7473327541c1e37f53a2796f332bb2e190391f4` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:bb13765581c938cbdcdcdec16fbc86f098fcebeecd16f33a50d9e5728a9dedb7` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:096f4652fa8150cd1e2fa9b504cd2cce5bbb55b467ca9ba9f33d6b5c904fc51f` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:acccaa583aaedab78d6614ada897d948d1d36d994d2fcd7f6b7e6435fe0b224f` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:8d6631fefc679fe27366521a124d65dfa21c3e6b2a983f7da953e87d8711fad0` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:0cd131cc39c2fe1231b7442f43f81b5e7c5317b51f5c9d9306bfa38c6abee060` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:ef4dcdcbce5f0dadde35f52c4322084274312e7b4a1e7dd18d76f92471a0688a` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:8ee41457cf10efda1f3b126ae8dc21a1d5d2e966c9e3327a2134c597cfc16d89` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:d00af5e4c41c9a240b64029ea8035e5e0012f54eec970771e84cfc4b59ecc373` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:9905d776b637cc5de8014a36af94ecc67088c1725fc578f805b682e969e04b3f` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:a4e8d08b0a696d879cc20fb55171e90b32590514e999f73f98146b6921443cc3` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:1ecb4b3c86ff34b26b25058fd6c00b738c3c65d98f15c7a42e187f372ebadb60` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:fd575f64f124bcb909d0515666e0a2555c3f1fe31dc8383c7fc953b423eed2e7` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:5f96eebe2cea5a67e054c211cb744205e0ef15c957e8d38d618c746ff2c9f82a` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:f9c8beb68ac7a1090f974b192df158013da5817b84b7e4c478ca646afe777c70` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:150b98205f6802d85c4bb49fd8d334a6dd757ca1bb6cec747f93a5450a94eb85` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:b27591217dc5b6db01570e9afac00949cdd78b26fe3469ed538bda62d6fb9209` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:77dc8b771f638c2086de2ab573a28953865b95145cf82016459361e5cc3c5a47` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:9f429598b0fc09efc6e9ce575fde538d400ceb7fa92807319873daba4b19dcf1` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:5cdaefc98a799ddd3800176efd6ffb896f5356af9b53a215d0600e874d94d893` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:888bee57b4962c05c7a2cf569a22bb7bdc8bf2cf502e7f235ef1a0dafacb352d` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:b021255ff7916f2d4b669114f3e5aad06de0c0b87656a9cc37af1f5f452e910b` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:f69c019aa438f3f701b84805842dad98eeaa9a6998b261ea63e56dd80c1cd42c` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:6cbd6d11bf9a021277c2fd42ef53242f12b7df00b559e572bbbe6baf48a84bac` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:7b3a11a1e6f03ea4b802d97034588fbd461ebfed7ad08dc100c92586feff2208` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:eb765a640aa8ff89e9bc718b100635a7c6adc2342b2da8fc621e66b7ba8696d4` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:90127487c698e5d1a45c1a5813cda6805ba52a41468130f6dd4c28fe87f98fab` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:ffc7c3844873f7e639f2a137b991edc54b750b362756f6f8897fbfaaa32fe1df` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:ab41b4ad9161c342fac69fbd517264ad23579512a2500190b62e97586e5ec963` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:ac4da9f6d62baa41a193c4765e76eb507f51d069f989ae2860bada1c3e5ff968` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:9131208103997e9829239e3a8585c23f5dc2affc4ffbe3840270247d30b42be6` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:4ccb5056e7763b736362b7f7b663f71f2bd20b23fc4516a6c63dd105f2b99e9b` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:05a8d6be2d280cf8aa43fa059f4571417d47866bf603b8c1714ce079c4e66e6d` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:9e35544bc1a488d4b3fefc05860279c7a189505562fe2e4b1267da67154efded` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:a1a3a6a81916a98aa6df68704f8a2d8ad318e3cd54d78ed97a98ee3b6af1e599` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:67af86517f8915f3ebe107f65e62175dd2a7bb995416c963dca1eb398ed1502a` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:aa2248878811831ab58438f40c66be6332505f3194037275b37babfceaed1732` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:1ac940c96d054cf75e93cda1b88942ad5a7f4d3a269bbaf42060b91786394356` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:ca917fa5139516a75a9747f479fbbfb80819899c9d447c893578aadebf2d1c84` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:8f2e0aac8961d8c7d560b83ff02f9fdb50708c1e508f8c0c12662391940354df` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:7eae1acddc5341e653944dbe26fd44669e1868b70e5d49559529f2eeb8f33b02` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:5c3767d6f563b6b201a55338de1149fac43706c026c4ba6a358675d44c44d743` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:22ee4fd3a864576b58276b9a02821fba439f7ea5f5c462e62deca1778a8b91a6` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:660c69103e721206e14436882272e80396592a45801a186d2830993140d4c8e0` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:3579963235d8b05173fac42725e3509475bc42e197a5f0f325828a37ef2cf613` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:23c07debd00bf4a817898784fb77bdf3fd27071b196226a8df81de5bdf4bf9f8` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:b310ce3849e3c066678e4c90843ccf24e5972759a58b32863ba94801a481811b` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:a750a88a2c7677b2507730905819764ae56e560a96394abe3340888d4c986f3f` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:3b92dde403d279395de09c77e3f866fc5d6757fc1c9bbf52639be59aee57b3be` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:70291a568a3093db066fbeff4ae294dac1d3ee41789e293896793b9c76990eb9` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:e1a5d1a748137d549b858635c6c9f470e3049a14dc3f5b300dca46819765de9b` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:0e11a0d8be515c7149f4d1774c1621d6a3b27674a31beaa7a9f62e54f9497858` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:2164d04ab1f9821c4beccc2d34e97bc9cec7ad387b17e8257801cd25a28dc412` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:011ce659926bb4d4a56c8b3616b16ac7b80228c43e23d4b9154c96c67aa5db1b` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:c7357d975838ae827376cc10ef48c6db8ee65751ee4f15db9a31ab5e51a876f2` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:ea1c310631044b22fb61b79da59089db5ecd2e2ea0c3ab75d63e1c1c1d204a48` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:c3a2388d3cb7d22035b3a5e4562185541cbfe885ab6ed96f3b9e3a3aa65aa56c` |
# [Previous version](#tab/previous)
+Release note for `2.17.0-amd64-<locale>`:
+
+**Features**
+* Upgrade to latest `media`, `text` components.
+* Support `de-AT`, `de-CH` locales.
+
+**Fixes**
+* Upgrade security patches.
+ Release note for `2.16.0-amd64-<locale>`: Regular monthly upgrade
Release note for `2.5.0-amd64-<locale>`:
| Image Tags | Notes | |--|:--|
+| `2.17.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.17.0-amd64-en-us`.|
| `2.16.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.16.0-amd64-en-us`.| | `2.15.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.15.0-amd64-en-us`.| | `2.14.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.14.0-amd64-en-us`.|
Release note for `2.5.0-amd64-<locale>`:
This container has the following locales available.
+| Locale for v2.17.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:da8805f4f64844f140e7a72b2adf367d45b2435e2dc1cd579a1adb2ec77a8df2` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:d5a8652c680097c54668e6b16b01248be523d756ad859c9449931adee95df9d7` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:19a19894bb9a1c1b28e8bb7234e19757a1f870f4032ad50f44a477fc2b452ada` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:2279655c98bbf09f221212fbe6876bad5662ccdc55be069975a23512f4a3d55c` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:f1b0e5083e71f5c2f56f841b6db399f50f44c154126e3316e8e6e73b6b895c97` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:8af7ce49be6d3839ac0e1ce4f1f45d4361fbbcbffa66081b0e7c6824dfa7c1a0` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:de0521c0728468540699387e4152887c2a0a43ba37e9c144256a5c541a2f1d7e` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:533272cf3c8920b002461e8cdb51fea9a6318aed41c9b77d0cbcfce3bfd7d077` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:10af2a3eb4f8cfe88925a512165c3fb555681b9a89d3db9d67fed02a33809063` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:95cf93202ae922318862ba585b38a989b4fc83e4d335f2c3916be4564df0d215` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:fa66a446e0575fa1c89f52b06e16279fee0fe4f0d61b1e18a0dcebc8a866ddf6` |
+| `de-at` | Container image with the `de-AT` locale. | `sha256:e0d8a74ebf48981999306e6cc9f99dfb9fa3fa16cc12aa5086e9720639ce9f52` |
+| `de-ch` | Container image with the `de-CH` locale. | `sha256:ab58cb7bbe5a5a78a7459b690c95f036d1b4703610f563f5854334f7332d5fca` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:abbbf003661da23eb6bc2133d3585ffe58af3a9d3167b7eece656d0007bc65d2` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:01311455b2425e41031368691de73e28c3c08de0486e50f4801ade584af27c2d` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:86c84a560a23b5bfcbadae8dee62805f897520b7d3ac6969d80e3eb88141d7ef` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:0f5912fa924212aca1522f6a27970778b0c22d943a8b2c9e9410df243ad62ff7` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:a5f3efff449bb9e43fafc9feafe0b31f11070c8f2bb9c60e34892b0765fbf0c5` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:57274ea44bd9dd34afc845e4dcdadf61b33681b0e4e5dba1f3c0e13511b40fe8` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:f4406c366940ef5185aedf42bfdacc1246ef636aebb8ad5b5a6bc521589f528c` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:9b6529181e7fe12ca00222c6164342b32ff637e4f394240ff2194489c56408df` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:10978a40cc3b7101517f35215c663ddec69679d5650ba827476060da8b84812d` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:a76b360f883ee746f77a650f285244d0424be9d6b3990a0c8075ec475f6e77d3` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:336f939f5db41312e6bfeda5419570db456df05c09d01fc98d7bc1967e4a8c3f` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:e8cec9044fd1f7958b2188dfe818311932fe0560d0f1d56aab684bec42a05723` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:b80e9e6349e057dae1c60a0b718cc2bb9e6db01681b1910afb5c48525aaf99a2` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:b9b9bc3cd87c5524b9630c29ab790a5e10e725237c4a482ba09673b3e98cb7f6` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:849f9f3a4ad8b1266b07837afcd9cbd5815bef2473bb8b3726b1cfaec75c8a62` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:dc09280f0d7df607e363f137ddc6ad333d0b7628492296eece9d3717d60ea339` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:270de8bfbae05c984f164d0c4e15c096459f41bf8f1aeb5cb18c1b7d20419bf3` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:e6c7e8ded3c75c19ce0f94db2d468b1d48548eb9b9827a67a9995e4820b6ae58` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:56510d176425bc3ba329ac0cf9520ee5a041370777320faf8081288c89c83c14` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:af3c853b766eb01a7ec51660ceb179040ac007c7b85f07c68a32adcdc3b280f1` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:cf05048a4762dabc23dc44bbb5c59d26cef5946658d653da4e2965d5331ea497` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:9575b02e64c47e4b4253a90dbc8cc3ebc393fd1a3a1b5680d3eff6efd76b1f3f` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:86775303621fc1a80761cefba4aae5aa769a998d96cf61e54490d4aa59edae6c` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:a8246e041c1a10338397c8ce9ba1389b0ee517bb8c0ec8e6fd1579c10704529b` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:d709a021dd398fd2bb77f0fa5478646642774b5b91f25699001ee2d7ee7c9e4b` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:e532978cf0d7d696016d3b5353543b6f9f0f4bfbd41669403a5e69e144c67259` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:3f45b4169cb131bfa81820d0c08a80f27ecd54e7a56755a2d9db7da358fb9f27` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:2b9eddd484a3262002dc92e07603cb161e254bc5460ecfdccb6f0db91c48c5a9` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:2b95ff684b8c60e91531bf7c19f4ca71a69ede37d1d06262cf90368cc6b1ff9a` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:b03ac0b66f5af771cf3d4e3bb61efd674df0ef2fd0934f77467d163472308805` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:e5bc6399ef63b07e6e154e438b25a58622beb1a2f90e31e043ae2720dfe1daaa` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:21de294ee17c097d7624ad679c24715ec93aa46e0b982afcbf2d6defd4177ff6` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:d4827464db58661c57f7ba981b03e493302d9b51a89f41cf7ca633a4e7f69f6a` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:b56baf94cedc5d50e2cf3846d995f63f36473c4146008f50b31b9b747c8e4c45` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:b2f45988b0d077f4f7279a58353d3179fac181ad5cdc3848667fa25d7d96e4f0` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:c13210ddfc885f359dfc020f7a1c39773ee62db15617ef472ac10d62f8829904` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:2a6f9e5afcac65f9030dbffbd22da428a6633f7dde8386eff674961dc61fd040` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:11f4ac68104d3853558bf7a7d08871cbec4ab1194281eda80906b256e8b82f18` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:c07dfc10ed61e3a35142d0ea26c7b87adeeecfb95a5a3a16ac44023b3df03e0a` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:76b9c92a1e77513681249e71c7194e94067ff3af97ad35ca2ac0d476dfb2b744` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:f45c3de121f2b922c413468bfa9214a5dedd3e34049fbe2d77f50cba7ef8fcbd` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:b9f01a38f65884b9f690394c5e06f7d4ce02067c18d41dfbe9bcdc7c36b85373` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:f5da4f23d87d41438220822cd26be08376cd2c2920560fcee4e892673227c90c` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:93cda63ee7583bee565fd8a49d2bd331ac9033c5111858185aec916d50af988c` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:d9a3e69997cbbd490e471f09048bf78aea828323eb49702ed51c60d5af98b40e` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:a8978adf5b51b0daeac13e605d1930bc06be5cfb44159ec759d1fa460ebac9cd` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:a69cc1b076321b951d2746e401bf17b18b720e0a7e27004f64e15bfa7ec4f68a` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:27e2d8eb315ff43406332479e20780576efb4319ac153129f695a09b092987d8` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:1e89166b7851bd7ed58804e4bf5cd23ab9bd522ea4ec178ddc46b5c47fd85fda` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:df02ebd3291d24b9f46ae9ef109c4c4713a6494dcf980581c70855f9a08abdbd` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:3aa31830f15fb90165169bac9bd23ffbdc5d3cca2eda6dc80bdefefeb9567fcf` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:532ccc97086b6afd99c83277d9c09ac6f94873f8f7556f407085e0aa2d50bc30` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:fc1d7af8419904e98ec41b789eb17a406f8097ed01b4a95776f2a208ff4636b1` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:37b45b5a8cbd8cdccde0005276a09fe4b3b8e04922499644e67fad1b480507a7` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:b5d9a3d6f343e6b20919f00dbe51b41e55034bf2a29407b344ba2352c7014741` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:0526a3e3bb31e0a929492fdb886f3f5ef6be61bc81dd58c64b773ca13ed27731` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:1db419aa78fdcfd78078361d2c547d962d22677e6aab4d44856334ea90e1dffa` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:baf36c0b1a4e50c586a71d9b266d1e528e194d6fbe7ab630521bda6fa4191974` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:e5a119c4f7fae31d6dfb988d1930bfcea0d0c5d44b0fd8486f777041c6051f97` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:1dce2cac02b6100f87173dbad9128bf2a7c9bd214033b01b9e50fb377a837ea5` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:091876dbf3a8b40d6737f3d6d8717868e9fcd1bafdd2a590598769a53dc25d5a` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:b44f16adb5124ea16cabc640079ffaf26d654a86cc25b5d98663b6e139e9e653` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:e315b48f2f76c11a9fa0f59a6ed5fc346014be7adc23fbf0a2719eea6a9abe9d` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:77a345f63a3c357b072052cdd38604a0f3bfa885e3b3058b77f37e46c80109d9` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:91400eb20a9d4a49cfa587e5d9ac15f3714239d8189bf6ec55f04caa1d0fadd9` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:3f1a782e85ba536d7761a4addc315915464bd949a9298fc7c93ee17ff5b15994` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:f0088423fbaaac0cf20e1f747b1bb17adc58214e45cda4f571985d86a93565ad` |
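If you need to pin a specific locale build, you can pull by digest instead of by tag. This is a minimal sketch; `<container-repository>` is a placeholder for this container's actual repository path on MCR:

```bash
# Hypothetical example: pull the en-US image by its digest (replace <container-repository>
# with the actual repository path for this container).
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/<container-repository>@sha256:e8cec9044fd1f7958b2188dfe818311932fe0560d0f1d56aab684bec42a05723
```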
+ | Locale for v2.16.0 | Notes | Digest | |--|:--|:--| | `ar-ae` | Container image with the `ar-AE` locale. | `sha256:66d3df9332cac66bcba96c8f62af4f1f66658c35b9dba9b382fe7f4c67587e3e` |
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-Release notes for `v1.11.0`:
+Release notes for `v1.12.0`:
**Features**
-* Support `de-ch-janneural` and `de-ch-lenineural`.
-
-**Fixes**
-* Upgrade security patches.
+* Support for `am-et-amehaneural`, `am-et-mekdesneural`, `so-so-muuseneural`, and `so-so-ubaxneural`.
| Image Tags | Notes | ||:| | `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
-| `1.11.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.11.0-amd64-en-us-arianeural`. |
+| `1.12.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `1.12.0-amd64-en-us-arianeural`. |
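As a usage sketch, you can pull a specific locale-and-voice image by its full tag. The repository path below assumes the neural text-to-speech container's location on MCR; adjust it if your container comes from a different repository:

```bash
# Assumed repository path: pull the v1.12.0 en-US AriaNeural image by tag.
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:1.12.0-amd64-en-us-arianeural
```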
-| v1.11.0 Locales and voices | Notes |
+| v1.12.0 Locales and voices | Notes |
|-|:|
-| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice. |
+| `am-et-amehaneural` | Container image with the `am-ET` locale and `am-ET-Amehaneural` voice. |
+| `am-et-mekdesneural` | Container image with the `am-ET` locale and `am-ET-Mekdesneural` voice. |
+| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. | | `de-ch-janneural` | Container image with the `de-CH` locale and `de-CH-Janneural` voice. | | `de-ch-lenineural` | Container image with the `de-CH` locale and `de-CH-Lenineural` voice. |
Release notes for `v1.11.0`:
| `ko-kr-sunhineural` | Container image with the `ko-KR` locale and `ko-KR-SunHiNeural` voice. | | `pt-br-antonioneural` | Container image with the `pt-BR` locale and `pt-BR-AntonioNeural` voice. | | `pt-br-franciscaneural` | Container image with the `pt-BR` locale and `pt-BR-FranciscaNeural` voice. |
+| `so-so-muuseneural` | Container image with the `so-SO` locale and `so-SO-Muuseneural` voice. |
+| `so-so-ubaxneural` | Container image with the `so-SO` locale and `so-SO-Ubaxneural` voice. |
| `tr-tr-ahmetneural` | Container image with the `tr-TR` locale and `tr-TR-AhmetNeural` voice. | | `tr-tr-emelneural` | Container image with the `tr-TR` locale and `tr-TR-EmelNeural` voice. | | `zh-cn-xiaoxiaoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoxiaoNeural` voice. |
Release notes for `v1.11.0`:
# [Previous version](#tab/previous)
+Release notes for `v1.11.0`:
+
+**Features**
+* Support `de-ch-janneural` and `de-ch-lenineural`.
+
+**Fixes**
+* Upgrade security patches.
+ Release notes for `v1.10.0`: Regular monthly upgrade
Release notes for `v1.3.0`:
| Image Tags | Notes | ||:|
+| `1.11.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `1.11.0-amd64-en-us-arianeural`. |
| `1.10.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.10.0-amd64-en-us-arianeural`. | | `1.9.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.9.0-amd64-en-us-arianeural`. | | `1.8.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.8.0-amd64-en-us-arianeural`. |
Release notes for `v1.3.0`:
| `1.3.0-amd64-<locale-and-voice>-preview` | Replace `<locale>` with one of the available locales, listed below. For example `1.3.0-amd64-en-us-arianeural-preview`. | | `1.2.0-amd64-<locale-and-voice>-preview` | Replace `<locale>` with one of the available locales, listed below. For example `1.2.0-amd64-en-us-arianeural-preview`. |
+| v1.11.0 Locales and voices | Notes |
+|-|:|
+| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
+| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. |
+| `de-ch-janneural` | Container image with the `de-CH` locale and `de-CH-Janneural` voice. |
+| `de-ch-lenineural` | Container image with the `de-CH` locale and `de-CH-Lenineural` voice. |
+| `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. |
+| `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
+| `en-au-natashaneural` | Container image with the `en-AU` locale and `en-AU-NatashaNeural` voice. |
+| `en-au-williamneural` | Container image with the `en-AU` locale and `en-AU-WilliamNeural` voice. |
+| `en-ca-claraneural` | Container image with the `en-CA` locale and `en-CA-ClaraNeural` voice. |
+| `en-ca-liamneural` | Container image with the `en-CA` locale and `en-CA-LiamNeural` voice. |
+| `en-gb-libbyneural` | Container image with the `en-GB` locale and `en-GB-LibbyNeural` voice. |
+| `en-gb-ryanneural` | Container image with the `en-GB` locale and `en-GB-RyanNeural` voice. |
+| `en-gb-sonianeural` | Container image with the `en-GB` locale and `en-GB-SoniaNeural` voice. |
+| `en-us-arianeural` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `en-us-guyneural` | Container image with the `en-US` locale and `en-US-GuyNeural` voice. |
+| `en-us-jennyneural` | Container image with the `en-US` locale and `en-US-JennyNeural` voice. |
+| `es-es-alvaroneural` | Container image with the `es-ES` locale and `es-ES-AlvaroNeural` voice. |
+| `es-es-elviraneural` | Container image with the `es-ES` locale and `es-ES-ElviraNeural` voice. |
+| `es-mx-dalianeural` | Container image with the `es-MX` locale and `es-MX-DaliaNeural` voice. |
+| `es-mx-jorgeneural` | Container image with the `es-MX` locale and `es-MX-JorgeNeural` voice. |
+| `fr-ca-antoineneural` | Container image with the `fr-CA` locale and `fr-CA-AntoineNeural` voice. |
+| `fr-ca-jeanneural` | Container image with the `fr-CA` locale and `fr-CA-JeanNeural` voice. |
+| `fr-ca-sylvieneural` | Container image with the `fr-CA` locale and `fr-CA-SylvieNeural` voice. |
+| `fr-fr-deniseneural` | Container image with the `fr-FR` locale and `fr-FR-DeniseNeural` voice. |
+| `fr-fr-henrineural` | Container image with the `fr-FR` locale and `fr-FR-HenriNeural` voice. |
+| `hi-in-madhurneural` | Container image with the `hi-IN` locale and `hi-IN-MadhurNeural` voice. |
+| `hi-in-swaraneural` | Container image with the `hi-IN` locale and `hi-IN-Swaraneural` voice. |
+| `it-it-diegoneural` | Container image with the `it-IT` locale and `it-IT-DiegoNeural` voice. |
+| `it-it-elsaneural` | Container image with the `it-IT` locale and `it-IT-ElsaNeural` voice. |
+| `it-it-isabellaneural` | Container image with the `it-IT` locale and `it-IT-IsabellaNeural` voice. |
+| `ja-jp-keitaneural` | Container image with the `ja-JP` locale and `ja-JP-KeitaNeural` voice. |
+| `ja-jp-nanamineural` | Container image with the `ja-JP` locale and `ja-JP-NanamiNeural` voice. |
+| `ko-kr-injoonneural` | Container image with the `ko-KR` locale and `ko-KR-InJoonNeural` voice. |
+| `ko-kr-sunhineural` | Container image with the `ko-KR` locale and `ko-KR-SunHiNeural` voice. |
+| `pt-br-antonioneural` | Container image with the `pt-BR` locale and `pt-BR-AntonioNeural` voice. |
+| `pt-br-franciscaneural` | Container image with the `pt-BR` locale and `pt-BR-FranciscaNeural` voice. |
+| `tr-tr-ahmetneural` | Container image with the `tr-TR` locale and `tr-TR-AhmetNeural` voice. |
+| `tr-tr-emelneural` | Container image with the `tr-TR` locale and `tr-TR-EmelNeural` voice. |
+| `zh-cn-xiaoxiaoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoxiaoNeural` voice. |
+| `zh-cn-xiaoyouneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoYouNeural` voice. |
+| `zh-cn-yunyangneural` | Container image with the `zh-CN` locale and `zh-CN-YunYangNeural` voice. |
+| `zh-cn-yunyeneural` | Container image with the `zh-CN` locale and `zh-CN-YunYeNeural` voice. |
+| `zh-cn-xiaochenneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoChenNeural` voice. |
+| `zh-cn-xiaohanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoHanNeural` voice. |
+| `zh-cn-xiaomoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoMoNeural` voice. |
+| `zh-cn-xiaoqiuneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoQiuNeural` voice. |
+| `zh-cn-xiaoruineural` | Container image with the `zh-CN` locale and `zh-CN-XiaoRuiNeural` voice. |
+| `zh-cn-xiaoshuangneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoShuangNeural` voice.|
+| `zh-cn-xiaoxuanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoXuanNeural` voice. |
+| `zh-cn-xiaoyanneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoYanNeural` voice. |
+| `zh-cn-yunxineural` | Container image with the `zh-CN` locale and `zh-CN-YunXiNeural` voice. |
+ | v1.10.0 Locales and voices | Notes | |-|:|
-| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice. |
+| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. | | `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. | | `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
Release notes for `v1.3.0`:
| v1.9.0 Locales and voices | Notes | |-|:|
-| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice. |
+| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. | | `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. | | `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
Release notes for `v1.3.0`:
| v1.8.0 Locales and voices | Notes | |-|:|
-| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice. |
+| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. | | `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. | | `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
Release notes for `v1.3.0`:
| v1.7.0 Locales and voices | Notes | |-|:|
-| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice. |
+| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. | | `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. | | `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
cognitive-services Deploy Query Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-query-model.md
Previously updated : 12/07/2021 Last updated : 01/07/2022 ms.devlang: csharp, python
You can get the full URL for your endpoint by going to the **Deploy model** page
:::image type="content" source="../media/prediction-url.png" alt-text="Screenshot showing the prediction request and URL" lightbox="../media/prediction-url.png":::
-### Use the client libraries
+### Use the client libraries (Azure SDK)
> [!NOTE] > The client library for conversational language understanding is only available for:
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/how-to/call-api.md
Previously updated : 11/02/2021 Last updated : 01/07/2022
First you will need to get your resource key and endpoint
[!INCLUDE [JSON result for classification](../includes/classification-result-json.md)]
-# [Using the client libraries](#tab/client)
+# [Using the client libraries (Azure SDK)](#tab/client)
## Use the client libraries
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Previously updated : 11/16/2021 Last updated : 01/07/2022
First you will need to get your resource key and endpoint
[!INCLUDE [JSON result for entity recognition](../includes/recognition-result-json.md)]
-# [Using the client libraries](#tab/client)
+# [Using the client libraries (Azure SDK)](#tab/client)
## Use the client libraries
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/whats-new.md
Previously updated : 12/10/2021 Last updated : 01/07/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* SDK support for sending requests to custom models: * [Custom Named Entity Recognition](custom-named-entity-recognition/how-to/call-api.md?tabs=client#use-the-client-libraries) * [Custom text classification](custom-classification/how-to/call-api.md?tabs=api#use-the-client-libraries)
- * [Custom language understanding](conversational-language-understanding/how-to/deploy-query-model.md#use-the-client-libraries)
+ * [Custom language understanding](conversational-language-understanding/how-to/deploy-query-model.md#use-the-client-libraries-azure-sdk)
## Next steps
communication-services Distribution Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/distribution-concepts.md
When the distribution process locates a suitable Worker who has an open channel
**OfferTTL -** The time-to-live for each offer generated
-**Mode -** The **distribution modes** which contain both `minConcurrentOffers` and `maxConcurrentOffers` properties.
+**Mode -** The **distribution modes**, which contain both the `minConcurrentOffers` and `maxConcurrentOffers` properties. Set these two integer properties to control how many offers can be active concurrently for Workers that match the Job. For example:
+
+```json
+{
+  "mode": {
+    "kind": "longest-idle",
+    "minConcurrentOffers": 1,
+    "maxConcurrentOffers": 5,
+    "bypassSelectors": false
+  }
+}
+```
+
+In the above example, `minConcurrentOffers` and `maxConcurrentOffers` ensure that at least one offer, and at most five concurrent offers, are distributed to active Workers who match the requirements of the Job.
> [!Important] > When a Job offer is generated for a Worker it consumes one of the channel configurations matching the channel ID of the Job. The consumption of this channel means the Worker will not receive another offer unless additional capacity for that channel is available on the Worker. If the Worker declines the offer or the offer expires, the channel is released.
container-registry Container Registry Transfer Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-transfer-cli.md
+
+ Title: ACR Transfer with Az CLI
+description: Use ACR Transfer with Az CLI
+ Last updated : 11/18/2021+++
+# ACR Transfer with Az CLI
+
+This article shows how to use the ACR Transfer feature with the acrtransfer Az CLI extension.
+
+## Complete prerequisites
+
+Please complete the prerequisites outlined [here](./container-registry-transfer-prerequisites.md) prior to attempting the actions in this article. This means that:
+
+- You have an existing Premium SKU Registry in both clouds.
+- You have an existing Storage Account Container in both clouds.
+- You have an existing Keyvault with a secret containing a valid SAS token with the necessary permissions in both clouds.
+- You have a recent version of Az CLI installed in both clouds.
+
+## Install the Az CLI extension
+
+In AzureCloud, you can install the extension with the following command:
+
+```azurecli
+az extension add --name acrtransfer
+```
+
+In AzureCloud and other clouds, you can also install the extension directly from a public storage account blob. The extension wheel is hosted in the `acrtransferext` storage account, in the `dist` container, as the `acrtransfer-1.0.0-py2.py3-none-any.whl` blob. You may need to change the storage URI suffix depending on which cloud you are in. The following command installs the extension in AzureCloud:
+
+```azurecli
+az extension add --source https://acrtransferext.blob.core.windows.net/dist/acrtransfer-1.0.0-py2.py3-none-any.whl
+```
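One way to verify that the extension installed correctly is to show it afterward; for example:

```azurecli
az extension show --name acrtransfer
```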
+
+## Create ExportPipeline with the acrtransfer Az CLI extension
+
+Create an ExportPipeline resource for your AzureCloud container registry using the acrtransfer Az CLI extension.
+
+Create an export pipeline with no options and a system-assigned identity:
+
+```azurecli
+az acr export-pipeline create \
+--resource-group $MyRG \
+--registry $MyReg \
+--name $MyPipeline \
+--secret-uri https://$MyKV.vault.azure.net/secrets/$MySecret \
+--storage-container-uri https://$MyStorage.blob.core.windows.net/$MyContainer
+```
+
+Create an export pipeline with all possible options and a user-assigned identity:
+
+```azurecli
+az acr export-pipeline create \
+--resource-group $MyRG \
+--registry $MyReg \
+--name $MyPipeline \
+--secret-uri https://$MyKV.vault.azure.net/secrets/$MySecret \
+--storage-container-uri https://$MyStorage.blob.core.windows.net/$MyContainer \
+--options OverwriteBlobs ContinueOnErrors \
+--assign-identity /subscriptions/$MySubID/resourceGroups/$MyRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$MyIdentity
+```
+
+### Export options
+
+The `options` property for the export pipelines supports optional boolean values. The following values are recommended:
+
+|Parameter |Value |
+|||
+|options | OverwriteBlobs - Overwrite existing target blobs<br/>ContinueOnErrors - Continue export of remaining artifacts in the source registry if one artifact export fails.
+
+### Give the ExportPipeline identity keyvault policy access
+
+If you created your pipeline with a user-assigned identity, simply give this user-assigned identity `secret get` access policy permissions on the keyvault.
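For example, the following sketch looks up the principal ID of a user-assigned identity and grants it the access policy. The identity name `$MyIdentity` and the other variable names are assumptions; substitute your own values:

```azurecli
# Assumed variables: $MyRG, $MyIdentity, $MyKeyvault.
# Look up the user-assigned identity's principal ID, then grant it 'secret get' permissions.
IDENTITY_PRINCIPAL_ID=$(az identity show --resource-group $MyRG --name $MyIdentity --query principalId --output tsv)
az keyvault set-policy --name $MyKeyvault --secret-permissions get --object-id $IDENTITY_PRINCIPAL_ID
```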
+
+If you created your pipeline with a system-assigned identity, you will first need to retrieve the principalId that the system has assigned to your pipeline resource.
+
+Run the following command to retrieve your pipeline resource:
+
+```azurecli
+az acr export-pipeline show --resource-group $MyRG --registry $MyReg --name $MyPipeline
+```
+
+From this output, you will want to copy the value in the `principalId` field.
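Alternatively, a JMESPath query can capture the value directly into a variable for use in the next command; this sketch assumes the show output exposes the system-assigned identity under `identity.principalId`:

```azurecli
# Assumption: the pipeline resource exposes its system-assigned identity as identity.principalId.
MyPrincipalID=$(az acr export-pipeline show --resource-group $MyRG --registry $MyReg --name $MyPipeline --query identity.principalId --output tsv)
```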
+
+Then, you will run the following command to give this principal the appropriate `secret get` access policy permissions on your keyvault.
+
+```azurecli
+az keyvault set-policy --name $MyKeyvault --secret-permissions get --object-id $MyPrincipalID
+```
+
+## Create ImportPipeline with the acrtransfer Az CLI extension
+
+Create an ImportPipeline resource in your target container registry using the acrtransfer Az CLI extension. By default, the pipeline is enabled to create an Import PipelineRun automatically when the attached storage account container receives a new artifact blob.
+
+Create an import pipeline with no options and a system-assigned identity:
+
+```azurecli
+az acr import-pipeline create \
+--resource-group $MyRG \
+--registry $MyReg \
+--name $MyPipeline \
+--secret-uri https://$MyKV.vault.azure.net/secrets/$MySecret \
+--storage-container-uri https://$MyStorage.blob.core.windows.net/$MyContainer
+```
+
+Create an import pipeline with all possible options, source-trigger disabled, and a user-assigned identity:
+
+```azurecli
+az acr import-pipeline create \
+--resource-group $MyRG \
+--registry $MyReg \
+--name $MyPipeline \
+--secret-uri https://$MyKV.vault.azure.net/secrets/$MySecret \
+--storage-container-uri https://$MyStorage.blob.core.windows.net/$MyContainer \
+--options DeleteSourceBlobOnSuccess OverwriteTags ContinueOnErrors \
+--assign-identity /subscriptions/$MySubID/resourceGroups/$MyRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$MyIdentity \
+--source-trigger-enabled False
+```
+
+### Import options
+
+The `options` property for the import pipeline supports optional boolean values. The following values are recommended:
+
+|Parameter |Value |
+|||
+|options | OverwriteTags - Overwrite existing target tags<br/>DeleteSourceBlobOnSuccess - Delete the source storage blob after successful import to the target registry<br/>ContinueOnErrors - Continue import of remaining artifacts in the target registry if one artifact import fails.
+
+### Give the ImportPipeline identity keyvault policy access
+
+If you created your pipeline with a user-assigned identity, simply give this user-assigned identity `secret get` access policy permissions on the keyvault.
+
+If you created your pipeline with a system-assigned identity, you will first need to retrieve the principalId that the system has assigned to your pipeline resource.
+
+Run the following command to retrieve your pipeline resource:
+
+```azurecli
+az acr import-pipeline show --resource-group $MyRG --registry $MyReg --name $MyPipeline
+```
+
+From this output, you will want to copy the value in the `principalId` field.
+
+Then, you will run the following command to give this principal the appropriate `secret get` access policy on your keyvault.
+
+```azurecli
+az keyvault set-policy --name $MyKeyvault --secret-permissions get --object-id $MyPrincipalID
+```
+
+## Create PipelineRun for export with the acrtransfer Az CLI extension
+
+Create a PipelineRun resource for your container registry using the acrtransfer Az CLI extension. This resource runs the ExportPipeline resource you created previously and exports specified artifacts from your container registry as a blob to your storage account container.
+
+Create an export pipeline-run:
+
+```azurecli
+az acr pipeline-run create \
+--resource-group $MyRG \
+--registry $MyReg \
+--pipeline $MyPipeline \
+--name $MyPipelineRun \
+--pipeline-type export \
+--storage-blob $MyBlob \
+--artifacts hello-world:latest hello-world@sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042 \
+--force-redeploy
+```
+
+If redeploying a PipelineRun resource with identical properties, you must use the `--force-redeploy` flag.
+
+It can take several minutes for artifacts to export. When deployment completes successfully, verify artifact export by listing the exported blob in the container of the source storage account. For example, run the [az storage blob list][az-storage-blob-list] command:
+
+```azurecli
+az storage blob list --account-name $MyStorageAccount --container $MyContainer --output table
+```
+
+## Transfer blob across domain
+
+In most use-cases, you will now use a Cross Domain Solution or other method to transfer your blob from the storage account in your source domain (the storage account associated with your export pipeline) to the storage account in your target domain (the storage account associated with your import pipeline). At this point, we will assume that the blob has arrived in the target domain storage account associated with your import pipeline.
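For example, if both storage accounts happen to be reachable from the same workstation, one option is AzCopy with SAS tokens; this is a minimal sketch, and the storage account, container, blob, and SAS token variable names are assumptions:

```azurecli
# Assumed variables; the SAS token variables are expected to include the leading '?' character.
azcopy copy \
  "https://$MySourceStorage.blob.core.windows.net/$MyContainer/$MyBlob$SOURCE_SAS" \
  "https://$MyTargetStorage.blob.core.windows.net/$MyContainer/$MyBlob$TARGET_SAS"
```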
+
+## Trigger ImportPipeline resource
+
+If you did not use the `--source-trigger-enabled False` parameter when creating your import pipeline, the pipeline will be triggered within 15 minutes after the blob arrives in the storage account container. It can take several minutes for artifacts to import. When the import completes successfully, verify artifact import by listing the tags on the repository you are importing in the target container registry. For example, run [az acr repository show-tags][az-acr-repository-show-tags]:
+
+```azurecli
+az acr repository show-tags --name $MyRegistry --repository $MyRepository
+```
+
+> [!Note]
+> Source Trigger will only import blobs that have a Last Modified time within the last 60 days. If you intend to use Source Trigger to import blobs older than that, refresh the Last Modified time of the blobs by adding blob metadata to them, or import them with manually created pipeline runs.
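For example, one way to refresh a blob's Last Modified time is to set a metadata value on it; this is a minimal sketch using the variable names from earlier in this article:

```azurecli
# Setting (or re-setting) blob metadata updates the blob's Last Modified timestamp.
az storage blob metadata update \
  --account-name $MyStorageAccount \
  --container-name $MyContainer \
  --name $MyBlob \
  --metadata refreshed=true
```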
+
+If you did use the `--source-trigger-enabled False` parameter when creating your ImportPipeline, you will need to create a PipelineRun manually, as shown in the following section.
+
+## Create PipelineRun for import with the acrtransfer Az CLI extension
+
+Create a PipelineRun resource for your container registry using the acrtransfer Az CLI extension. This resource runs the ImportPipeline resource you created previously and imports specified blobs from your storage account into your container registry.
+
+Create an import pipeline-run:
+
+```azurecli
+az acr pipeline-run create \
+--resource-group $MyRG \
+--registry $MyReg \
+--pipeline $MyPipeline \
+--name $MyPipelineRun \
+--pipeline-type import \
+--storage-blob $MyBlob \
+--force-redeploy
+```
+
+If redeploying a PipelineRun resource with identical properties, you must use the `--force-redeploy` flag.
+
+It can take several minutes for artifacts to import. When the import completes successfully, verify artifact import by listing the repositories in the target container registry. For example, run [az acr repository show-tags][az-acr-repository-show-tags]:
+
+```azurecli
+az acr repository show-tags --name $MyRegistry --repository $MyRepository
+```
+
+## Delete ACR Transfer resources
+
+Delete an ExportPipeline:
+
+```azurecli
+az acr export-pipeline delete --resource-group $MyRG --registry $MyReg --name $MyPipeline
+```
+
+Delete an ImportPipeline:
+
+```azurecli
+az acr import-pipeline delete --resource-group $MyRG --registry $MyReg --name $MyPipeline
+```
+
+Delete a PipelineRun resource. Note that this doesn't reverse the action performed by the PipelineRun; it only removes the record of that run.
+
+```azurecli
+az acr pipeline-run delete --resource-group $MyRG --registry $MyReg --name $MyPipelineRun
+```
+
+## ACR Transfer troubleshooting
+
+View [ACR Transfer Troubleshooting](container-registry-transfer-troubleshooting.md) for troubleshooting guidance.
+
+## Next steps
+
+* Learn how to [block creation of export pipelines](data-loss-prevention.md) from a network-restricted container registry.
+
+<!-- LINKS - External -->
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+
+<!-- LINKS - Internal -->
+[azure-cli]: /cli/azure/install-azure-cli
+[az-login]: /cli/azure/reference-index#az-login
+[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az-keyvault-secret-set
+[az-keyvault-secret-show]: /cli/azure/keyvault/secret#az-keyvault-secret-show
+[az-keyvault-set-policy]: /cli/azure/keyvault#az-keyvault-set-policy
+[az-storage-container-generate-sas]: /cli/azure/storage/container#az-storage-container-generate-sas
+[az-storage-blob-list]: /cli/azure/storage/blob#az-storage-blob-list
+[az-deployment-group-create]: /cli/azure/deployment/group#az-deployment-group-create
+[az-deployment-group-delete]: /cli/azure/deployment/group#az-deployment-group-delete
+[az-deployment-group-show]: /cli/azure/deployment/group#az-deployment-group-show
+[az-acr-repository-show-tags]: /cli/azure/acr/repository#az_acr_repository_show_tags
+[az-acr-import]: /cli/azure/acr#az-acr-import
+[az-resource-delete]: /cli/azure/resource#az-resource-delete
container-registry Container Registry Transfer Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-transfer-images.md
Title: Transfer artifacts
-description: Transfer collections of images or other artifacts from one container registry to another registry by creating a transfer pipeline using Azure storage accounts
+ Title: ACR Transfer with ARM templates
+description: Use ACR Transfer with ARM templates
Last updated 10/07/2020-+
-# Transfer artifacts to another registry
+# ACR Transfer with ARM templates
-This article shows how to transfer collections of images or other registry artifacts from one Azure container registry to another registry. The source and target registries can be in the same or different subscriptions, Active Directory tenants, Azure clouds, or physically disconnected clouds.
+## Complete Prerequisites
-To transfer artifacts, you create a *transfer pipeline* that replicates artifacts between two registries by using [blob storage](../storage/blobs/storage-blobs-introduction.md):
+Please complete the prerequisites outlined [here](./container-registry-transfer-prerequisites.md) prior to attempting the actions in this article. This means that:
-* Artifacts from a source registry are exported to a blob in a source storage account
-* The blob is copied from the source storage account to a target storage account
-* The blob in the target storage account gets imported as artifacts in the target registry. You can set up the import pipeline to trigger whenever the artifact blob updates in the target storage.
+- You have an existing Premium SKU Registry in both clouds.
+- You have an existing Storage Account Container in both clouds.
+- You have an existing Keyvault with a secret containing a valid SAS token with the necessary permissions in both clouds.
+- You have a recent version of Az CLI installed in both clouds.
-Transfer is ideal for copying content between two Azure container registries in physically disconnected clouds, mediated by storage accounts in each cloud. If instead you want to copy images from container registries in connected clouds including Docker Hub and other cloud vendors, [image import](container-registry-import-images.md) is recommended.
+## Consider using the Az CLI extension
-In this article, you use Azure Resource Manager template deployments to create and run the transfer pipeline. The Azure CLI is used to provision the associated resources such as storage secrets. Azure CLI version 2.2.0 or later is recommended. If you need to install or upgrade the CLI, see [Install Azure CLI][azure-cli].
-
-This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry tiers](container-registry-skus.md).
-
-> [!IMPORTANT]
-> This feature is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).
-
-## Prerequisites
-
-* **Container registries** - You need an existing source registry with artifacts to transfer, and a target registry. ACR transfer is intended for movement across physically disconnected clouds. For testing, the source and target registries can be in the same or a different Azure subscription, Active Directory tenant, or cloud.
-
- If you need to create a registry, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md).
-* **Storage accounts** - Create source and target storage accounts in a subscription and location of your choice. For testing purposes, you can use the same subscription or subscriptions as your source and target registries. For cross-cloud scenarios, typically you create a separate storage account in each cloud.
-
- If needed, create the storage accounts with the [Azure CLI](../storage/common/storage-account-create.md?tabs=azure-cli) or other tools.
-
- Create a blob container for artifact transfer in each account. For example, create a container named *transfer*. Two or more transfer pipelines can share the same storage account, but should use different storage container scopes.
-* **Key vaults** - Key vaults are needed to store SAS token secrets used to access source and target storage accounts. Create the source and target key vaults in the same Azure subscription or subscriptions as your source and target registries. For demonstration purposes, the templates and commands used in this article also assume that the source and target key vaults are located in the same resource groups as the source and target registries, respectively. This use of common resource groups isn't required, but it simplifies the templates and commands used in this article.
-
- If needed, create key vaults with the [Azure CLI](../key-vault/secrets/quick-create-cli.md) or other tools.
-
-* **Environment variables** - For example commands in this article, set the following environment variables for the source and target environments. All examples are formatted for the Bash shell.
- ```console
- SOURCE_RG="<source-resource-group>"
- TARGET_RG="<target-resource-group>"
- SOURCE_KV="<source-key-vault>"
- TARGET_KV="<target-key-vault>"
- SOURCE_SA="<source-storage-account>"
- TARGET_SA="<target-storage-account>"
- ```
-
-## Scenario overview
-
-You create the following three pipeline resources for image transfer between registries. All are created using PUT operations. These resources operate on your *source* and *target* registries and storage accounts.
-
-Storage authentication uses SAS tokens, managed as secrets in key vaults. The pipelines use managed identities to read the secrets in the vaults.
-
-* **[ExportPipeline](#create-exportpipeline-with-resource-manager)** - Long-lasting resource that contains high-level information about the *source* registry and storage account. This information includes the source storage blob container URI and the key vault managing the source SAS token.
-* **[ImportPipeline](#create-importpipeline-with-resource-manager)** - Long-lasting resource that contains high-level information about the *target* registry and storage account. This information includes the target storage blob container URI and the key vault managing the target SAS token. An import trigger is enabled by default, so the pipeline runs automatically when an artifact blob lands in the target storage container.
-* **[PipelineRun](#create-pipelinerun-for-export-with-resource-manager)** - Resource used to invoke either an ExportPipeline or ImportPipeline resource.
- * You run the ExportPipeline manually by creating a PipelineRun resource and specify the artifacts to export.
- * If an import trigger is enabled, the ImportPipeline runs automatically. It can also be run manually using a PipelineRun.
- * Currently a maximum of **50 artifacts** can be transferred with each PipelineRun.
-
-### Things to know
-* The ExportPipeline and ImportPipeline will typically be in different Active Directory tenants associated with the source and destination clouds. This scenario requires separate managed identities and key vaults for the export and import resources. For testing purposes, these resources can be placed in the same cloud, sharing identities.
-* By default, the ExportPipeline and ImportPipeline templates each enable a system-assigned managed identity to access key vault secrets. The ExportPipeline and ImportPipeline templates also support a user-assigned identity that you provide.
-
-## Create and store SAS keys
-
-Transfer uses shared access signature (SAS) tokens to access the storage accounts in the source and target environments. Generate and store tokens as described in the following sections.
-
-### Generate SAS token for export
-
-Run the [az storage container generate-sas][az-storage-container-generate-sas] command to generate a SAS token for the container in the source storage account, used for artifact export.
-
-*Recommended token permissions*: Read, Write, List, Add.
-
-In the following example, command output is assigned to the EXPORT_SAS environment variable, prefixed with the '?' character. Update the `--expiry` value for your environment:
-
-```azurecli
-EXPORT_SAS=?$(az storage container generate-sas \
- --name transfer \
- --account-name $SOURCE_SA \
- --expiry 2021-01-01 \
- --permissions alrw \
- --https-only \
- --output tsv)
-```
-
-### Store SAS token for export
-
-Store the SAS token in your source Azure key vault using [az keyvault secret set][az-keyvault-secret-set]:
-
-```azurecli
-az keyvault secret set \
- --name acrexportsas \
- --value $EXPORT_SAS \
- --vault-name $SOURCE_KV
-```
-
-### Generate SAS token for import
-
-Run the [az storage container generate-sas][az-storage-container-generate-sas] command to generate a SAS token for the container in the target storage account, used for artifact import.
-
-*Recommended token permissions*: Read, Delete, List
-
-In the following example, command output is assigned to the IMPORT_SAS environment variable, prefixed with the '?' character. Update the `--expiry` value for your environment:
-
-```azurecli
-IMPORT_SAS=?$(az storage container generate-sas \
- --name transfer \
- --account-name $TARGET_SA \
- --expiry 2021-01-01 \
- --permissions dlr \
- --https-only \
- --output tsv)
-```
-
-### Store SAS token for import
-
-Store the SAS token in your target Azure key vault using [az keyvault secret set][az-keyvault-secret-set]:
-
-```azurecli
-az keyvault secret set \
- --name acrimportsas \
- --value $IMPORT_SAS \
- --vault-name $TARGET_KV
-```
+For most non-automated use-cases, we recommend using the Az CLI Extension if possible. You can view documentation for the Az CLI Extension [here](./container-registry-transfer-cli.md).
## Create ExportPipeline with Resource Manager
The `options` property for the export pipelines supports optional boolean values
### Create the resource
-Run [az deployment group create][az-deployment-group-create] to create a resource named *exportPipeline* as shown in the following examples. By default, with the first option, the example template enables a system-assigned identity in the ExportPipeline resource.
+Run [az deployment group create][az-deployment-group-create] to create a resource named *exportPipeline* as shown in the following examples. By default, with the first option, the example template enables a system-assigned identity in the ExportPipeline resource.
With the second option, you can provide the resource with a user-assigned identity. (Creation of the user-assigned identity not shown.)
-With either option, the template configures the identity to access the SAS token in the export key vault.
+With either option, the template configures the identity to access the SAS token in the export key vault.
#### Option 1: Create resource and enable system-assigned identity
EXPORT_RES_ID=$(az deployment group show \
--output tsv) ```
-## Create ImportPipeline with Resource Manager
+## Create ImportPipeline with Resource Manager
Create an ImportPipeline resource in your target container registry using Azure Resource Manager template deployment. By default, the pipeline is enabled to import automatically when the storage account in the target environment has an artifact blob.
The `options` property for the import pipeline supports optional boolean values.
### Create the resource
-Run [az deployment group create][az-deployment-group-create] to create a resource named *importPipeline* as shown in the following examples. By default, with the first option, the example template enables a system-assigned identity in the ImportPipeline resource.
+Run [az deployment group create][az-deployment-group-create] to create a resource named *importPipeline* as shown in the following examples. By default, with the first option, the example template enables a system-assigned identity in the ImportPipeline resource.
With the second option, you can provide the resource with a user-assigned identity. (Creation of the user-assigned identity not shown.)
-With either option, the template configures the identity to access the SAS token in the import key vault.
+With either option, the template configures the identity to access the SAS token in the import key vault.
#### Option 1: Create resource and enable system-assigned identity
az deployment group create \
--resource-group $TARGET_RG \ --template-file azuredeploy.json \ --name importPipeline \
- --parameters azuredeploy.parameters.json
+ --parameters azuredeploy.parameters.json
``` #### Option 2: Create resource and provide user-assigned identity
IMPORT_RES_ID=$(az deployment group show \
--output tsv) ```
-## Create PipelineRun for export with Resource Manager
+## Create PipelineRun for export with Resource Manager
Create a PipelineRun resource for your source container registry using Azure Resource Manager template deployment. This resource runs the ExportPipeline resource you created previously, and exports specified artifacts from your container registry as a blob to your source storage account.
az storage blob list \
--output table ```
-## Transfer blob (optional)
+## Transfer blob (optional)
Use the AzCopy tool or other methods to [transfer blob data](../storage/common/storage-use-azcopy-v10.md#transfer-data) from the source storage account to the target storage account.
If you enabled the `sourceTriggerStatus` parameter of the ImportPipeline (the de
az acr repository list --name <target-registry-name> ```
-If you didn't enable the `sourceTriggerStatus` parameter of the import pipeline, run the ImportPipeline resource manually, as shown in the following section.
+> [!Note]
+> Source Trigger will only import blobs that have a Last Modified time within the last 60 days. If you intend to use Source Trigger to import blobs older than that, refresh the Last Modified time of the blobs by adding blob metadata to them, or import them with manually created pipeline runs.
+
+If you didn't enable the `sourceTriggerStatus` parameter of the import pipeline, run the ImportPipeline resource manually, as shown in the following section.
+
+## Create PipelineRun for import with Resource Manager (optional)
-## Create PipelineRun for import with Resource Manager (optional)
-
You can also use a PipelineRun resource to trigger an ImportPipeline for artifact import to your target container registry. Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/PipelineRun/PipelineRun-Import) to a local folder.
az resource delete \
--api-version 2019-12-01-preview ```
-## Troubleshooting
-
-* **Template deployment failures or errors**
- * If a pipeline run fails, look at the `pipelineRunErrorMessage` property of the run resource.
- * For common template deployment errors, see [Troubleshoot ARM template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md)
-* **Problems accessing storage**<a name="problems-accessing-storage"></a>
- * If you see a `403 Forbidden` error from storage, you likely have a problem with your SAS token.
- * The SAS token might not currently be valid. The SAS token might be expired or the storage account keys might have changed since the SAS token was created. Verify that the SAS token is valid by attempting to use the SAS token to authenticate for access to the storage account container. For example, put an existing blob endpoint followed by the SAS token in the address bar of a new Microsoft Edge InPrivate window or upload a blob to the container with the SAS token by using `az storage blob upload`.
- * The SAS token might not have sufficient Allowed Resource Types. Verify that the SAS token has been given permissions to Service, Container, and Object under Allowed Resource Types (`srt=sco` in the SAS token).
- * The SAS token might not have sufficient permissions. For export pipelines, the required SAS token permissions are Read, Write, List, and Add. For import pipelines, the required SAS token permissions are Read, Delete, and List. (The Delete permission is required only if the import pipeline has the `DeleteSourceBlobOnSuccess` option enabled.)
- * The SAS token might not be configured to work with HTTPS only. Verify that the SAS token is configured to work with HTTPS only (`spr=https` in the SAS token).
-* **Problems with export or import of storage blobs**
- * SAS token may be invalid, or may have insufficient permissions for the specified export or import run. See [Problems accessing storage](#problems-accessing-storage).
- * Existing storage blob in source storage account might not be overwritten during multiple export runs. Confirm that the OverwriteBlob option is set in the export run and the SAS token has sufficient permissions.
- * Storage blob in target storage account might not be deleted after successful import run. Confirm that the DeleteBlobOnSuccess option is set in the import run and the SAS token has sufficient permissions.
- * Storage blob not created or deleted. Confirm that container specified in export or import run exists, or specified storage blob exists for manual import run.
-* **AzCopy issues**
- * See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md).
-* **Artifacts transfer problems**
- * Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you are transferring a maximum of 50 artifacts.
- * Pipeline run might not have completed. An export or import run can take some time.
- * For other pipeline issues, provide the deployment [correlation ID](../azure-resource-manager/templates/deployment-history.md) of the export run or import run to the Azure Container Registry team.
-* **Problems pulling the image in a physically isolated environment**
- * If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If this is the case, you will need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.yml#how-do-i-push-non-distributable-layers-to-a-registry-)
+## ACR Transfer troubleshooting
+
+View [ACR Transfer Troubleshooting](container-registry-transfer-troubleshooting.md) for troubleshooting guidance.
## Next steps
-* To import single container images to an Azure container registry from a public registry or another private registry, see the [az acr import][az-acr-import] command reference.
* Learn how to [block creation of export pipelines](data-loss-prevention.md) from a network-restricted container registry. <!-- LINKS - External -->
container-registry Container Registry Transfer Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-transfer-prerequisites.md
+
+ Title: Transfer artifacts
+description: Overview of ACR Transfer and prerequisites
+ Last updated : 11/18/2021+++
+# Transfer artifacts to another registry
+
+This article shows how to transfer collections of images or other registry artifacts from one Azure container registry to another registry. The source and target registries can be in the same or different subscriptions, Active Directory tenants, Azure clouds, or physically disconnected clouds.
+
+To transfer artifacts, you create a *transfer pipeline* that replicates artifacts between two registries by using [blob storage](../storage/blobs/storage-blobs-introduction.md):
+
+* Artifacts from a source registry are exported to a blob in a source storage account
+* The blob is copied from the source storage account to a target storage account
+* The blob in the target storage account gets imported as artifacts in the target registry. You can set up the import pipeline to trigger whenever the artifact blob updates in the target storage.
+
+In this article, you create the prerequisite resources to create and run the transfer pipeline. The Azure CLI is used to provision the associated resources such as storage secrets. Azure CLI version 2.2.0 or later is recommended. If you need to install or upgrade the CLI, see [Install Azure CLI][azure-cli].
+
+This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry tiers](container-registry-skus.md).
+
+> [!IMPORTANT]
+> This feature is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).
+
+## Consider your use-case
+
+Transfer is ideal for copying content between two Azure container registries in physically disconnected clouds, mediated by storage accounts in each cloud. If instead you want to copy images from container registries in connected clouds including Docker Hub and other cloud vendors, [image import](container-registry-import-images.md) is recommended.
+
+## Prerequisites
+
+* **Container registries** - You need an existing source registry with artifacts to transfer, and a target registry. ACR transfer is intended for movement across physically disconnected clouds. For testing, the source and target registries can be in the same or a different Azure subscription, Active Directory tenant, or cloud.
+
+ If you need to create a registry, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md).
+* **Storage accounts** - Create source and target storage accounts in a subscription and location of your choice. For testing purposes, you can use the same subscription or subscriptions as your source and target registries. For cross-cloud scenarios, typically you create a separate storage account in each cloud.
+
+ If needed, create the storage accounts with the [Azure CLI](../storage/common/storage-account-create.md?tabs=azure-cli) or other tools; a combined example sketch for the storage account, container, and key vault follows this list.
+
+ Create a blob container for artifact transfer in each account. For example, create a container named *transfer*.
+
+* **Key vaults** - Key vaults are needed to store SAS token secrets used to access source and target storage accounts. Create the source and target key vaults in the same Azure subscription or subscriptions as your source and target registries. For demonstration purposes, the templates and commands used in this article also assume that the source and target key vaults are located in the same resource groups as the source and target registries, respectively. This use of common resource groups isn't required, but it simplifies the templates and commands used in this article.
+
+ If needed, create key vaults with the [Azure CLI](../key-vault/secrets/quick-create-cli.md) or other tools.
+
+* **Environment variables** - For example commands in this article, set the following environment variables for the source and target environments. All examples are formatted for the Bash shell.
+ ```console
+ SOURCE_RG="<source-resource-group>"
+ TARGET_RG="<target-resource-group>"
+ SOURCE_KV="<source-key-vault>"
+ TARGET_KV="<target-key-vault>"
+ SOURCE_SA="<source-storage-account>"
+ TARGET_SA="<target-storage-account>"
+ ```
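If you still need to create these prerequisites, the following sketch shows one way to do so with the Azure CLI for the source environment; the location and SKU are assumptions to adapt, and you would repeat the equivalent steps in the target cloud:

```azurecli
# Example sketch for the source environment (adapt location/SKU; repeat for the target cloud).
az group create --name $SOURCE_RG --location eastus
az storage account create --name $SOURCE_SA --resource-group $SOURCE_RG --location eastus --sku Standard_LRS
az storage container create --name transfer --account-name $SOURCE_SA
az keyvault create --name $SOURCE_KV --resource-group $SOURCE_RG --location eastus
```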
+
+## Scenario overview
+
+You create the following three pipeline resources for image transfer between registries. All are created using PUT operations. These resources operate on your *source* and *target* registries and storage accounts.
+
+Storage authentication uses SAS tokens, managed as secrets in key vaults. The pipelines use managed identities to read the secrets in the vaults.
+
+* **[ExportPipeline](./container-registry-transfer-cli.md#create-exportpipeline-with-the-acrtransfer-az-cli-extension)** - Long-lasting resource that contains high-level information about the *source* registry and storage account. This information includes the source storage blob container URI and the key vault managing the source SAS token.
+* **[ImportPipeline](./container-registry-transfer-cli.md#create-importpipeline-with-the-acrtransfer-az-cli-extension)** - Long-lasting resource that contains high-level information about the *target* registry and storage account. This information includes the target storage blob container URI and the key vault managing the target SAS token. An import trigger is enabled by default, so the pipeline runs automatically when an artifact blob lands in the target storage container.
+* **[PipelineRun](./container-registry-transfer-cli.md#create-pipelinerun-for-export-with-the-acrtransfer-az-cli-extension)** - Resource used to invoke either an ExportPipeline or ImportPipeline resource.
+ * You run the ExportPipeline manually by creating a PipelineRun resource and specify the artifacts to export.
+ * If an import trigger is enabled, the ImportPipeline runs automatically. It can also be run manually using a PipelineRun.
+ * Currently a maximum of **50 artifacts** can be transferred with each PipelineRun.
+
+### Things to know
+* The ExportPipeline and ImportPipeline will typically be in different Active Directory tenants associated with the source and destination clouds. This scenario requires separate managed identities and key vaults for the export and import resources. For testing purposes, these resources can be placed in the same cloud, sharing identities.
+* By default, the ExportPipeline and ImportPipeline templates each enable a system-assigned managed identity to access key vault secrets. The ExportPipeline and ImportPipeline templates also support a user-assigned identity that you provide.
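+
+The identity needs read access to the SAS token secrets before a pipeline can run successfully. As a rough sketch (assuming the vault uses access policies rather than Azure RBAC, and using a placeholder for the pipeline identity's object ID), you might grant that access like this:
+
+```azurecli
+# Allow the ExportPipeline's managed identity to read secrets in the source key vault.
+# <pipeline-principal-id> is a placeholder for the object ID of the pipeline's identity.
+az keyvault set-policy \
+  --name $SOURCE_KV \
+  --secret-permissions get \
+  --object-id <pipeline-principal-id>
+```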
+
+## Create and store SAS keys
+
+Transfer uses shared access signature (SAS) tokens to access the storage accounts in the source and target environments. Generate and store tokens as described in the following sections.
+> [!IMPORTANT]
+> While ACR Transfer will work with a manually generated SAS token stored in a Key Vault secret, for production workloads we *strongly* recommend using [Key Vault managed storage SAS definition secrets][kv-managed-sas] instead.
+
+### Generate SAS token for export
+
+Run the [az storage container generate-sas][az-storage-container-generate-sas] command to generate a SAS token for the container in the source storage account, used for artifact export.
+
+*Recommended token permissions*: Read, Write, List, Add.
+
+In the following example, command output is assigned to the EXPORT_SAS environment variable, prefixed with the '?' character. Update the `--expiry` value for your environment:
+
+```azurecli
+EXPORT_SAS=?$(az storage container generate-sas \
+ --name transfer \
+ --account-name $SOURCE_SA \
+ --expiry 2021-01-01 \
+ --permissions alrw \
+ --https-only \
+ --output tsv)
+```
+
+### Store SAS token for export
+
+Store the SAS token in your source Azure key vault using [az keyvault secret set][az-keyvault-secret-set]:
+
+```azurecli
+az keyvault secret set \
+ --name acrexportsas \
+ --value $EXPORT_SAS \
+ --vault-name $SOURCE_KV
+```
+
+### Generate SAS token for import
+
+Run the [az storage container generate-sas][az-storage-container-generate-sas] command to generate a SAS token for the container in the target storage account, used for artifact import.
+
+*Recommended token permissions*: Read, Delete, List
+
+In the following example, command output is assigned to the IMPORT_SAS environment variable, prefixed with the '?' character. Update the `--expiry` value for your environment:
+
+```azurecli
+IMPORT_SAS=?$(az storage container generate-sas \
+ --name transfer \
+ --account-name $TARGET_SA \
+ --expiry 2021-01-01 \
+ --permissions dlr \
+ --https-only \
+ --output tsv)
+```
+
+### Store SAS token for import
+
+Store the SAS token in your target Azure key vault using [az keyvault secret set][az-keyvault-secret-set]:
+
+```azurecli
+az keyvault secret set \
+ --name acrimportsas \
+ --value $IMPORT_SAS \
+ --vault-name $TARGET_KV
+```
+
+## Next steps
+
+* Follow one of the following tutorials to create your ACR Transfer resources. For most non-automated use cases, we recommend using the Az CLI extension.
+
+ * [ACR Transfer with Az CLI](./container-registry-transfer-cli.md)
+ * [ACR Transfer with ARM templates](./container-registry-transfer-images.md)
+
+<!-- LINKS - External -->
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
++
+<!-- LINKS - Internal -->
+[azure-cli]: /cli/azure/install-azure-cli
+[az-login]: /cli/azure/reference-index#az_login
+[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az_keyvault_secret_set
+[az-keyvault-secret-show]: /cli/azure/keyvault/secret#az_keyvault_secret_show
+[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy
+[az-storage-container-generate-sas]: /cli/azure/storage/container#az_storage_container_generate_sas
+[az-storage-blob-list]: /cli/azure/storage/blob#az_storage-blob-list
+[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create
+[az-deployment-group-delete]: /cli/azure/deployment/group#az_deployment_group_delete
+[az-deployment-group-show]: /cli/azure/deployment/group#az_deployment_group_show
+[az-acr-repository-list]: /cli/azure/acr/repository#az_acr_repository_list
+[az-acr-import]: /cli/azure/acr#az_acr_import
+[az-resource-delete]: /cli/azure/resource#az_resource_delete
+[kv-managed-sas]: ../key-vault/secrets/overview-storage-keys.md
container-registry Container Registry Transfer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-transfer-troubleshooting.md
+
+ Title: ACR Transfer Troubleshooting
+description: Troubleshoot ACR Transfer
+ Last updated : 11/18/2021+++
+# ACR Transfer troubleshooting
+
+* **Template deployment failures or errors**
+ * If a pipeline run fails, look at the `pipelineRunErrorMessage` property of the run resource.
+ * For common template deployment errors, see [Troubleshoot ARM template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md)
+* **Problems accessing storage**<a name="problems-accessing-storage"></a>
+ * If you see a `403 Forbidden` error from storage, you likely have a problem with your SAS token.
+ * The SAS token might not currently be valid. The SAS token might be expired or the storage account keys might have changed since the SAS token was created. Verify that the SAS token is valid by attempting to use the SAS token to authenticate for access to the storage account container. For example, put an existing blob endpoint followed by the SAS token in the address bar of a new Microsoft Edge InPrivate window or upload a blob to the container with the SAS token by using `az storage blob upload`.
+ * The SAS token might not have sufficient Allowed Resource Types. Verify that the SAS token has been given permissions to Service, Container, and Object under Allowed Resource Types (`srt=sco` in the SAS token).
+ * The SAS token might not have sufficient permissions. For export pipelines, the required SAS token permissions are Read, Write, List, and Add. For import pipelines, the required SAS token permissions are Read, Delete, and List. (The Delete permission is required only if the import pipeline has the `DeleteSourceBlobOnSuccess` option enabled.)
+ * The SAS token might not be configured to work with HTTPS only. Verify that the SAS token is configured to work with HTTPS only (`spr=https` in the SAS token).
+* **Problems with export or import of storage blobs**
+ * SAS token may be invalid, or may have insufficient permissions for the specified export or import run. See [Problems accessing storage](#problems-accessing-storage).
+ * Existing storage blob in source storage account might not be overwritten during multiple export runs. Confirm that the OverwriteBlob option is set in the export run and the SAS token has sufficient permissions.
+ * Storage blob in target storage account might not be deleted after successful import run. Confirm that the DeleteBlobOnSuccess option is set in the import run and the SAS token has sufficient permissions.
+ * Storage blob not created or deleted. Confirm that container specified in export or import run exists, or specified storage blob exists for manual import run.
+* **Problems with Source Trigger Imports**
+ * The SAS token must have the List permission for Source Trigger imports to work.
+ * Source Trigger imports will only fire if the Storage Blob has a Last Modified time within the last 60 days.
+ * The Storage Blob must have a valid ContentMD5 property in order to be imported by the Source Trigger feature.
+ * The Storage Blob must have the "category":"acr-transfer-blob" blob metadata in order to be imported by the Source Trigger feature. This metadata is added automatically during an Export Pipeline Run, but may be stripped when moved from storage account to storage account depending on the method of copy.
+* **AzCopy issues**
+ * See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md).
+* **Artifacts transfer problems**
+ * Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you are transferring a maximum of 50 artifacts.
+ * Pipeline run might not have completed. An export or import run can take some time.
+ * For other pipeline issues, provide the deployment [correlation ID](../azure-resource-manager/templates/deployment-history.md) of the export run or import run to the Azure Container Registry team.
+* **Problems pulling the image in a physically isolated environment**
+ * If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If this is the case, you will need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.yml#how-do-i-push-non-distributable-layers-to-a-registry-)
+
+ <!-- LINKS - External -->
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+
+<!-- LINKS - Internal -->
+[azure-cli]: /cli/azure/install-azure-cli
+[az-login]: /cli/azure/reference-index#az_login
+[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az_keyvault_secret_set
+[az-keyvault-secret-show]: /cli/azure/keyvault/secret#az_keyvault_secret_show
+[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy
+[az-storage-container-generate-sas]: /cli/azure/storage/container#az_storage_container_generate_sas
+[az-storage-blob-list]: /cli/azure/storage/blob#az_storage-blob-list
+[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create
+[az-deployment-group-delete]: /cli/azure/deployment/group#az_deployment_group_delete
+[az-deployment-group-show]: /cli/azure/deployment/group#az_deployment_group_show
+[az-acr-repository-list]: /cli/azure/acr/repository#az_acr_repository_list
+[az-acr-import]: /cli/azure/acr#az_acr_import
+[az-resource-delete]: /cli/azure/resource#az_resource_delete
cosmos-db Synapse Link Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-use-cases.md
Title: Real-time analytics use cases with Azure Synapse Link for Azure Cosmos DB
+ Title: Near real-time analytics use cases with Azure Synapse Link for Azure Cosmos DB
description: Learn how Azure Synapse Link for Azure Cosmos DB is used in Supply chain analytics, forecasting, reporting, real-time personalization, and IOT predictive maintenance.
To learn more, see the following docs:
* [Apache Spark in Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-concepts.md)
-* [Serverless SQL pool runtime support in Azure Synapse Analytics](../synapse-analytics/sql/on-demand-workspace-overview.md)
+* [Serverless SQL pool runtime support in Azure Synapse Analytics](../synapse-analytics/sql/on-demand-workspace-overview.md)
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
This article describes how you can create and configure a self-hosted IR.
## Considerations for using a self-hosted IR -- You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory or Synapse workspace within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](./create-shared-self-hosted-integration-runtime-powershell.md).
+- You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](./create-shared-self-hosted-integration-runtime-powershell.md).
- You can install only one instance of a self-hosted integration runtime on any single machine. If you have two data factories or Synapse workspaces that need to access on-premises data sources, either use the [self-hosted IR sharing feature](./create-shared-self-hosted-integration-runtime-powershell.md) to share the self-hosted IR, or install the self-hosted IR on two on-premises computers, one for each data factory or Synapse workspace. - The self-hosted integration runtime doesn't need to be on the same machine as the data source. However, having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source. We recommend that you install the self-hosted integration runtime on a machine that differs from the one that hosts the on-premises data source. When the self-hosted integration runtime and data source are on different machines, the self-hosted integration runtime doesn't compete with the data source for resources. - You can have multiple self-hosted integration runtimes on different machines that connect to the same on-premises data source. For example, if you have two self-hosted integration runtimes that serve two data factories, the same on-premises data source can be registered with both data factories.
This article describes how you can create and configure a self-hosted IR.
- Use the self-hosted integration runtime even if the data store is in the cloud on an Azure Infrastructure as a Service (IaaS) virtual machine. - Tasks might fail in a self-hosted integration runtime that you installed on a Windows server for which FIPS-compliant encryption is enabled. To work around this problem, you have two options: store credentials/secret values in an Azure Key Vault or disable FIPS-compliant encryption on the server. To disable FIPS-compliant encryption, change the following registry subkey's value from 1 (enabled) to 0 (disabled): `HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled`. If you use the [self-hosted integration runtime as a proxy for SSIS integration runtime](./self-hosted-integration-runtime-proxy-ssis.md), FIPS-compliant encryption can be enabled and will be used when moving data from on premises to Azure Blob Storage as a staging area.
+> [!NOTE]
+> Currently self-hosted integration runtime can only be shared with multiple data factories, it can't be shared across Synapse workspaces or between data factory and Synapse workspace.
+ ## Command flow and data flow When you move data between on-premises and the cloud, the activity uses a self-hosted integration runtime to transfer the data between an on-premises data source and the cloud.
digital-twins How To Send Twin To Twin Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-send-twin-to-twin-events.md
description: See how to create a function in Azure for propagating events through the twin graph. Previously updated : 11/16/2021 Last updated : 1/07/2022 -+ # Optional fields. Don't forget to remove # if you need a field. #
# Set up twin-to-twin event handling
-A fully-connected Azure Digital Twins graph is driven by event propagation. Data arrives into Azure Digital Twins from external sources like IoT Hub, and then is propagated through the Azure Digital Twins graph, updating relevant twins as appropriate.
+This article shows how to **send events from twin to twin**, so that when one digital twin in the graph is updated, related twins in the graph that are affected by this information can update accordingly. This will help you create a fully-connected Azure Digital Twins graph, where data that arrives into Azure Digital Twins from external sources like IoT Hub is propagated through the entire graph.
-This article shows how to **send events from twin to twin**, so that twins can be updated in response to property changes or other data from related twins in the graph. This is done by setting up an [Azure function](../azure-functions/functions-overview.md) that watches for twin life cycle events. The function recognizes which events should affect other twins in the graph, and uses the event data to update the affected twins accordingly.
+To set up this twin-to-twin event handling, you'll create an [Azure function](../azure-functions/functions-overview.md) that watches for twin life cycle events. The function recognizes which events should affect other twins in the graph, and uses the event data to update the affected twins accordingly.
## Prerequisites
To set up twin-to-twin handling, you'll need an **Azure Digital Twins instance**
Optionally, you may want to set up [automatic telemetry ingestion through IoT Hub](how-to-ingest-iot-hub-data.md) for your twins as well. This is not required in order to send data from twin to twin, but it's an important piece of a complete solution where the twin graph is driven by live telemetry.
-## Set up endpoint and route
+## Send twin events to an endpoint
To set up twin-to-twin event handling, start by creating an **endpoint** in Azure Digital Twins and a **route** to that endpoint. Twins undergoing an update will use the route to send information about their update events to the endpoint (where Event Grid can pick them up later and pass them to an Azure function for processing). [!INCLUDE [digital-twins-twin-to-twin-resources.md](../../includes/digital-twins-twin-to-twin-resources.md)]
-## Create the Azure function
+## Create Azure function to update twins
-Next, create an Azure function that will listen on the endpoint and receive twin events that are sent there via the route.
+Next, create an Azure function that will listen on the endpoint and receive twin events that are sent there via the route. The logic of the function should use the information in the events to determine what other twins need to be updated and then perform the updates.
1. First, create an Azure Functions project in Visual Studio on your machine. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
Before your function can access Azure Digital Twins, it needs some information a
[!INCLUDE [digital-twins-configure-function-app-cli.md](../../includes/digital-twins-configure-function-app-cli.md)]
-## Connect the function to Event Grid
+## Connect the function to the endpoint
-Next, subscribe your Azure function to the event grid topic you created earlier. This will ensure that data can flow from an updated twin through the event grid topic to the function.
+Next, subscribe your Azure function to the Event Grid endpoint you created earlier. This will ensure that data can flow from an updated twin through the Event Grid topic to the function, which can use the event information to update other twins as needed.
-To do this, you'll create an **Event Grid subscription** that sends data from the event grid topic that you created earlier to your Azure function.
+To do this, you'll create an **Event Grid subscription** that sends data from the Event Grid topic that you created earlier to your Azure function.
Use the following CLI command, filling in placeholders for your subscription ID, resource group, function app, and function name.
Use the following CLI command, filling in placeholders for your subscription ID,
az eventgrid event-subscription create --name <name-for-your-event-subscription> --source-resource-id /subscriptions/<subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.EventGrid/topics/<your-event-grid-topic> \ --endpoint-type azurefunction --endpoint /subscriptions/<subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.Web/sites/<your-function-app-name>/functions/<function-name> ```
-Now, your function can receive events through your event grid topic. The data flow setup is complete.
+Now, your function can receive events through your Event Grid topic. The data flow setup is complete.
## Test and verify results
event-grid Secure Webhook Delivery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/secure-webhook-delivery.md
Based on the diagram above, follow the next steps to configure the tenant.
- **$eventSubscriptionWriterUserPrincipalName**: Azure User Principal Name of the user who will create event subscription > [!NOTE]
- > You don't need to modify the value of **$eventGridAppId**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **$eventGridRoleName**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) to execute this script.
+ > You don't need to modify the value of **$eventGridAppId**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **$eventGridRoleName**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of Webhook app in Azure AD to execute this script.
If you see the following error message, you need to elevate to the service principal. An additional access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal.
Based on the diagram above, follow the next steps to configure the tenant.
- **$eventSubscriptionWriterAppId**: Azure AD Application ID for Event Grid subscription writer > [!NOTE]
- > You don't need to modify the value of **```$eventGridAppId```**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **```$eventGridRoleName```**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) to execute this script.
+ > You don't need to modify the value of **```$eventGridAppId```**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **```$eventGridRoleName```**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of Webhook app in Azure AD to execute this script.
8. Login as the Event Grid subscription writer Azure AD Application by running the command.
Do the following steps in **Tenant B**:
- **$eventSubscriptionWriterAppId**: Azure AD application ID for Event Grid subscription writer > [!NOTE]
- > You don't need to modify the value of **```$eventGridAppId```**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **```$eventGridRoleName```**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) to execute this script.
+ > You don't need to modify the value of **```$eventGridAppId```**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **```$eventGridRoleName```**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of Webhook app in Azure AD to execute this script.
If you see the following error message, you need to elevate to the service principal. An additional access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal.
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | 10G, 100G | Interxion | | **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems | | **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Deutsche Telekom AG, Equinix |
-| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Colt, Equinix, Megaport, Swisscom |
+| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Colt, Equinix, InterCloud, Megaport, Swisscom |
| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon | | **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel | | **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | 10G | NTT Communications, Telin, XL Axiata |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai, Mumbai | | **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 | | **Intelsat** | Supported | Supported | Washington DC2 |
-| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported |Amsterdam, Chicago, Frankfurt, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
+| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported |Amsterdam, Chicago, Frankfurt, Geneva, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported |Chicago, Dallas, Silicon Valley, Washington DC | | **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported |Osaka, Tokyo | | **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported |Cape Town, Johannesburg, London |
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-migrate.md
The minimum Azure PowerShell version requirement is 6.5.0. For more information,
```azurepowershell $azfw = Get-AzFirewall -Name "<firewall-name>" -ResourceGroupName "<resource-group-name>"
- $hub = get-azvirtualhub -ResourceGroupName "<resource-group-name>" -name "<vWAN-name>"
+ $hub = get-azvirtualhub -ResourceGroupName "<resource-group-name>" -name "<vWANhub-name>"
$azfw.Sku.Tier="Premium" $azfw.Allocate($hub.id) Set-AzFirewall -AzureFirewall $azfw
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/manage.md
Title: How to work with your management groups - Azure Governance description: Learn how to view, maintain, update, and delete your management group hierarchy. Previously updated : 08/17/2021 Last updated : 01/07/2022 # Manage your resources with management groups
template and deploy it at [tenant level](../../azure-resource-manager/templates/
} ```
+Or, the following Bicep file.
+
+```bicep
+targetScope = 'managementGroup'
+
+@description('Provide the ID of the management group that you want to move the subscription to.')
+param targetMgId string
+
+@description('Provide the ID of the existing subscription to move.')
+param subscriptionId string
+
+resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01' = {
+ scope: tenant()
+ name: '${targetMgId}/${subscriptionId}'
+}
+```
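+
+As an illustration (the deployment name, location, and file name below are placeholders, not values from this article), you could deploy this Bicep file at the management group scope with the Azure CLI:
+
+```azurecli
+az deployment mg create \
+  --name move-subscription \
+  --location westus \
+  --management-group-id <target-management-group-id> \
+  --template-file move-sub.bicep \
+  --parameters targetMgId=<target-management-group-id> subscriptionId=<subscription-id>
+```
+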
+ ## Move management groups ### Move management groups in the portal
hdinsight Hdinsight Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-private-link.md
The use of Private Link to connect to an HDInsight cluster is an optional featur
When `privateLink` is set to *enabled*, internal [standard load balancers](../load-balancer/load-balancer-overview.md) (SLBs) are created, and an Azure Private Link service is provisioned for each SLB. The Private Link service is what allows you to access the HDInsight cluster from private endpoints.
-## Prerequisites
+## Private Link Deployment Steps
+Successfully creating a Private Link cluster takes many steps, so we have outlined them here. Follow each of the steps below to ensure everything is set up correctly.
-Standard load balancers don't automatically provide [public outbound NAT](../load-balancer/load-balancer-outbound-connections.md) as basic load balancers do. You must provide your own NAT solution, such as a NAT gateway or a NAT provided by your [firewall](./hdinsight-restrict-outbound-traffic.md), to connect to outbound, public HDInsight dependencies.
+* Step 1: Create prerequisites
+* Step 2: Configure HDInsight subnet
+* Step 3: Deploy NAT gateway OR firewall
+* Step 4: Deploy Private Link cluster
+* Step 5: Create private endpoints
+* Step 6: Configure DNS
+* Step 7: Check cluster connectivity
+* Appendix: Manage private endpoints for Azure HDInsight
-Your HDInsight cluster still needs access to its outbound dependencies. If these outbound dependencies are not allowed, cluster creation might fail.
+## <a name="Createpreqs"></a>Step 1: Create Prerequisites
+
+To start, deploy the following resources if you haven't created them already. Once this is done, you should have at least one resource group, two virtual networks, and a network security group to attach to the subnet where the HDInsight cluster will be deployed, as shown below.
-### Configure a default network security group on the subnet
+|Type|Name|Purpose|
+|-|-|-|
+|Resource group|hdi-privlink-rg|Used to keep common resources together|
+|Virtual network|hdi-privlink-cluster-vnet|The VNET where the cluster will be deployed|
+|Virtual network|hdi-privlink-client-vnet|The VNET where clients will connect to the cluster from|
+|Network security group|hdi-privlink-cluster-vnet-nsg|Default NSG as required for cluster deployment|
+
+> [!NOTE]
+> The network security group (NSG) only needs to be deployed; you don't need to modify any NSG rules for cluster deployment.
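+
+If you'd rather script these prerequisites, a minimal Azure CLI sketch follows. The resource names come from the table above; the location and address spaces are assumptions, so adjust them for your environment.
+
+```azurecli
+az group create --name hdi-privlink-rg --location eastus
+
+# Cluster and client virtual networks
+az network vnet create --resource-group hdi-privlink-rg --name hdi-privlink-cluster-vnet \
+  --address-prefix 10.0.0.0/16 --subnet-name default --subnet-prefix 10.0.0.0/24
+az network vnet create --resource-group hdi-privlink-rg --name hdi-privlink-client-vnet \
+  --address-prefix 10.1.0.0/16 --subnet-name default --subnet-prefix 10.1.0.0/24
+
+# Default NSG attached to the cluster subnet (no custom rules needed)
+az network nsg create --resource-group hdi-privlink-rg --name hdi-privlink-cluster-vnet-nsg
+az network vnet subnet update --resource-group hdi-privlink-rg --vnet-name hdi-privlink-cluster-vnet \
+  --name default --network-security-group hdi-privlink-cluster-vnet-nsg
+```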
-Create and add a network security group (NSG) on the subnet where you intend to deploy the HDInsight cluster. An NSG is required for enabling outbound connectivity.
-### Disable network policies for the Private Link service
+## <a name="DisableNetworkPolicy"></a>Step 2: Configure HDInsight Subnet
-For the successful creation of a Private Link service, you must explicitly [disable network policies for Private Link services](../private-link/disable-private-link-service-network-policy.md).
+In order to choose a source IP address for your Private Link service, you must explicitly disable the `privateLinkServiceNetworkPolicies` setting on the subnet. Follow the instructions to [disable network policies for Private Link services](../private-link/disable-private-link-service-network-policy.md).
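+
+For example, assuming the VNET and subnet names from step 1, the setting can be disabled with the Azure CLI (the exact parameter name can vary by CLI version, so check the linked article if the flag isn't recognized):
+
+```azurecli
+az network vnet subnet update \
+  --resource-group hdi-privlink-rg \
+  --vnet-name hdi-privlink-cluster-vnet \
+  --name default \
+  --disable-private-link-service-network-policies true
+```
+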
-### Configure a NAT gateway on the subnet
+## <a name="NATorFirewall"></a>Step 3: Deploy NAT Gateway *OR* Firewall
-You can opt to use a NAT gateway if you don't want to configure a firewall or a network virtual appliance (NVA) for NAT. Otherwise, skip to the next prerequisite.
+Standard load balancers don't automatically provide [public outbound NAT](../load-balancer/load-balancer-outbound-connections.md) as basic load balancers do. Since Private Link clusters use standard load balancers, you must provide your own NAT solution, such as a NAT gateway or a NAT provided by your [firewall](./hdinsight-restrict-outbound-traffic.md), to connect to outbound, public HDInsight dependencies.
-To get started, add a NAT gateway (with a new public IP address in your virtual network) to the configured subnet of your virtual network. This gateway is responsible for translating your private internal IP address to public addresses when traffic needs to go outside your virtual network.
+### Deploy a NAT Gateway (Option 1)
+You can opt to use a NAT gateway if you don't want to configure a firewall or a network virtual appliance (NVA) for NAT. To get started, add a NAT gateway (with a new public IP address in your virtual network) to the configured subnet of your virtual network. This gateway is responsible for translating your private internal IP address to public addresses when traffic needs to go outside your virtual network.
-### Configure a firewall (optional)
+For a basic setup to get started:
+
+1. Search for 'NAT Gateways' in the Azure portal and click **Create**.
+2. Use the following configurations for the NAT gateway. (Settings not listed here can be left at their default values.)
+
+ | Config | Value |
+ | | -- |
+ | NAT gateway name | hdi-privlink-nat-gateway |
+ | Public IP Prefixes | Create a new public IP prefix |
+ | Public IP prefix name | hdi-privlink-nat-gateway-prefix |
+ | Public IP prefix size | /28 (16 addresses) |
+ | Virtual network | hdi-privlink-cluster-vnet |
+ | Subnet name | default |
+
+3. Once the NAT Gateway is finished deploying, you are ready to go to the next step.
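+
+If you prefer the CLI to the portal, the following sketch is a rough equivalent of the configuration above (resource names match the table; everything else uses defaults):
+
+```azurecli
+az network public-ip prefix create --resource-group hdi-privlink-rg \
+  --name hdi-privlink-nat-gateway-prefix --length 28
+
+az network nat gateway create --resource-group hdi-privlink-rg \
+  --name hdi-privlink-nat-gateway --public-ip-prefixes hdi-privlink-nat-gateway-prefix
+
+# Associate the NAT gateway with the cluster subnet
+az network vnet subnet update --resource-group hdi-privlink-rg \
+  --vnet-name hdi-privlink-cluster-vnet --name default \
+  --nat-gateway hdi-privlink-nat-gateway
+```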
+
+### Configure a firewall (Option 2)
For a basic setup to get started: 1. Add a new subnet named *AzureFirewallSubnet* to your virtual network.
For a basic setup to get started:
1. Use the new firewall's private IP address as the `nextHopIpAddress` value in your route table. 1. Add the route table to the configured subnet of your virtual network.
+Your HDInsight cluster still needs access to its outbound dependencies. If these outbound dependencies are not allowed, cluster creation might fail.
For more information on setting up a firewall, see [Control network traffic in Azure HDInsight](./control-network-traffic.md).
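+
+If you choose the firewall option, the user-defined route described above can be created and attached with commands along these lines; the route table name is an assumption and the firewall IP is a placeholder:
+
+```azurecli
+az network route-table create --resource-group hdi-privlink-rg --name hdi-privlink-rt
+
+# Send all outbound traffic from the cluster subnet to the firewall
+az network route-table route create --resource-group hdi-privlink-rg \
+  --route-table-name hdi-privlink-rt --name default-to-firewall \
+  --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance \
+  --next-hop-ip-address <firewall-private-ip>
+
+az network vnet subnet update --resource-group hdi-privlink-rg \
+  --vnet-name hdi-privlink-cluster-vnet --name default --route-table hdi-privlink-rt
+```
+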
-The following diagram shows an example of the networking configuration that's required before you create a cluster. In this example, all outbound traffic is forced to Azure Firewall through a user-defined route. The required outbound dependencies should be allowed on the firewall before cluster creation. For Enterprise Security Package clusters, virtual network peering can provide the network connectivity to Azure Active Directory Domain Services.
+## <a name="deployCluster"></a>Step 4: Deploy Private Link cluster
+
+At this point all prerequisites should be taken care of and you are ready to deploy the Private Link cluster. The following diagram shows an example of the networking configuration that's required before you create the cluster. In this example, all outbound traffic is forced to Azure Firewall through a user-defined route. The required outbound dependencies should be allowed on the firewall before cluster creation. For Enterprise Security Package clusters, virtual network peering can provide the network connectivity to Azure Active Directory Domain Services.
:::image type="content" source="media/hdinsight-private-link/before-cluster-creation.png" alt-text="Diagram of the Private Link environment before cluster creation.":::
-## Manage private endpoints for Azure HDInsight
+### Create the cluster
+
+The following JSON code snippet includes the two network properties that you must configure in your Azure Resource Manager template to create a private HDInsight cluster:
+
+```json
+networkProperties: {
+ "resourceProviderConnection": "Outbound",
+ "privateLink": "Enabled"
+}
+```
+For a complete template with many of the HDInsight enterprise security features, including Private Link, see [HDInsight enterprise security template](https://github.com/Azure-Samples/hdinsight-enterprise-security/tree/main/ESP-HIB-PL-Template).
+
+To create a cluster by using PowerShell, see the [example](/powershell/module/az.hdinsight/new-azhdinsightcluster#example-4--create-an-azure-hdinsight-cluster-with-relay-outbound-and-private-link-feature).
+
+To create a cluster by using the Azure CLI, see the [example](/cli/azure/hdinsight#az_hdinsight_create-examples).
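+
+If you build a full template from the linked sample and deploy it with the CLI, the deployment command looks roughly like the following; the file names are placeholders, not files provided by this article:
+
+```azurecli
+az deployment group create \
+  --resource-group hdi-privlink-rg \
+  --template-file azuredeploy.json \
+  --parameters @azuredeploy.parameters.json
+```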
+
+## <a name="PrivateEndpoints"></a>Step 5: Create Private Endpoints
+
+Azure automatically creates a Private Link service for the Ambari and SSH load balancers during the Private Link cluster deployment. After the cluster is deployed, you have to create two private endpoints on the client VNET(s), one for Ambari and one for SSH access. Then, link them to the Private Link services that were created as part of the cluster deployment.
+
+To create the Private Endpoints:
+1. Open the Azure portal and search for 'Private link'.
+2. In the results, click the Private link icon.
+3. Click 'Create private endpoint' and use the following configurations to setup the Ambari private endpoint:
+
+ | Config | Value |
+ | | -- |
+ | Name | hdi-privlink-cluster |
+ | Resource type | Microsoft.Network/privateLinkServices |
+ | Resource | gateway-* (This should match the HDI deployment ID of your cluster, for example gateway-4eafe3a2a67e4cd88762c22a55fe4654) |
+ | Virtual network | hdi-privlink-client-vnet |
+ | Subnet | default |
+
+4. Repeat the process to create another private endpoint for SSH access using the following configurations:
+
+ | Config | Value |
+ | | -- |
+ | Name | hdi-privlink-cluster-ssh |
+ | Resource type | Microsoft.Network/privateLinkServices |
+ | Resource | headnode-* (This should match the HDI deployment ID of your cluster, for example headnode-4eafe3a2a67e4cd88762c22a55fe4654) |
+ | Virtual network | hdi-privlink-client-vnet |
+ | Subnet | default |
+
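+A CLI alternative to the portal steps above, assuming you've looked up the resource IDs of the gateway-* and headnode-* Private Link services, might look like this:
+
+```azurecli
+# Private endpoint for Ambari (links to the gateway-* Private Link service)
+az network private-endpoint create --resource-group hdi-privlink-rg \
+  --name hdi-privlink-cluster --vnet-name hdi-privlink-client-vnet --subnet default \
+  --private-connection-resource-id <gateway-private-link-service-id> \
+  --connection-name ambari
+
+# Private endpoint for SSH (links to the headnode-* Private Link service)
+az network private-endpoint create --resource-group hdi-privlink-rg \
+  --name hdi-privlink-cluster-ssh --vnet-name hdi-privlink-client-vnet --subnet default \
+  --private-connection-resource-id <headnode-private-link-service-id> \
+  --connection-name ssh
+```
+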
+Once the private endpoints are created, you're done with this phase of the setup. If you didn't make a note of the private IP addresses assigned to the endpoints, follow the steps below:
+
+1. Open the client VNET in the Azure portal.
+2. Click the 'Overview' tab.
+3. You should see both the Ambari and ssh Network interfaces listed and their private IP Addresses.
+4. Make a note of these IP addresses because they are required to connect to the cluster and properly configure DNS.
+
+## <a name="ConfigureDNS"></a>Step 6: Configure DNS to connect over private endpoints
+
+To access private clusters, you can configure DNS resolution through private DNS zones. The Private Link entries created in the Azure-managed public DNS zone `azurehdinsight.net` are as follows:
+
+```dns
+<clustername> CNAME <clustername>.privatelink
+<clustername>-int CNAME <clustername>-int.privatelink
+<clustername>-ssh CNAME <clustername>-ssh.privatelink
+```
+The following image shows an example of the private DNS entries configured to enable access to a cluster from a virtual network that isn't peered or doesn't have a direct line of sight to the cluster. You can use an Azure DNS private zone to override `*.privatelink.azurehdinsight.net` fully qualified domain names (FQDNs) and resolve private endpoints' IP addresses in the client's network. The configuration is only for `<clustername>.azurehdinsight.net` in the example, but it also extends to other cluster endpoints.
++
+To configure DNS resolution through a Private DNS zone:
+
+1. Create an Azure Private DNS zone. (Settings not listed here are left at their default values.)
+
+ | Config | Value |
+ | | -- |
+ | Name | privatelink.azurehdinsight.net |
+
+2. Add a Record set to the Private DNS zone for Ambari.
+
+ | Config | Value |
+ | | -- |
+ | Name | YourPrivateLinkClusterName |
+ | Type | A - Alias record to IPv4 address |
+ | TTL | 1 |
+ | TTL unit | Hours |
+ | IP Address | Private IP of private endpoint for Ambari access |
+
+3. Add a Record set to the Private DNS zone for SSH.
+
+ | Config | Value |
+ | | -- |
+ | Name | YourPrivateLinkClusterName-ssh |
+ | Type | A - Alias record to IPv4 address |
+ | TTL | 1 |
+ | TTL unit | Hours |
+ | IP Address | Private IP of private endpoint for SSH access |
+
+4. Associate the private DNS zone with the client VNET by adding a Virtual Network Link.
+ 1. Open the private DNS zone in the Azure portal.
+ 1. Click the 'Virtual network links' tab.
+ 1. Click the 'Add' button.
+ 1. Fill in the details: Link name, Subscription, and Virtual Network
+ 1. Click **Save**.
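+
+The same configuration can be scripted. The following sketch assumes the record names and private IP addresses gathered in the previous steps (the virtual network link name is an assumption):
+
+```azurecli
+az network private-dns zone create --resource-group hdi-privlink-rg \
+  --name privatelink.azurehdinsight.net
+
+# A records that point the cluster FQDNs at the private endpoint IPs
+az network private-dns record-set a add-record --resource-group hdi-privlink-rg \
+  --zone-name privatelink.azurehdinsight.net --record-set-name <clustername> \
+  --ipv4-address <ambari-private-endpoint-ip>
+az network private-dns record-set a add-record --resource-group hdi-privlink-rg \
+  --zone-name privatelink.azurehdinsight.net --record-set-name <clustername>-ssh \
+  --ipv4-address <ssh-private-endpoint-ip>
+
+# Link the zone to the client virtual network
+az network private-dns link vnet create --resource-group hdi-privlink-rg \
+  --zone-name privatelink.azurehdinsight.net --name hdi-privlink-client-vnet-link \
+  --virtual-network hdi-privlink-client-vnet --registration-enabled false
+```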
+
+## <a name="CheckConnectivity"></a>Step 7: Check cluster connectivity
+
+The last step is to test connectivity to the cluster. Since this cluster is isolated or private, we cannot access the cluster using any public IP or FQDN. Instead we have a couple of options:
+
+* Set up VPN access to the client VNET from your on-premises network
+* Deploy a VM to the client VNET and access the cluster from this VM
+
+For this example, we will deploy a VM in the client VNET using the following configuration to test the connectivity.
+
+| Config | Value |
+| | -- |
+| Virtual machine name | hdi-privlink-client-vm |
+| Image | Windows 10 Pro, Version 2004 - Gen1 |
+| Public inbound ports | Allow selected ports |
+| Select inbound ports | RDP (3389) |
+| I confirm I have an eligible Windows 10 license... | Checked |
+| Virtual network | hdi-privlink-client-vnet |
+| Subnet | default |
+
+Once the client VM is deployed, you can test both Ambari and SSH access.
+
+To test Ambari access: <br>
+1. Open a web browser on the VM.
+2. Navigate to your cluster's regular FQDN: `https://<clustername>.azurehdinsight.net`
+3. If the Ambari UI loads, the configuration is correct for Ambari access.
+
+To test ssh access: <br>
+1. Open a command prompt to get a terminal window.
+2. In the terminal window, try connecting to your cluster with SSH: `ssh sshuser@<clustername>.azurehdinsight.net` (Replace "sshuser" with the ssh user you created for your cluster)
+3. If you are able to connect, the configuration is correct for SSH access.
+
+## <a name="ManageEndpoints"></a>Manage Private endpoints for Azure HDInsight
You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure HDInsight clusters to allow clients on a virtual network to securely access your cluster over [Private Link](../private-link/private-link-overview.md). Network traffic between the clients on the virtual network and the HDInsight cluster traverses over the Microsoft backbone network, eliminating exposure from the public internet.
The following table shows the various HDInsight resource actions and the resulti
| Reject | Rejected | Connection was rejected by the Private Link resource owner. | | Remove | Disconnected | Connection was removed by the Private Link resource owner. The private endpoint becomes informative and should be deleted for cleanup. |
-## Configure DNS to connect over private endpoints
-
-After you've set up the networking, you can create a cluster with an outbound resource provider connection and Private Link enabled.
-
-To access private clusters, you can use Private Link DNS extensions and private endpoints. When `privateLink` is set to *enabled*, you can create private endpoints and configure DNS resolution through private DNS zones.
-
-The Private Link entries created in the Azure-managed public DNS zone `azurehdinsight.net` are as follows:
-
-```dns
-<clustername> CNAME <clustername>.privatelink
-<clustername>-int CNAME <clustername>-int.privatelink
-<clustername>-ssh CNAME <clustername>-ssh.privatelink
-```
-The following image shows an example of the private DNS entries configured to enable access to a cluster from a virtual network that isn't peered or doesn't have a direct line of sight to the cluster. You can use an Azure DNS private zone to override `*.privatelink.azurehdinsight.net` fully qualified domain names (FQDNs) and resolve private endpoints' IP addresses in the client's network. The configuration is only for `<clustername>.azurehdinsight.net` in the example, but it also extends to other cluster endpoints.
--
-## Create clusters
-
-The following JSON code snippet includes the two network properties that you must configure in your Azure Resource Manager template to create a private HDInsight cluster:
-
-```json
-networkProperties: {
- "resourceProviderConnection": "Outbound",
- "privateLink": "Enabled"
-}
-```
-
-For a complete template with many of the HDInsight enterprise security features, including Private Link, see [HDInsight enterprise security template](https://github.com/Azure-Samples/hdinsight-enterprise-security/tree/main/ESP-HIB-PL-Template).
-
-To create a cluster by using PowerShell, see the [example](/powershell/module/az.hdinsight/new-azhdinsightcluster#example-4--create-an-azure-hdinsight-cluster-with-relay-outbound-and-private-link-feature).
-
-To create a cluster by using the Azure CLI, see the [example](/cli/azure/hdinsight#az_hdinsight_create-examples).
- ## Next steps * [Enterprise Security Package for Azure HDInsight](enterprise-security-package.md)
-* [Enterprise security general information and guidelines in Azure HDInsight](./domain-joined/general-guidelines.md)
+* [Enterprise security general information and guidelines in Azure HDInsight](./domain-joined/general-guidelines.md)
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
Title: Configure local role-based access control (local RBAC) for Azure API for FHIR
-description: This article describes how to configure the Azure API for FHIR to use an external Azure AD tenant for data plane
+description: This article describes how to configure the Azure API for FHIR to use a secondary Azure AD tenant for data plane
Previously updated : 12/06/2021 Last updated : 01/05/2022 # Configure local RBAC for FHIR
-This article explains how to configure the Azure API for FHIR to use an external, secondary Azure Active Directory tenant for managing data plane access. Use this mode only if it is not possible for you to use the Azure Active Directory tenant associated with your subscription.
+This article explains how to configure the Azure API for FHIR to use a secondary Azure Active Directory (Azure AD) tenant for data access. Use this mode only if it is not possible for you to use the Azure AD tenant associated with your subscription.
> [!NOTE]
-> If your FHIR service data plane is configured to use your primary Azure Active Directory tenant associated with your subscription, [use Azure RBAC to assign data plane roles](configure-azure-rbac.md).
+> If your FHIR service is configured to use your primary Azure AD tenant associated with your subscription, [use Azure RBAC to assign data plane roles](configure-azure-rbac.md).
-## Add service principal
+## Add a new service principal or use an existing one
-Local RBAC allows you to use an external Azure Active Directory tenant with your FHIR server. In order to allow the local RBAC system to check group memberships in this tenant, the Azure API for FHIR must have a service principal in the tenant. This service principal will get created automatically in tenants tied to subscriptions that have deployed the Azure API for FHIR, but in case your tenant has no subscription tied to it, a tenant administrator will need to create this service principal with one of the following commands:
+Local RBAC allows you to use a service principal in the secondary Azure AD tenant with your FHIR server. You can create a new service principal through the Azure portal, PowerShell or CLI commands, or use an existing service principal. The process is also known as [application registration](../register-application.md). You can review and modify the service principals through Azure AD from the portal or using scripts.
-Using the `Az` PowerShell module:
+The PowerShell and CLI scripts below, which are tested and validated in Visual Studio Code, create a new service principal (or client application), and add a client secret. The service principal ID is used for local RBAC and the application ID and client secret will be used to access the FHIR service later.
-```azurepowershell-interactive
-New-AzADServicePrincipal -ApplicationId 3274406e-4e0a-4852-ba4f-d7226630abb7 -Role Contributor
-```
-
-or you can use the `AzureAd` PowerShell module:
+You can use the `Az` PowerShell module:
```azurepowershell-interactive
-New-AzureADServicePrincipal -AppId 3274406e-4e0a-4852-ba4f-d7226630abb7
+$appname="xxx"
+$sp= New-AzADServicePrincipal -DisplayName $appname
+$clientappid=$sp.ApplicationId
+$spid=$sp.Id
+#Get client secret which is not visible from the portal
+$clientsecret=ConvertFrom-SecureString -SecureString $sp.Secret -AsPlainText
``` or you can use Azure CLI: ```azurecli-interactive
-az ad sp create --id 3274406e-4e0a-4852-ba4f-d7226630abb7
+appname=xxx
+clientappid=$(az ad app create --display-name $appname --query appId --output tsv)
+spid=$(az ad sp create --id $clientappid --query objectId --output tsv)
+#Add client secret with expiration. The default is one year.
+clientsecretname=mycert2
+clientsecretduration=2
+clientsecret=$(az ad app credential reset --id $clientappid --append --credential-description $clientsecretname --years $clientsecretduration --query password --output tsv)
``` ## Configure local RBAC
-You can configure the Azure API for FHIR to use an external or secondary Azure Active Directory tenant in the **Authentication** blade:
+You can configure the Azure API for FHIR to use a secondary Azure Active Directory tenant in the **Authentication** blade:
-![Local RBAC assignments](media/rbac/local-rbac-guids.png).
+![Local RBAC assignments](media/rbac/local-rbac-guids.png)
-In the authority box, enter a valid Azure Active Directory tenant. Once the tenant has been validated, the **Allowed object IDs** box should be activated and you can enter a list of identity object IDs. These IDs can be the identity object IDs of:
+In the authority box, enter a valid secondary Azure Active Directory tenant. Once the tenant has been validated, the **Allowed object IDs** box should be activated and you can enter one or a list of Azure AD service principal object IDs. These IDs can be the identity object IDs of:
* An Azure Active Directory user. * An Azure Active Directory service principal.
In the authority box, enter a valid Azure Active Directory tenant. Once the tena
You can read the article on how to [find identity object IDs](find-identity-object-ids.md) for more details.
-After entering the required object IDs, click **Save** and wait for changes to be saved before trying to access the data plane using the assigned users, service principals, or groups.
+After entering the required Azure AD object IDs, click **Save** and wait for the changes to be saved before trying to access the data plane using the assigned users, service principals, or groups. The object IDs are granted all permissions, equivalent to the "FHIR Data Contributor" role.
+
+The local RBAC setting is only visible from the authentication blade; it is not visible from the Access Control (IAM) blade.
+
+> [!NOTE]
+> Only a single tenant is supported for RBAC or local RBAC. To disable the local RBAC function, you can change it back to the valid tenant (or primary tenant) associated with your subscription, and remove all Azure AD object IDs in the "Allowed object IDs" box.
## Caching behavior
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/healthcare-apis-faqs.md
Previously updated : 11/05/2021 Last updated : 01/03/2022
During the public preview phase, Azure Healthcare APIs is available for you to u
Please refer to the [Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir) page for the most current information. ### What are the subscription quota limits for the Azure Healthcare APIs?-
-#### Workspace (logical container):
-* 200 instances per Subscription (not adjustable)
-
-#### DICOM service:
-* 800 instances per Subscription (not adjustable)
-* 10 DICOM instances per Workspace (not adjustable)
-
-#### FHIR service:
-* 25 instances per Subscription (not adjustable)
-* 10 FHIR instances per Workspace (not adjustable)
-
-#### IoT connector:
-* 25 IoT connectors per Subscription (not adjustable)
-* 10 IoT connectors per Workspace (not adjustable)
-* One FHIR Destination* per IoT connector (not adjustable)
-
-(* - FHIR destination is a child resource of IoT connector)
+Please refer to [Healthcare APIs service limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-healthcare-apis) for the most current information.
## More frequently asked questions [FAQs about Azure Healthcare APIs FHIR service](./fhir/fhir-faq.md)
iot-central Concepts App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-app-templates.md
Title: What are application templates in Azure IoT Central | Microsoft Docs
description: Azure IoT Central application templates allow you to jump in to IoT solution development. Previously updated : 08/24/2021 Last updated : 12/21/2021
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-iot-edge.md
Title: Azure IoT Edge and Azure IoT Central | Microsoft Docs
description: Understand how to use Azure IoT Edge with an IoT Central application. Previously updated : 08/31/2021 Last updated : 12/28/2021
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-telemetry-properties-commands.md
Title: Telemetry, property, and command payloads in Azure IoT Central | Microsof
description: Azure IoT Central device templates let you specify the telemetry, properties, and commands of a device must implement. Understand the format of the data a device can exchange with IoT Central. Previously updated : 08/25/2021 Last updated : 12/27/2021
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
To find these values, navigate to each device in the device list and select **Co
## Deploy the gateway and devices
-To enable you to try out this scenario, the following steps show you how to deploy the gateway and downstream devices to Azure virtual machines. In a real scenario, the downstream device and gateway run on physical devices on your local network.
+To let you try out this scenario, the following steps show you how to deploy the gateway and downstream devices to Azure virtual machines.
-To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine is a transparent IoT Edge gateway, the other is a downstream device that simulates a thermostat:
+> [!TIP]
+> To learn how to deploy the IoT Edge runtime to a physical device, see [Create an IoT Edge device](../../iot-edge/how-to-create-iot-edge-device.md) in the IoT Edge documentation.
+
+To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge runtime installed and is a transparent IoT Edge gateway. The other virtual machine is a downstream device where you'll run code to send simulated telemetry:
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway%2FDeployGatewayVMs.json" target="_blank"> <img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png" alt="Deploy to Azure button" />
iot-central Howto Administer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-administer.md
Title: Change Azure IoT Central application settings | Microsoft Docs
description: Learn how to manage your Azure IoT Central application by changing application name, URL, upload image, and delete an application Previously updated : 08/25/2021 Last updated : 12/28/2021
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-authorize-rest-api.md
Title: Authorize REST API in Azure IoT Central
description: How to authenticate and authorize IoT Central REST API calls Previously updated : 08/25/2021 Last updated : 12/27/2021
iot-central Howto Configure File Uploads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-file-uploads.md
description: How to configure file uploads from your devices to the cloud. After
Previously updated : 08/23/2021 Last updated : 12/22/2021
iot-central Howto Configure Rules Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-rules-advanced.md
Title: Use workflows to integrate your Azure IoT Central application with other
description: This how-to article shows you, as a builder, how to configure rules and actions that integrate your IoT Central application with other cloud services. To create an advanced rule, you use an IoT Central connector in either Power Automate or Azure Logic Apps. Previously updated : 08/26/2021 Last updated : 12/21/2021
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-control-devices-with-rest-api.md
Title: Use the REST API to manage devices in Azure IoT Central
description: How to use the IoT Central REST API to control devices in an application Previously updated : 08/28/2021 Last updated : 12/28/2021
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-organizations.md
Previously updated : 08/20/2021 Last updated : 12/27/2021
iot-central Howto Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-customize-ui.md
Title: Customize the Azure IoT Central UI | Microsoft Docs
description: How to customize the theme and help links for your Azure IoT central application Previously updated : 08/18/2021 Last updated : 12/21/2021
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-dashboards.md
Title: Create and manage Azure IoT Central dashboards | Microsoft Docs
description: Learn how to create and manage application and personal dashboards in Azure IoT Central. Previously updated : 08/19/2021 Last updated : 12/28/2021
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-devices-individually.md
Title: Manage devices individually in your Azure IoT Central application | Micro
description: Learn how to manage devices individually in your Azure IoT Central application. Create, delete, and update devices. Previously updated : 08/20/2021 Last updated : 12/27/2021
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-users-roles.md
Title: Manage users and roles in Azure IoT Central application | Microsoft Docs
description: As an administrator, how to manage users and roles in your Azure IoT Central application Previously updated : 08/20/2021 Last updated : 12/22/2021
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-commands.md
The following screenshot shows how the successful command response displays in t
## Long-running commands
-This section shows you how a device can delay sending a confirmation that the command competed.
+This section shows you how a device can delay sending a confirmation that the command completed.
The following code snippet shows how a device can implement a long-running command:
iot-central Iot Central Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/iot-central-customer-data-requests.md
Title: Customer data request features in Azure IoT Central | Microsoft Docs
description: This article describes identifying, deleting, and exporting customer data in Azure IoT Central application. Previously updated : 08/18/2021 Last updated : 12/28/2021
iot-central Iot Central Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/iot-central-supported-browsers.md
Title: Supported browsers for Azure IoT Central | Microsoft Docs
description: Azure IoT Central can be accessed across modern desktops, tablets and browsers. This article outlines the list of supported browsers. Previously updated : 08/17/2021 Last updated : 12/21/2021
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
Title: Tutorial - Define a new gateway device type in Azure IoT Central | Micros
description: This tutorial shows you, as a builder, how to define a new IoT gateway device type in your Azure IoT Central application. Previously updated : 08/18/2021 Last updated : 12/21/2021
iot-central Tutorial Health Data Triage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/tutorial-health-data-triage.md
Title: Tutorial - Create a health data triage dashboard with Azure IoT Central |
description: Tutorial - Learn to build a health data triage dashboard using Azure IoT Central application templates. Previously updated : 09/01/2021 Last updated : 12/21/2021
iot-central Architecture Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-connected-logistics.md
Previously updated : 08/18/2021 Last updated : 12/28/2021
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
Previously updated : 09/01/2021 Last updated : 12/21/2021 # Tutorial: Deploy and walk through the micro-fulfillment center application template
load-balancer Outbound Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/outbound-rules.md
Previously updated : 10/13/2020 Last updated : 1/6/2022
Load balancer gives [SNAT](load-balancer-outbound-connections.md) ports in multi
If you attempt to give more [SNAT](load-balancer-outbound-connections.md) ports than are available based on the number of public IP addresses, the configuration operation is rejected. For example, if you give 10,000 ports per VM and seven VMs in a backend pool share a single public IP, the configuration is rejected. Seven multiplied by 10,000 exceeds the 64,000 port limit. Add more public IP addresses to the frontend of the outbound rule to enable the scenario.
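The arithmetic from this example can be sanity-checked before applying a configuration. The following is only an illustrative sketch using the numbers from this scenario; 64,000 is the SNAT port budget of a single frontend public IP address.

```bash
# Illustrative check: do the requested SNAT ports fit the available port budget?
vms=7                      # VMs in the backend pool (example from this scenario)
ports_per_vm=10000         # ports requested per VM in the outbound rule
ports_per_frontend_ip=64000

requested=$((vms * ports_per_vm))   # 70,000 ports requested
# Round up to find how many frontend public IPs are needed to satisfy the request.
ips_needed=$(( (requested + ports_per_frontend_ip - 1) / ports_per_frontend_ip ))
echo "Requested: $requested ports; frontend public IPs needed: $ips_needed"   # 2 IPs
```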
-Revert to the [default port allocation](load-balancer-outbound-connections.md#preallocatedports) by specifying 0 for the number of ports. The first 50 VM instances will get 1024 ports, 51-100 VM instances will get 512 up to the maximum instances. For more information on default SNAT port allocation, see [SNAT ports allocation table](./load-balancer-outbound-connections.md#preallocatedports).
+Revert to the [default port allocation](load-balancer-outbound-connections.md#preallocatedports) by specifying 0 for the number of ports. For more information on default SNAT port allocation, see [SNAT ports allocation table](./load-balancer-outbound-connections.md#preallocatedports).
### <a name="scenario3out"></a>Scenario 3: Enable outbound only - #### Details - Use a public standard load balancer to provide outbound NAT for a group of VMs. In this scenario, use an outbound rule by itself, without any additional rules configured. - > [!NOTE] > **Azure Virtual Network NAT** can provide outbound connectivity for virtual machines without the need for a load balancer. See [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md) for more information.
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-component-pipelines-cli.md
Previously updated : 10/21/2021 Last updated : 01/07/2022
Open `ComponentA.yaml` to see how the first component is defined:
:::code language="yaml" source="~/azureml-examples-main/cli/jobs/pipelines-with-components/basics/3a_basic_pipeline/componentA.yml":::
-In the current preview, only components of type `command` are supported. The `name` is the unique identifier and used in Studio to describe the component, and `display_name` is used to provide friendly name . The `version` key-value pair allows you to evolve your pipeline components while maintaining reproducibility with older versions.
+In the current preview, only components of type `command` are supported. The `name` is the unique identifier and used in Studio to describe the component, and `display_name` is used for a display-friendly name. The `version` key-value pair allows you to evolve your pipeline components while maintaining reproducibility with older versions.
All files in the `code.local_path` value will be uploaded to Azure for processing.
Notice how `jobs.train_job.outputs.model_output` is used as an input to both the
:::image type="content" source="media/how-to-create-component-pipelines-cli/regression-graph.png" alt-text="pipeline graph of the NYC taxi-fare prediction task" lightbox="media/how-to-create-component-pipelines-cli/regression-graph.png":::
+## Register components for reuse and sharing
+
+While some components will be specific to a particular pipeline, the real benefit of components comes from reuse and sharing. Register a component in your Machine Learning workspace to make it available for reuse. Registered components support automatic versioning so you can update the component while ensuring that pipelines that require an older version will continue to work.
+
+In the azureml-examples repository, navigate to the `cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components` directory.
+
+To register a component, use the `az ml component create` command:
+
+```azurecli
+az ml component create --file train.yml
+az ml component create --file score.yml
+az ml component create --file eval.yml
+```
+
+After these commands run to completion, you can see the components in Studio:
+
+![Screenshot of Studio showing the components that were just registered](media/how-to-create-component-pipelines-cli/registered-components.png)
+
+Click on a component. You'll see some basic information about the component, such as creation and modification dates. Also, you'll see editable fields for Tags and Description. Tags can be used to add searchable keywords. The description field supports Markdown formatting and should be used to describe your component's functionality and basic use.
+
+### Use registered components in a job specification file
+
+In the `1b_e2e_registered_components` directory, open the `pipeline.yml` file. The keys and values in the `inputs` and `outputs` dictionaries are similar to those already discussed. The only significant difference is the value of the `component` key in the `jobs.<JOB_NAME>.component` entries. The `component` value is of the form `azureml:<COMPONENT_NAME>:<COMPONENT_VERSION>`. The `train-job` definition, for instance, specifies that version 31 of the registered component `Train` should be used:
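As an illustration only, such a job entry might look like the following fragment. Only the `component` value and its `azureml:<COMPONENT_NAME>:<COMPONENT_VERSION>` format are taken from the article; the input and output names are placeholders you would replace with the names your component defines.

```yaml
# Illustrative fragment of pipeline.yml: the job points at a registered component
# instead of a local component YAML file.
jobs:
  train-job:
    component: azureml:Train:31    # registered component "Train", version 31
    inputs:
      training_data: ../data       # placeholder input
    outputs:
      model_output:                # placeholder output, consumable by downstream jobs
```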
+++ ## Caching & reuse By default, only those components whose inputs have changed are rerun. You can change this behavior by setting the `is_deterministic` key of the component specification YAML to `False`. A common need for this is a component that loads data that may have been updated from a fixed location or URL.
marketplace Revenue Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/revenue-dashboard.md
Last updated 12/06/2021
# Revenue dashboard in commercial marketplace analytics
-This article provides information on the _Revenue dashboard_ in Microsoft Partner Center. The Revenue dashboard shows the summary of _billed sales_ of all offer purchases and consumption through the commercial marketplace. Use this report to understand your revenue information across customers, billing models, offer plans, and so on. It provides a unified view across entities and helps answer queries, such as:
+This article provides information on the _Revenue dashboard_ in Microsoft Partner Center. The Revenue dashboard shows the summary of _billed sales_ of all offer purchases and consumption through the commercial marketplace. It enables you to reconcile billed sales, payouts, and analytic reports in the commercial marketplace.
+
+Use this report to understand your revenue information across customers, billing models, offer plans, and so on. It provides a unified view across entities and helps answer queries, such as:
- How much revenue was invoiced to customers and when can I expect payouts? - Which customer transacted the offer and where are they located?
Details widget with expandable and collapsible view.
[ ![Illustrates the expandable view of the Revenue details section of the Revenue dashboard.](./media/revenue-dashboard/details-widget-1.png) ](./media/revenue-dashboard/details-widget-1.png#lightbox)
-[ ![Illustrates the collapsable view of the Revenue details section of the Revenue dashboard.](./media/revenue-dashboard/details-widget-2.png) ](./media/revenue-dashboard/details-widget-2.png#lightbox)
+[ ![Illustrates the collapsible view of the Revenue details section of the Revenue dashboard.](./media/revenue-dashboard/details-widget-2.png) ](./media/revenue-dashboard/details-widget-2.png#lightbox)
Note the following:
In the lower left of most widgets, you'll see a thumbs up and thumbs down icon
| Payment sent date | The date on which payment was sent to the partner | | Quantity | Indicates billed quantity for transactions. This can represent the seats and site purchase count for subscription-based offers, and usage units for consumption-based offers. | | Units | The unit quantity. Represents count of purchased seat/site SaaS orders and core hours for VM-based offers. Units will be displayed as NA for offers with custom meters. |
-|||
+|
+
+## Next steps
+
+- For common questions about the revenue dashboard or commercial marketplace analytics, and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics Frequently Asked Questions](analytics-faq.yml).
+- For information on payout statements, see [Payout statements](/partner-center/payout-statement).
+- For information on Payout schedules, see [Payout schedules and processes](/partner-center/payout-policy-details).
+- For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](usage-dashboard.md).
+- For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](orders-dashboard.md).
+- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
media-services Assets Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/assets-concept.md
editor: ''
Previously updated : 08/31/2020 Last updated : 01/06/2022
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-In Azure Media Services, an [Asset](/rest/api/media/assets) is a core concept. It is where you input media (for example, through upload or live ingest), output media (from a job output), and publish media from (for streaming).
+In Azure Media Services, an [Asset](/rest/api/media/assets) is a core concept. It is where you input media (for example, through upload or live ingest), output media (from a job output), and publish media (for streaming).
An Asset is mapped to a blob container in the [Azure Storage account](storage-account-concept.md) and the files in the Asset are stored as block blobs in that container. Assets contain information about digital files stored in Azure Storage (including video, audio, images, thumbnail collections, text tracks, and closed caption files).
postgresql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-server-parameters.md
Here is the list of some of the parameters:
| **maintenance_work_mem** | The maintenance_work_mem parameter provides the maximum amount of memory to be used by maintenance operations like vacuum, create index, and alter table add foreign key operations. Default value for that parameter is 64 KB. It's recommended to set this value higher than work_mem; this can improve performance for vacuuming. | | **effective_io_concurrency** | Sets the number of concurrent disk I/O operations that PostgreSQL expects can be executed simultaneously. Raising this value will increase the number of I/O operations that any individual PostgreSQL session attempts to initiate in parallel. The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans. | |**require_secure_transport** | If your application does not support SSL connectivity to the server, you can optionally disable secured transport from your client by turning this parameter value `OFF`. |
+ |**log_connections** | This parameter may be read-only, as on Azure Database for PostgreSQL - Flexible Server all connections are logged and intercepted to make sure, for security reasons, that connections are coming from the right sources. |
>[!NOTE] > As you scale Azure Database for PostgreSQL - Flexible Server SKUs up or down, affecting available memory to the server, you may wish to tune your memory global parameters, such as work_mem or effective_cache_size accordingly based on information above.
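Many of these parameters can also be changed outside the portal. The following Azure CLI sketch assumes a flexible server named `mydemoserver` in a resource group named `myresourcegroup`; the parameter value shown is purely illustrative.

```azurecli
# Inspect the current value and allowed range of a server parameter.
az postgres flexible-server parameter show \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name work_mem

# Set a new value (illustrative only; tune based on your workload and SKU memory).
az postgres flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name work_mem \
  --value 16384
```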
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-google-bigquery-source.md
When scanning Google BigQuery source, Purview supports:
- Fetching static lineage on assets relationships among tables and views.
+>[!NOTE]
+> Currently, Purview only supports scanning Google BigQuery datasets in the US multi-regional location. If the specified dataset is in another location, for example us-east1 or EU, the scan will complete but no assets will show up in Purview.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.Insights/Logs/Alert/Read | Read data from the Alert table | > | Microsoft.Insights/Logs/AlertHistory/Read | Read data from the AlertHistory table | > | Microsoft.Insights/Logs/AmlComputeClusterEvent/Read | Read data from the AmlComputeClusterEvent table |
-> | Microsoft.Insights/Logs/AmlComputeClusterNodeEvent/Read | Read data from the AmlComputeClusterNodeEvent table |
+> | Microsoft.Insights/Logs/AmlComputeClusterNodeEvent/Read | Read data from the AmlComputeClusterNodeEvent table. This API is deprecated, please use AmlComputeClusterEvent instead |
> | Microsoft.Insights/Logs/AmlComputeCpuGpuUtilization/Read | Read data from the AmlComputeCpuGpuUtilization table | > | Microsoft.Insights/Logs/AmlComputeJobEvent/Read | Read data from the AmlComputeJobEvent table | > | Microsoft.Insights/Logs/AmlRunStatusChangedEvent/Read | Read data from the AmlRunStatusChangedEvent table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AlertHistory/read | Read data from the AlertHistory table | > | Microsoft.OperationalInsights/workspaces/query/AlertInfo/read | Read data from the AlertInfo table | > | Microsoft.OperationalInsights/workspaces/query/AmlComputeClusterEvent/read | Read data from the AmlComputeClusterEvent table |
-> | Microsoft.OperationalInsights/workspaces/query/AmlComputeClusterNodeEvent/read | Read data from the AmlComputeClusterNodeEvent table |
+> | Microsoft.OperationalInsights/workspaces/query/AmlComputeClusterNodeEvent/read | Read data from the AmlComputeClusterNodeEvent table. This API is deprecated, please use AmlComputeClusterEvent instead |
> | Microsoft.OperationalInsights/workspaces/query/AmlComputeCpuGpuUtilization/read | Read data from the AmlComputeCpuGpuUtilization table | > | Microsoft.OperationalInsights/workspaces/query/AmlComputeInstanceEvent/read | Read data from the AmlComputeInstanceEvent table | > | Microsoft.OperationalInsights/workspaces/query/AmlComputeJobEvent/read | Read data from the AmlComputeJobEvent table |
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/troubleshooting.md
na Previously updated : 11/12/2021 Last updated : 01/07/2022
Azure supports up to **500** role assignments per management group. This limit i
- If you attempt to remove the last Owner role assignment for a subscription, you might see the error "Cannot delete the last RBAC admin assignment." Removing the last Owner role assignment for a subscription is not supported to avoid orphaning the subscription. If you want to cancel your subscription, see [Cancel your Azure subscription](../cost-management-billing/manage/cancel-azure-subscription.md).
+ You are allowed to remove the last Owner (or User Access Administrator) role assignment at subscription scope, if you are the Global Administrator for the tenant. In this case, there is no constraint for deletion. However, if the call comes from some other principal, then you won't be able to remove the last Owner role assignment at subscription scope.
+ ## Problems with custom roles - If you need steps for how to create a custom role, see the custom role tutorials using the [Azure portal](custom-roles-portal.md), [Azure PowerShell](tutorial-custom-role-powershell.md), or [Azure CLI](tutorial-custom-role-cli.md).
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-how-to-query-request.md
Captions and answers are extracted verbatim from text in the search document. Th
+ A Cognitive Search service at a Standard tier (S1, S2, S3), located in one of these regions: Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe. If you have an existing S1 or greater service in one of these regions, you can enable semantic search on your service without having to create a new one.
-+ [Enable semantic search on your service](semantic-search-overview.md#enable-semantic-search).
++ [Semantic search enabled on your search service](semantic-search-overview.md#enable-semantic-search). + An existing search index with content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage). Semantic search works best on content that is informational or descriptive. + A search client for sending queries.
- The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that makes REST calls to the preview APIs. You can also use [Search explorer](search-explorer.md) in Azure portal to submit a semantic query. You can also use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
+ The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that makes REST calls to the preview APIs. You can also use [Search explorer](search-explorer.md) in Azure portal to submit a semantic query or use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
+ A search client for updating indexes.
- The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md) or code that makes REST calls to the preview APIs. You can also use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
+ The search client must support preview REST APIs on the query request. You can use the Azure portal, [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that makes REST calls to the preview APIs. You can also use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
+ A [query request](/rest/api/searchservice/preview-api/search-documents) must include `queryType=semantic` and other parameters described in this article.
You're only required to specify one field between `titleField`, `prioritizedCont
Similar to [scoring profiles](index-add-scoring-profiles.md), semantic configurations are a part of your [index definition](/rest/api/searchservice/preview-api/create-or-update-index) and can be updated at any time without rebuilding your index. When you issue a query, you'll add the `semanticConfiguration` that specifies which semantic configuration to use for the query.
+### [**Azure portal**](#tab/portal)
+
+To create a semantic configuration in the Azure portal:
+
+1. Open the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
+
+1. Navigate to the index you want to add a semantic configuration to.
+
+1. Select **Semantic Configurations** and then select **Add Semantic Configuration**.
+
+1. At this point, a wizard will appear allowing you to select a title field, content fields, and keyword fields. Make sure to list content fields and keyword fields in priority order. After you're finished, select **OK** and then save the changes.
+++ ### [**REST API**](#tab/rest) ```json
When selecting fields for your semantic configuration, choose only fields of the
> [!NOTE] > Subfields of Collection(Edm.ComplexType) fields are not currently supported by semantic search and won't be used for semantic ranking, captions, or answers.
+## Query in Azure portal
+
+[Search explorer](search-explorer.md) has been updated to include options for semantic queries. To create a semantic query in the portal, follow the steps below:
+
+1. Open the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
+
+1. Click **Search explorer** at the top of the overview page.
+
+1. Choose an index that has content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
+
+1. In Search explorer, set query options that enable semantic queries, semantic configurations, and spell correction. You can also paste the required query parameters into the query string.
++ ## Query using REST Use the [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents) to formulate the request programmatically. A response includes captions and highlighting automatically. If you want spelling correction or answers in the response, add **`speller`** or **`answers`** to the request.
The following table summarizes the parameters used in a semantic query. For a li
-## Query in Azure portal
-
-> [!Note]
-> The portal does not yet support querying with the 2021-04-30-Preview API version that includes semantic configurations. To query with a semantic configuration, you can use the [REST API](#query-using-rest) or an [SDK](#query-using-azure-sdks).
--
-[Search explorer](search-explorer.md) has been updated to include options for semantic queries. These options become visible in the portal after completing the following steps:
-
-1. Open the [Azure portal](https://portal.azure.com).
-
-1. Click **Search explorer** at the top of the overview page.
-
-1. Choose an index that has content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-
-1. In Search explorer, set query options that enable semantic queries, searchFields, and spell correction. You can also paste the required query parameters into the query string.
- ### Formulate the request
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/end-to-end.md
na Previously updated : 4/07/2021 Last updated : 1/06/2022
The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a
| [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) | A virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet and to send encrypted traffic between Azure virtual networks over the Microsoft network. | | [Azure DDoS Protection Standard](../../ddos-protection/ddos-protection-overview.md) | Provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect your specific Azure resources in a virtual network. | | [Azure Front Door](../../frontdoor/front-door-overview.md) | A global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. |
-| [Azure Firewall](../../firewall/overview.md) | A managed, cloud-based network security service that protects your Azure Virtual Network resources. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. |
+| [Azure Firewall](../../firewall/overview.md) | A cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. Azure Firewall is offered in two SKUs: [Standard](../../firewall/features.md) and [Premium](../../firewall/premium-features.md). |
| [Azure Key Vault](../../key-vault/general/overview.md) | A secure secrets store for tokens, passwords, certificates, API keys, and other secrets. Key Vault can also be used to create and control the encryption keys used to encrypt your data. | | [Key Vault Managed HSM](../../key-vault/managed-hsm/overview.md) | A fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. | | [Azure Private Link](../../private-link/private-link-overview.md) | Enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network. | | [Azure Application Gateway](../../application-gateway/overview.md) | An advanced web traffic load balancer that enables you to manage traffic to your web applications. Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example URI path or host headers. | | [Azure Service Bus](../../service-bus-messaging/service-bus-messaging-overview.md) | A fully managed enterprise message broker with message queues and publish-subscribe topics. Service Bus is used to decouple applications and services from each other. | | [Web Application Firewall](../../web-application-firewall/overview.md) | Provides centralized protection of your web applications from common exploits and vulnerabilities. WAF can be deployed with Azure Application Gateway and Azure Front Door. |
+| [Azure Policy](../../governance/policy/overview.md) | Helps to enforce organizational standards and to assess compliance at-scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. |
| **Data & Application** | | | [Azure Backup](../../backup/backup-overview.md) | Provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud. | | [Azure Storage Service Encryption](../../storage/common/storage-service-encryption.md) | Automatically encrypts data before it is stored and automatically decrypts the data when you retrieve it. |
The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a
| | [Microsoft Defender for Identity](/defender-for-identity/what-is) is a cloud-based security solution that leverages your on-premises Active Directory signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions directed at your organization. | | [Azure AD Identity Protection](../../active-directory/identity-protection/howto-identity-protection-configure-notifications.md) | Sends two types of automated notification emails to help you manage user risk and risk detections: Users at risk detected email and Weekly digest email. | | **Infrastructure & Network** | |
+| [Azure Firewall](../../firewall/premium-features.md#idps) | Azure Firewall Premium provides signature-based intrusion detection and prevention system (IDPS) to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. |
| [Microsoft Defender for IoT](../../defender-for-iot/overview.md) | A unified security solution for identifying IoT/OT devices, vulnerabilities, and threats. It enables you to secure your entire IoT/OT environment, whether you need to protect existing IoT/OT devices or build security into new IoT innovations. | | [Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS products which includes virtual machines, virtual networks, application gateways, and load balancers. |
-| [Azure Policy audit logging](../../governance/policy/overview.md) | Helps to enforce organizational standards and to assess compliance at-scale. Azure Policy uses activity logs, which are automatically enabled to include event source, date, user, timestamp, source addresses, destination addresses, and other useful elements. |
+| [Azure Policy](../../governance/policy/overview.md) | Helps to enforce organizational standards and to assess compliance at-scale. Azure Policy uses activity logs, which are automatically enabled to include event source, date, user, timestamp, source addresses, destination addresses, and other useful elements. |
| **Data & Application** | | | [Microsoft Defender for container registries](../../security-center/defender-for-container-registries-introduction.md) | Includes a vulnerability scanner to scan the images in your Azure Resource Manager-based Azure Container Registry registries and provide deeper visibility into your images' vulnerabilities. | | [Microsoft Defender for Kubernetes](../../security-center/defender-for-kubernetes-introduction.md) | Provides cluster-level threat protection by monitoring your AKS-managed services through the logs retrieved by Azure Kubernetes Service (AKS). |
security Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/network-overview.md
Title: Network security concepts and requirements in Azure | Microsoft Docs
description: This article provides basic explanations about core network security concepts and requirements, and information on what Azure offers in each of these areas. documentationcenter: na--++ ms.assetid: bedf411a-0781-47b9-9742-d524cf3dbfc1
na Previously updated : 10/29/2018 Last updated : 01/06/2022 #Customer intent: As an IT Pro or decision maker, I am looking for information on the network security controls available in Azure.
You can access these enhanced network security features by using an Azure partne
## Azure Firewall
-Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. Some features include:
+[Azure Firewall](../../firewall/overview.md) is a cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection.
-* High availability
-* Cloud scalability
-* Application FQDN filtering rules
-* Network traffic filtering rules
+Azure Firewall is offered in two SKUs: Standard and Premium. [Azure Firewall Standard](../../firewall/features.md) provides L3-L7 filtering and threat intelligence feeds directly from Microsoft Cyber Security. [Azure Firewall Premium](../../firewall/premium-features.md) provides advanced capabilities, including signature-based IDPS, to allow rapid detection of attacks by looking for specific patterns.
Learn more:
-* [Azure Firewall overview](../../firewall/overview.md)
+* [What is Azure Firewall](../../firewall/overview.md)
## Secure remote access and cross-premises connectivity
security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/overview.md
na Previously updated : 03/03/2021- Last updated : 01/06/2022+
The built-in capabilities are organized in six functional areas: Operations, App
This section provides additional information regarding key features in security operations and summary information about these capabilities.
+### Microsoft Sentinel
+
+[Microsoft Sentinel](../../sentinel/overview.md) is a scalable, cloud-native, security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for attack detection, threat visibility, proactive hunting, and threat response.
+ ### Microsoft Defender for Cloud
-[Defender for Cloud](../../security-center/security-center-introduction.md) helps you prevent, detect, and respond to threats with increased visibility into and control over the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.
+[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) helps you prevent, detect, and respond to threats with increased visibility into and control over the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.
In addition, Defender for Cloud helps with security operations by providing you a single dashboard that surfaces alerts and recommendations that can be acted upon immediately. Often, you can remediate issues with a single click within the Defender for Cloud console.
Network access control is the act of limiting connectivity to and from specific
A [Network Security Group (NSG)](../../virtual-network/virtual-network-vnet-plan-design-arm.md#security) is a basic stateful packet filtering firewall and it enables you to control access based on a 5-tuple. NSGs do not provide application layer inspection or authenticated access controls. They can be used to control traffic moving between subnets within an Azure Virtual Network and traffic between an Azure Virtual Network and the Internet.
+#### Azure Firewall
+
+[Azure Firewall](../../firewall/overview.md) is a cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection.
+
+Azure Firewall is offered in two SKUs: Standard and Premium. [Azure Firewall Standard](../../firewall/features.md) provides L3-L7 filtering and threat intelligence feeds directly from Microsoft Cyber Security. [Azure Firewall Premium](../../firewall/premium-features.md) provides advanced capabilities, including signature-based IDPS, to allow rapid detection of attacks by looking for specific patterns.
+ #### Route Control and Forced Tunneling The ability to control routing behavior on your Azure Virtual Networks is a critical network security and access control capability. For example, if you want to make sure that all traffic to and from your Azure Virtual Network goes through that virtual security appliance, you need to be able to control and customize routing behavior. You can do this by configuring User-Defined Routes in Azure.
You can enable the following diagnostic log categories for NSGs:
- Rules counter: Contains entries for how many times each NSG rule is applied to deny or allow traffic.
-### Defender for Cloud
+### Microsoft Defender for Cloud
[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) continuously analyzes the security state of your Azure resources for network security best practices. When Defender for Cloud identifies potential security vulnerabilities, it creates [recommendations](../../security-center/security-center-recommendations.md) that guide you through the process of configuring the needed controls to harden and protect your resources.
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/technical-capabilities.md
ms.assetid:
Previously updated : 02/04/2021 Last updated : 01/06/2022
The [Azure network infrastructure](/previous-versions/azure/virtual-machines/win
If you need basic network level access control (based on IP address and the TCP or UDP protocols), then you can use [Network Security Groups](../../virtual-network/virtual-network-vnet-plan-design-arm.md). A Network Security Group (NSG) is a basic stateful packet filtering firewall and it enables you to control access based on a [5-tuple](https://www.techopedia.com/definition/28190/5-tuple).
+[Azure Firewall](../../firewall/overview.md) is a cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection.
+
+Azure Firewall is offered in two SKUs: Standard and Premium. [Azure Firewall Standard](../../firewall/features.md) provides L3-L7 filtering and threat intelligence feeds directly from Microsoft Cyber Security. [Azure Firewall Premium](../../firewall/premium-features.md) provides advanced capabilities, including signature-based IDPS, to allow rapid detection of attacks by looking for specific patterns.
+ Azure networking supports the ability to customize the routing behavior for network traffic on your Azure Virtual Networks. You can do this by configuring [User-Defined Routes](../../virtual-network/virtual-networks-udr-overview.md) in Azure. [Forced tunneling](https://www.petri.com/azure-forced-tunneling) is a mechanism you can use to ensure that your services are not allowed to initiate a connection to devices on the Internet.
With Azure Monitor, you can manage any instance in any cloud, including on-premi
This method allows you to consolidate data from a variety of sources, so you can combine data from your Azure services with your existing on-premises environment. It also clearly separates the collection of the data from the action taken on that data so that all actions are available to all kinds of data.
+### Microsoft Sentinel
+
+[Microsoft Sentinel](../../sentinel/overview.md) is a scalable, cloud-native, security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for attack detection, threat visibility, proactive hunting, and threat response.
+ ### Microsoft Defender for Cloud [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) helps you prevent, detect, and respond to threats with increased visibility into and control over the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.
site-recovery Azure To Azure How To Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-how-to-enable-policy.md
-# Using Policy with Azure Site Recovery (Public Preview)
+# Using Policy with Azure Site Recovery
This article describes how to set up [Azure Site Recovery](./site-recovery-overview.md) for your resources, using Azure Policy. [Azure Policy](../governance/policy/overview.md) helps to enforce certain business rules on your Azure resources and assess compliance of said resources. ## Disaster Recovery with Azure Policy
-Site Recovery helps you keep your applications up and running in the event of planned or unplanned zonal/regional outages. Enabling Site Recovery on your machines at scale through the Azure portal can be challenging. Now, you have way to enable Site Recovery en masse on specific Resource Groups (_Scope_ of the Policy) through the portal.
+Site Recovery helps you keep your applications up and running in the event of planned or unplanned zonal/regional outages. Enabling Site Recovery on your machines at scale through the Azure portal can be challenging. Azure Policy can help you enable replication at scale without resorting to any scripting.
-Azure Policy solves this problem. Once you have a disaster recovery policy created for a resource group, then all the new virtual machines that are added to the Resource Group will get Site Recovery enabled for them automatically. Moreover, for all the virtual machines already present in the Resource Group, you can get Site Recovery enabled through a process called _remediation_ (details below).
+With the built-in Azure Policy, you have a way to enable Site Recovery en masse on specific subscriptions or resource groups through the portal. Once you have a disaster recovery policy created for a subscription or resource group(s), all the new virtual machines added to that subscription or those resource groups will get Site Recovery enabled automatically. Moreover, for all the virtual machines already present in the resource group, Site Recovery can be enabled through a process called _remediation_ (details below).
>[!NOTE]
->The _Scope_ of this policy should be at Resource Group Level.
+>The _Scope_ of this policy can be at a subscription level or resource group level.
## Prerequisites
Azure Policy solves this problem. Once you have a disaster recovery policy creat
**Scenario** | **Support Statement** |
-Managed Disks | Supported
+Managed Disks | Supported <br/>OS disk should be at least 1GB and at most 4TB in size.<br/>Data disk(s) should be at least 1GB and at most 32TB in size.<br/>
Unmanaged Disks | Not supported
-Multiple Disks | Supported
+Multiple Disks | Supported for up to 100 disks per VM.
+Ephemeral Disks | Not supported
+Ultra Disks | Not supported
Availability Sets | Supported Availability Zones | Supported Azure Disk Encryption (ADE) enabled VMs | Not supported
-Proximity Placement Groups (PPG) | Supported
+Proximity Placement Groups (PPG) | Supported. If the source VM is inside a PPG, then the Policy will create a PPG by appending '-asr' to the source PPG name and use it for the DR/secondary region failover.
+VMs in both PPG and availability set | Not supported
Customer-managed keys (CMK) enabled disks | Not supported Storage spaces direct (S2D) clusters | Not supported
+Virtual machine scale set VMs | Not supported
+VM with image as Azure Site Recovery Configuration Server | Not supported
+Powered off VMs | Not supported. VM must be powered on for the Policy to work on it.
Azure Resource Manager Deployment Model | Supported Classic Deployment Model | Not supported Zone to Zone DR | Supported Interoperability with other policies applied as default by Azure (if any) | Supported >[!NOTE]
->In the following cases, Site Recovery will not be enabled for them. However, they will reflect as _Non-compliant_ in Resource Compliance:
+>In the following cases, Site Recovery will not be enabled:
>1. If a not-supported VM is created within the scope of policy. >1. If a VM is a part of both an Availability Set as well as PPG. ## Create a Policy Assignment
-In this section, you create a policy assignment that enables Azure Site Recovery for all newly created resources.
+To create a policy assignment of the built-in Azure Site Recovery Policy that enables replication for all newly created VMs in a subscription or resource group(s), perform the following:
1. Go to the **Azure portal** and navigate to **Azure Policy** 1. Select **Assignments** on the left side of the Azure Policy page. An assignment is a policy that has been assigned to execute on a specific scope.
In this section, you create a policy assignment that enables Azure Site Recovery
1. Select **Assign Policy** from the top of the **Policy - Assignments** page. :::image type="content" source="./media/azure-to-azure-how-to-enable-policy/select-assign-policy.png" alt-text="Screenshot of selecting 'Assign policy' from Assignments page." border="false":::
-1. On the **Assign Policy** page, set the **Scope** by selecting the ellipsis and then selecting a subscription and then a resource group. A scope determines what resources or grouping of resources the policy assignment gets enforced on. Then use the **Select** button at the bottom of the **Scope** page.
+1. On the **Assign Policy** page, set the **Scope** by selecting the ellipsis and then selecting a subscription and then optionally a resource group. A scope determines what resources or grouping of resources the policy assignment gets enforced on. Then use the **Select** button at the bottom of the **Scope** page. Please note that you can also choose to exclude a few resource groups from assignment of the Policy by selecting them under 'Exclusions'. This is particularly useful when you want to assign the Policy to all but a few resource groups in a given subscription.
-1. Launch the _Policy Definition Picker_ by selecting the ellipses next to **Policy Definition**. _Search for "Configure disaster recovery on virtual machines by enabling replication"_ and select the Policy.
+1. Launch the _Policy Definition Picker_ by selecting the ellipses next to **Policy Definition**. Search for _'disaster recovery'_ or _'site recovery'_. You will find a built-in Policy titled _"Configure disaster recovery on virtual machines by enabling replication via Azure Site Recovery"_. Select it and click _'Select'_.
:::image type="content" source="./media/azure-to-azure-how-to-enable-policy/select-policy-definition.png" alt-text="Screenshot of selecting 'Policy Definition' from Basics page." border="true"::: 1. The **Assignment name** is automatically populated with the policy name you selected, but you can change it. It may be helpful if you plan to assign multiple Azure Site Recovery Policies to the same scope.
In this section, you create a policy assignment that enables Azure Site Recovery
## Configure Target Settings and Properties You are on the way to create a Policy to enable Azure Site Recovery. Let us now configure the Target Settings and Properties:
-1. You are on the _Parameters_ section of the _Assign Policy_ workflow, which looks like this:
+1. Go to the **Parameters** section of the **Assign Policy** workflow. Unselect _Only show parameters that need input or review_. The parameters look as follows:
:::image type="content" source="./media/azure-to-azure-how-to-enable-policy/specify-parameters.png" alt-text="Screenshot of setting Parameters from Parameters page." border="true"::: 1. Select appropriate values for these parameters: - **Source Region**: The Source Region of the Virtual Machines for which the Policy will be applicable. >[!NOTE]
- >The policy will apply to all the Virtual Machines belonging to the Source Region in the scope of the Policy. Virtual Machines not present in the Source Region will not be included in _Resource Compliance_.
- - **Target Region**: The location where your source virtual machine data will be replicated. Site Recovery provides the list of target regions that the customer can replicate to. We recommend that you use the same location as the Recovery Services vault's location.
+ >The policy will apply to all the Virtual Machines belonging to the Source Region in the scope of the Policy. Virtual Machines not present in the Source Region will not be included.
+ - **Target Region**: The location where your source virtual machine data will be replicated. Site Recovery provides the list of target regions that the customer can replicate to. If you want to enable zone to zone replication within a given region, select the same region as Source Region.
- **Target Resource Group**: The resource group to which all your replicated virtual machines belong. By default, Site Recovery creates a new resource group in the target region. - **Vault Resource Group**: The resource group in which Recovery Services Vault exists.
- - **Recovery Services Vault**: The Vault against which all the VMs of the Scope will get protected. Policy can create a new vault on your behalf if required.
- - **Recovery Virtual Network**: Pick an existing virtual network in the target region to be used for recovery virtual machine. Policy can create a new virtual network for you as well, if required.
- - **Target Availability Zone**: Enter the Availability Zone of the Target Region where the Virtual Machine will failover.
- >[!NOTE]
- >For Zone to Zone Scenario, you need to choose the Same Target Region as the Source Region, and opt for a different Availability Zone in _Target Availability Zone_.
- >If some of the virtual machines in your resource group are already in the target availability zone, then the policy will not be applied to them in case you are setting up Zone to Zone DR.
+ - **Recovery Services Vault**: The Vault in which all the VMs of the Scope will get protected. Policy can create a new vault on your behalf, if required.
+ - **Recovery Virtual Network** **(optional parameter)**: Pick an existing virtual network in the target region to be used for recovery virtual machine. Policy can create a new virtual network for you as well, if required.
+ - **Target Availability Zone** **(optional)**: Enter the Availability Zone of the Target Region where the Virtual Machine will failover. If some of the virtual machines in your resource group are already in the target availability zone, then the policy will not be applied to them in case you are setting up Zone to Zone DR.
+ - **Cache Storage Account** **(optional)**: Azure Site Recovery makes use of a storage account for caching replicated data in the source region. Please select an account of your choice. You can choose to select the default cache storage account if you do not need to take care of any special considerations.
+ > [!NOTE]
+ > Check the cache storage account limits in the [Support Matrix](../site-recovery/azure-to-azure-support-matrix.md#cache-storage) before choosing a cache storage account.
+ - **Tag name** **(optional)**: You can apply tags to your replicated VMs to logically organize them into a taxonomy. Each tag consists of a name and a value pair. You can use this field to enter the tag name. For example, *Environment*.
+ - **Tag values** **(optional)**: You can use this field to enter the tag value. For example, *Production*.
+ - **Tag type** **(optional)**: You can use tags to include VMs in the Policy assignment by selecting 'Tag type = Inclusion'. This ensures that only the VMs that have the tag (provided via the 'Tag name' and 'Tag values' fields) are included in the Policy assignment. Alternatively, you can choose 'Tag type = Exclusion', which ensures that the VMs that have the tag are excluded from the Policy assignment. If no tags are selected, the entire resource group and/or subscription (as the case may be) is selected for the Policy assignment.
- **Effect**: Enable or disable the execution of the policy. Select _DeployIfNotExists_ to enable the policy as soon as it gets created.
-1. Select on **Next** to decide on Remediation Task.
+1. Select **Next** to configure the Remediation Task.
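If you prefer to script the assignment instead of using the portal, the Azure CLI can create the same assignment. The following is a minimal sketch only: the policy definition ID and the parameter names shown are illustrative placeholders, so check the built-in definition for the exact names it expects before running it.

```azurecli
# Sketch: assign the built-in Site Recovery policy to a resource group.
# <built-in-policy-definition-id> and the parameter names are placeholders; verify them
# against the built-in definition in your tenant before use.
az policy assignment create \
  --name "Enable-ASR-replication" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<source-resource-group>" \
  --policy "<built-in-policy-definition-id>" \
  --location <source-region> \
  --assign-identity \
  --params '{ "sourceRegion": { "value": "<source-region>" }, "targetRegion": { "value": "<target-region>" } }'
```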
## Remediation and other properties
-1. The Target Properties for Azure Site Recovery have been configured. However, this policy will take effect only for newly created virtual machines in the scope of the Policy. It can be applied to existing resources via a Remediation Task after the policy is assigned. You can create a Remediation Task here by checking _Create a Remediation Task_ checkbox.
+1. The Target Properties for Azure Site Recovery have been configured. However, this policy takes effect only for virtual machines created in the scope of the Policy after the assignment. Pre-existing VMs in the scope of the Policy don't get replication enabled automatically; you can address this with a Remediation Task after the policy is assigned. You can create a Remediation Task here by selecting the _Create a Remediation Task_ checkbox.
1. Azure Policy will create a [Managed Identity](../governance/policy/how-to/remediate-resources.md), which will have owner permissions to enable Azure Site Recovery for the resources in the scope.
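You can also trigger remediation from the Azure CLI after the assignment exists. This is a hedged sketch; the assignment name below assumes the example assignment shown earlier.

```azurecli
# Sketch: create a remediation task for existing, non-compliant VMs in the assignment's scope.
az policy remediation create \
  --name "remediate-asr-replication" \
  --resource-group <source-resource-group> \
  --policy-assignment "Enable-ASR-replication"
```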
You are on the way to create a Policy to enable Azure Site Recovery. Let us now
1. Review the selected options, then select _Create_ at the bottom of the page.
+## Checking protection status of VMs after assignment of Policy
+After the Policy is assigned, wait for up to one hour for replication to be enabled. Then go to the Recovery Services Vault chosen during Policy assignment and look for replication jobs. You should be able to locate all VMs for which Site Recovery was enabled via Policy in this vault.
+
+If the VMs do not show up in the vault as protected, you can go back to the Policy assignment and attempt to remediate.
+
+If the VMs show up as non-compliant, it might be because the Policy evaluation took place before the VM was fully up and running. You can either remediate manually or wait for up to 24 hours for the Policy to evaluate the subscription/resource group and remediate automatically.
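A quick way to check compliance without browsing the portal is to query the policy state from the Azure CLI. This is a sketch that assumes the assignment name used in the earlier examples.

```azurecli
# Sketch: list the compliance state of resources evaluated by the assignment.
az policy state list \
  --resource-group <source-resource-group> \
  --filter "policyAssignmentName eq 'Enable-ASR-replication'" \
  --query "[].{resource:resourceId, state:complianceState}" \
  --output table
```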
+ ## Next Steps [Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
spatial-anchors Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/concepts/authentication.md
In this article, you'll learn the various ways you can authenticate to Azure Spatial Anchors from your app or web service. You'll also learn about the ways you can use Azure role-based access control (Azure RBAC) in Azure Active Directory (Azure AD) to control access to your Spatial Anchors accounts.
+> [!WARNING]
+> We recommend that you use account keys for quick onboarding, but only during development/prototyping. We don't recommend that you ship your application to production with an embedded account key in it. Instead, use the user-based or service-based Azure AD authentication approaches described next.
+ ## Overview ![Diagram that shows an overview of authentication to Azure Spatial Anchors.](./media/spatial-anchors-authentication-overview.png)
configuration.AccountKey(LR"(MyAccountKey)");
After you set that property, the SDK will handle the exchange of the account key for an access token and the necessary caching of tokens for your app.
-> [!WARNING]
-> We recommend that you use account keys for quick onboarding, but only during development/prototyping. We don't recommend that you ship your application to production with an embedded account key in it. Instead, use the user-based or service-based Azure AD authentication approaches described next.
- ## Azure AD user authentication For applications that target Azure Active Directory users, we recommend that you use an Azure AD token for the user. You can obtain this token by using the [MSAL](../../active-directory/develop/msal-overview.md). Follow the steps in the [quickstart on registering an app](../../active-directory/develop/quickstart-register-app.md), which include:
spring-cloud Quickstart Setup Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/quickstart-setup-log-analytics.md
Title: "Quickstart - Set up a Log Analytics workspace in Azure Spring Cloud"
-description: Describes the setup of a Log Analytics workspace for app deployment.
+description: This article describes the setup of a Log Analytics workspace for app deployment.
This quickstart explains how to set up a Log Analytics workspace in Azure Spring Cloud for application development.
-Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You may write a query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze those records. You may also write a more advanced query to do statistical analysis and visualize the results in a chart to identify particular trends. Whether you work with the results of your queries interactively or use them with other Azure Monitor features, Log Analytics is the tool that you use to write and test queries.
+Log Analytics is a tool in the Azure portal that's used to edit and run log queries with data in Azure Monitor Logs. You can write a query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze those records. You can also write a more advanced query to do statistical analysis and visualize the results in a chart to identify particular trends. Whether you work with the results of your queries interactively or use them with other Azure Monitor features, Log Analytics is the tool that you use to write and test queries.
-You can set up Azure Monitor Logs for your application in Azure Spring Cloud to collect logs and run logs queries via Log Analytics.
+You can set up Azure Monitor Logs for your application in Azure Spring Cloud to collect logs and run log queries via Log Analytics.
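Once logs are flowing, you can also run queries from the command line. The sketch below assumes the Azure Spring Cloud console logs land in the `AppPlatformLogsforSpring` table and that you pass the workspace (customer) ID GUID rather than the resource ID; adjust both to match your environment.

```azurecli
# Sketch: return the ten most recent Azure Spring Cloud console log records.
az monitor log-analytics query \
  --workspace <workspace-customer-id> \
  --analytics-query "AppPlatformLogsforSpring | take 10" \
  --output table
```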
## Prerequisites
-* Complete the previous quickstart in this series: [Provision Azure Spring Cloud service](./quickstart-provision-service-instance.md).
-
-## Set up a Log Analytics workspace
-
-Use the following steps to set up your Log Analytics workspace.
+Complete the previous quickstart in this series: [Provision an Azure Spring Cloud service](./quickstart-provision-service-instance.md).
#### [Portal](#tab/Azure-Portal)
-## Create a Log Analytics Workspace
+## Create a Log Analytics workspace
-* To create a workspace, follow the steps in [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+To create a workspace, follow the steps in [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
## Set up Log Analytics for a new service
-* In the create Azure Spring Cloud service wizard, you can configure the **Log Analytics workspace** field with an existing workspace or create one.
+In the wizard for creating an Azure Spring Cloud service instance, you can configure the **Log Analytics workspace** field with an existing workspace or create one.
- [![Where to setup diagnostic settings during provisioning](media/spring-cloud-quickstart-setup-log-analytics/setup-diagnostics-setting.png)](media/spring-cloud-quickstart-setup-log-analytics/setup-diagnostics-setting.png#lightbox)
## Set up Log Analytics for an existing service
-1. In the Azure portal, go to the **Diagnostic Settings** section under **Monitoring**.
+1. In the Azure portal, go to the **Diagnostic settings** section under **Monitoring**.
- [![Location of the diagnostic settings menu](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-entry.png)](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-entry.png#lightbox)
+ [![Screenshot that shows the location of diagnostic settings.](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-entry.png)](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-entry.png#lightbox)
1. If no settings exist, select **Add diagnostic setting**. You can also select **Edit setting** to update existing settings.
-1. Fill out the form on the **Diagnostics setting** page.
- * **Diagnostic setting name**: Set a unique name for the given configuration.
- * **Logs/Categories**: Choose **ApplicationConsole** and **SystemLogs**. For more information on log categories and contents, see [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
- * **Destination details**: Choose **Send to Log Analytics workspace** and specify the Log Analytics workspace you created previously.
+1. Fill out the form on the **Diagnostic setting** page:
+
+ * **Diagnostic setting name**: Set a unique name for the configuration.
+ * **Logs** > **Categories**: Select **ApplicationConsole** and **SystemLogs**. For more information on log categories and contents, see [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
+ * **Destination details**: Select **Send to Log Analytics workspace** and specify the Log Analytics workspace that you created previously.
- [![Setup example of diagnostic settings](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-edit-form.png)](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-edit-form.png#lightbox)
+ [![Screenshot that shows an example of set-up diagnostic settings.](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-edit-form.png)](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-edit-form.png#lightbox)
-1. Click **Save**
+1. Select **Save**.
#### [CLI](#tab/Azure-CLI)
-## Create a Log Analytics Workspace
+## Create a Log Analytics workspace
-1. Create a Log Analytics workspace and get the workspace ID
+Use the following commands to create a Log Analytics workspace and get the workspace ID:
- ```azurecli
- az monitor log-analytics workspace create \
- --workspace-name <new-workspace-name> \
- --resource-group <your-resource-group> \
- --location <your-service-region> \
- --query id --output tsv
- ```
+```azurecli
+az monitor log-analytics workspace create \
+ --workspace-name <new-workspace-name> \
+ --resource-group <your-resource-group> \
+ --location <your-service-region> \
+ --query id --output tsv
+```
- If you have an existing workspace, you can get the workspace ID with the following command:
+If you have an existing workspace, you can get the workspace ID by using the following commands:
- ```azurecli
- az monitor log-analytics workspace show \
- --resource-group <your-resource-group> \
- --workspace-name <workspace-name> \
- --query id --output tsv
- ```
+```azurecli
+az monitor log-analytics workspace show \
+ --resource-group <your-resource-group> \
+ --workspace-name <workspace-name> \
+ --query id --output tsv
+```
+
+## Set up Log Analytics for a new service
+
+Setting up for a new service isn't applicable when you're using the Azure CLI.
## Set up Log Analytics for an existing service
-1. Get the Azure Spring Cloud service instance ID
+1. Get the instance ID for the Azure Spring Cloud service:
```azurecli az spring-cloud show \
Use the following steps to set up your Log Analytics workspace.
--query id --output tsv ```
-1. Set up the diagnostic settings. For more information on log categories and contents, see [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
+1. Configure the diagnostic settings. For more information on log categories and contents, see [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
```azurecli az monitor diagnostic-settings create \
Use the following steps to set up your Log Analytics workspace.
## Next steps
-In this quickstart, you created Azure resources that will continue to accrue charges if they remain in your subscription. If you don't intend to continue on to the next quickstart, see [Clean up resources](./quickstart-logs-metrics-tracing.md#clean-up-resources). Otherwise, advance to the next quickstart:
+In this quickstart, you created Azure resources that will continue to accrue charges if they remain in your subscription. If you don't want to continue on to the next quickstart, see [Clean up resources](./quickstart-logs-metrics-tracing.md#clean-up-resources). Otherwise, advance to the next quickstart:
> [!div class="nextstepaction"]
-> [Logs, Metrics and Tracing](./quickstart-logs-metrics-tracing.md)
+> [Monitor Azure Spring Cloud apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-support.md
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management. > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available in [these regions](secure-file-transfer-protocol-support.md#regional-availability).
+> SFTP support is currently in PREVIEW. It's available only in [these regions](secure-file-transfer-protocol-support.md#regional-availability) and only with these [data redundancy options](secure-file-transfer-protocol-known-issues.md#data-redundancy-options).
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
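Once the preview is enabled on a storage account and a local user is configured, you can connect with any standard SFTP client. The username format below (`<storage-account>.<container>.<local-user>`) is one documented form and is shown only as a sketch; adjust it to how your local user and home directory are set up.

```bash
# Sketch: connect to the Blob Storage SFTP endpoint with a local user.
sftp <storage-account>.<container>.<local-user>@<storage-account>.blob.core.windows.net
```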
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-network-security.md
You can add or remove resource network rules in the Azure portal.
3. Select **Networking** to display the configuration page for networking.
-4. In the **Resource type** drop-down list, choose the resource type of your resource instance.
+4. Under **Firewalls and virtual networks**, select **Selected networks** to allow access.
-5. In the **Instance name** drop-down list, choose the resource instance. You can also choose to include all resource instances in the active tenant, subscription, or resource group.
+5. Scroll down to find **Resource instances**, and in the **Resource type** dropdown list, choose the resource type of your resource instance.
-6. Select **Save** to apply your changes. The resource instance appears in the **Resource instances** section of the network settings page.
+6. In the **Instance name** dropdown list, choose the resource instance. You can also choose to include all resource instances in the active tenant, subscription, or resource group.
+
+7. Select **Save** to apply your changes. The resource instance appears in the **Resource instances** section of the network settings page.
To remove the resource instance, select the delete icon (:::image type="icon" source="media/storage-network-security/delete-icon.png":::) next to the resource instance.
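If you'd rather script this, newer versions of the Azure CLI can add a resource instance rule directly. Treat the flags below as an assumption to verify against your CLI version; this is only a sketch.

```azurecli
# Sketch (assumed flags): allow a specific resource instance through the storage firewall.
az storage account network-rule add \
  --resource-group <storage-rg> \
  --account-name <storage-account> \
  --resource-id <resource-instance-id> \
  --tenant-id <tenant-id>
```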
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-ref-azcopy-copy.md
For more information, see the examples section of this article.
## Advanced
-AzCopy automatically detects the content type of the files when you upload them from the local disk. AzCopy detects the content type based on the file extension or content (if no extension is specified).
+AzCopy automatically detects the content type of the files based on the file extension or content (if no extension is specified) when you upload them from the local disk.
The built-in lookup table is small, but on Unix, it is augmented by the local system's `mime.types` file(s) if they are available under one or more of these names:
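If the automatic detection doesn't produce the content type you want, you can set it explicitly on upload with the `--content-type` flag. A small sketch (the account, container, and SAS token are placeholders):

```bash
# Sketch: upload a file and set its Content-Type explicitly instead of relying on detection.
azcopy copy "./report.dat" "https://<account>.blob.core.windows.net/<container>/report.dat?<SAS>" --content-type "application/octet-stream"
```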
storage File Sync Server Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-server-recovery.md
+
+ Title: Recover an Azure File Sync equipped server from a server-level failure
+description: Learn how to recover an Azure File Sync equipped server from a server-level failure
+++ Last updated : 12/07/2021++++
+# Recover an Azure File Sync equipped server from a server-level failure
+
+If the server hosting your Azure file share fails but your data disk is still intact, you may be able to recover the data from it. This article covers the general steps for successfully recovering your data.
+
+First, on either a new on-premises Windows Server or an Azure VM, create a new data disk that is the same size as the original data disk. Creating a new data disk reduces the potential for hardware failure from the original data disk.
+
+[Install the latest Azure File Sync agent](file-sync-deployment-guide.md#install-the-azure-file-sync-agent) on the new server, then [register the new server](file-sync-deployment-guide.md#register-windows-server-with-storage-sync-service) to the same Storage Sync Service as the original server.
+
+## Create a new server endpoint
+
+Now that your server itself is configured, [create and configure a new server endpoint](file-sync-deployment-guide.md#create-a-server-endpoint). For recovery purposes, there are a few things you should consider before configuring your new server endpoint:
+
+If you want to enable cloud tiering, leave **Initial Download Mode** at its default setting. This allows for a faster disaster recovery since only the namespace is downloaded, creating tiered files. If instead you want to keep cloud tiering disabled, the only option for **Initial Download Mode** is to fully download all files.
+
+While the namespace is being synced, don't copy data manually, since that will increase the download time. When the sync completes, additional data downloads in the background. While this background recall occurs, you can continue working as normal; you don't need to wait for it to complete.
+
+If there is data on your original server that didn't upload to the cloud before it went offline, you can potentially recover it by copying its contents into the new server's volume. To do so, use the following Robocopy command.
+
+> [!IMPORTANT]
+> If you're recovering more than one VM/machine, don't run this command.
+>
+> Wait for this copy to complete before moving to the next step.
+
+```bash
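# /COPY:DATSO copies data, attributes, timestamps, NTFS security (ACLs), and owner information.
# /MIR mirrors the source tree, /XA:O skips files carrying the Offline attribute (tiered files),
# /B runs in backup mode, and /UNILOG writes a Unicode log file for later review.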
+Robocopy <directory-in-old-drive> <directory-in-new-drive> /COPY:DATSO /MIR /DCOPY:AT /XA:O /B /IT /UNILOG:RobocopyLog.txt
+```
+
+## Changeover
+
+Now that everything is set up, you can redirect all your data access to the new server and detach the older data disk. You can also delete the old server endpoint and unregister the old server.
+
+You've now completed your configuration. Your new server should be operating normally and all data can be accessed from the new server.
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/autoscale-scaling-plan.md
To create and assign the custom role to your subscription with the Azure portal:
"Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete" "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read" "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action"
- "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read"
``` 5. When you're finished, select **Ok**.
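If you maintain this role as code, you can create it from a JSON definition with the Azure CLI instead of the portal. The file name below is illustrative; the JSON should contain the actions listed above plus an assignable scope for your subscription.

```azurecli
# Sketch: create the custom autoscale role from a JSON role definition file.
az role definition create --role-definition @avd-autoscale-custom-role.json
```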
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/teams-on-avd.md
Title: Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 12/02/2021 Last updated : 01/07/2022
>Media optimization for Teams is supported for Microsoft 365 Government (GCC) and GCC-High environments. Media optimization for Teams is not supported for Microsoft 365 DoD. >[!NOTE]
->Media optimization for Microsoft Teams is only available for the Windows Desktop client on Windows 10 machines. Media optimizations require Windows Desktop client version 1.2.1026.0 or later.
+>Media optimization for Microsoft Teams is only available for the following two Windows 10 clients:
+>
+> - Windows Desktop client, version 1.2.1026.0 or later
+> - macOS Remote Desktop client, version 10.7.2 or later
+>
+> Teams for the macOS Remote Desktop client is currently in public preview. In order for the macOS client version of Teams to work properly, you must go to **App Preferences** > **General** and enable Teams optimizations.
Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality. To learn more about how to use Microsoft Teams in Virtual Desktop Infrastructure (VDI) environments, see [Teams for Virtualized Desktop Infrastructure](/microsoftteams/teams-for-vdi/).
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ephemeral-os-disks.md
POST https://management.azure.com/subscriptions/{sub-
id}/resourceGroups/{rgName}/providers/Microsoft.Compute/VirtualMachines/{vmName}/reimage?a pi-version=2019-12-01" ``` -
-> [!NOTE]
-> Ephemeral OS disk placement option (VM cache disk or VM temp/resource disk) is coming soon on PowerShell
- ## PowerShell
-To use an ephemeral disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set the `-DiffDiskSetting` to `Local` and `-Caching` to `ReadOnly`.
+To use an ephemeral disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set `-DiffDiskSetting` to `Local`, `-Caching` to `ReadOnly`, and `-DiffDiskPlacement` to `ResourceDisk`.
```powershell
-Set-AzVMOSDisk -DiffDiskSetting Local -Caching ReadOnly
-```
-For scale set deployments, use the [Set-AzVmssStorageProfile](/powershell/module/az.compute/set-azvmssstorageprofile) cmdlet in your configuration. Set the `-DiffDiskSetting` to `Local` and `-Caching` to `ReadOnly`.
+Set-AzVMOSDisk -DiffDiskSetting Local -DiffDiskPlacement ResourceDisk -Caching ReadOnly
-```powershell
-Set-AzVmssStorageProfile -DiffDiskSetting Local -OsDiskCaching ReadOnly
+```
+To place the ephemeral OS disk on the VM cache disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set `-DiffDiskSetting` to `Local`, `-Caching` to `ReadOnly`, and `-DiffDiskPlacement` to `CacheDisk`.
+```powershell
+Set-AzVMOSDisk -DiffDiskSetting Local -DiffDiskPlacement CacheDisk -Caching ReadOnly
+```
+For scale set deployments, use the [Set-AzVmssStorageProfile](/powershell/module/az.compute/set-azvmssstorageprofile) cmdlet in your configuration. Set `-DiffDiskSetting` to `Local`, `-OsDiskCaching` to `ReadOnly`, and `-DiffDiskPlacement` to `ResourceDisk` or `CacheDisk`.
+```powershell
+Set-AzVmssStorageProfile -DiffDiskSetting Local -DiffDiskPlacement ResourceDisk -OsDiskCaching ReadOnly
``` ## Frequently asked questions
virtual-machines Automation Deploy Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-deploy-workload-zone.md
You can copy the sample configuration files to start testing the deployment auto
```bash cd ~/Azure_SAP_Automated_Deployment
-cp -R sap-automation/deploy/samples/WORKSPACES WORKSPACES
+cp -R sap-automation/samples/WORKSPACES WORKSPACES
```
You can copy the sample configuration files to start testing the deployment auto
cd C:\Azure_SAP_Automated_Deployment
-xcopy sap-automation\deploy\samples\WORKSPACES WORKSPACES
+xcopy sap-automation\samples\WORKSPACES WORKSPACES
``` ```powershell $subscription="<subscriptionID>"
-$appId="<appID>"
+$spn_id="<appID>"
$spn_secret="<password>" $tenant_id="<tenant>" $keyvault=<keyvaultName> $storageaccount=<storageaccountName> $statefile_subscription=<statefile_subscription>
+$region_code="WEEU"
-cd C:\Azure_SAP_Automated_Deployment\WORKSPACES\LANDSCAPE\DEV-WEEU-SAP01-INFRASTRUCTURE
+cd C:\Azure_SAP_Automated_Deployment\WORKSPACES\LANDSCAPE\DEV-$region_code-SAP01-INFRASTRUCTURE
-New-SAPWorkloadZone -Parameterfile DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars
--Subscription $subscription -SPN_id $appId -SPN_password $spn_secret -Tenant_id $tenant_id
+New-SAPWorkloadZone -Parameterfile DEV-$region_code-SAP01-INFRASTRUCTURE.tfvars
+-Subscription $subscription -SPN_id $spn_id -SPN_password $spn_secret -Tenant_id $tenant_id
-State_subscription $statefile_subscription -Vault $keyvault -$StorageAccountName $storageaccount ```
virtual-machines Automation Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-tutorial.md
The SAP application installation happens through Ansible playbooks.
Navigate to the system deployment folder: ```bash
-cd ~/Azure_ SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-NOEU-SAP01-X00/
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-NOEU-SAP01-X00/
``` Make sure you have the following files in the current folder: `sap-parameters.yaml` and `SID_host.yaml`.
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/nat-gateway/nat-overview.md
NAT can be created in a specific Availability Zone and has redundancy built in w
NAT is fully scaled out from the start. There's no ramp up or scale-out operation required. Azure manages the operation of NAT for you. NAT always has multiple fault domains and can sustain multiple failures without service outage.
-* Outbound connectivity can be defined for each subnet with NAT. Multiple subnets within the same virtual network can have different NATs. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by NAT automatically without any customer configuration. User-defined routes aren't necessary. NAT takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
+* Outbound connectivity can be defined for each subnet with NAT. Multiple subnets within the same virtual network can have different NATs. Or multiple subnets within the same virtual network can use the same NAT. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by NAT automatically without any customer configuration. User-defined routes aren't necessary. NAT takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
* NAT supports TCP and UDP protocols only. ICMP is not supported. * A NAT gateway resource can use a:
NAT is fully scaled out from the start. There's no ramp up or scale-out operatio
* NAT cannot be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. However, it can be associated to a dual stack subnet. * NAT allows flows to be created from the virtual network to the services outside your VNet. Return traffic from the Internet is only allowed in response to an active flow. Services outside your VNet cannot initiate a connection to instances. * NAT can't span multiple virtual networks.
+* Multiple NATs cannot be attached to a single subnet.
* NAT cannot be deployed in a [Gateway Subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) * The private side of NAT (virtual machine instances or other compute resources) sends TCP Reset packets for attempts to communicate on a TCP connection that doesn't exist. One example is connections that have reached idle timeout. The next packet received will return a TCP Reset to the private IP address to signal and force connection closure. The public side of NAT doesn't generate TCP Reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted. * A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives.
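The per-subnet association described in the list above is a one-line operation once a NAT gateway exists. A minimal sketch, assuming a Standard public IP and illustrative resource names:

```azurecli
# Sketch: create a NAT gateway with a Standard public IP, then attach it to a subnet.
az network public-ip create --resource-group <rg> --name nat-pip --sku Standard
az network nat gateway create --resource-group <rg> --name my-nat-gateway --public-ip-addresses nat-pip --idle-timeout 4
az network vnet subnet update --resource-group <rg> --vnet-name <vnet> --name <subnet> --nat-gateway my-nat-gateway
```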
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
A VPN gateway is a specific type of virtual network gateway that is used to send
## <a name="whatis"></a>What is a virtual network gateway?
-A virtual network gateway is composed of two or more VMs that are deployed to a specific subnet you create called the *gateway subnet*. Virtual network gateway VMs contain routing tables and run specific gateway services. These VMs are created when you create the virtual network gateway. You can't directly configure the VMs that are part of the virtual network gateway.
+A virtual network gateway is composed of two or more VMs that are automatically configured and deployed to a specific subnet you create called the *gateway subnet*. The gateway VMs contain routing tables and run specific gateway services. You can't directly configure the VMs that are part of the virtual network gateway, although the settings that you select when configuring your gateway impact the gateway VMs that are created.
+
+### <a name="vpn"></a>What is a VPN gateway?
When you configure a virtual network gateway, you configure a setting that specifies the gateway type. The gateway type determines how the virtual network gateway will be used and the actions that the gateway takes. The gateway type 'Vpn' specifies that the type of virtual network gateway created is a 'VPN gateway'. This distinguishes it from an ExpressRoute gateway, which uses a different gateway type. A virtual network can have two virtual network gateways; one VPN gateway and one ExpressRoute gateway. For more information, see [Gateway types](vpn-gateway-about-vpn-gateway-settings.md#gwtype).
-Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. When you create a virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the settings that you specify. After you create a VPN gateway, you can create an IPsec/IKE VPN tunnel connection between that VPN gateway and another VPN gateway (VNet-to-VNet), or create a cross-premises IPsec/IKE VPN tunnel connection between the VPN gateway and an on-premises VPN device (Site-to-Site). You can also create a Point-to-Site VPN connection (VPN over OpenVPN, IKEv2, or SSTP), which lets you connect to your virtual network from a remote location, such as from a conference or from home.
+When you create a VPN gateway, gateway VMs are deployed to the gateway subnet and configured with the settings that you specified. This process can take 45 minutes or more to complete, depending on the gateway SKU that you selected. After you create a VPN gateway, you can create an IPsec/IKE VPN tunnel connection between that VPN gateway and another VPN gateway (VNet-to-VNet), or create a cross-premises IPsec/IKE VPN tunnel connection between the VPN gateway and an on-premises VPN device (Site-to-Site). You can also create a Point-to-Site VPN connection (VPN over OpenVPN, IKEv2, or SSTP), which lets you connect to your virtual network from a remote location, such as from a conference or from home.
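Because gateway creation can take 45 minutes or more, it's common to kick it off asynchronously. A minimal sketch, assuming the virtual network already contains a *GatewaySubnet* and that a public IP resource exists:

```azurecli
# Sketch: create a route-based VPN gateway without blocking the shell while it provisions.
az network vnet-gateway create \
  --resource-group <rg> \
  --name <gateway-name> \
  --vnet <vnet-name> \
  --public-ip-address <public-ip-name> \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```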
## <a name="configuring"></a>Configuring a VPN Gateway