Updates from: 11/08/2023 02:27:11
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Repeat this step for the **ProfileEdit.xml**, and **PasswordReset.xml** user jou
Save the files you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*. ## Test the custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Microsoft Entra tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the policy files that you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*.
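The upload order matters for custom policies: each file inherits from the one before it, so the base policy must exist before the extensions file, and the extensions file before the relying-party files. A minimal sketch of that ordering, assuming the starter-pack file names used in this article:

```python
# Hypothetical helper illustrating the parent-before-child upload order
# for Azure AD B2C custom policy files.
POLICY_UPLOAD_ORDER = [
    "TrustFrameworkBase.xml",        # base policy (no parent)
    "TrustFrameworkExtensions.xml",  # inherits from the base
    "SignUpOrSignin.xml",            # relying-party files inherit
    "ProfileEdit.xml",               # from the extensions file
    "PasswordReset.xml",
]

def upload_order(changed_files):
    """Return the changed files sorted so parents upload first."""
    rank = {name: i for i, name in enumerate(POLICY_UPLOAD_ORDER)}
    return sorted(changed_files, key=lambda f: rank[f])
```

For example, if only *PasswordReset.xml* and *TrustFrameworkBase.xml* changed, the base file still uploads first.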
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
To return the promo code claim back to the relying party application, add an out
## Test the custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Microsoft Entra tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the policy files that you changed: *TrustFrameworkExtensions.xml* and *SignUpOrSignin.xml*.
active-directory-b2c Add Password Change Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-change-policy.md
The password change flow involves the following steps:
## Upload and test the policy 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity Experience Framework**. 1. In **Custom Policies**, select **Upload Policy**.
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
The self-service password reset experience can be configured for the Sign in (Re
To set up self-service password reset for the sign-up or sign-in user flow: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the portal toolbar, select the **Directories + Subscriptions** icon.
-1. In the **Portal settings | Directories + subscriptions** pane, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Select a sign-up or sign-in user flow (of type **Recommended**) that you want to customize.
Your application might need to detect whether the user signed in by using the Fo
### Upload the custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the portal toolbar, select the **Directories + Subscriptions** icon.
-1. In the **Portal settings | Directories + subscriptions** pane, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to the Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. In the menu under **Policies**, select **Identity Experience Framework**. 1. Select **Upload custom policy**. In the following order, upload the policy files that you changed:
active-directory-b2c Add Ropc Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-ropc-policy.md
When using the ROPC flow, consider the following:
## Create a resource owner user flow 1. Sign in to the [Azure portal](https://portal.azure.com) as the **global administrator** of your Azure AD B2C tenant.
-2. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **User flows**, and select **New user flow**. 1. Select **Sign in using resource owner password credentials (ROPC)**.
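In the ROPC flow, the application sends the user's credentials directly to the token endpoint of the resource owner user flow. A minimal sketch of the request an app would build, assuming placeholder tenant, policy, and client ID values (the exact parameter set is taken from the B2C ROPC documentation; verify it against your user flow before relying on it):

```python
from urllib.parse import urlencode

def ropc_token_request(tenant, policy, client_id, username, password):
    """Build the ROPC token request URL and form body (sketch only)."""
    url = (f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
           f"{policy}/oauth2/v2.0/token")
    body = urlencode({
        "grant_type": "password",   # ROPC sends credentials directly
        "client_id": client_id,
        "scope": f"openid {client_id} offline_access",
        "username": username,
        "password": password,
        "response_type": "token id_token",
    })
    return url, body
```

The body would then be POSTed as `application/x-www-form-urlencoded`; no browser redirect is involved, which is why ROPC is recommended only for trusted first-party apps.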
active-directory-b2c Add Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-in-policy.md
The sign-in policy lets users:
To add a sign-in policy: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directories + Subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows**, and then select **New user flow**. 1. On the **Create a user flow** page, select the **Sign in** user flow.
The **SelfAsserted-LocalAccountSignin-Email** technical profile is a [self-asser
## Update and test your policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the policy file that you changed, *TrustFrameworkExtensions.xml*.
active-directory-b2c Add Sign Up And Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md
Watch this video to learn how the user sign-up and sign-in policy works.
The sign-up and sign-in user flow handles both sign-up and sign-in experiences with a single configuration. Users of your application are led down the right path depending on the context. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directories + Subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows**, and then select **New user flow**.
active-directory-b2c Add Web Api Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-web-api-application.md
To register an application in your Azure AD B2C tenant, you can use the Azure po
#### [App registrations](#tab/app-reg-ga/) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, *webapi1*.
To register an application in your Azure AD B2C tenant, you can use the Azure po
#### [Applications (Legacy)](#tab/applications-legacy/) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Applications (Legacy)**, and then select **Add**. 1. Enter a name for the application. For example, *webapi1*.
active-directory-b2c Age Gating https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/age-gating.md
Azure AD B2C uses the information that the user enters to identify whether they'
To use age gating in a user flow, you need to configure your tenant to have extra properties. 1. Use [this link](https://portal.azure.com/?Microsoft_AAD_B2CAdmin_agegatingenabled=true#blade/Microsoft_AAD_B2CAdmin/TenantManagementMenuBlade/overview) to try the age gating preview.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Select **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Properties** for your tenant in the menu on the left. 1. Under **Age gating**, select **Configure**.
active-directory-b2c Analytics With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/analytics-with-application-insights.md
When you use Application Insights, consider the following:
When you use Application Insights with Azure AD B2C, all you need to do is create a resource and get the instrumentation key. For information, see [Create an Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource). 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that has your Microsoft Entra subscription, and not your Azure AD B2C directory. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find the Microsoft Entra directory that has your subscription in the **Directory name** list, and then select **Switch**
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. Choose **Create a resource** in the upper-left corner of the Azure portal, and then search for and select **Application Insights**. 1. Select **Create**. 1. For **Name**, enter a name for the resource.
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
In summary, you'll use Azure Lighthouse to allow a user or group in your Azure A
First, create or choose a resource group that contains the destination Log Analytics workspace that will receive data from Azure AD B2C. You'll specify the resource group name when you deploy the Azure Resource Manager template. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your *Microsoft Entra ID* tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. [Create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) or choose an existing one. This example uses a resource group named _azure-ad-b2c-monitor_. ## 2. Create a Log Analytics workspace
First, create, or choose a resource group that contains the destination Log Anal
A **Log Analytics workspace** is a unique environment for Azure Monitor log data. You'll use this Log Analytics workspace to collect data from Azure AD B2C [audit logs](view-audit-logs.md), and then visualize it with queries and workbooks, or create alerts. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your *Microsoft Entra ID* tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). This example uses a Log Analytics workspace named _AzureAdB2C_, in a resource group named _azure-ad-b2c-monitor_. ## 3. Delegate resource management
In this step, you choose your Azure AD B2C tenant as a **service provider**. You
First, get the **Tenant ID** of your Azure AD B2C directory (also known as the directory ID). 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your *Azure AD B2C* tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Select **Microsoft Entra ID**, and then select **Overview**. 1. Record the **Tenant ID**.
To make management easier, we recommend using Microsoft Entra user _groups_ for
To create the custom authorization and delegation in Azure Lighthouse, we use an Azure Resource Manager template. This template grants Azure AD B2C access to the Microsoft Entra resource group, which you created earlier, for example, _azure-ad-b2c-monitor_. Deploy the template from the GitHub sample by using the **Deploy to Azure** button, which opens the Azure portal and lets you configure and deploy the template directly in the portal. For these steps, make sure you're signed in to your Microsoft Entra tenant (not the Azure AD B2C tenant). 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your *Microsoft Entra tenant*. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. Use the **Deploy to Azure** button to open the Azure portal and deploy the template directly in the portal. For more information, see [create an Azure Resource Manager template](../lighthouse/how-to/onboard-customer.md#create-an-azure-resource-manager-template). [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure-ad-b2c%2Fsiem%2Fmaster%2Ftemplates%2FrgDelegatedResourceManagement.json)
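The **Deploy to Azure** button is just the portal's `Microsoft.Template` create blade with the raw template URL percent-encoded after it, which is useful to know when pointing the deployment at your own fork of the template. A small sketch of that encoding:

```python
from urllib.parse import quote

def deploy_to_azure_link(template_url):
    """Build a Deploy-to-Azure portal link for a raw ARM template URL."""
    # The template URL is fully percent-encoded (including ':' and '/')
    # and appended to the Microsoft.Template create blade.
    return ("https://portal.azure.com/#create/Microsoft.Template/uri/"
            + quote(template_url, safe=""))
```

Applying it to the GitHub sample's raw template URL reproduces the link shown above.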
After you've deployed the template and waited a few minutes for the resource pro
> On the **Portal settings | Directories + subscriptions** page, ensure that your Azure AD B2C and Microsoft Entra tenants are selected under **Current + delegated directories**. 1. Sign out of the [Azure portal](https://portal.azure.com) and sign back in with your **Azure AD B2C** administrative account. This account must be a member of the security group you specified in the [Delegate resource management](#3-delegate-resource-management) step. Signing out and signing back in allows your session credentials to be refreshed in the next step.
-1. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, in the **Directory name** list, find your Microsoft Entra directory that contains the Azure subscription and the _azure-ad-b2c-monitor_ resource group you created, and then select **Switch**.
+1. Select the **Settings** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, in the **Directory name** list, find your Microsoft Entra ID directory that contains the Azure subscription and the _azure-ad-b2c-monitor_ resource group you created, and then select **Switch**.
1. Verify that you've selected the correct directory and your Azure subscription is listed and selected in the **Default subscription filter**. ![Screenshot of the default subscription filter](./media/azure-monitor/default-subscription-filter.png)
You're ready to [create diagnostic settings](../active-directory/reports-monitor
To configure monitoring settings for Azure AD B2C activity logs: 1. Sign in to the [Azure portal](https://portal.azure.com/) with your *Azure AD B2C* administrative account. This account must be a member of the security group you specified in the [Select a security group](#32-select-a-security-group) step.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Select **Microsoft Entra ID**. 1. Under **Monitoring**, select **Diagnostic settings**. 1. If there are existing settings for the resource, you'll see a list of settings already configured. Either select **Add diagnostic setting** to add a new setting, or select **Edit settings** to edit an existing setting. Each setting can have no more than one of each of the destination types.
Now you can configure your Log Analytics workspace to visualize your data and co
Log queries help you to fully use the value of the data collected in Azure Monitor Logs. A powerful query language allows you to join data from multiple tables, aggregate large sets of data, and perform complex operations with minimal code. Virtually any question can be answered and analysis performed as long as the supporting data has been collected, and you understand how to construct the right query. For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md). 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your *Microsoft Entra ID* tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. From the **Log Analytics workspace** window, select **Logs**. 1. In the query editor, paste the following [Kusto Query Language](/azure/data-explorer/kusto/query/) query. This query shows policy usage by operation over the past x days. The default duration is set to 90 days (90d). Notice that the query is focused only on the operation where a token/code is issued by policy.
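A query of the kind described above can be parameterized on the lookback window. The sketch below assembles a simplified version as a string, assuming the standard Log Analytics `AuditLogs` table and `TimeGenerated` column; the real query in the article filters further on the token-issuing operation:

```python
def policy_usage_query(days=90):
    """Build a simplified KQL query string for B2C policy usage.

    Defaults to the article's 90-day window. Table and column names
    are the standard Log Analytics ones; adjust to match your workspace.
    """
    return (
        "AuditLogs\n"
        f"| where TimeGenerated > ago({days}d)\n"
        "| summarize Count = count() by OperationName\n"
        "| order by Count desc"
    )
```

The resulting string is what you would paste into the Logs query editor.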
Workbooks provide a flexible canvas for data analysis and the creation of rich v
Follow the instructions below to create a new workbook using a JSON Gallery Template. This workbook provides a **User Insights** and **Authentication** dashboard for your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your *Microsoft Entra ID* tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. From the **Log Analytics workspace** window, select **Workbooks**. 1. From the toolbar, select the **+ New** option to create a new workbook. 1. On the **New workbook** page, select the **Advanced Editor** using the **</>** option on the toolbar.
Alerts are created by alert rules in Azure Monitor and can automatically run sav
Use the following instructions to create a new Azure Alert, which will send an [email notification](../azure-monitor/alerts/action-groups.md) whenever there's a 25% drop in the **Total Requests** compared to the previous period. The alert will run every 5 minutes and look for the drop in the last hour compared to the hour before it. The alerts are created using the Kusto Query Language. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your *Microsoft Entra ID* tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. From **Log Analytics workspace**, select **Logs**. 1. Create a new **Kusto query** by using this query.
To stop collecting logs to your Log Analytics workspace, delete the diagnostic s
## Delete Log Analytics workspace and resource group 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your *Microsoft Entra ID* tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. Choose the resource group that contains the Log Analytics workspace. This example uses a resource group named _azure-ad-b2c-monitor_ and a Log Analytics workspace named `AzureAdB2C`. 1. [Delete the Log Analytics workspace](../azure-monitor/logs/delete-workspace.md#azure-portal). 1. Select the **Delete** button to delete the resource group.
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/billing.md
A subscription linked to an Azure AD B2C tenant can be used for the billing of A
### Create the link 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that has your Microsoft Entra subscription, and not the directory containing your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Select **Create a resource**, and then, in the **Search services and Marketplace** field, search for and select **Azure Active Directory B2C**. 1. Select **Create**. 1. Select **Link an existing Azure AD B2C Tenant to my Azure subscription**.
To change your pricing tier, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the Microsoft Entra directory that contains the subscription your Azure B2C tenant and not the Azure AD B2C tenant itself:
- 1. In the Azure portal toolbar, select the **Directories + subscriptions** icon.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. In the search box at the top of the portal, enter the name of your Azure AD B2C tenant. Then select the tenant in the search results under **Resources**.
The switch to monthly active users (MAU) billing is **irreversible**. Once you c
Here's how to make the switch to MAU billing for an existing Azure AD B2C resource: 1. Sign in to the [Azure portal](https://portal.azure.com) as the subscription owner with administrative access to the Azure AD B2C resource.
-1. To select the Azure AD B2C directory that you want to upgrade to MAU billing, select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. On the **Overview** page of the Azure AD B2C tenant, select the link under **Resource name**. You're directed to the Azure AD B2C resource in your Microsoft Entra tenant.<br/>
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
Azure AD B2C **Premium P2** is required to create risky sign-in policies. **Prem
To add a Conditional Access policy, disable security defaults: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Microsoft Entra ID**. Or use the search box to find and select **Microsoft Entra ID**. 1. Select **Properties**, and then select **Manage Security defaults**.
When adding Conditional Access to a user flow, consider using **Multi-factor aut
To enable Conditional Access for a user flow, make sure the version supports Conditional Access. These user flow versions are labeled **Recommended**. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. Under **Policies**, select **User flows**. Then select the user flow. 1. Select **Properties** and make sure the user flow supports Conditional Access by looking for the setting labeled **Conditional Access**.
The claims transformation isn't limited to the `strongAuthenticationPhoneNumber`
To review the result of a Conditional Access event: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. Under **Activities**, select **Audit logs**. 1. Filter the audit log by setting **Category** to **B2C** and setting **Activity Resource Type** to **IdentityProtection**. Then select **Apply**.
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
During app registration, you'll specify the *Redirect URI*. The redirect URI is
To register the web app, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Under **Name**, enter a name for the application (for example, *webapp1*).
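The redirect URI registered in the steps above must match, character for character, the value your app sends in the authorization request. A minimal sketch (Python standard library only; the tenant name, policy name, client ID, and redirect URI below are placeholder assumptions) showing how the redirect URI appears, URL-encoded, in a B2C authorize request:

```python
from urllib.parse import urlencode

# Placeholder values -- replace with your own tenant, policy, and app registration.
tenant = "contoso"
policy = "B2C_1_signupsignin1"
client_id = "00000000-0000-0000-0000-000000000000"
redirect_uri = "http://localhost:3000/redirect"

params = {
    "client_id": client_id,
    "response_type": "code",
    "redirect_uri": redirect_uri,  # must exactly match a URI on the app registration
    "scope": "openid offline_access",
}
authorize_url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
    f"{policy}/oauth2/v2.0/authorize?" + urlencode(params)
)
```

If the encoded `redirect_uri` differs from any registered value, B2C rejects the request with a redirect-URI mismatch error.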
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
During app registration, you specify a *redirect URI*. The redirect URI is the e
To register your application, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Under **Name**, enter a name for the application (for example, *My Azure Static web app*).
active-directory-b2c Configure Authentication In Azure Web App File Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app-file-based.md
During app registration, you'll specify the *redirect URI*. The redirect URI is
To register your application, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Under **Name**, enter a name for the application (for example, *My Azure web app*).
active-directory-b2c Configure Authentication In Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app.md
During app registration, you'll specify the *redirect URI*. The redirect URI is
To register your application, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Under **Name**, enter a name for the application (for example, *My Azure web app*).
To register your application, follow these steps:
## Step 3: Configure the Azure App 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Microsoft Entra tenant (not the Azure AD B2C tenant). Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find the Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra tenant (not the Azure AD B2C tenant) from the **Directories + subscriptions** menu.
1. Navigate to your Azure web app. 1. Select **Authentication** in the menu on the left. Select **Add identity provider**. 1. Select **OpenID Connect** in the identity provider dropdown.
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
In this step, you create the web and the web API application registrations, and
To create the SPA registration, do the following: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application (for example, *App ID: 1*).
active-directory-b2c Configure Authentication Sample Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-angular-spa-app.md
In this step, you create the registrations for the Angular SPA and the web API a
Follow these steps to create the Angular app registration: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. For **Name**, enter a name for the application. For example, enter **MyApp**.
active-directory-b2c Configure Authentication Sample Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md
During app registration, you'll specify the *Redirect URI*. The redirect URI is
To create the web app registration, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Under **Name**, enter a name for the application (for example, *webapp1*).
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
In this step, you create the registrations for the React SPA and the web API app
Follow these steps to create the React app registration: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. For **Name**, enter a name for the application. For example, enter **MyApp**.
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
In this step, you create the SPA and the web API application registrations, and
To create the SPA registration, use the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application (for example, *MyApp*).
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md
During app registration, you'll specify the *redirect URI*. The redirect URI is
To create the web app registration, use the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Under **Name**, enter a name for the application (for example, *webapp1*).
active-directory-b2c Configure Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-tokens.md
The following diagram shows the refresh token sliding window lifetime behavior.
To configure your user flow token lifetime: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **User flows (policies)**. 1. Open the user flow that you previously created.
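The sliding-window behavior mentioned above can be sketched as follows. This is an illustrative model only: each redemption of a refresh token extends its validity by the token lifetime, but never past the sliding window measured from the initial sign-in. The 14-day lifetime and 90-day window are placeholder values, not your tenant's configured settings:

```python
from datetime import datetime, timedelta

def refresh_token_valid_until(first_issued: datetime,
                              last_redeemed: datetime,
                              token_lifetime: timedelta = timedelta(days=14),
                              sliding_window: timedelta = timedelta(days=90)) -> datetime:
    """Illustrative sliding-window expiry: each redemption extends validity,
    capped at the window measured from the first sign-in."""
    hard_stop = first_issued + sliding_window
    return min(last_redeemed + token_lifetime, hard_stop)
```

For example, a token redeemed 85 days after sign-in is only extended to day 90, not day 99, because the sliding window caps it.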
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-user-input.md
In this article, you collect a new attribute during your sign-up journey in Azur
## Add user attributes to your user flow 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. In your Azure AD B2C tenant, select **User flows**. 1. Select your policy (for example, "B2C_1_SignupSignin") to open it.
To return the city claim back to the relying party application, add an output cl
## Upload and test your updated custom policy
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Search for and select **Azure AD B2C**. 1. Under **Policies**, select **Identity Experience Framework**. 1. Select **Upload custom policy**.
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
Follow these steps to create an Azure Front Door:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. To choose the directory that contains the Azure subscription that you'd like to use for Azure Front Door and *not* the directory containing your Azure AD B2C tenant:
-
-    1. Select the **Directories + subscriptions** icon in the portal toolbar.
-
-    1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select the **Switch** button next to the directory.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to the directory that contains the Azure subscription you'd like to use for Azure Front Door (*not* the directory containing your Azure AD B2C tenant) from the **Directories + subscriptions** menu.
1. Follow the steps in [Create Front Door profile - Quick Create](../frontdoor/create-front-door-portal.md#create-front-door-profilequick-create) to create a Front Door for your Azure AD B2C tenant using the following settings:
Configure Azure Blob storage for Cross-Origin Resource Sharing with the followin
## Test your custom domain 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows (policies)**. 1. Select a user flow, and then select **Run user flow**.
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-mailjet.md
If you don't already have one, start by setting up a Mailjet account (Azure cust
Next, store the Mailjet API key in an Azure AD B2C policy key for your policies to reference. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the **Overview** page, select **Identity Experience Framework**. 1. Select **Policy Keys**, and then select **Add**.
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-sendgrid.md
Be sure to complete the section in which you [create a SendGrid API key](https:/
Next, store the SendGrid API key in an Azure AD B2C policy key for your policies to reference. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Custom Policies Series Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-hello-world.md
After you complete [step 2](#step-2build-the-custom-policy-file), the `Contos
## Step 3 - Upload custom policy file 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. In the left menu, under **Policies**, select **Identity Experience Framework**. 1. Select **Upload custom policy**, browse to, select, and then upload the `ContosoCustomPolicy.XML` file.
active-directory-b2c Custom Policies Series Store User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-store-user.md
After the policy finishes execution, and you receive your ID token, check that t
1. Sign in to the [Azure portal](https://portal.azure.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
-
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
active-directory-b2c Customize Ui With Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui-with-html.md
In this article, we use Azure Blob storage to host our content. You can choose t
To host your HTML content in Blob storage, use the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Microsoft Entra tenant, and which has a subscription:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the Directory name list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Storage accounts**. 1. Select **+ Create**. 1. Select a **Subscription** for your storage account.
Learn more about [how to create and manage Azure storage accounts](../storage/co
### 4. Update the user flow
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the directory name list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. In the left-hand menu, select **User flows**, and then select the *B2C_1_signupsignin1* user flow. 1. Select **Page layouts**, and then under **Unified sign-up or sign-in page**, select **Yes** for **Use custom page content**.
To configure UI customization, copy the **ContentDefinition** and its child elem
#### 5.1 Upload the custom policy
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Search for and select **Azure AD B2C**. 1. Under **Policies**, select **Identity Experience Framework**. 1. Select **Upload custom policy**.
active-directory-b2c Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui.md
The following example shows a *Sign up and sign in* page with a custom logo, bac
::: zone pivot="b2c-user-flow" 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Select a user flow you want to customize.
To customize your user flow pages, you first configure company branding in Micro
Start by setting the banner logo, background image, and background color within **Company branding**. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Manage**, select **Company branding**. 1. Follow the steps in [Add branding to your organization's Microsoft Entra sign-in page](../active-directory/fundamentals/how-to-customize-branding.md).
The following example shows the content definitions with their corresponding the
## Rearrange input fields in the sign-up form To rearrange the input fields on the sign-up page for local accounts form, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. In the left menu, select **User flows**. 1. Select a user flow (for local accounts only) whose input fields you want to rearrange.
active-directory-b2c Disable Email Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/disable-email-verification.md
Some application developers prefer to skip email verification during the sign-up
Follow these steps to disable email verification: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Select the user flow for which you want to disable email verification.
The **LocalAccountSignUpWithLogonEmail** technical profile is a [self-asserted](
## Test your policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Select the user flow for which you want to disable email verification. For example, *B2C_1_signinsignup*.
The **LocalAccountSignUpWithLogonEmail** technical profile is a [self-asserted](
## Update and test the relying party file 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Microsoft Entra tenant. Select the **Directories + Subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the two policy files that you changed.
active-directory-b2c Enable Authentication Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-android-app.md
Configure where your application listens to the Azure AD B2C token response.
To update the mobile app registration with your app redirect URI, do the following: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select the application you registered in [Step 2.3: Register the mobile app](configure-authentication-sample-android-app.md#step-23-register-the-mobile-app). 1. Select **Authentication**.
active-directory-b2c Find Help Open Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/find-help-open-support-ticket.md
If you're unable to find answers by using self-help resources, you can open an o
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the Microsoft Entra tenant that contains your Azure subscription:
-
- 1. In the Azure portal toolbar, select the **Directories + subscriptions** icon.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to the Microsoft Entra tenant that contains your Azure subscription from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Microsoft Entra ID**.
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
The password reset flow is applicable to local accounts in Azure AD B2C that use
To enable the **Forced password reset** setting in a sign-up or sign-in user flow: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **User flows**.
1. Select the sign-up and sign-in, or sign-in user flow (of type **Recommended**) that you want to customize.
To enable the **Forced password reset** setting in a sign-up or sign-in user flo
## Test the user flow

1. Sign in to the [Azure portal](https://portal.azure.com) as a user administrator or a password administrator. For more information about the available roles, see [Assigning administrator roles in Microsoft Entra ID](../active-directory/roles/permissions-reference.md#all-roles).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **Users**. Search for and select the user you'll use to test the password reset, and then select **Reset Password**.
1. In the Azure portal, search for and select **Azure AD B2C**.
Get the example of the force password reset policy on [GitHub](https://github.co
## Upload and test the policy

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. Select **Identity Experience Framework**.
1. In **Custom Policies**, select **Upload Policy**.
active-directory-b2c Id Token Hint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/id-token-hint.md
This code creates a secret string like `VK62QTn0m1hMcn0DQ3RPYDAr6yIiSvYgdRwjZtU5
The same key that is used by the token issuer needs to be created in your Azure AD B2C policy keys.

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. On the overview page, under **Policies**, select **Identity Experience Framework**.
1. Select **Policy Keys**
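The id-token-hint snippet above refers to code that creates a secret string such as `VK62QTn0m1hMcn0DQ3RPYDAr6yIiSvYgdRwjZtU5…`, which is then stored as a policy key. As an illustrative sketch only (not the article's original sample; the function name is hypothetical), a URL-safe random secret of that kind can be generated with Python's standard library:

```python
import secrets

def generate_policy_key_secret(num_bytes: int = 32) -> str:
    """Return a URL-safe random string usable as a symmetric signing
    secret, for example one you would store as an Azure AD B2C policy key."""
    # token_urlsafe draws num_bytes of cryptographic randomness and
    # base64url-encodes it without padding.
    return secrets.token_urlsafe(num_bytes)

print(generate_policy_key_secret())
```

Whatever generates the secret, the same value must be configured both in the token issuer and in the B2C policy key, since the `id_token_hint` signature is validated symmetrically.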
active-directory-b2c Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-protection-investigate-risk.md
An administrator can choose to dismiss a user's risk in the Azure portal or prog
### Navigating the risky users report

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
1. Under **Security**, select **Risky users**.
active-directory-b2c Identity Provider Adfs Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs-saml.md
This article shows you how to enable sign-in for an AD FS user account by using
You need to store your certificate in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. On the Overview page, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
Open a browser and navigate to the URL. Make sure you type the correct URL and t
## Test your custom policy

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Under **Policies**, select **Identity Experience Framework**.
1. Select your relying party policy, for example `B2C_1A_signup_signin`.
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md
In this step, configure the claims AD FS application returns to Azure AD B2C.
## Configure AD FS as an identity provider

1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. Select **Identity providers**, and then select **New OpenID Connect provider**.
1. Enter a **Name**. For example, *Contoso*.
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-amazon.md
To enable sign-in for users with an Amazon account in Azure Active Directory B2C
## Configure Amazon as an identity provider

1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **Identity providers**, then select **Amazon**.
1. Enter a **Name**. For example, *Amazon*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. On the Overview page, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-apple-id.md
To enable sign-in for users with an Apple ID in Azure Active Directory B2C (Azur
## Configure Apple as an identity provider

1. Sign in to the [Azure portal](https://portal.azure.com/) as a global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
1. Select **Identity providers**, then select **Apple**.
1. For the **Name**, enter **Sign in with Apple**.
The Azure function responds with a properly formatted and signed client secret J
You need to store the client secret that you previously recorded in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
1. On the **Overview** page, select **Identity Experience Framework**.
1. Select **Policy Keys**, and then select **Add**.
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
To enable sign-in for users with an account from another Azure AD B2C tenant (fo
To create an application:

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your other Azure AD B2C tenant (for example, Fabrikam.com).
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **New registration**.
1. Enter a **Name** for the application. For example, *ContosoApp*.
To create an application.
## Configure Azure AD B2C as an identity provider

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains the Azure AD B2C tenant you want to configure the federation (for example, Contoso). Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. Select **Identity providers**, and then select **New OpenID Connect provider**.
1. Enter a **Name**. For example, enter *Fabrikam*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the application key that you created earlier in your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains the Azure AD B2C tenant you want to configure the federation (for example, Contoso). Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. Under **Policies**, select **Identity Experience Framework**.
1. Select **Policy keys** and then select **Add**.
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
This article shows you how to enable sign-in for users using the multi-tenant en
To enable sign-in for users with a Microsoft Entra account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your organizational Microsoft Entra tenant (for example, Contoso). Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**.
1. Select **New registration**.
1. Enter a **Name** for your application. For example, `Azure AD B2C App`.
If you want to get the `family_name`, and `given_name` claims from Microsoft Ent
You need to store the application key that you created in your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. Under **Policies**, select **Identity Experience Framework**.
1. Select **Policy keys** and then select **Add**.
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
As of November 2020, new application registrations show up as unverified in the
To enable sign-in for users with a Microsoft Entra account from a specific Microsoft Entra organization in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your organizational Microsoft Entra tenant (for example, Contoso):
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 2. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Microsoft Entra ID**.
1. In the left menu, under **Manage**, select **App registrations**.
1. Select **+ New registration**.
To enable sign-in for users with a Microsoft Entra account from a specific Micro
## Configure Microsoft Entra ID as an identity provider
-1. Make sure you're using the directory that contains Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. Select **Identity providers**, and then select **New OpenID Connect provider**.
1. Enter a **Name**. For example, enter *Contoso Microsoft Entra ID*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the application key that you created in your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. Under **Policies**, select **Identity Experience Framework**.
1. Select **Policy keys** and then select **Add**.
active-directory-b2c Identity Provider Ebay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ebay.md
To create an eBay application, follow these steps:
You need to store the client secret that you previously recorded in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. On the Overview page, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
If you don't already have a Facebook account, sign up at [https://www.facebook.c
## Configure Facebook as an identity provider

1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **Identity providers**, then select **Facebook**.
1. Enter a **Name**. For example, *Facebook*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the App Secret that you previously recorded in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. On the Overview page, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
This article explains how you can add custom OpenID Connect identity providers i
::: zone pivot="b2c-user-flow"

1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **Identity providers**, and then select **New OpenID Connect provider**.
1. Enter a **Name**. For example, enter *Contoso*.
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml.md
A self-signed certificate is acceptable for most scenarios. For production envir
You need to store your certificate in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. On the Overview page, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
Open a browser and navigate to the URL. Make sure you type the correct URL and t
## Test your custom policy

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Under **Policies**, select **Identity Experience Framework**.
1. Select your relying party policy, for example `B2C_1A_signup_signin`.
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md
To enable sign-in with a GitHub account in Azure Active Directory B2C (Azure AD
## Configure GitHub as an identity provider

1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **Identity providers**, then select **GitHub (Preview)**.
1. Enter a **Name**. For example, *GitHub*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. On the Overview page, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
To enable sign-in for users with a Google account in Azure Active Directory B2C
## Configure Google as an identity provider

1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **Identity providers**, then select **Google**.
1. Enter a **Name**. For example, *Google*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. On the Overview page, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Id Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-id-me.md
To enable sign-in for users with an ID.me account in Azure Active Directory B2C
You need to store the client secret that you previously recorded in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. On the Overview page, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-linkedin.md
To enable sign-in for users with a LinkedIn account in Azure Active Directory B2
## Configure LinkedIn as an identity provider

1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select **Identity providers**, then select **LinkedIn**. 1. Enter a **Name**. For example, *LinkedIn*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy keys** and then select **Add**.
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md
You can choose the local account sign-in methods (email, username, or phone numb
To set your local account sign-in options at the tenant level: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. Under **Manage**, select **Identity providers**. 1. In the identity provider list, select **Local account**.
To set your local account sign-in options at the tenant level:
If you choose the **Phone signup**, **Phone/Email signup** option, enable the recovery email prompt. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. In Azure AD B2C, under **Policies**, select **User flows**. 1. Select the user flow from the list.
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md
zone_pivot_groups: b2c-policy-type
To enable sign-in for users with a Microsoft account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). If you don't already have a Microsoft account, you can get one at [https://www.live.com/](https://www.live.com/). 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Microsoft Entra tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **New registration**. 1. Enter a **Name** for your application. For example, *MSAapp1*.
To enable sign-in for users with a Microsoft account in Azure Active Directory B
## Configure Microsoft as an identity provider 1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select **Identity providers**, then select **Microsoft Account**. 1. Enter a **Name**. For example, *MSA*.
If you want to get the `family_name` and `given_name` claims from Microsoft Entr
Now that you've created the application in your Microsoft Entra tenant, you need to store that application's client secret in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Mobile Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-mobile-id.md
To enable sign-in for users with Mobile ID in Azure AD B2C, you need to create a
## Configure Mobile ID as an identity provider
-1. Make sure you're using the directory that contains Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your Azure AD B2C tenant.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity providers**, and then select **New OpenID Connect provider**. 1. Enter a **Name**. For example, enter *Mobile ID*.
active-directory-b2c Identity Provider Ping One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ping-one.md
To enable sign-in for users with a PingOne (Ping Identity) account in Azure Acti
## Configure PingOne as an identity provider
-1. Make sure you're using the directory that contains Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity providers**, and then select **New OpenID Connect provider**. 1. Enter a **Name**. For example, enter *PingOne*.
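A **New OpenID Connect provider** such as PingOne is configured from its discovery document, conventionally published at `<issuer>/.well-known/openid-configuration` (OpenID Connect Discovery). A small sketch of deriving that metadata URL — the issuer shown is a placeholder, not a real PingOne endpoint:

```python
def discovery_url(issuer: str) -> str:
    """Well-known OpenID Connect discovery endpoint for an issuer,
    per the OIDC Discovery convention. Trailing slashes are normalized
    so the path segment is not doubled."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# Placeholder issuer for illustration only.
print(discovery_url("https://auth.example.com/tenant-id/as"))
```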
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Qq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-qq.md
To enable sign-in for users with a QQ account in Azure Active Directory B2C (Azu
## Configure QQ as an identity provider 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **Identity providers**, then select **QQ (Preview)**. 1. Enter a **Name**. For example, *QQ*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Salesforce Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce-saml.md
This article shows you how to enable sign-in for users from a Salesforce organiz
You need to store the certificate that you created in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce.md
To enable sign-in for users with a Salesforce account in Azure Active Directory
## Configure Salesforce as an identity provider
-1. Make sure you're using the directory that contains Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity providers**, and then select **New OpenID Connect provider**. 1. Enter a **Name**. For example, enter *Salesforce*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Swissid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md
To enable sign-in for users with a SwissID account in Azure AD B2C, you need to
## Configure SwissID as an identity provider
-1. Make sure you're using the directory that contains Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your Azure AD B2C tenant.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity providers**, and then select **New OpenID Connect provider**. 1. Enter a **Name**. For example, enter *SwissID*.
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
## Configure Twitter as an identity provider 1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant.
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select **Identity providers**, then select **Twitter**. 1. Enter a **Name**. For example, *Twitter*.
At this point, the Twitter identity provider has been set up, but it's not yet a
You need to store the secret key that you previously recorded for Twitter app in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant.
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. On the left menu, under **Policies**, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Wechat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-wechat.md
To enable sign-in for users with a WeChat account in Azure Active Directory B2C
## Configure WeChat as an identity provider 1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select **Identity providers**, then select **WeChat (Preview)**. 1. Enter a **Name**. For example, *WeChat*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Identity Provider Weibo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-weibo.md
To enable sign-in for users with a Weibo account in Azure Active Directory B2C (
## Configure Weibo as an identity provider 1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select **Identity providers**, then select **Weibo (Preview)**. 1. Enter a **Name**. For example, *Weibo*.
If the sign-in process is successful, your browser is redirected to `https://jwt
You need to store the client secret that you previously recorded in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Idp Pass Through User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/idp-pass-through-user-flow.md
The following diagram shows how an identity provider token returns to your app:
## Enable the claim 1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select **User flows (policies)**, and then select your user flow. For example, **B2C_1_signupsignin1**. 1. Select **Application claims**.
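The pass-through claim enabled in the steps above travels inside the token that Azure AD B2C issues, and sites like `https://jwt.ms` display it by base64url-decoding the payload segment without verifying the signature. A stdlib-only sketch using a toy unsigned token — the claim values are placeholders:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT, much as jwt.ms
    renders it. Signature verification is deliberately omitted here."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a toy token whose payload carries a hypothetical pass-through claim.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(
    json.dumps({"idp": "github.com", "idp_access_token": "<opaque>"}).encode()
).rstrip(b"=").decode()
token = f"{header}.{body}."

print(decode_jwt_payload(token)["idp"])
```

Never trust a decoded payload in production without validating the signature and issuer; this sketch only mirrors what an inspection tool shows.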
When testing your applications in Azure AD B2C, it can be useful to have the Azu
### Upload the files 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity Experience Framework**. 1. On the Custom Policies page, click **Upload Policy**.
active-directory-b2c Language Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md
In the following example, English (en) and Spanish (es) custom strings are added
### Upload the custom policy 1. Save the extensions file.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Search for and select **Azure AD B2C**. 1. Under **Policies**, select **Identity Experience Framework**. 1. Select **Upload custom policy**.
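The English (en) and Spanish (es) custom strings mentioned above live in `LocalizedResources` elements of the extensions file. A hedged sketch of generating such a fragment — the `api.signuporsignin` content definition Id, `UxElement` element type, and `button_signin` string Id are illustrative, not a complete schema:

```python
import xml.etree.ElementTree as ET

def localized_resources(locale: str, strings: dict) -> str:
    """Build a <LocalizedResources> fragment overriding UI strings for one
    locale. Ids and element types here are illustrative placeholders."""
    res = ET.Element("LocalizedResources", {"Id": f"api.signuporsignin.{locale}"})
    ls = ET.SubElement(res, "LocalizedStrings")
    for string_id, value in strings.items():
        s = ET.SubElement(ls, "LocalizedString",
                          {"ElementType": "UxElement", "StringId": string_id})
        s.text = value
    return ET.tostring(res, encoding="unicode")

en = localized_resources("en", {"button_signin": "Sign in"})
es = localized_resources("es", {"button_signin": "Iniciar sesión"})
print(es)
```

Generating the fragments programmatically keeps the per-locale string sets in step: each locale gets the same string Ids with only the text varying.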
active-directory-b2c Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-users-portal.md
This article focuses on working with **consumer accounts** in the Azure portal.
## Create a consumer user 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Microsoft Entra ID**. Or, select **All services** and search for and select **Microsoft Entra ID**. 1. Under **Manage**, select **Users**. 1. Select **New user**.
active-directory-b2c Microsoft Graph Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-get-started.md
Azure AD B2C authentication service directly supports OAuth 2.0 client credentia
Before your scripts and applications can interact with the [Microsoft Graph API][ms-graph-api] to manage Azure AD B2C resources, you need to create an application registration in your Azure AD B2C tenant that grants the required API permissions. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, *managementapp1*.
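The app registration above is what a script presents in the OAuth 2.0 client credentials grant the article mentions. A sketch of composing the token request before calling Microsoft Graph — the tenant, app ID, and secret are placeholders, while the endpoint and `.default` scope follow Microsoft identity platform v2.0 conventions:

```python
from urllib.parse import urlencode

def client_credentials_request(tenant: str, client_id: str, client_secret: str):
    """Compose (but do not send) the OAuth 2.0 client credentials token
    request for the Microsoft identity platform v2.0 endpoint."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # .default requests the app's statically granted Graph permissions
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

# Placeholder values for illustration only.
url, body = client_credentials_request("contoso.onmicrosoft.com", "<app-id>", "<secret>")
print(url)
```

POSTing `body` to `url` as `application/x-www-form-urlencoded` returns a JSON response whose `access_token` the script then sends as a bearer token to Graph.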
If your application or script needs to update users' passwords, you need to assi
To add the *User administrator* role, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Search for and select **Azure AD B2C**. 1. Under **Manage**, select **Roles and administrators**. 1. Select the **User administrator** role.
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-authentication.md
With [Conditional Access](conditional-access-identity-protection-overview.md) us
::: zone pivot="b2c-user-flow" 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Select the user flow for which you want to enable MFA. For example, *B2C_1_signinsignup*.
In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then
### Delete TOTP authenticator app enrollment using the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Users**. 1. Search for and select the user for which you want to delete TOTP authenticator app enrollment. 1. In the left menu, select **Authentication methods**.
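The authenticator app enrollment deleted in the steps above produces standard RFC 6238 TOTP codes. A compact stdlib sketch of the algorithm, checked against the RFC's published SHA-1 test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then the
    RFC 4226 dynamic truncation, reduced to the requested digit count."""
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (20-byte ASCII secret, T = 59 s)
print(totp(b"12345678901234567890", 59))  # → 287082
```

Deleting the enrollment invalidates the shared secret on the server side, which is why the user must re-enroll with a fresh secret at next sign-in.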
active-directory-b2c Partner Akamai Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai-secure-hybrid-access.md
Save your changes and upload the `TrustFrameworkBase.xml`, the new `TrustFramewo
1. Sign in to the [Azure portal](https://portal.azure.com/#home).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
For Azure AD B2C to trust Akamai Enterprise Application Access, create an Azure
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. On the left menu, select **Azure AD B2C**. Or, select **All services** and then search for and select **Azure AD B2C**.
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
The relying party policy specifies the user journey which Azure AD B2C will exec
1. Sign in to the [Azure portal](https://portal.azure.com/#home).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
-
- a. Select the **Directories + subscriptions** icon in the portal toolbar.
-
- b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
active-directory-b2c Partner Itsme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-itsme.md
Please clarify step 1 in the description below - we don't have steps in this tut
> [!NOTE]
> If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
-1. Make sure you're using the directory that contains Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C** (or select **More services** and use the **All services** search box to search for *Azure AD B2C*). 1. Select **Identity providers**, and then select **New OpenID Connect provider**. 1. Fill in the form with the following information:
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md
Before your applications can interact with Azure AD B2C, they must be registered
To register a web application in your Azure AD B2C tenant, use our new unified app registration experience. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, *jwt ms*.
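The *jwt ms* app registered above is typically configured with `https://jwt.ms` as its redirect URI; that test page decodes the returned token and displays its claims. A minimal sketch of the base64url decoding such a page performs (the token and claim values below are made up for illustration, not a real B2C token):

```python
import base64
import json

def decode_jwt_segment(segment: str) -> dict:
    """Base64url-decode one JWT segment (header or payload) into a dict."""
    padded = segment + "=" * (-len(segment) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

# A toy, unsigned token built here for illustration only.
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "none", "typ": "JWT"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"name": "Casey", "tfp": "B2C_1_signupsignin"}).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}."

claims = decode_jwt_segment(token.split(".")[1])
```

Splitting the token on `.` and decoding the middle segment is all the test page needs to show the claims; verifying the signature is a separate step that this sketch omits.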
If you register this app and configure it with `https://jwt.ms/` app for testing
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
Store the client secret that you previously generated in [step 1](#step-1-onboar
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
In the following example, for the `Trusona Authentication Cloud` user journey, t
1. Sign in to the [Azure portal](https://portal.azure.com/#home).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
-
- a. Select the **Directories + subscriptions** icon in the portal toolbar.
-
- b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**. 1. Under Policies, select **Identity Experience Framework**.
active-directory-b2c Partner Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-twilio.md
The following components make up the Twilio solution:
Add the policy files to Azure AD B2C: 1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Navigate to **Azure AD B2C** > **Identity Experience Framework** > **Policy Keys**. 1. Add a new key with the name **B2cRestTwilioClientId**. Select **manual**, and provide the value of the Twilio AccountSID.
active-directory-b2c Password Complexity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/password-complexity.md
If you're using custom policies, you can [configure password complexity in a cus
## Configure password complexity

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**..
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Select a user flow, and click **Properties**.
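The **Properties** pane lets you pick a complexity level, which combines length limits with character-class requirements. Illustratively, a *Strong*-style rule can be expressed as a predicate; the exact thresholds below (8–64 characters, at least 3 of 4 character classes) are assumptions for the sketch, so check the portal for your tenant's actual settings:

```python
import string

def meets_strong_complexity(password: str) -> bool:
    """Illustrative check: 8-64 characters drawn from at least 3 of 4
    character classes (lowercase, uppercase, digits, symbols)."""
    if not 8 <= len(password) <= 64:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3
```

For example, `meets_strong_complexity("P@ssw0rd")` passes all four classes, while an all-lowercase password fails.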
Save the policy file.
### Upload the files

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity Experience Framework**. 1. On the Custom Policies page, select **Upload Policy**.
active-directory-b2c Phone Authentication User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-authentication-user-flows.md
Multi-factor authentication (MFA) is disabled by default when you configure a us
Email sign-up is enabled by default in your local account identity provider settings. You can change the identity types you'll support in your tenant by selecting or deselecting email sign-up, username, or phone number. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Under **Manage**, select **Identity providers**. 1. In the identity provider list, select **Local account**.
After you've added phone sign-up as an identity option for local accounts, you c
Here's an example showing how to add phone sign-up to a new user flow. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows**, and then select **New user flow**.
You can enable the recovery email prompt in the user flow properties.
### To enable the recovery email prompt 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. In Azure AD B2C, under **Policies**, select **User flows**. 1. Select the user flow from the list.
We strongly suggest you include consent information in your sign-up and sign-in
To enable consent information 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. In Azure AD B2C, under **Policies**, select **User flows**. 1. Select the user flow from the list.
active-directory-b2c Phone Based Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-based-mfa.md
Take the following actions to help mitigate fraudulent sign-ups.
- Remove country codes that aren't relevant to your organization from the drop-down menu where the user verifies their phone number (this change will apply to future sign-ups): 1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of your Azure AD B2C tenant.
- 1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+ 1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select the user flow, and then select **Languages**. Select the language for your organization's geographic location to open the language details panel. (For this example, we'll select **English en** for the United States). Select **Multifactor authentication page**, and then select **Download defaults (en)**.
active-directory-b2c Policy Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/policy-keys-overview.md
To get the current active key within a key container, use the Microsoft Graph AP
To add or delete signing and encryption keys: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. On the overview page, under **Policies**, select **Identity Experience Framework**. 1. Select **Policy Keys**
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider.md
To have a trust relationship between your application and Azure AD B2C, create a
You need to store your certificate in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Select **All services** in the upper-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the **Overview** page, select **Identity Experience Framework**. 1. Select **Policy Keys**, and then select **Add**.
Replace `<tenant-name>` with the name of your Azure AD B2C tenant. Replace `<pol
For Azure AD B2C to trust your application, you create an Azure AD B2C application registration. The registration contains configuration information, such as the application's metadata endpoint. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. On the left menu, select **Azure AD B2C**. Or, select **All services** and then search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, enter **SAMLApp1**.
active-directory-b2c Secure Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-api-management.md
To register an application in your Azure AD B2C tenant, you can use our new, uni
# [App registrations](#tab/app-reg-ga/) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. On the left pane, select **Azure AD B2C**. Alternatively, you can select **All services** and then search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select the **Owned applications** tab. 1. Record the value in the **Application (client) ID** column for *webapp1* or for another application you've previously created.
To register an application in your Azure AD B2C tenant, you can use our new, uni
# [Applications (Legacy)](#tab/applications-legacy/) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. On the left pane, select **Azure AD B2C**. Alternatively, you can select **All services** and then search for and select **Azure AD B2C**. 1. Under **Manage**, select **Applications (Legacy)**. 1. Record the value in the **Application ID** column for *webapp1* or for another application you've previously created.
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md
To configure an API Connector with HTTP basic authentication, follow these steps
To configure a REST API technical profile with HTTP basic authentication, create the following cryptographic keys to store the username and password: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys**, and then select **Add**.
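With the username and password stored as policy keys, the technical profile sends them on each call as a standard HTTP basic `Authorization` header. A sketch of how that header is formed (the credentials here are placeholders; in practice they come from the policy keys above):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build the Authorization header sent with HTTP basic authentication:
    base64 of "username:password" prefixed with "Basic "."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Placeholder credentials for illustration.
headers = basic_auth_header("apiuser", "s3cret")
```

Because basic authentication only encodes (and does not encrypt) the credentials, the REST endpoint should always be served over HTTPS.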
To upload a new certificate to an existing API connector, select the API connect
### Add a client certificate policy key 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys**, and then select **Add**.
The following example uses a REST API technical profile to make a request to the
Before the technical profile can interact with Microsoft Entra ID to obtain an access token, you need to register an application. Azure AD B2C relies on the Microsoft Entra platform. You can create the app in your Azure AD B2C tenant, or in any Microsoft Entra tenant you manage. To register an application: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Microsoft Entra ID or Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Microsoft Entra ID**. Or, select **All services** and search for and select **Microsoft Entra ID**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, *Client_Credentials_Auth_app*.
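Once the app is registered, a client credentials token request is a form-encoded POST to the Microsoft Entra token endpoint. A sketch of the request the technical profile makes on your behalf; the tenant, client ID, secret, and scope below are all placeholders:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your registered app's tenant, IDs, and secret.
tenant = "contoso.onmicrosoft.com"
token_url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "00000000-0000-0000-0000-000000000000",
    "client_secret": "<placeholder-secret>",
    "scope": "https://contoso.onmicrosoft.com/api/.default",
})
# POSTing `body` (Content-Type: application/x-www-form-urlencoded) to
# `token_url` returns a JSON response containing the access token.
```

No user is involved in this grant; the application authenticates as itself with its client secret, which is why the secret must be stored as a policy key rather than in the policy XML.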
For a client credentials flow, you need to create an application secret. The cli
You need to store the client ID and the client secret value that you previously recorded in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
The following example shows how to call the `REST-GetProfile` technical profile
To configure a REST API technical profile with an OAuth2 bearer token, obtain an access token from the REST API owner. Then create the following cryptographic key to store the bearer token. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys**, and then select **Add**.
API key is a unique identifier used to authenticate a user to access a REST API
To configure a REST API technical profile with API key authentication, create the following cryptographic key to store the API key: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy Keys**, and then select **Add**.
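At runtime the stored API key is attached to every outbound REST call, usually as a custom header. The header name your API expects may differ; `x-functions-key` below is an assumption borrowed from Azure Functions, and the key itself is a placeholder:

```python
def api_key_headers(key: str, header_name: str = "x-functions-key") -> dict:
    """Attach the stored API key to an outbound request's headers.
    The default header name is an assumption; use the one your API expects."""
    return {header_name: key}

# Placeholder key for illustration.
headers = api_key_headers("1234abcd")
```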
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/session-behavior.md
You can configure the Azure AD B2C session behavior, including:
To configure the session behavior in your user flow, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Open the user flow that you previously created.
KMSI is configurable at the individual user flow level. Before enabling KMSI for
To enable KMSI for your user flow: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **User flows (policies)**. 1. Open the user flow that you previously created.
After logout, the user is redirected to the URI specified in the `post_logout_re
To require an ID Token in logout requests: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Open the user flow that you previously created.
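When the user flow requires an ID token in logout requests, the application's logout call carries the token as `id_token_hint` alongside `post_logout_redirect_uri`. A sketch of how such a request URL is assembled; the endpoint pattern, tenant, policy name, and token value below are all placeholder assumptions, so confirm the exact logout endpoint from your policy's metadata:

```python
from urllib.parse import urlencode

# Placeholder tenant, policy, and token -- for illustration only.
tenant = "contoso"
policy = "B2C_1_signupsignin"
logout_url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}"
    "/oauth2/v2.0/logout?"
    + urlencode({
        "id_token_hint": "<id-token-from-sign-in>",
        "post_logout_redirect_uri": "https://jwt.ms",
    })
)
```

The redirect URI is percent-encoded by `urlencode`, and it must match a Logout URL registered on the application, which is what the next set of steps configures.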
To require an ID Token in logout requests, add a **UserJourneyBehaviors** elemen
To configure your application Logout URL: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select your application. 1. Select **Authentication**.
active-directory-b2c Tenant Management Check Tenant Creation Permission https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-check-tenant-creation-permission.md
As a *Global Administrator* in an Azure AD B2C tenant, you can restrict non-admi
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
-
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Microsoft Entra ID**.
Before you create an Azure AD B2C tenant, make sure that you've the permission t
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
-
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Microsoft Entra ID**.
active-directory-b2c Tenant Management Manage Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-manage-administrator.md
In this article, you learn how to:
To create a new administrative account, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. Under **Manage**, select **Users**. 1. Select **New user**.
You can also invite a new guest user to manage your tenant. The guest account is
To invite a user, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. Under **Manage**, select **Users**. 1. Select **New guest account**.
An invitation email is sent to the user. The user needs to accept the invitation
If the guest didn't receive the invitation email, or the invitation expired, you can resend the invite. As an alternative to the invitation email, you can give a guest a direct link to accept the invitation. To resend the invitation and get the direct link: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. Under **Manage**, select **Users**. 1. Search for and select the user you want to resend the invite to.
If the guest didn't receive the invitation email, or the invitation expired, you
You can assign a role when you [create a user](#add-an-administrator-work-account) or [invite a guest user](#invite-an-administrator-guest-account). You can add a role, change the role, or remove a role for a user: 1. Sign in to the [Azure portal](https://portal.azure.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. Under **Manage**, select **Users**. 1. Select the user you want to change the roles for. Then select **Assigned roles**.
If you need to remove a role assignment from a user, follow these steps:
As part of an auditing process, you typically review which users are assigned to specific roles in the Azure AD B2C directory. Use the following steps to audit which users are currently assigned privileged roles. 1. Sign in to the [Azure portal](https://portal.azure.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**. 1. Under **Manage**, select **Roles and administrators**. 1. Select a role, such as **Global administrator**. The **Role | Assignments** page lists the users with that role.
active-directory-b2c Tenant Management Read Tenant Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-read-tenant-name.md
In this article, you learn how to:
To get your Azure AD B2C tenant name, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. In the **Overview**, copy the **Domain name**.
To get your Azure AD B2C tenant name, follow these steps:
To get your Azure AD B2C tenant ID, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Microsoft Entra ID**. 1. In the **Overview**, copy the **Tenant ID**.
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
The first 10 lockout periods are one minute long. The next 10 lockout periods ar
To manage smart lockout settings, including the lockout threshold: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Under **Security**, select **Authentication methods (Preview)**, then select **Password protection**. 1. Under **Custom smart lockout**, enter your desired smart lockout settings:
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Azure AD B2C allows you to activate Go-Local add-on on an existing tenant as lon
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
-
- 1. In the Azure portal toolbar, select the **Directories + subscriptions** icon.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**.
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
A user flow lets you determine how users interact with your application when the
The sign-up and sign-in user flow handles both sign-up and sign-in experiences with a single configuration. Users of your application are led down the right path depending on the context. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows**, and then select **New user flow**.
If you want to enable users to edit their profile in your application, you use a
## Add signing and encryption keys for Identity Experience Framework applications 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. On the overview page, under **Policies**, select **Identity Experience Framework**.
Use the steps outlined in [Create a Facebook application](identity-provider-face
Add your Facebook application's [App Secret](identity-provider-facebook.md) as a policy key. You can use the App Secret of the application you created as part of this article's prerequisites. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. On the overview page, under **Policies**, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**.
active-directory-b2c Tutorial Delete Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-delete-tenant.md
When you've finished the Azure Active Directory B2C (Azure AD B2C) tutorials, yo
## Identify cleanup tasks 1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select the **Microsoft Entra ID** service. 1. In the left menu, under **Manage**, select **Properties**. 1. Under **Access management for Azure resources**, select **Yes**, and then select **Save**.
When you've finished the Azure Active Directory B2C (Azure AD B2C) tutorials, yo
If you have the confirmation page open from the previous section, you can use the links in the **Required action** column to open the Azure portal pages where you can remove these resources. Or, you can remove tenant resources from within the Azure AD B2C service using the following steps. 1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, select the **Azure AD B2C** service, or search for and select **Azure AD B2C**. 1. Delete all users *except* the admin account you're currently signed in as: 1. Under **Manage**, select **Users**.
If you have the confirmation page open from the previous section, you can use the
Once you delete all the tenant resources, you can now delete the tenant itself: 1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch** button next to it.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select the **Microsoft Entra ID** service. 1. If you haven't already granted yourself access management permissions, do the following:
active-directory-b2c Tutorial Register Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-applications.md
To register a web application in your Azure AD B2C tenant, you can use our new u
#### [App registrations](#tab/app-reg-ga/) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, *webapp1*.
To register a web application in your Azure AD B2C tenant, you can use our new u
#### [Applications (Legacy)](#tab/applications-legacy/) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **Applications (Legacy)**, and then select **Add**. 1. Enter a name for the application. For example, *webapp1*.
active-directory-b2c Tutorial Register Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-spa.md
This authentication flow doesn't include application scenarios that use cross-pl
## Register the SPA application 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, *spaapp1*.
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md
Azure AD B2C allows you to extend the set of attributes stored on each user acco
## Create a custom attribute 1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the Directory name list, and then select **Switch**
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select **User attributes**, and then select **Add**. 1. Provide a **Name** for the custom attribute (for example, "ShoeSize")
Extension attributes can only be registered on an application object, even thoug
### Get extensions app's application ID 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant;
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **All applications**. 1. Select the `b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.` application.
Extension attributes can only be registered on an application object, even thoug
### Get extensions app's application properties 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **All applications**. 1. Select the **b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.** application.
To enable custom attributes in your policy, provide **Application ID** and Appli
## Upload your custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure B2C AD tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the TrustFrameworkExtensions.xml policy files that you changed.
active-directory-b2c Userinfo Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userinfo-endpoint.md
The completed relying party element will be as follows:
### 4. Upload the files 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity Experience Framework**. 1. On the **Custom policies** page, select **Upload custom policy**.
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
Previously updated : 09/15/2023 Last updated : 11/06/2023 keywords:
When annotations are enabled as shown in the code snippet below, the following i
Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:
-# [Python](#tab/python)
+# [OpenAI Python 0.28.1](#tab/python)
```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
# os.getenv() for the endpoint and key assumes that you are using environment variables.
import os
print(response)
The following code snippet shows how to retrieve annotations when content was filtered: ```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
# os.getenv() for the endpoint and key assumes that you are using environment variables.
import os
except openai.error.InvalidRequestError as e:
```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# os.getenv() for the endpoint and key assumes that you are using environment variables.
+
+import os
+from openai import AzureOpenAI
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2023-10-01-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+response = client.completions.create(
+ model="gpt-35-turbo-instruct", # model = "deployment_name".
+ prompt="{Example prompt where a severity level of low is detected}"
+ # Content that is detected at severity level medium or high is filtered,
+ # while content detected at severity level low isn't filtered by the content filters.
+)
+
+print(response.model_dump_json(indent=2))
+```
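When the prompt itself is filtered, the service returns an HTTP 400 error rather than a completion; in the 1.x library this surfaces as an `openai.BadRequestError`. A minimal sketch of reading the filter details out of such an error body follows (the sample payload is illustrative of the error shape, not a real API response):

```python
import json

# Illustrative sample of the error body Azure OpenAI returns for a filtered prompt.
sample_error_body = json.loads("""
{
  "error": {
    "code": "content_filter",
    "message": "The response was filtered due to the prompt triggering content policy.",
    "innererror": {
      "code": "ResponsibleAIPolicyViolation",
      "content_filter_result": {
        "hate": {"filtered": true, "severity": "high"},
        "self_harm": {"filtered": false, "severity": "safe"},
        "sexual": {"filtered": false, "severity": "safe"},
        "violence": {"filtered": false, "severity": "safe"}
      }
    }
  }
}
""")

filter_result = sample_error_body["error"]["innererror"]["content_filter_result"]
# Collect the categories that actually triggered filtering.
filtered_categories = [name for name, detail in filter_result.items() if detail["filtered"]]
print(filtered_categories)  # ['hate']
```

In a real call you would wrap the request in `try`/`except openai.BadRequestError` and inspect the exception's body the same way.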
+ # [JavaScript](#tab/javascrit) [Azure OpenAI JavaScript SDK source code & samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai)
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
Previously updated : 07/20/2023 Last updated : 11/06/2023
To use function calling with the Chat Completions API, you need to include two n
When functions are provided, by default the `function_call` will be set to `"auto"` and the model will decide whether or not a function should be called. Alternatively, you can set the `function_call` parameter to `{"name": "<insert-function-name>"}` to force the API to call a specific function or you can set the parameter to `"none"` to prevent the model from calling any functions.
+# [OpenAI Python 0.28.1](#tab/python)
+ ```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
+import os
+import openai
functions= [
] response = openai.ChatCompletion.create(
- engine="gpt-35-turbo-0613",
+ engine="gpt-35-turbo-0613", # engine = "deployment_name"
messages=messages, functions=functions, function_call="auto",
The response from the API includes a `function_call` property if the model deter
In some cases, the model may generate both `content` and a `function_call`. For example, for the prompt above the content could say something like "Sure, I can help you find some hotels in San Diego that match your criteria" along with the function_call.
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2023-10-01-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+messages= [
+ {"role": "user", "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."}
+]
+
+functions= [
+ {
+ "name": "search_hotels",
+ "description": "Retrieves hotels from the search index based on the parameters provided",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "The location of the hotel (i.e. Seattle, WA)"
+ },
+ "max_price": {
+ "type": "number",
+ "description": "The maximum price for the hotel"
+ },
+ "features": {
+ "type": "string",
+ "description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)"
+ }
+ },
+ "required": ["location"]
+ }
+ }
+]
+
+response = client.chat.completions.create(
+ model="gpt-35-turbo-0613", # model = "deployment_name"
+ messages= messages,
+ functions = functions,
+ function_call="auto",
+)
+
+print(response.choices[0].message.model_dump_json(indent=2))
+```
+
+The response from the API includes a `function_call` property if the model determines that a function should be called. The `function_call` property includes the name of the function to call and the arguments to pass to the function. The arguments are a JSON string that you can parse and use to call your function.
+
+```json
+{
+ "content": null,
+ "role": "assistant",
+ "function_call": {
+ "arguments": "{\n \"location\": \"San Diego\",\n \"max_price\": 300,\n \"features\": \"beachfront, free breakfast\"\n}",
+ "name": "search_hotels"
+ }
+}
+```
+
+In some cases, the model may generate both `content` and a `function_call`. For example, for the prompt above the content could say something like "Sure, I can help you find some hotels in San Diego that match your criteria" along with the function_call.
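The handling described above can be sketched end to end: the `arguments` field is a JSON string, so parse it with `json.loads` before dispatching to your own function. The `search_hotels` stub below is a hypothetical stand-in for a real search backend:

```python
import json

def search_hotels(location, max_price=None, features=None):
    # Hypothetical stub; a real implementation would query your search index.
    return f"hotels in {location} under ${max_price} featuring {features}"

# A function_call payload shaped like the example response above.
function_call = {
    "name": "search_hotels",
    "arguments": "{\"location\": \"San Diego\", \"max_price\": 300, \"features\": \"beachfront, free breakfast\"}",
}

# Map function names the model may request to actual Python callables.
available_functions = {"search_hotels": search_hotels}

arguments = json.loads(function_call["arguments"])
result = available_functions[function_call["name"]](**arguments)
print(result)  # hotels in San Diego under $300 featuring beachfront, free breakfast
```

Looking the name up in an explicit dictionary, rather than calling `eval` on model output, keeps the model from invoking arbitrary code.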
+++ ## Working with function calling The following section goes into additional detail on how to effectively use functions with the Chat Completions API.
If you want to describe a function that doesn't accept any parameters, use `{"ty
### Managing the flow with functions ```python+ response = openai.ChatCompletion.create( deployment_id="gpt-35-turbo-0613", messages=messages,
ai-services Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md
+
+ Title: How to migrate to OpenAI Python v1.x
+
+description: Learn about migrating to the latest release of the OpenAI Python library with Azure OpenAI
+Last updated : 11/06/2023
+# Migrating to the OpenAI Python API library 1.x
+
+OpenAI has just released a new version of the [OpenAI Python API library](https://github.com/openai/openai-python/). This guide is supplemental to [OpenAI's migration guide](https://github.com/openai/openai-python/discussions/631) and will help bring you up to speed on the changes specific to Azure OpenAI.
+
+## Updates
+
+- This is a completely new version of the OpenAI Python API library.
+- Starting on November 6, 2023, `pip install openai` and `pip install openai --upgrade` will install `version 1.x` of the OpenAI Python library.
+- Upgrading from `version 0.28.1` to `version 1.x` is a breaking change; you'll need to test and update your code.
+- Auto-retry with backoff if there's an error
+- Proper types (for mypy/pyright/editors)
+- You can now instantiate a client, instead of using a global default.
+- Switch to explicit client instantiation
+- [Name changes](#name-changes)
+
+## Known issues
+
+- The latest release of the [OpenAI Python library](https://pypi.org/project/openai/) doesn't currently support DALL-E when used with Azure OpenAI. DALL-E with Azure OpenAI is still supported with `0.28.1`. For those who can't wait for native support for DALL-E and Azure OpenAI, we're providing [two code examples](#dall-e-fix) that can be used as a workaround.
+- `embeddings_utils.py` which was used to provide functionality like cosine similarity for semantic text search is [no longer part of the OpenAI Python API library](https://github.com/openai/openai-python/issues/676).
+- You should also check the active [GitHub Issues](https://github.com/openai/openai-python/issues/703) for the OpenAI Python library.
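If you only relied on `embeddings_utils.py` for cosine similarity, a small stdlib replacement is straightforward. This is a sketch of equivalent behavior, not the removed helper itself:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

For large batches of embeddings, a vectorized NumPy implementation will be much faster, but the math is the same.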
+
+## Test before you migrate
+
+> [!IMPORTANT]
+> Automatic migration of your code using `openai migrate` is not supported with Azure OpenAI.
+
+As this is a new version of the library with breaking changes, you should test your code extensively against the new release before migrating any production applications to rely on version 1.x. You should also review your code and internal processes to make sure that you're following best practices and pinning your production code to only versions that you have fully tested.
+
+To make the migration process easier, we're updating existing code examples in our docs for Python to a tabbed experience:
+
+# [OpenAI Python 0.28.1](#tab/python)
+
+```console
+pip install openai==0.28.1
+```
+
+# [OpenAI Python 1.x](#tab/python-new)
+
+```console
+pip install openai --upgrade
+```
+++
+This provides context for what has changed and allows you to test the new library in parallel while continuing to support version `0.28.1`. If you upgrade to `1.x` and need to temporarily revert to the previous version, you can always `pip uninstall openai` and then reinstall pinned to `0.28.1` with `pip install openai==0.28.1`.
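While you maintain parallel support for both versions, a quick stdlib check can confirm which release is active in a given environment. This sketch uses `importlib.metadata` and works whether or not `openai` is installed:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_openai_version():
    # Returns the installed openai package version string, or None if absent.
    try:
        return version("openai")
    except PackageNotFoundError:
        return None

v = installed_openai_version()
if v is None:
    print("openai is not installed")
else:
    print(f"openai {v} ({'1.x API' if v.split('.')[0] == '1' else '0.x API'})")
```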
+
+## Chat completions
+
+# [OpenAI Python 0.28.1](#tab/python)
+
+You need to set the `engine` variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name will result in an error unless you chose a deployment name that is identical to the underlying model name.
+
+```python
+import os
+import openai
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_version = "2023-05-15"
+
+response = openai.ChatCompletion.create(
+ engine="gpt-35-turbo", # engine = "deployment_name".
+ messages=[
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+ {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+ {"role": "user", "content": "Do other Azure AI services support this too?"}
+ ]
+)
+
+print(response)
+print(response['choices'][0]['message']['content'])
+```
+
+# [OpenAI Python 1.x](#tab/python-new)
+
+You need to set the `model` variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name results in an error unless you chose a deployment name that is identical to the underlying model name.
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2023-05-15"
+)
+
+response = client.chat.completions.create(
+ model="gpt-35-turbo", # model = "deployment_name".
+ messages=[
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+ {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+ {"role": "user", "content": "Do other Azure AI services support this too?"}
+ ]
+)
+
+print(response.choices[0].message.content)
+```
+
+Additional examples can be found in our [in-depth Chat Completion article](chatgpt.md).
+++
+## Completions
+
+# [OpenAI Python 0.28.1](#tab/python)
+
+```python
+import os
+import openai
+
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") # your endpoint should look like: https://YOUR_RESOURCE_NAME.openai.azure.com/
+openai.api_type = 'azure'
+openai.api_version = '2023-05-15' # this might change in the future
+
+deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' #This will correspond to the custom name you chose for your deployment when you deployed a model.
+
+# Send a completion call to generate an answer
+print('Sending a test completion job')
+start_phrase = 'Write a tagline for an ice cream shop. '
+response = openai.Completion.create(engine=deployment_name, prompt=start_phrase, max_tokens=10)
+text = response['choices'][0]['text'].replace('\n', '').replace(' .', '.').strip()
+print(start_phrase+text)
+```
+
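The inline string cleanup in the 0.28.1 sample above can be pulled into a small helper, which is easier to test in isolation (the function name is illustrative, not part of either library):

```python
def clean_completion_text(text):
    """Remove newlines, collapse space-before-period artifacts, and trim whitespace."""
    return text.replace('\n', '').replace(' .', '.').strip()

print(clean_completion_text(' A scoop above the rest .\n'))  # → A scoop above the rest.
```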
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2023-10-01-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' #This will correspond to the custom name you chose for your deployment when you deployed a model.
+
+# Send a completion call to generate an answer
+print('Sending a test completion job')
+start_phrase = 'Write a tagline for an ice cream shop. '
+response = client.completions.create(model=deployment_name, prompt=start_phrase, max_tokens=10)
+print(response.choices[0].text)
+```
+++
+## Embeddings
+
+# [OpenAI Python 0.28.1](#tab/python)
+
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_key = "YOUR_API_KEY"
+openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+openai.api_version = "2023-05-15"
+
+response = openai.Embedding.create(
+ input="Your text string goes here",
+ engine="YOUR_DEPLOYMENT_NAME"
+)
+embeddings = response['data'][0]['embedding']
+print(embeddings)
+```
+
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key = os.getenv("AZURE_OPENAI_KEY"),
+ api_version = "2023-05-15",
  azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+response = client.embeddings.create(
+ input = "Your text string goes here",
+ model= "text-embedding-ada-002"
+)
+
+print(response.model_dump_json(indent=2))
+```
+
+Additional examples including how to handle semantic text search without `embeddings_utils.py` can be found in our [embeddings tutorial](../tutorials/embeddings.md).
+++
+## Async
+
+OpenAI doesn't support calling asynchronous methods on the module-level client; instead, instantiate an async client.
+
+```python
+import os
+import asyncio
+from openai import AsyncAzureOpenAI
+
+async def main():
+    client = AsyncAzureOpenAI(
+        api_key = os.getenv("AZURE_OPENAI_KEY"),
+        api_version = "2023-10-01-preview",
+        azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+    )
+    response = await client.chat.completions.create(
+        model="gpt-35-turbo",
+        messages=[{"role": "user", "content": "Hello world"}]
+    )
+    print(response.model_dump_json(indent=2))
+
+asyncio.run(main())
+```
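The main payoff of the async client is issuing several requests concurrently. The pattern can be sketched with a stub coroutine standing in for the real awaited call; the stub and its echo behavior are illustrative only, but with an `AsyncAzureOpenAI` client the structure is identical:

```python
import asyncio

# Stand-in for an awaited client.chat.completions.create call;
# the short sleep simulates network latency.
async def fake_completion(prompt, delay=0.01):
    await asyncio.sleep(delay)
    return f"echo: {prompt}"

async def main():
    # asyncio.gather runs the requests concurrently rather than sequentially.
    return await asyncio.gather(
        fake_completion("Hello world"),
        fake_completion("How are you?"),
    )

print(asyncio.run(main()))  # → ['echo: Hello world', 'echo: How are you?']
```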
+
+## Authentication
+
+```python
+from azure.identity import DefaultAzureCredential, get_bearer_token_provider
+from openai import AzureOpenAI
+
+token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
+
+api_version = "2023-10-01-preview"
+endpoint = "https://my-resource.openai.azure.com"
+
+client = AzureOpenAI(
+ api_version=api_version,
+ azure_endpoint=endpoint,
+ azure_ad_token_provider=token_provider,
+)
+
+completion = client.chat.completions.create(
+ model="deployment-name", # gpt-35-instant
+ messages=[
+ {
+ "role": "user",
+ "content": "How do I output all files in a directory using Python?",
+ },
+ ],
+)
+print(completion.model_dump_json(indent=2))
+```
+
+## DALL-E fix
+
+# [DALLE-Fix](#tab/dalle-fix)
+
+```python
+import time
+import json
+import httpx
+import openai
+
+class CustomHTTPTransport(httpx.HTTPTransport):
+ def handle_request(
+ self,
+ request: httpx.Request,
+ ) -> httpx.Response:
+ if "images/generations" in request.url.path and request.url.params[
+ "api-version"
+ ] in [
+ "2023-06-01-preview",
+ "2023-07-01-preview",
+ "2023-08-01-preview",
+ "2023-09-01-preview",
+ "2023-10-01-preview",
+ ]:
+ request.url = request.url.copy_with(path="/openai/images/generations:submit")
+ response = super().handle_request(request)
+ operation_location_url = response.headers["operation-location"]
+ request.url = httpx.URL(operation_location_url)
+ request.method = "GET"
+ response = super().handle_request(request)
+ response.read()
+
+ timeout_secs: int = 120
+ start_time = time.time()
+ while response.json()["status"] not in ["succeeded", "failed"]:
+ if time.time() - start_time > timeout_secs:
+ timeout = {"error": {"code": "Timeout", "message": "Operation polling timed out."}}
+ return httpx.Response(
+ status_code=400,
+ headers=response.headers,
+ content=json.dumps(timeout).encode("utf-8"),
+ request=request,
+ )
+
+            time.sleep(int(response.headers.get("retry-after") or 10))
+ response = super().handle_request(request)
+ response.read()
+
+ if response.json()["status"] == "failed":
+ error_data = response.json()
+ return httpx.Response(
+ status_code=400,
+ headers=response.headers,
+ content=json.dumps(error_data).encode("utf-8"),
+ request=request,
+ )
+
+ result = response.json()["result"]
+ return httpx.Response(
+ status_code=200,
+ headers=response.headers,
+ content=json.dumps(result).encode("utf-8"),
+ request=request,
+ )
+ return super().handle_request(request)
+
+client = openai.AzureOpenAI(
+ azure_endpoint="<azure_endpoint>",
+ api_key="<api_key>",
+ api_version="<api_version>",
+ http_client=httpx.Client(
+ transport=CustomHTTPTransport(),
+ ),
+)
+image = client.images.generate(prompt="a cute baby seal")
+
+print(image.data[0].url)
+```
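One fragile spot in polling loops like this is parsing the `retry-after` header, which can be missing or malformed. Isolating that logic in a small helper makes the fallback explicit (the helper name and default are illustrative, not part of the documented workaround):

```python
def retry_after_seconds(headers, default=10):
    """Parse a retry-after header value, falling back to a default when missing or malformed."""
    try:
        return int(headers.get("retry-after"))
    except (TypeError, ValueError):
        return default

print(retry_after_seconds({"retry-after": "5"}))  # → 5
print(retry_after_seconds({}))                    # → 10
```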
+
+# [DALLE-Fix Async](#tab/dalle-fix-async)
+
+```python
+import time
+import asyncio
+import json
+import httpx
+import openai
+
+class AsyncCustomHTTPTransport(httpx.AsyncHTTPTransport):
+ async def handle_async_request(
+ self,
+ request: httpx.Request,
+ ) -> httpx.Response:
+ if "images/generations" in request.url.path and request.url.params[
+ "api-version"
+ ] in [
+ "2023-06-01-preview",
+ "2023-07-01-preview",
+ "2023-08-01-preview",
+ "2023-09-01-preview",
+ "2023-10-01-preview",
+ ]:
+ request.url = request.url.copy_with(path="/openai/images/generations:submit")
+ response = await super().handle_async_request(request)
+ operation_location_url = response.headers["operation-location"]
+ request.url = httpx.URL(operation_location_url)
+ request.method = "GET"
+ response = await super().handle_async_request(request)
+ await response.aread()
+
+ timeout_secs: int = 120
+ start_time = time.time()
+ while response.json()["status"] not in ["succeeded", "failed"]:
+ if time.time() - start_time > timeout_secs:
+ timeout = {"error": {"code": "Timeout", "message": "Operation polling timed out."}}
+ return httpx.Response(
+ status_code=400,
+ headers=response.headers,
+ content=json.dumps(timeout).encode("utf-8"),
+ request=request,
+ )
+
+            await asyncio.sleep(int(response.headers.get("retry-after") or 10))
+ response = await super().handle_async_request(request)
+ await response.aread()
+
+ if response.json()["status"] == "failed":
+ error_data = response.json()
+ return httpx.Response(
+ status_code=400,
+ headers=response.headers,
+ content=json.dumps(error_data).encode("utf-8"),
+ request=request,
+ )
+
+ result = response.json()["result"]
+ return httpx.Response(
+ status_code=200,
+ headers=response.headers,
+ content=json.dumps(result).encode("utf-8"),
+ request=request,
+ )
+ return await super().handle_async_request(request)
+
+async def dall_e():
+ client = openai.AsyncAzureOpenAI(
+ azure_endpoint="<azure_endpoint>",
+ api_key="<api_key>",
+ api_version="<api_version>",
+ http_client=httpx.AsyncClient(
+ transport=AsyncCustomHTTPTransport(),
+ ),
+ )
+ image = await client.images.generate(prompt="a cute baby seal")
+
+ print(image.data[0].url)
+
+asyncio.run(dall_e())
+```
+++
+## Name changes
+
+> [!NOTE]
+> All a* methods have been removed; the async client must be used instead.
+
+| OpenAI Python 0.28.1 | OpenAI Python 1.x |
+|---|---|
+| `openai.api_base` | `openai.base_url` |
+| `openai.proxy` | `openai.proxies` |
+| `openai.InvalidRequestError` | `openai.BadRequestError` |
+| `openai.Audio.transcribe()` | `client.audio.transcriptions.create()` |
+| `openai.Audio.translate()` | `client.audio.translations.create()` |
+| `openai.ChatCompletion.create()` | `client.chat.completions.create()` |
+| `openai.Completion.create()` | `client.completions.create()` |
+| `openai.Edit.create()` | `client.edits.create()` |
+| `openai.Embedding.create()` | `client.embeddings.create()` |
+| `openai.File.create()` | `client.files.create()` |
+| `openai.File.list()` | `client.files.list()` |
+| `openai.File.retrieve()` | `client.files.retrieve()` |
+| `openai.File.download()` | `client.files.retrieve_content()` |
+| `openai.FineTune.cancel()` | `client.fine_tunes.cancel()` |
+| `openai.FineTune.list()` | `client.fine_tunes.list()` |
+| `openai.FineTune.list_events()` | `client.fine_tunes.list_events()` |
+| `openai.FineTune.stream_events()` | `client.fine_tunes.list_events(stream=True)` |
+| `openai.FineTune.retrieve()` | `client.fine_tunes.retrieve()` |
+| `openai.FineTune.delete()` | `client.fine_tunes.delete()` |
+| `openai.FineTune.create()` | `client.fine_tunes.create()` |
+| `openai.FineTuningJob.create()` | `client.fine_tuning.jobs.create()` |
+| `openai.FineTuningJob.cancel()` | `client.fine_tuning.jobs.cancel()` |
+| `openai.FineTuningJob.delete()` | `client.models.delete()` |
+| `openai.FineTuningJob.retrieve()` | `client.fine_tuning.jobs.retrieve()` |
+| `openai.FineTuningJob.list()` | `client.fine_tuning.jobs.list()` |
+| `openai.FineTuningJob.list_events()` | `client.fine_tuning.jobs.list_events()` |
+| `openai.Image.create()` | `client.images.generate()` |
+| `openai.Image.create_variation()` | `client.images.create_variation()` |
+| `openai.Image.create_edit()` | `client.images.edit()` |
+| `openai.Model.list()` | `client.models.list()` |
+| `openai.Model.delete()` | `client.models.delete()` |
+| `openai.Model.retrieve()` | `client.models.retrieve()` |
+| `openai.Moderation.create()` | `client.moderations.create()` |
+| `openai.api_resources` | `openai.resources` |
+
+### Removed
+
+- `openai.api_key_path`
+- `openai.app_info`
+- `openai.debug`
+- `openai.log`
+- `openai.OpenAIError`
+- `openai.Audio.transcribe_raw()`
+- `openai.Audio.translate_raw()`
+- `openai.ErrorObject`
+- `openai.Customer`
+- `openai.api_version`
+- `openai.verify_ssl_certs`
+- `openai.api_type`
+- `openai.enable_telemetry`
+- `openai.ca_bundle_path`
+- `openai.requestssession` (OpenAI now uses `httpx`)
+- `openai.aiosession` (OpenAI now uses `httpx`)
+- `openai.Deployment` (Previously used for Azure OpenAI)
+- `openai.Engine`
+- `openai.File.find_matching_files()`
ai-services Working With Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/working-with-models.md
keywords:
Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. [Model availability varies by region](../concepts/models.md).
-You can get a list of models that are available for both inference and fine-tuning by your Azure OpenAI resource by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list).
+You can get a list of models that are available for both inference and fine-tuning by your Azure OpenAI resource by using the [Models List API](/rest/api/azureopenai/models/list).
## Model updates
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI servic
## Next steps
-Learn about [ Models, and fine-tuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/files).
+Learn about [models and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning?view=rest-azureopenai-2023-10-01-preview).
Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
Previously updated : 09/12/2023 Last updated : 11/06/2023 recommendations: false
In this tutorial, you learn how to:
If you haven't already, you need to install the following libraries:
+# [OpenAI Python 0.28.1](#tab/python)
+
+```cmd
+pip install "openai==0.28.1" num2words matplotlib plotly scipy scikit-learn pandas tiktoken
+```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```console
+pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken
+```
+++

<!--Alternatively, you can use our [requirements.txt file](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/requirements.txt).-->

### Download the BillSum dataset
Run the following code in your preferred Python IDE:
<!--If you wish to view the Jupyter notebook that corresponds to this tutorial you can download the tutorial from our [samples repo](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/embedding_billsum.ipynb).-->
-## Import libraries and list models
+## Import libraries
+
+# [OpenAI Python 0.28.1](#tab/python)
```python import openai
print(r.text)
The output of this command will vary based on the number and type of models you've deployed. In this case, we need to confirm that we have an entry for **text-embedding-ada-002**. If you find that you're missing this model, you'll need to [deploy the model](../how-to/create-resource.md#deploy-a-model) to your resource before proceeding.
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+import re
+import requests
+import sys
+from num2words import num2words
+import pandas as pd
+import numpy as np
+import tiktoken
+from openai import AzureOpenAI
+```
+++ Now we need to read our csv file and create a pandas DataFrame. After the initial DataFrame is created, we can view the contents of the table by running `df`. ```python
len(decode)
Now that we understand more about how tokenization works, we can move on to embedding. It's important to note that we haven't actually tokenized the documents yet. The `n_tokens` column is simply a way of making sure none of the data we pass to the model for tokenization and embedding exceeds the input token limit of 8,192. When we pass the documents to the embeddings model, it will break the documents into tokens similar (though not necessarily identical) to the examples above and then convert the tokens to a series of floating point numbers that will be accessible via vector search. These embeddings can be stored locally or in an [Azure Database to support Vector Search](../../../cosmos-db/mongodb/vcore/vector-search.md). As a result, each bill will have its own corresponding embedding vector in the new `ada_v2` column on the right side of the DataFrame.
+# [OpenAI Python 0.28.1](#tab/python)
+
+```python
+df_bills['ada_v2'] = df_bills["text"].apply(lambda x : get_embedding(x, engine = 'text-embedding-ada-002')) # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+client = AzureOpenAI(
+ api_key = os.getenv("AZURE_OPENAI_API_KEY"),
+ api_version = "2023-05-15",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+def generate_embeddings(text, model="text-embedding-ada-002"): # model = "deployment_name"
+ return client.embeddings.create(input = [text], model=model).data[0].embedding
+
+df_bills['ada_v2'] = df_bills["text"].apply(lambda x : generate_embeddings (x, model = 'text-embedding-ada-002')) # model should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+```
+++

```python
df_bills
```
As we run the search code block below, we'll embed the search query *"Can I get information on cable company tax revenue?"* with the same **text-embedding-ada-002 (Version 2)** model. Next we'll find the closest bill embedding to the newly embedded text from our query ranked by [cosine similarity](../concepts/understand-embeddings.md).
+# [OpenAI Python 0.28.1](#tab/python)
+ ```python # search through the reviews for a specific product def search_docs(df, user_query, top_n=3, to_print=True):
def search_docs(df, user_query, top_n=3, to_print=True):
res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4) ```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+def cosine_similarity(a, b):
+ return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
+
+def get_embedding(text, model="text-embedding-ada-002"): # model = "deployment_name"
+ return client.embeddings.create(input = [text], model=model).data[0].embedding
+
+def search_docs(df, user_query, top_n=4, to_print=True):
+ embedding = get_embedding(
+ user_query,
+ model="text-embedding-ada-002" # model should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+ )
+ df["similarities"] = df.ada_v2.apply(lambda x: cosine_similarity(x, embedding))
+
+ res = (
+ df.sort_values("similarities", ascending=False)
+ .head(top_n)
+ )
+ if to_print:
+ display(res)
+ return res
+
+res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4)
+```
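Because `cosine_similarity` is now hand-rolled rather than imported from `embeddings_utils`, a quick offline sanity check with known vectors is worthwhile before running it over real embeddings. A pure-Python equivalent of the NumPy version makes the math explicit:

```python
import math

def cosine_similarity(a, b):
    # Pure-Python equivalent of the NumPy one-liner above, for a quick check.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 0], [1, 0]))  # → 1.0 (identical vectors)
print(cosine_similarity([1, 0], [0, 1]))  # → 0.0 (orthogonal vectors)
```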
+++

**Output**:

:::image type="content" source="../media/tutorials/query-result.png" alt-text="Screenshot of the formatted results of res once the search query has been run." lightbox="../media/tutorials/query-result.png":::
ai-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure AI services description: Lists Azure Policy Regulatory Compliance controls available for Azure AI services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Previously updated : 10/3/2023 Last updated : 11/7/2023 zone_pivot_groups: speech-cli-rest
Here are some property options that you can use to configure a transcription whe
|`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
|`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and more or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
+|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers, setting the `diarizationEnabled` property to `true` is enough. See an example of the property usage in the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and greater than or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version, such as version 3.0, it's ignored and only two speakers are identified.|
|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1 and later).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
|`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
|`displayFormWordLevelTimestampsEnabled`|Specifies whether to include word-level timestamps on the display form of the transcription results. The results are returned in the displayWords property of the transcription file. The default value is `false`.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the
|--|--|--|
| [Speech to text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
| Max audio input file size | N/A | 1 GB |
-| Max input blob size (for example, can contain more than one file in a zip archive). Note the file size limit from the preceding row. | N/A | 2.5 GB |
-| Max blob container size | N/A | 5 GB |
| Max number of blobs per container | N/A | 10000 |
| Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |
| Max audio length for transcriptions with diarization enabled. | N/A | 240 minutes per file |
ai-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/release-notes.md
This page presents the latest feature, improvement, bug fix, and known issue rel
#### June language model updates
-&emsp; Current supported language pairs are listed in the following table. For higher quality, we encourage you to retrain your models accordingly. For more information, *see* [Language support](../language-support.md#custom-translator-language-pairs).
+&emsp; Current supported language pairs are listed in the following table. For higher quality, we encourage you to retrain your models accordingly. For more information, *see* [Language support](../language-support.md).
|Source Language|Target Language|
|:-|:-|
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md
Previously updated : 07/18/2023 Last updated : 11/06/2023

# Translator language support
> [!NOTE]
> Language code `pt` will default to `pt-br`, Portuguese (Brazil).
-|Language | Language code | Cloud – Text Translation and Document Translation | Containers – Text Translation|Custom Translator|Auto Language Detection|Dictionary|
-|:-|:-:|:-:|:-:|:-:|:-:|:-:|
-| Afrikaans | `af` |✔|✔|✔|✔|✔|
-| Albanian | `sq` |✔|✔||✔||
-| Amharic | `am` |✔|✔||||
-| Arabic | `ar` |✔|✔|✔|✔|✔|
-| Armenian | `hy` |✔|✔||✔||
-| Assamese | `as` |✔|✔|✔|||
-| Azerbaijani (Latin) | `az` |✔|✔||||
-| Bangla | `bn` |✔|✔|✔||✔|
-| Bashkir | `ba` |✔|✔||||
-| Basque | `eu` |✔|✔||||
-| Bosnian (Latin) | `bs` |✔|✔|✔||✔|
-| Bulgarian | `bg` |✔|✔|✔|✔|✔|
-| Cantonese (Traditional) | `yue` |✔|✔||||
-| Catalan | `ca` |✔|✔|✔|✔|✔|
-| Chinese (Literary) | `lzh` |✔|✔||||
-| Chinese Simplified | `zh-Hans` |✔|✔|✔|✔|✔|
-| Chinese Traditional | `zh-Hant` |✔|✔|✔|✔||
-| chiShona|`sn`|✔|✔||||
-| Croatian | `hr` |✔|✔|✔|✔|✔|
-| Czech | `cs` |✔|✔|✔|✔|✔|
-| Danish | `da` |✔|✔|✔|✔|✔|
-| Dari | `prs` |✔|✔||||
-| Divehi | `dv` |✔|✔||✔||
-| Dutch | `nl` |✔|✔|✔|✔|✔|
-| English | `en` |✔|✔|✔|✔|✔|
-| Estonian | `et` |✔|✔|✔|✔||
-| Faroese | `fo` |✔|✔||||
-| Fijian | `fj` |✔|✔|✔|||
-| Filipino | `fil` |✔|✔|✔|||
-| Finnish | `fi` |✔|✔|✔|✔|✔|
-| French | `fr` |✔|✔|✔|✔|✔|
-| French (Canada) | `fr-ca` |✔|✔||||
-| Galician | `gl` |✔|✔||||
-| Georgian | `ka` |✔|✔||✔||
-| German | `de` |✔|✔|✔|✔|✔|
-| Greek | `el` |✔|✔|✔|✔|✔|
-| Gujarati | `gu` |✔|✔|✔|✔||
-| Haitian Creole | `ht` |✔|✔||✔|✔|
-| Hausa|`ha`|✔|✔||||
-| Hebrew | `he` |✔|✔|✔|✔|✔|
-| Hindi | `hi` |✔|✔|✔|✔|✔|
-| Hmong Daw (Latin) | `mww` |✔|✔|||✔|
-| Hungarian | `hu` |✔|✔|✔|✔|✔|
-| Icelandic | `is` |✔|✔|✔|✔|✔|
-| Igbo|`ig`|✔|✔||||
-| Indonesian | `id` |✔|✔|✔|✔|✔|
-| Inuinnaqtun | `ikt` |✔|✔||||
-| Inuktitut | `iu` |✔|✔|✔|✔||
-| Inuktitut (Latin) | `iu-Latn` |✔|✔||||
-| Irish | `ga` |✔|✔|✔|✔||
-| Italian | `it` |✔|✔|✔|✔|✔|
-| Japanese | `ja` |✔|✔|✔|✔|✔|
-| Kannada | `kn` |✔|✔|✔|||
-| Kazakh | `kk` |✔|✔||||
-| Khmer | `km` |✔|✔||✔||
-| Kinyarwanda|`rw`|✔|✔||||
-| Klingon | `tlh-Latn` |✔| ||✔|✔|
-| Klingon (plqaD) | `tlh-Piqd` |✔| ||✔||
-| Konkani|`gom`|✔|✔||||
-| Korean | `ko` |✔|✔|✔|✔|✔|
-| Kurdish (Central) | `ku` |✔|✔||✔||
-| Kurdish (Northern) | `kmr` |✔|✔||||
-| Kyrgyz (Cyrillic) | `ky` |✔|✔||||
-| Lao | `lo` |✔|✔||✔||
-| Latvian | `lv` |✔|✔|✔|✔|✔|
-| Lithuanian | `lt` |✔|✔|✔|✔|✔|
-| Lingala|`ln`|✔|✔||||
-| Lower Sorbian|`dsb`|✔| ||||
-| Luganda|`lug`|✔|✔||||
-| Macedonian | `mk` |✔|✔||✔||
-| Maithili|`mai`|✔|✔||||
-| Malagasy | `mg` |✔|✔|✔|||
-| Malay (Latin) | `ms` |✔|✔|✔|✔|✔|
-| Malayalam | `ml` |✔|✔|✔|||
-| Maltese | `mt` |✔|✔|✔|✔|✔|
-| Maori | `mi` |✔|✔|✔|||
-| Marathi | `mr` |✔|✔|✔|||
-| Mongolian (Cyrillic) | `mn-Cyrl` |✔|✔||||
-| Mongolian (Traditional) | `mn-Mong` |✔|✔||✔||
-| Myanmar | `my` |✔|✔||✔||
-| Nepali | `ne` |✔|✔||||
-| Norwegian | `nb` |✔|✔|✔|✔|✔|
-| Nyanja|`nya`|✔|✔||||
-| Odia | `or` |✔|✔|✔|||
-| Pashto | `ps` |✔|✔||✔||
-| Persian | `fa` |✔|✔|✔|✔|✔|
-| Polish | `pl` |✔|✔|✔|✔|✔|
-| Portuguese (Brazil) | `pt` |✔|✔|✔|✔|✔|
-| Portuguese (Portugal) | `pt-pt` |✔|✔||||
-| Punjabi | `pa` |✔|✔|✔|||
-| Queretaro Otomi | `otq` |✔|✔||||
-| Romanian | `ro` |✔|✔|✔|✔|✔|
-| Rundi|`run`|✔|✔||||
-| Russian | `ru` |✔|✔|✔|✔|✔|
-| Samoan (Latin) | `sm` |✔|✔ |✔|||
-| Serbian (Cyrillic) | `sr-Cyrl` |✔|✔||✔||
-| Serbian (Latin) | `sr-Latn` |✔|✔|✔|✔|✔|
-| Sesotho|`st`|✔|✔||||
-| Sesotho sa Leboa|`nso`|✔|✔||||
-| Setswana|`tn`|✔|✔||||
-| Sindhi|`sd`|✔|✔||||
-| Sinhala|`si`|✔|✔||||
-| Slovak | `sk` |✔|✔|✔|✔|✔|
-| Slovenian | `sl` |✔|✔|✔|✔|✔|
-| Somali (Arabic) | `so` |✔|✔||✔||
-| Spanish | `es` |✔|✔|✔|✔|✔|
-| Swahili (Latin) | `sw` |✔|✔|✔|✔|✔|
-| Swedish | `sv` |✔|✔|✔|✔|✔|
-| Tahitian | `ty` |✔|✔ |✔|✔||
-| Tamil | `ta` |✔|✔|✔||✔|
-| Tatar (Latin) | `tt` |✔|✔||||
-| Telugu | `te` |✔|✔|✔|||
-| Thai | `th` |✔|✔ |✔|✔|✔|
-| Tibetan | `bo` |✔|✔||||
-| Tigrinya | `ti` |✔|✔||||
-| Tongan | `to` |✔|✔|✔|||
-| Turkish | `tr` |✔|✔|✔|✔|✔|
-| Turkmen (Latin) | `tk` |✔|✔||||
-| Ukrainian | `uk` |✔|✔|✔|✔|✔|
-| Upper Sorbian | `hsb` |✔|✔||||
-| Urdu | `ur` |✔|✔|✔|✔|✔|
-| Uyghur (Arabic) | `ug` |✔|✔||||
-| Uzbek (Latin) | `uz` |✔|✔||✔||
-| Vietnamese | `vi` |✔|✔|✔|✔|✔|
-| Welsh | `cy` |✔|✔|✔|✔|✔|
-| Xhosa|`xh`|✔|✔||||
-| Yoruba|`yo`|✔|✔||||
-| Yucatec Maya | `yua` |✔|✔||✔||
-| Zulu | `zu` |✔|✔||||
+|Language|Language code|Cloud – Text Translation and Document Translation|Containers – Text Translation|Custom Translator|Auto Language Detection|Dictionary|
+|:-|:-|:-|:-|:-|:-|:-|
+|Afrikaans|af|✔|✔|✔|✔|✔|
+|Albanian|sq|✔|✔| |✔| |
+|Amharic|am|✔|✔| |✔| |
+|Arabic|ar|✔|✔|✔|✔|✔|
+|Armenian|hy|✔|✔| |✔| |
+|Assamese|as|✔|✔|✔|✔| |
+|Azerbaijani (Latin)|az|✔|✔| |✔| |
+|Bangla|bn|✔|✔|✔|✔|✔|
+|Bashkir|ba|✔|✔| |✔| |
+|Basque|eu|✔|✔| |✔| |
+|Bhojpuri|bho|✔|✔| | | |
+|Bodo|brx|✔|✔| | | |
+|Bosnian (Latin)|bs|✔|✔|✔|✔|✔|
+|Bulgarian|bg|✔|✔|✔|✔|✔|
+|Cantonese (Traditional)|yue|✔|✔| |✔| |
+|Catalan|ca|✔|✔|✔|✔|✔|
+|Chinese (Literary)|lzh|✔|✔| | | |
+|Chinese Simplified|zh-Hans|✔|✔|✔|✔|✔|
+|Chinese Traditional|zh-Hant|✔|✔|✔|✔| |
+|chiShona|sn|✔|✔| | | |
+|Croatian|hr|✔|✔|✔|✔|✔|
+|Czech|cs|✔|✔|✔|✔|✔|
+|Danish|da|✔|✔|✔|✔|✔|
+|Dari|prs|✔|✔| |✔| |
+|Divehi|dv|✔|✔| |✔| |
+|Dogri|doi|✔| | | | |
+|Dutch|nl|✔|✔|✔|✔|✔|
+|English|en|✔|✔|✔|✔|✔|
+|Estonian|et|✔|✔|✔|✔| |
+|Faroese|fo|✔|✔| |✔| |
+|Fijian|fj|✔|✔|✔|✔| |
+|Filipino|fil|✔|✔|✔| | |
+|Finnish|fi|✔|✔|✔|✔|✔|
+|French|fr|✔|✔|✔|✔|✔|
+|French (Canada)|fr-ca|✔|✔| | | |
+|Galician|gl|✔|✔| |✔| |
+|Georgian|ka|✔|✔| |✔| |
+|German|de|✔|✔|✔|✔|✔|
+|Greek|el|✔|✔|✔|✔|✔|
+|Gujarati|gu|✔|✔|✔|✔| |
+|Haitian Creole|ht|✔|✔| |✔|✔|
+|Hausa|ha|✔|✔| |✔| |
+|Hebrew|he|✔|✔|✔|✔|✔|
+|Hindi|hi|✔|✔|✔|✔|✔|
+|Hmong Daw (Latin)|mww|✔|✔| |✔|✔|
+|Hungarian|hu|✔|✔|✔|✔|✔|
+|Icelandic|is|✔|✔|✔|✔|✔|
+|Igbo|ig|✔|✔| |✔| |
+|Indonesian|id|✔|✔|✔|✔|✔|
+|Inuinnaqtun|ikt|✔|✔| | | |
+|Inuktitut|iu|✔|✔|✔|✔| |
+|Inuktitut (Latin)|iu-Latn|✔|✔| |✔| |
+|Irish|ga|✔|✔|✔|✔| |
+|Italian|it|✔|✔|✔|✔|✔|
+|Japanese|ja|✔|✔|✔|✔|✔|
+|Kannada|kn|✔|✔|✔|✔| |
+|Kashmiri|ks|✔|✔| | | |
+|Kazakh|kk|✔|✔| |✔| |
+|Khmer|km|✔|✔| |✔| |
+|Kinyarwanda|rw|✔|✔| |✔| |
+|Klingon|tlh-Latn|✔| | |✔|✔|
+|Klingon (plqaD)|tlh-Piqd|✔| | |✔| |
+|Konkani|gom|✔|✔| | | |
+|Korean|ko|✔|✔|✔|✔|✔|
+|Kurdish (Central)|ku|✔|✔| |✔| |
+|Kurdish (Northern)|kmr|✔|✔| | | |
+|Kyrgyz (Cyrillic)|ky|✔|✔| |✔| |
+|Lao|lo|✔|✔| |✔| |
+|Latvian|lv|✔|✔|✔|✔|✔|
+|Lithuanian|lt|✔|✔|✔|✔|✔|
+|Lingala|ln|✔|✔| | | |
+|Lower Sorbian|dsb|✔| | | | |
+|Luganda|lug|✔|✔| | | |
+|Macedonian|mk|✔|✔| |✔| |
+|Maithili|mai|✔|✔| | | |
+|Malagasy|mg|✔|✔|✔|✔| |
+|Malay (Latin)|ms|✔|✔|✔|✔|✔|
+|Malayalam|ml|✔|✔|✔|✔| |
+|Maltese|mt|✔|✔|✔|✔|✔|
+|Maori|mi|✔|✔|✔|✔| |
+|Marathi|mr|✔|✔|✔|✔| |
+|Mongolian (Cyrillic)|mn-Cyrl|✔|✔| |✔| |
+|Mongolian (Traditional)|mn-Mong|✔|✔| | | |
+|Myanmar|my|✔|✔| |✔| |
+|Nepali|ne|✔|✔| |✔| |
+|Norwegian|nb|✔|✔|✔|✔|✔|
+|Nyanja|nya|✔|✔| | | |
+|Odia|or|✔|✔|✔|✔| |
+|Pashto|ps|✔|✔| |✔| |
+|Persian|fa|✔|✔|✔|✔|✔|
+|Polish|pl|✔|✔|✔|✔|✔|
+|Portuguese (Brazil)|pt|✔|✔|✔|✔|✔|
+|Portuguese (Portugal)|pt-pt|✔|✔| | | |
+|Punjabi|pa|✔|✔|✔|✔| |
+|Queretaro Otomi|otq|✔|✔| |✔| |
+|Romanian|ro|✔|✔|✔|✔|✔|
+|Rundi|run|✔|✔| | | |
+|Russian|ru|✔|✔|✔|✔|✔|
+|Samoan (Latin)|sm|✔|✔|✔|✔| |
+|Serbian (Cyrillic)|sr-Cyrl|✔|✔| |✔| |
+|Serbian (Latin)|sr-Latn|✔|✔|✔|✔|✔|
+|Sesotho|st|✔|✔| | | |
+|Sesotho sa Leboa|nso|✔|✔| | | |
+|Setswana|tn|✔|✔| | | |
+|Sindhi|sd|✔|✔| |✔| |
+|Sinhala|si|✔|✔| |✔| |
+|Slovak|sk|✔|✔|✔|✔|✔|
+|Slovenian|sl|✔|✔|✔|✔|✔|
+|Somali (Arabic)|so|✔|✔| |✔| |
+|Spanish|es|✔|✔|✔|✔|✔|
+|Swahili (Latin)|sw|✔|✔|✔|✔|✔|
+|Swedish|sv|✔|✔|✔|✔|✔|
+|Tahitian|ty|✔|✔|✔|✔| |
+|Tamil|ta|✔|✔|✔|✔|✔|
+|Tatar (Latin)|tt|✔|✔| |✔| |
+|Telugu|te|✔|✔|✔|✔| |
+|Thai|th|✔|✔|✔|✔|✔|
+|Tibetan|bo|✔|✔| |✔| |
+|Tigrinya|ti|✔|✔| |✔| |
+|Tongan|to|✔|✔|✔|✔| |
+|Turkish|tr|✔|✔|✔|✔|✔|
+|Turkmen (Latin)|tk|✔|✔| |✔| |
+|Ukrainian|uk|✔|✔|✔|✔|✔|
+|Upper Sorbian|hsb|✔|✔| |✔| |
+|Urdu|ur|✔|✔|✔|✔|✔|
+|Uyghur (Arabic)|ug|✔|✔| |✔| |
+|Uzbek (Latin)|uz|✔|✔| |✔| |
+|Vietnamese|vi|✔|✔|✔|✔|✔|
+|Welsh|cy|✔|✔|✔|✔|✔|
+|Xhosa|xh|✔|✔| |✔| |
+|Yoruba|yo|✔|✔| |✔| |
+|Yucatec Maya|yua|✔|✔| |✔| |
+|Zulu|zu|✔|✔| |✔| |
## Document Translation: scanned PDF support
The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Trans
|Ukrainian| `uk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
|Urdu| `ur` | Arabic `Arab` | <--> | Latin `Latn` |
-## Custom Translator language pairs
-
-|Source Language|Target Language|
-|:-|:-|
-| Czech (cs-cz) | English (en-us) |
-| Danish (da-dk) | English (en-us) |
-| German (de-&#8203;de) | English (en-us) |
-| Greek (el-gr) | English (en-us) |
-| English (en-us) | Arabic (ar-sa) |
-| English (en-us) | Czech (cs-cz) |
-| English (en-us) | Danish (da-dk) |
-| English (en-us) | German (de-&#8203;de) |
-| English (en-us) | Greek (el-gr) |
-| English (en-us) | Spanish (es-es) |
-| English (en-us) | French (fr-fr) |
-| English (en-us) | Hebrew (he-il) |
-| English (en-us) | Hindi (hi-in) |
-| English (en-us) | Croatian (hr-hr) |
-| English (en-us) | Hungarian (hu-hu) |
-| English (en-us) | Indonesian (id-id) |
-| English (en-us) | Italian (it-it) |
-| English (en-us) | Japanese (ja-jp) |
-| English (en-us) | Korean (ko-kr) |
-| English (en-us) | Lithuanian (lt-lt) |
-| English (en-us) | Latvian (lv-lv) |
-| English (en-us) | Norwegian (nb-no) |
-| English (en-us) | Polish (pl-pl) |
-| English (en-us) | Portuguese (pt-pt) |
-| English (en-us) | Russian (ru-ru) |
-| English (en-us) | Slovak (sk-sk) |
-| English (en-us) | Swedish (sv-se) |
-| English (en-us) | Ukrainian (uk-ua) |
-| English (en-us) | Vietnamese (vi-vn) |
-| English (en-us) | Chinese Simplified (zh-cn) |
-| Spanish (es-es) | English (en-us) |
-| French (fr-fr) | English (en-us) |
-| Hindi (hi-in) | English (en-us) |
-| Hungarian (hu-hu) | English (en-us) |
-| Indonesian (id-id) | English (en-us) |
-| Italian (it-it) | English (en-us) |
-| Japanese (ja-jp) | English (en-us) |
-| Korean (ko-kr) | English (en-us) |
-| Norwegian (nb-no) | English (en-us) |
-| Dutch (nl-nl) | English (en-us) |
-| Polish (pl-pl) | English (en-us) |
-| Portuguese (pt-br) | English (en-us) |
-| Russian (ru-ru) | English (en-us) |
-| Swedish (sv-se) | English (en-us) |
-| Thai (th-th) | English (en-us) |
-| Turkish (tr-tr) | English (en-us) |
-| Vietnamese (vi-vn) | English (en-us) |
-| Chinese Simplified (zh-cn) | English (en-us) |
- ## Other Azure AI services Add more capabilities to your apps and workflows by utilizing other Azure AI services with Translator. Language support for other
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 08/11/2023 Last updated : 11/03/2023

# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
You can configure the maximum number of pods per node at the time of cluster cre
## Choosing a network model to use
-Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods and Overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
+Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and Overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model might be the most appropriate.
**Use Overlay networking when**:
Azure CNI Overlay has the following limitations:
- You can't use Application Gateway as an Ingress Controller (AGIC) for an Overlay cluster.
- Virtual Machine Availability Sets (VMAS) aren't supported for Overlay.
-- Dual stack networking isn't supported in Overlay.
- You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. To meet Confidential Computing requirements, consider using [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series) instead.

## Set up Overlay clusters
clusterName="myOverlayCluster"
resourceGroup="myResourceGroup" location="westcentralus"
-az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
+az aks create -n $clusterName -g $resourceGroup \
+ --location $location \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --pod-cidr 192.168.0.0/16
```

## Upgrade an existing cluster to CNI Overlay
az aks create -n $clusterName -g $resourceGroup --location $location --network-p
> Prior to Windows OS Build 20348.1668, there was a limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, which had a more detrimental effect for clusters upgrading to Overlay. To avoid this issue, **use Windows OS Build greater than or equal to 20348.1668**.

> [!WARNING]
-> If using a custom azure-ip-masq-agent config to include additional IP ranges that should not SNAT packets from pods, upgrading to Azure CNI Overlay may break connectivity to these ranges. Pod IPs from the overlay space will not be reachable by anything outside the cluster nodes.
-> Additionally, for sufficiently old clusters there may be a ConfigMap left over from a previous version of azure-ip-masq-agent. If this ConfigMap, named `azure-ip-masq-agent-config`, exists and is not intetionally in-place it should be deleted before running the update command.
+> If using a custom azure-ip-masq-agent config to include additional IP ranges that should not SNAT packets from pods, upgrading to Azure CNI Overlay can break connectivity to these ranges. Pod IPs from the overlay space will not be reachable by anything outside the cluster nodes.
+> Additionally, for sufficiently old clusters there might be a ConfigMap left over from a previous version of azure-ip-masq-agent. If this ConfigMap, named `azure-ip-masq-agent-config`, exists and is not intentionally in place, it should be deleted before running the update command.
> If not using a custom ip-masq-agent config, only the `azure-ip-masq-agent-config-reconciled` ConfigMap should exist with respect to Azure ip-masq-agent ConfigMaps, and this will be updated automatically during the upgrade process.

The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn't supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.
az aks update --name $clusterName \
The `--pod-cidr` parameter is required when upgrading from legacy CNI because the pods need to get IPs from a new overlay space, which doesn't overlap with the existing node subnet. The pod CIDR also can't overlap with any VNet address of the node pools. For example, if your VNet address is *10.0.0.0/8*, and your nodes are in the subnet *10.240.0.0/16*, the `--pod-cidr` can't overlap with *10.0.0.0/8* or the existing service CIDR on the cluster.
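The overlap rule above can be sanity-checked mechanically. The following sketch uses Python's standard `ipaddress` module; the helper function and the reserved ranges are illustrative (they reuse the example values from this section), not part of any Azure tooling:

```python
import ipaddress

def cidr_is_valid(pod_cidr: str, reserved_cidrs: list) -> bool:
    """Return True if pod_cidr overlaps none of the reserved ranges."""
    pod = ipaddress.ip_network(pod_cidr)
    return not any(pod.overlaps(ipaddress.ip_network(r)) for r in reserved_cidrs)

# VNet address space 10.0.0.0/8 (contains the node subnet 10.240.0.0/16)
# plus an example service CIDR.
reserved = ["10.0.0.0/8", "10.0.0.0/16"]

print(cidr_is_valid("10.244.0.0/16", reserved))   # False: overlaps the VNet
print(cidr_is_valid("192.168.0.0/16", reserved))  # True: safe choice
```

Running this before calling `az aks update` avoids a failed upgrade due to an overlapping `--pod-cidr`.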
+## Dual-stack Networking (Preview)
+
+You can deploy your AKS clusters in a dual-stack mode when using Overlay networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from an address space that is logically distinct from the nodes' Azure virtual network subnet. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).
++
+### Prerequisites
+
+ - You must have Azure CLI 2.48.0 or later installed.
+ - You must register the `Microsoft.ContainerService` `AzureOverlayDualStackPreview` feature flag.
+ - Kubernetes version 1.26.3 or greater.
+
+### Limitations
+
+The following features aren't supported with dual-stack networking:
+ - Windows Nodepools
+ - Azure network policies
+ - Calico network policies
+ - NAT Gateway
+ - Virtual nodes add-on
+
+## Deploy a dual-stack AKS cluster
+
+The following attributes are provided to support dual-stack clusters:
+
+* **`--ip-families`**: Takes a comma-separated list of IP families to enable on the cluster.
+ * Only `ipv4` or `ipv4,ipv6` are supported.
+* **`--pod-cidrs`**: Takes a comma-separated list of CIDR notation IP ranges to assign pod IPs from.
+ * The count and order of ranges in this list must match the value provided to `--ip-families`.
+ * If no values are supplied, the default value `10.244.0.0/16,fd12:3456:789a::/64` is used.
+* **`--service-cidrs`**: Takes a comma-separated list of CIDR notation IP ranges to assign service IPs from.
+ * The count and order of ranges in this list must match the value provided to `--ip-families`.
+ * If no values are supplied, the default value `10.0.0.0/16,fd12:3456:789a:1::/108` is used.
+ * The IPv6 subnet assigned to `--service-cidrs` can be no larger than a /108.
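These parameter rules can be illustrated with a minimal validation sketch. The `validate_dual_stack` helper is hypothetical and only demonstrates the count-matching and /108 constraints, using the default values listed above:

```python
import ipaddress

def validate_dual_stack(ip_families, pod_cidrs, service_cidrs):
    """Check the count and size rules for --ip-families/--pod-cidrs/--service-cidrs."""
    if len(pod_cidrs) != len(ip_families) or len(service_cidrs) != len(ip_families):
        raise ValueError("range counts must match --ip-families")
    for family, svc in zip(ip_families, service_cidrs):
        net = ipaddress.ip_network(svc)
        # "No larger than /108" means the prefix length must be at least 108.
        if family == "ipv6" and net.prefixlen < 108:
            raise ValueError("IPv6 service CIDR can be no larger than a /108")
    return True

# Default values from this section.
print(validate_dual_stack(
    ["ipv4", "ipv6"],
    ["10.244.0.0/16", "fd12:3456:789a::/64"],
    ["10.0.0.0/16", "fd12:3456:789a:1::/108"],
))  # True
```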
+
+### Register the `AzureOverlayDualStackPreview` feature flag
+
+1. Register the `AzureOverlayDualStackPreview` feature flag using the [`az feature register`][az-feature-register] command. It takes a few minutes for the status to show *Registered*.
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"
+```
+
+2. Verify the registration status using the [`az feature show`][az-feature-show] command.
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"
+```
+
+3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Create a dual-stack AKS cluster
+
+1. Create an Azure resource group for the cluster using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create -l <region> -n <resourceGroupName>
+ ```
+
+2. Create a dual-stack AKS cluster using the [`az aks create`][az-aks-create] command with the `--ip-families` parameter set to `ipv4,ipv6`.
+
+ ```azurecli-interactive
+ az aks create -l <region> -g <resourceGroupName> -n <clusterName> \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --ip-families ipv4,ipv6
+ ```
+++
+## Create an example workload
+
+Once the cluster has been created, you can deploy your workloads. This article walks you through an example workload deployment of an NGINX web server.
+
+### Deploy an NGINX web server
+
+# [kubectl](#tab/kubectl)
+
+1. Create an NGINX web server using the `kubectl create deployment nginx` command.
+
+ ```bash-interactive
+ kubectl create deployment nginx --image=nginx:latest --replicas=3
+ ```
+
+2. View the pod resources using the `kubectl get pods` command.
+
+ ```bash-interactive
+ kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
+ ```
+
+ The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready.
+
+ ```output
+ NAME IPs NODE READY
+ nginx-55649fd747-9cr7h 10.244.2.2,fd12:3456:789a:0:2::2 aks-nodepool1-14508455-vmss000002 True
+ nginx-55649fd747-p5lr9 10.244.0.7,fd12:3456:789a::7 aks-nodepool1-14508455-vmss000000 True
+ nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-14508455-vmss000001 True
+ ```
+
+# [YAML](#tab/yaml)
+
+1. Create an NGINX web server using the following YAML manifest.
+
+ ```yml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ app: nginx
+ name: nginx
+ spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx:latest
+ name: nginx
+ ```
+
+2. View the pod resources using the `kubectl get pods` command.
+
+ ```bash-interactive
+ kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
+ ```
+
+ The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready.
+
+ ```output
+ NAME IPs NODE READY
+ nginx-55649fd747-9cr7h 10.244.2.2,fd12:3456:789a:0:2::2 aks-nodepool1-14508455-vmss000002 True
+ nginx-55649fd747-p5lr9 10.244.0.7,fd12:3456:789a::7 aks-nodepool1-14508455-vmss000000 True
+ nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-14508455-vmss000001 True
+ ```
+++
+## Expose the workload via a `LoadBalancer` type service
+
+> [!IMPORTANT]
+> There are currently **two limitations** pertaining to IPv6 services in AKS.
+>
+> 1. Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fails. IPv6 services must be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node.
+> 2. Prior to Kubernetes version 1.27, only the first IP address for a service is provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, create two services targeting the same selector: one for IPv4 and one for IPv6. This is no longer a limitation in Kubernetes 1.27 or later.
+
+# [kubectl](#tab/kubectl)
+
+1. Expose the NGINX deployment using the `kubectl expose deployment nginx` command.
+
+ ```bash-interactive
+ kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer
+ kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilies": ["IPv6"]}}'
+ ```
+
+ You receive an output that shows the services have been exposed.
+
+ ```output
+ service/nginx-ipv4 exposed
+ service/nginx-ipv6 exposed
+ ```
+
+2. Once the deployment is exposed and the `LoadBalancer` services are fully provisioned, get the IP addresses of the services using the `kubectl get services` command.
+
+ ```bash-interactive
+ kubectl get services
+ ```
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ nginx-ipv4 LoadBalancer 10.0.88.78 20.46.24.24 80:30652/TCP 97s
+ nginx-ipv6 LoadBalancer fd12:3456:789a:1::981a 2603:1030:8:5::2d 80:32002/TCP 63s
+ ```
+
+3. Verify functionality via a command-line web request from an IPv6 capable host. Azure Cloud Shell isn't IPv6 capable.
+
+ ```bash-interactive
+ SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ curl -s "http://[${SERVICE_IP}]" | head -n5
+ ```
+
+ ```html
+ <!DOCTYPE html>
+ <html>
+ <head>
+ <title>Welcome to nginx!</title>
+ <style>
+ ```
+
+# [YAML](#tab/yaml)
+
+1. Expose the NGINX deployment using the following YAML manifest.
+
+ ```yml
+
+ apiVersion: v1
+ kind: Service
+ metadata:
+ labels:
+ app: nginx
+ name: nginx-ipv4
+ spec:
+ externalTrafficPolicy: Cluster
+ ports:
+ - port: 80
+ protocol: TCP
+ targetPort: 80
+ selector:
+ app: nginx
+ type: LoadBalancer
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ labels:
+ app: nginx
+ name: nginx-ipv6
+ spec:
+ externalTrafficPolicy: Cluster
+ ipFamilies:
+ - IPv6
+ ports:
+ - port: 80
+ protocol: TCP
+ targetPort: 80
+ selector:
+ app: nginx
+ type: LoadBalancer
+ ```
+
+2. Once the deployment is exposed and the `LoadBalancer` services are fully provisioned, get the IP addresses of the services using the `kubectl get services` command.
+
+ ```bash-interactive
+ kubectl get services
+ ```
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ nginx-ipv4 LoadBalancer 10.0.88.78 20.46.24.24 80:30652/TCP 97s
+ nginx-ipv6 LoadBalancer fd12:3456:789a:1::981a 2603:1030:8:5::2d 80:32002/TCP 63s
+ ```
+++

## Next steps

To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
aks Azure Cni Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overview.md
With [Azure Container Networking Interface (CNI)][cni-networking], every pod get
* The virtual network for the AKS cluster must allow outbound internet connectivity.
-* AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range.
+* AKS clusters can't use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range.
* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
IP addresses for the pods and the cluster's nodes are assigned from the specifie
> * When you **upgrade** your AKS cluster, a new node is deployed into the cluster. Services and workloads begin to run on the new node, and an older node is removed from the cluster. This rolling upgrade process requires a minimum of one additional block of IP addresses to be available. Your node count is then `n + 1`. > * This consideration is particularly important when you use Windows Server node pools. Windows Server nodes in AKS do not automatically apply Windows Updates, instead you perform an upgrade on the node pool. This upgrade deploys new nodes with the latest Window Server 2019 base node image and security patches. For more information on upgrading a Windows Server node pool, see [Upgrade a node pool in AKS][nodepool-upgrade]. >
-> * When you **scale** an AKS cluster, a new node is deployed into the cluster. Services and workloads begin to run on the new node. Your IP address range needs to take into considerations how you may want to scale up the number of nodes and pods your cluster can support. One additional node for upgrade operations should also be included. Your node count is then `n + number-of-additional-scaled-nodes-you-anticipate + 1`.
+> * When you **scale** an AKS cluster, a new node is deployed into the cluster. Services and workloads begin to run on the new node. Your IP address range needs to take into considerations how you might want to scale up the number of nodes and pods your cluster can support. One additional node for upgrade operations should also be included. Your node count is then `n + number-of-additional-scaled-nodes-you-anticipate + 1`.
-If you expect your nodes to run the maximum number of pods, and regularly destroy and deploy pods, you should also factor in some extra IP addresses per node. A few seconds may be required to delete a service and release its IP address for a new service to be deployed and acquire the address. These extra IP addresses consider this possibility.
+If you expect your nodes to run the maximum number of pods and regularly destroy and deploy pods, you should also factor in some extra IP addresses per node. A few seconds might be required to delete a service and release its IP address before a new service can be deployed and acquire the address. The extra IP addresses account for this possibility.
The IP address plan for an AKS cluster consists of a virtual network, at least one subnet for nodes and pods, and a Kubernetes service address range. | Address range / Azure resource | Limits and sizing | | | - |
-| Virtual network | The Azure virtual network can be as large as /8, but is limited to 65,536 configured IP addresses. Consider all your networking needs, including communicating with services in other virtual networks, before configuring your address space. For example, if you configure too large of an address space, you may run into issues with overlapping other address spaces within your network.|
+| Virtual network | The Azure virtual network can be as large as /8, but is limited to 65,536 configured IP addresses. Consider all your networking needs, including communicating with services in other virtual networks, before configuring your address space. For example, if you configure too large of an address space, you might run into issues with overlapping other address spaces within your network.|
| Subnet | Must be large enough to accommodate the nodes, pods, and all Kubernetes and Azure resources that might be provisioned in your cluster. For example, if you deploy an internal Azure Load Balancer, its front-end IPs are allocated from the cluster subnet, not public IPs. The subnet size should also take into account upgrade operations or future scaling needs.<p/> Use the following equation to calculate the *minimum* subnet size including an extra node for upgrade operations: `(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)`<p/> Example for a 50 node cluster: `(51) + (51 * 30 (default)) = 1,581` (/21 or larger)<p/>Example for a 50 node cluster that also includes preparation to scale up an extra 10 nodes: `(61) + (61 * 30 (default)) = 1,891` (/21 or larger)<p>If you don't specify a maximum number of pods per node when you create your cluster, the maximum number of pods per node is set to *30*. The minimum number of IP addresses required is based on that value. If you calculate your minimum IP address requirements on a different maximum value, see [how to configure the maximum number of pods per node](#configure-maximumnew-clusters) to set this value when you deploy your cluster. | | Kubernetes service address range | Any network element on or connected to this virtual network must not use this range. Service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters. | | Kubernetes DNS service IP address | IP address within the Kubernetes service address range that is used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address. |
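The minimum-subnet equation from the table can be reproduced programmatically. This is an illustrative sketch (the helper names are made up), matching the 50-node examples above:

```python
import math

def min_subnet_ips(nodes: int, max_pods_per_node: int = 30, scale_nodes: int = 0) -> int:
    """Minimum IPs: (nodes + anticipated scale + 1 upgrade node), each with its pods."""
    n = nodes + scale_nodes + 1  # one extra node for upgrade operations
    return n + n * max_pods_per_node

def prefix_for(ips: int) -> int:
    """Smallest IPv4 prefix length whose address count covers `ips` addresses."""
    return 32 - math.ceil(math.log2(ips))

print(min_subnet_ips(50))                  # 1581 -> the /21-or-larger example
print(prefix_for(min_subnet_ips(50)))      # 21
print(min_subnet_ips(50, scale_nodes=10))  # 1891
```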
A minimum value for maximum pods per node is enforced to guarantee space for sys
The maxPod per node setting can be defined when you create a new node pool. If you need to increase the maxPod per node setting on an existing cluster, add a new node pool with the new desired maxPod count. After migrating your pods to the new pool, delete the older pool. To delete any older pool in a cluster, ensure you're setting node pool modes as defined in the [system node pools document][system-node-pools].

## Deployment parameters

When you create an AKS cluster, the following parameters are configurable for Azure CNI networking:
aks Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-vulnerability-management.md
The following table describes vulnerability severity categories:
AKS patches CVEs that have a *vendor fix* every week. CVEs without a fix await a *vendor fix* before they can be remediated. The fixed container images are cached in the next corresponding Virtual Hard Disk (VHD) build, which also contains the updated Ubuntu/Azure Linux/Windows patched CVEs. As long as you're running the updated VHD, you shouldn't be running any container image CVEs with a vendor fix that is over 30 days old.
-For the OS-based vulnerabilities in the VHD, AKS uses **Unattended Update** by default, so any security updates should be applied to the existing VHDs daily. If **Unattended Update** is disabled, then it's a recommended best practice that you apply a Node Image update on a regular cadence to ensure the latest OS and Image security updates are applied.
+For the OS-based vulnerabilities in the VHD, AKS relies on node image VHD updates by default, so any security updates come with the weekly node image releases. Unattended upgrades are disabled unless you switch to unmanaged node images, which isn't recommended because that release process is global.
## Update release timelines
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
Once the cluster has been created, you can deploy your workloads. This article w
## Expose the workload via a `LoadBalancer` type service

> [!IMPORTANT]
-> There are currently **two limitations** pertaining to IPv6 services in AKS. These are both preview limitations and work is underway to remove them.
+> There are currently **two limitations** pertaining to IPv6 services in AKS.
>
> 1. Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fails. IPv6 services must be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node.
> 2. Only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, create two services targeting the same selector: one for IPv4 and one for IPv6.
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
Title: Kubernetes Event-driven Autoscaling (KEDA) (Preview)
+ Title: Kubernetes Event-driven Autoscaling (KEDA)
description: Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on. Previously updated : 06/06/2023 Last updated : 08/08/2023
-# Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on (Preview)
+# Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on
Kubernetes Event-driven Autoscaling (KEDA) is a single-purpose and lightweight component that strives to make application autoscaling simple and is a CNCF Graduate project. It applies event-driven autoscaling to scale your application to meet demand in a sustainable and cost-efficient manner with scale-to-zero.
-The KEDA add-on makes it even easier by deploying a managed KEDA installation, providing you with [a rich catalog of 50+ KEDA scalers][keda-scalers] that you can scale your applications with on your Azure Kubernetes Services (AKS) cluster.
-
+The KEDA add-on makes it even easier by deploying a managed KEDA installation, providing you with [a rich catalog of Azure KEDA scalers][keda-scalers] that you can scale your applications with on your Azure Kubernetes Services (AKS) cluster.
## Architecture
The KEDA add-on makes it even easier by deploying a managed KEDA installation, p
Learn more about how KEDA works in the [official KEDA documentation][keda-architecture].
-## Installation and version
+## Installation
KEDA can be added to your Azure Kubernetes Service (AKS) cluster by enabling the KEDA add-on using an [ARM template][keda-arm] or [Azure CLI][keda-cli].
The KEDA add-on provides a fully supported installation of KEDA that is integrat
KEDA provides the following capabilities and features:

- Build sustainable and cost-efficient applications with scale-to-zero
-- Scale application workloads to meet demand using [a rich catalog of 50+ KEDA scalers][keda-scalers]
+- Scale application workloads to meet demand using [a rich catalog of Azure KEDA scalers][keda-scalers]
- Autoscale applications with `ScaledObjects`, such as Deployments, StatefulSets or any custom resource that defines the `/scale` subresource
- Autoscale job-like workloads with `ScaledJobs`
- Use production-grade security by decoupling autoscaling authentication from workloads
- Bring-your-own external scaler to use tailor-made autoscaling decisions
+- Integrate with [Microsoft Entra Workload ID][workload-identity] for authentication
+
+> [!NOTE]
+> If you plan to use workload identity, [enable the workload identity add-on][workload-identity-deploy] before enabling the KEDA add-on.
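For illustration, a minimal `ScaledObject` using one of those scalers might look like the following sketch. The deployment name, queue name, and `TriggerAuthentication` reference are hypothetical values, not ones defined in this article; `minReplicaCount: 0` is what enables scale-to-zero.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler      # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    name: order-processor           # hypothetical Deployment to scale
  minReplicaCount: 0                # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders           # hypothetical queue name
        messageCount: "5"           # target messages per replica
      authenticationRef:
        name: servicebus-auth       # hypothetical TriggerAuthentication
```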
## Add-on limitations
The KEDA AKS add-on has the following limitations:
* KEDA's [external scaler for Azure Cosmos DB][keda-cosmos-db-scaler] to scale based on Azure Cosmos DB change feed isn't installed with the extension, but can be deployed separately.
* Only one metric server is allowed in the Kubernetes cluster. Because of that, the KEDA add-on should be the only metrics server inside the cluster.
* Multiple KEDA installations aren't supported.
-* Managed identity isn't supported.
For general KEDA questions, we recommend [visiting the FAQ overview][keda-faq].
+
+## Supported Kubernetes and KEDA versions
+
+Your cluster's Kubernetes version determines which KEDA version is installed on your AKS cluster. To see which KEDA version maps to each AKS version, see the **AKS managed add-ons** column of the [Kubernetes component version table](./supported-kubernetes-versions.md#aks-components-breaking-changes-by-version).
+
+For GA Kubernetes versions, AKS offers full support of the corresponding KEDA minor version in the table. Kubernetes preview versions and the latest KEDA patch are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the following support articles:
+
+- [AKS support policies][support-policies]
+- [Azure support FAQ][azure-support-faq]
## Next steps

* [Enable the KEDA add-on with an ARM template][keda-arm]
* [Enable the KEDA add-on with the Azure CLI][keda-cli]
* [Troubleshoot KEDA add-on problems][keda-troubleshoot]
* [Autoscale a .NET Core worker processing Azure Service Bus Queue messages][keda-sample]
+* [View the upstream KEDA docs][keda]
<!-- LINKS - internal -->
[keda-azure-cli]: keda-deploy-addon-az-cli.md
[keda-cli]: keda-deploy-add-on-cli.md
[keda-arm]: keda-deploy-add-on-arm.md
[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context
+[workload-identity]: ./workload-identity-overview.md
+[workload-identity-deploy]: ./workload-identity-deploy-cluster.md
+[support-policies]: ./support-policies.md
<!-- LINKS - external -->
[keda]: https://keda.sh/
For general KEDA questions, we recommend [visiting the FAQ overview][keda-faq].
[keda-scalers]: https://keda.sh/docs/scalers/
[keda-http-add-on]: https://github.com/kedacore/http-add-on
[keda-cosmos-db-scaler]: https://github.com/kedacore/external-scaler-azure-cosmos-db
+[azure-support-faq]: https://azure.microsoft.com/support/legal/faq/
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using an ARM template description: Use an ARM template to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS).-+ Last updated 09/26/2023-+ # Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using an ARM template
This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KE
- You need the [Azure CLI installed](/cli/azure/install-azure-cli).
- This article assumes you have an existing Azure resource group. If you don't have an existing resource group, you can create one using the [`az group create`][az-group-create] command.
- Ensure you have firewall rules configured to allow access to the Kubernetes API server. For more information, see [Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters][aks-firewall-requirements].
-- [Install the `aks-preview` Azure CLI extension](#install-the-aks-preview-azure-cli-extension).
-- [Register the `AKS-KedaPreview` feature flag](#register-the-aks-kedapreview-feature-flag).
- [Create an SSH key pair](#create-an-ssh-key-pair).
-### Install the `aks-preview` Azure CLI extension
-
-1. Install the `aks-preview` extension using the [`az extension add`][az-extension-add] command.
-
- ```azurecli-interactive
- az extension add --name aks-preview
- ```
-
-2. Update to the latest version of the `aks-preview` extension using the [`az extension update`][az-extension-update] command.
-
- ```azurecli-interactive
- az extension update --name aks-preview
- ```
-
-### Register the `AKS-KedaPreview` feature flag
-
-1. Register the `AKS-KedaPreview` feature flag using the [`az feature register`][az-feature-register] command.
-
- ```azurecli-interactive
- az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
- ```
-
- It takes a few minutes for the status to show *Registered*.
-
-2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-
- ```azurecli-interactive
- az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
- ```
-
-3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-
- ```azurecli-interactive
- az provider register --namespace Microsoft.ContainerService
- ```
-
-### Create an SSH key pair
+## Create an SSH key pair
1. Navigate to the [Azure Cloud Shell](https://shell.azure.com/).
2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] command.
To connect to the Kubernetes cluster from your local device, you use [kubectl][k
If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [`az aks install-cli`][az-aks-install-cli] command.

-- Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-
- ```azurecli-interactive
- az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>
- ```
+- Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup* resource group:
+
+```azurecli
+az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
+```
+
+## Example deployment
+
+The following snippet is a sample deployment that creates a cluster with KEDA enabled and a single node pool composed of three `Standard_D2S_v5` nodes.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "apiVersion": "2023-03-01",
+ "dependsOn": [],
+ "type": "Microsoft.ContainerService/managedClusters",
+ "location": "westcentralus",
+ "name": "myAKSCluster",
+ "properties": {
+ "kubernetesVersion": "1.27",
+ "enableRBAC": true,
+ "dnsPrefix": "myAKSCluster",
+ "agentPoolProfiles": [
+ {
+ "name": "agentpool",
+ "osDiskSizeGB": 200,
+ "count": 3,
+ "enableAutoScaling": false,
+ "vmSize": "Standard_D2S_v5",
+ "osType": "Linux",
+ "storageProfile": "ManagedDisks",
+ "type": "VirtualMachineScaleSets",
+ "mode": "System",
+ "maxPods": 110,
+ "availabilityZones": [],
+ "nodeTaints": [],
+ "enableNodePublicIP": false
+ }
+ ],
+ "networkProfile": {
+ "loadBalancerSku": "standard",
+ "networkPlugin": "kubenet"
+ },
+ "workloadAutoScalerProfile": {
+ "keda": {
+ "enabled": true
+ }
+ }
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ }
+ }
+ ]
+}
+```
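With the template saved locally (as `keda.json`, an assumed filename), one way to deploy it is with the `az deployment group create` command; the resource group name below is a placeholder:

```azurecli
az deployment group create \
    --resource-group MyResourceGroup \
    --template-file keda.json
```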
## Start scaling apps with KEDA
This article showed you how to install the KEDA add-on on an AKS cluster, and th
For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-driven Autoscaling (KEDA) add-on][keda-troubleshoot].
+To learn more, view the [upstream KEDA docs][keda].
+
<!-- LINKS - internal -->
[az-group-delete]: /cli/azure/group#az-group-delete
[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context
For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-
[kubectl]: https://kubernetes.io/docs/reference/kubectl/
[keda-scalers]: https://keda.sh/docs/scalers/
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
+[keda]: https://keda.sh/docs/2.12/
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
This article shows you how to install the Kubernetes Event-driven Autoscaling (K
- You need an Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
- You need the [Azure CLI installed](/cli/azure/install-azure-cli).
- Ensure you have firewall rules configured to allow access to the Kubernetes API server. For more information, see [Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters][aks-firewall-requirements].
-- [Install the `aks-preview` Azure CLI extension](#install-the-aks-preview-azure-cli-extension).
-- [Register the `AKS-KedaPreview` feature flag](#register-the-aks-kedapreview-feature-flag).
-### Install the `aks-preview` Azure CLI extension
+## Install the KEDA add-on with Azure CLI
-1. Install the `aks-preview` extension using the [`az extension add`][az-extension-add] command.
-
- ```azurecli-interactive
- az extension add --name aks-preview
- ```
-
-2. Update to the latest version of the `aks-preview` extension using the [`az extension update`][az-extension-update] command.
-
- ```azurecli-interactive
- az extension update --name aks-preview
- ```
-
-### Register the `AKS-KedaPreview` feature flag
-
-1. Register the `AKS-KedaPreview` feature flag using the [`az feature register`][az-feature-register] command.
-
- ```azurecli-interactive
- az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
- ```
-
- It takes a few minutes for the status to show *Registered*.
-
-2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-
- ```azurecli-interactive
- az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
- ```
-
-3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-
- ```azurecli-interactive
- az provider register --namespace Microsoft.ContainerService
- ```
+To install the KEDA add-on, use `--enable-keda` when creating or updating a cluster.
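As a minimal sketch, enabling the add-on might look like the following; the resource group and cluster names are placeholders:

```azurecli
# Enable KEDA on an existing cluster
az aks update \
    --resource-group MyResourceGroup \
    --name MyAKSCluster \
    --enable-keda

# Or enable it when creating a new cluster
az aks create \
    --resource-group MyResourceGroup \
    --name MyAKSCluster \
    --enable-keda
```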
## Enable the KEDA add-on on your AKS cluster
This article shows you how to install the Kubernetes Event-driven Autoscaling (K
## Verify KEDA is running on your cluster

-- Verify the KEDA add-on is running on your cluster using the [`kubectl get pods`][kubectl] command.
+- Verify the KEDA add-on is running on your cluster using the `kubectl get pods` command.
    ```azurecli-interactive
    kubectl get pods -n kube-system
    ```
- The following example output shows the KEDA operator and metrics API server are installed on the cluster:
+ The following example output shows the KEDA operator, admissions hook, and metrics API server are installed on the cluster:
```output
- keda-operator-********-k5rfv 1/1 Running 0 43m
- keda-operator-metrics-apiserver-*******-sj857 1/1 Running 0 43m
+ keda-admission-webhooks-**********-2n9zl 1/1 Running 0 3d18h
+ keda-admission-webhooks-**********-69dkg 1/1 Running 0 3d18h
+ keda-operator-*********-4hb5n 1/1 Running 0 3d18h
+ keda-operator-*********-pckpx 1/1 Running 0 3d18h
+ keda-operator-metrics-apiserver-**********-gqg4s 1/1 Running 0 3d18h
+ keda-operator-metrics-apiserver-**********-trfcb 1/1 Running 0 3d18h
    ```

## Verify the KEDA version on your cluster

-- Verify the KEDA version using the `kubectl get crd/scaledobjects.keda.sh -o yaml` command.
-
- ```azurecli-interactive
- kubectl get crd/scaledobjects.keda.sh -o yaml
- ```
-
- The following condensed example output shows the configuration of KEDA in the `app.kubernetes.io/version` label:
-
- ```output
- apiVersion: apiextensions.k8s.io/v1
- kind: CustomResourceDefinition
- metadata:
- annotations:
- controller-gen.kubebuilder.io/version: v0.9.0
- meta.helm.sh/release-name: aks-managed-keda
- meta.helm.sh/release-namespace: kube-system
- creationTimestamp: "2023-09-26T10:31:06Z"
- generation: 1
- labels:
- app.kubernetes.io/component: operator
- app.kubernetes.io/managed-by: Helm
- app.kubernetes.io/name: keda-operator
- app.kubernetes.io/part-of: keda-operator
- app.kubernetes.io/version: 2.10.1
- ...
- ```
+To verify the KEDA version running on your cluster, use the `kubectl get crd/scaledobjects.keda.sh -o yaml` command. For example:
+
+```azurecli-interactive
+kubectl get crd/scaledobjects.keda.sh -o yaml
+```
+
+The following example output shows the configuration of KEDA in the `app.kubernetes.io/version` label:
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.9.0
+ meta.helm.sh/release-name: aks-managed-keda
+ meta.helm.sh/release-namespace: kube-system
+ creationTimestamp: "2023-08-09T15:58:56Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/component: operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/name: keda-operator
+ app.kubernetes.io/part-of: keda-operator
+ app.kubernetes.io/version: 2.10.1
+ helm.toolkit.fluxcd.io/name: keda-adapter-helmrelease
+ helm.toolkit.fluxcd.io/namespace: 64d3b6fd3365790001260647
+ name: scaledobjects.keda.sh
+ resourceVersion: "1421"
+ uid: 29109c8c-638a-4bf5-ac1b-c28ad9aa11fa
+spec:
+ conversion:
+ strategy: None
+ group: keda.sh
+ names:
+ kind: ScaledObject
+ listKind: ScaledObjectList
+ plural: scaledobjects
+ shortNames:
+ - so
+ singular: scaledobject
+ scope: Namespaced
+ # Redacted due to length
+```
## Disable the KEDA add-on on your AKS cluster
With the KEDA add-on installed on your cluster, you can [deploy a sample applica
For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-driven Autoscaling (KEDA) add-on][keda-troubleshoot].
+To learn more, view the [upstream KEDA docs][keda].
+
<!-- LINKS - internal -->
[az-provider-register]: /cli/azure/provider#az-provider-register
[az-feature-register]: /cli/azure/feature#az-feature-register
For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-
<!-- LINKS - external -->
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
+[keda]: https://keda.sh/docs/2.12/
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
Title: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview)
-description: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview).
+ Title: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS)
+description: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS).
Last updated 09/27/2023
-# Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview)
+# Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS)
The Kubernetes Event-driven Autoscaling (KEDA) add-on for AKS integrates with features provided by Azure and open-source projects.
-
> [!IMPORTANT]
> The [AKS support policy][aks-support-policy] doesn't cover integrations with open-source projects.
To learn about the available metrics, we recommend reading the [KEDA documentati
## Scalers for Azure services
-KEDA integrates with various tools and services through [a rich catalog of 50+ KEDA scalers][keda-scalers] and supports leading cloud platforms and open-source technologies.
+KEDA can integrate with various tools and services through [a rich catalog of Azure KEDA scalers][keda-scalers] and supports leading cloud platforms and open-source technologies.
KEDA leverages the following scalers for Azure
- [Azure Service Bus](https://keda.sh/docs/latest/scalers/azure-service-bus/)
- [Azure Storage Queue](https://keda.sh/docs/latest/scalers/azure-storage-queue/)
-You can also install external scalers to autoscale on other Azure
+As of KEDA version `2.10`, the [Prometheus scaler][prometheus-scaler] supports Azure managed service for Prometheus.
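As an illustrative sketch only (the workspace endpoint, query, and `TriggerAuthentication` name are assumptions, not values from this article), a `ScaledObject` trigger using the Prometheus scaler against an Azure managed Prometheus query endpoint might look like:

```yaml
triggers:
  - type: prometheus
    metadata:
      # hypothetical Azure managed Prometheus query endpoint
      serverAddress: https://my-workspace.eastus.prometheus.monitor.azure.com
      query: sum(rate(http_requests_total[2m]))   # hypothetical metric query
      threshold: "100"
    authenticationRef:
      name: azure-managed-prometheus-auth         # hypothetical TriggerAuthentication
```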
+You can also install external scalers to autoscale on other Azure
- [Azure Cosmos DB (Change feed)](https://github.com/kedacore/external-scaler-azure-cosmos-db)
-These external scalers *aren't supported as part of the add-on* and rely on community support.
+> [!IMPORTANT]
+> External scalers *aren't supported as part of the add-on* and rely on community support.
## Next steps

-- [Enable the KEDA add-on with an ARM template][keda-arm]
-- [Enable the KEDA add-on with the Azure CLI][keda-cli]
-- [Troubleshoot KEDA add-on problems][keda-troubleshoot]
-- [Autoscale a .NET Core worker processing Azure Service Bus Queue message][keda-sample]
+* [Enable the KEDA add-on with an ARM template][keda-arm]
+* [Enable the KEDA add-on with the Azure CLI][keda-cli]
+* [Troubleshoot KEDA add-on problems][keda-troubleshoot]
+* [Autoscale a .NET Core worker processing Azure Service Bus Queue message][keda-sample]
+* [View the upstream KEDA docs][keda]
<!-- LINKS - internal --> [aks-support-policy]: support-policies.md
These external scalers *aren't supported as part of the add-on* and rely on comm
[keda-scalers]: https://keda.sh/docs/latest/scalers/
[keda-event-docs]: https://keda.sh/docs/latest/operate/events/
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
+[prometheus-scaler]: https://keda.sh/docs/2.11/scalers/prometheus/
+[keda]: https://keda.sh/docs/2.12/
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
You may not need to continuously run your Azure Kubernetes Service (AKS) workloa
To better optimize your costs during these periods, you can turn off, or stop, your cluster. This action stops your control plane and agent nodes, allowing you to save on all the compute costs, while maintaining all objects except standalone pods. The cluster state is stored for when you start it again, allowing you to pick up where you left off.
-> [!NOTE]
-> AKS start operations will restore all objects from ETCD with the exception of standalone pods with the same names and ages. meaning that a pod's age will continue to be calculated from its original creation time. This count will keep increasing over time, regardless of whether the cluster is in a stopped state.
-
## Before you begin

This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
aks Stop Cluster Upgrade Api Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-cluster-upgrade-api-breaking-changes.md
Title: Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes (Preview)
+ Title: Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes
description: Learn how to stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes. Last updated 10/19/2023
-# Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes (Preview)
-
+# Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes
To stay within a supported Kubernetes version, you have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and Container Storage Interface (CSI). It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
Before you begin, make sure you meet the following prerequisites:
* The upgrade operation is a Kubernetes minor version change for the cluster control plane.
* The Kubernetes version you're upgrading to is 1.26 or later.
-* If you're using REST, the upgrade operation uses a preview API version of `2023-01-02-preview` or later.
-* If you're using the Azure CLI, you need the `aks-preview` CLI extension 0.5.154 or later.
* The last seen usage of deprecated APIs for the targeted version you're upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so any usage of deprecated APIs within one hour isn't guaranteed to appear in the detection.

## Mitigate stopped upgrade operations
You can also check past API usage by enabling [Container Insights][container-ins
### Bypass validation to ignore API changes

> [!NOTE]
-> This method requires you to use the `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing them as soon as possible after the upgrade completes.
+> This method requires you to use the Azure CLI version 2.53 or `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing them as soon as possible after the upgrade completes.
* Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command. Specify the `enable-force-upgrade` flag and set the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
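For example, a bypass with an explicit override window might look like the following; the cluster name, resource group, and end date are placeholders:

```azurecli
az aks update \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --enable-force-upgrade \
    --upgrade-override-until 2023-12-01T13:00:00Z
```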
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Examples:
Each number in the version indicates general compatibility with the previous version:
-* **Major versions** change when incompatible API updates or backwards compatibility may be broken.
+* **Major versions** change when incompatible API updates or backwards compatibility might be broken.
* **Minor versions** change when functionality updates are made that are backwards compatible to the other minor releases.
* **Patch versions** change when backwards-compatible bug fixes are made.
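The three-part scheme above can be sketched in a few lines of shell; the version string is just an example:

```shell
# Split a Kubernetes version string into its major, minor, and patch parts
version="1.27.3"
IFS=. read -r major minor patch <<< "$version"
echo "major=$major minor=$minor patch=$patch"   # prints: major=1 minor=27 patch=3
```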
For the past release history, see [Kubernetes history](https://github.com/kubern
| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 2, 2024 | Until 1.29 GA | | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA | | 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA |
-| 1.28 | Aug 2023 | Sep 2023 | Oct 2023 || Until 1.32 GA|
+| 1.28 | Aug 2023 | Sep 2023 | Nov 2023 | Nov 2024 | Until 1.32 GA|
| 1.29 | Dec 2023 | Jan 2024 | Feb 2024 | | Until 1.33 GA | *\* Indicates the version is designated for Long Term Support*
Note the following important changes to make before you upgrade to any of the av
|Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |
|--|--|--|--|--|--|
-| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload identity v1.0.0<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
-| 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload identity v1.0.0<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>|No Breaking Changes |None
-| 1.27 | Azure policy 1.1.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload identity v1.0.0<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
+| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
+| 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>|No breaking changes |None
+| 1.27 | Azure policy 1.1.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
+| 1.28 | Azure policy 1.2.1<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.2<br>Azure Workload identity v2.0.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>|No breaking changes|None
## Alias minor version
AKS defines a generally available (GA) version as a version available in all reg
* Two previous minor versions.
* Each supported minor version also supports a maximum of two stable patches.
-AKS may also support preview versions, which are explicitly labeled and subject to [preview terms and conditions][preview-terms].
+AKS might also support preview versions, which are explicitly labeled and subject to [preview terms and conditions][preview-terms].
AKS provides platform support only for one GA minor version of Kubernetes after the regular supported versions. The platform support window of Kubernetes versions on AKS is known as "N-3". For more information, see [platform support policy](#platform-support-policy). > [!NOTE]
-> AKS uses safe deployment practices which involve gradual region deployment. This means it may take up to 10 business days for a new release or a new version to be available in all regions.
+> AKS uses safe deployment practices which involve gradual region deployment. This means it might take up to 10 business days for a new release or a new version to be available in all regions.
The supported window of Kubernetes versions on AKS is known as "N-2": (N (Latest release) - 2 (minor versions)), and ".letter" is representative of patch versions.
For new **patch** versions of Kubernetes:
AKS reserves the right to add or remove new/existing versions with one or more critical production-impacting bugs or security issues without advance notice.
-Specific patch releases may be skipped or rollout accelerated, depending on the severity of the bug or security issue.
+Specific patch releases might be skipped or rollout accelerated, depending on the severity of the bug or security issue.
## Azure portal and CLI versions
When performing an upgrade from an _unsupported version_ that skips two or more
### Can I create a new 1.xx.x cluster during its 30 day support window?
-No. Once a version is deprecated/removed, you can't create a cluster with that version. As the change rolls out, you'll start to see the old version removed from your version list. This process may take up to two weeks from announcement, progressively by region.
+No. Once a version is deprecated/removed, you can't create a cluster with that version. As the change rolls out, you'll start to see the old version removed from your version list. This process might take up to two weeks from announcement, progressively by region.
### I'm on a freshly deprecated version, can I still add new node pools? Or will I have to upgrade?
-No. You aren't allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version, but it may require you to update the control plane first.
+No. You aren't allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version, but it might require you to update the control plane first.
### How often do you update patches?
api-management Api Management Api Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-templates.md
- Title: API templates in Azure API Management | Microsoft Docs
-description: Learn how to customize the content of the API pages in the developer portal in Azure API Management.
- Previously updated: 11/04/2019
-# API templates in Azure API Management
-
-Azure API Management lets you customize the content of developer portal pages through a set of templates. Using [DotLiquid](https://github.com/dotliquid) syntax, an editor of your choice such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you can configure the content of these pages as you see fit.
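DotLiquid itself runs server-side in the portal, but the core binding idea — substituting a data model into `{{ ... }}` placeholders — can be sketched in a few lines of Python (a simplified stand-in, not the actual DotLiquid engine):

```python
import re

def render(template: str, data: dict) -> str:
    """Tiny stand-in for Liquid-style {{ key }} substitution.

    Supports dotted paths such as {{api.name}}; real DotLiquid adds
    tags ({% for %}, {% if %}), filters, and localization on top.
    """
    def lookup(match: re.Match) -> str:
        value = data
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)

    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

api = {"api": {"id": "570275f1b16653124c8f9ba3", "name": "Basic Calculator"}}
print(render('<a href="/docs/services/{{api.id}}">{{api.name}}</a>', api))
# <a href="/docs/services/570275f1b16653124c8f9ba3">Basic Calculator</a>
```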
-
-The templates in this section allow you to customize the content of the API pages in the developer portal.
-
-- [API list](#APIList)
-- [Operation](#Product)
-- [Code samples](#CodeSamples)
- - [Curl](#Curl)
- - [C#](#CSharp)
- - [Java](#Stub)
- - [JavaScript](#JavaScript)
- - [Objective C](#ObjectiveC)
- - [PHP](#PHP)
- - [Python](#Python)
- - [Ruby](#Ruby)
-
-> [!NOTE]
-> Sample default templates are included in the following documentation, but are subject to change due to continuous improvements. You can view the live default templates in the developer portal by navigating to the desired individual templates. For more information about working with templates, see [How to customize the API Management developer portal using templates](./api-management-developer-portal-templates.md).
-
-## <a name="APIList"></a> API list
- The **API list** template allows you to customize the body of the API list page in the developer portal.
-
- ![Developer Portal API List](./media/api-management-api-templates/APIM-Developer-Portal-Templates-API-List.png "APIM Developer Portal Templates API List")
-
-### Default template
-
-```xml
-<search-control></search-control>
-<div class="row">
- <div class="col-md-9">
- <h2>{% localized "ApisStrings|PageTitleApis" %}</h2>
- </div>
-</div>
-<div class="row">
- <div class="col-md-12">
- {% if apis.size > 0 %}
- <ol class="list-unstyled">
- {% for api in apis %}
- <li>
- <h3>
- <a href="/docs/services/{{api.id}}">{{api.name}}</a>
- </h3>
- {{api.description}}
- </li>
- {% endfor %}
- </ol>
- <paging-control></paging-control>
- {% else %}
- {% localized "CommonResources|NoItemsToDisplay" %}
- {% endif %}
- </div>
-</div>
-```
-
-### Controls
- The `API list` template may use the following [page controls](api-management-page-controls.md).
-
-- [paging-control](api-management-page-controls.md#paging-control)
-
-- [search-control](api-management-page-controls.md#search-control)
-
-### Data model
-
-|Property|Type|Description|
-|--|-|--|
-|`apis`|Collection of [API summary](api-management-template-data-model-reference.md#APISummary) entities.|The APIs visible to the current user.|
-
-### Sample template data
-
-```json
-{
- "apis": [
- {
- "id": "570275f1b16653124c8f9ba3",
- "name": "Basic Calculator",
- "description": "Arithmetics is just a call away!"
- },
- {
- "id": "57026e30de15d80041040001",
- "name": "Echo API",
- "description": null
- }
- ],
- "pageTitle": "APIs"
-}
-```
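As a rough illustration of how that sample data drives the template's `{% for api in apis %}` / `{% else %}` branches, here is a hypothetical Python equivalent (the portal itself renders this with DotLiquid, and the fallback string below merely stands in for the localized resource):

```python
def render_api_list(apis: list[dict]) -> str:
    """Mirror the template's for-loop and its empty-list fallback."""
    if not apis:
        # Corresponds to {% localized "CommonResources|NoItemsToDisplay" %}
        return "There are no items to display."
    items = "".join(
        f'<li><h3><a href="/docs/services/{a["id"]}">{a["name"]}</a></h3>'
        f'{a["description"] or ""}</li>'
        for a in apis
    )
    return f'<ol class="list-unstyled">{items}</ol>'

apis = [
    {"id": "570275f1b16653124c8f9ba3", "name": "Basic Calculator",
     "description": "Arithmetics is just a call away!"},
    {"id": "57026e30de15d80041040001", "name": "Echo API", "description": None},
]
print(render_api_list(apis))
```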
-
-## <a name="Product"></a> Operation
- The **Operation** template allows you to customize the body of the operation page in the developer portal.
-
- ![Developer Portal Operation page](./media/api-management-api-templates/APIM-Developer-Portal-templates-Operation-page.png "APIM Developer Portal templates Operation page")
-
-### Default template
-
-```xml
-<h2>{{api.name}}</h2>
-<p>{{api.description }}</p>
-
-<div class="panel">
- <h3>{{operation.name}}</h3>
- <p>{{operation.description }}</p>
- <a class="btn btn-primary" href="{{consoleUrl}}" id="btnOpenConsole" role="button">
- Try it
- </a>
-</div>
-
-<h4>{% localized "Documentation|SectionHeadingRequestUrl" %}</h4>
-<div class="panel">
- <div class="panel-body">
- <label>{{ sampleUrl | escape }}</label>
- </div>
-</div>
-
-{% if operation.request %}
- {% if operation.request.parameters.size > 0 %}
- <h4>{% localized "Documentation|SectionHeadingRequestParameters" %}</h4>
-
- <div class="panel">
- {% for parameter in operation.request.parameters %}
- <div class="row panel-body">
- <div class="col-md-3">
- <label>{{parameter.name}}</label>
- {% unless parameter.required %}
- <span class="text-muted">({% localized "Documentation|FormLabelSubtextOptional" %})</span>
- {% endunless %}
- </div>
- <div class="col-md-1">
- {{parameter.typeName}}
- </div>
- <div class="col-md-8">
- {{parameter.description}}
- </div>
- </div>
- {% endfor %}
- </div>
- {% endif %}
-
- {% if operation.request.headers.size > 0 %}
- <h4>{% localized "Documentation|SectionHeadingRequestHeaders" %}</h4>
- <div class="panel">
- {% for header in operation.request.headers %}
- <div class="row panel-body">
- <div class="col-md-3">
- <label>{{header.name}}</label>
- {%unless header.required %}
- <span class="text-muted">({% localized "Documentation|FormLabelSubtextOptional" %})</span>
- {% endunless %}
- </div>
- <div class="col-md-1">
- {{header.typeName}}
- </div>
- <div class="col-md-8">
- {{header.description}}
- </div>
- </div>
- {% endfor %}
- </div>
- {% endif %}
-
- {% if operation.request.description or operation.request.representations.size > 0 %}
- <h4>{% localized "Documentation|SectionHeadingRequestBody" %}</h4>
- <div class="panel">
- {% if operation.request.description %}
- <p>{{operation.request.description }}</p>
- {% endif %}
-
- {% if operation.request.representations.size > 0 %}
- <div role="tabpanel">
- <ul class="nav nav-tabs" role="tablist">
- {% for representation in operation.request.representations %}
- <li role="presentation" {% if forloop.first %}class="active"{% endif %}>
- <a href="#requesttab{{forloop.index}}" role="tab" data-toggle="tab">
- {{representation.contentType}}
- </a>
- </li>
- {% endfor %}
- </ul>
- <div class="tab-content tab-content-boxed">
- {% for representation in operation.request.representations %}
- <div id="requesttab{{forloop.index}}" role="tabpanel" class="tab-pane snippet{% if forloop.first %} active{% endif %}">
-
- {% if representation.sample or representation.schema %}
- <div role="tabpanel">
- {% if representation.sample and representation.schema %}
- <ul class="nav nav-tabs-borderless" role="tablist">
- <li role="presentation" class="active">
- <a href="#requesttab{{forloop.index}}sample" role="tab" data-toggle="tab">Sample</a>
- </li>
- <li role="presentation">
- <a href="#requesttab{{forloop.index}}schema" role="tab" data-toggle="tab">Schema</a>
- </li>
- </ul>
- {% endif %}
-
- <div class="tab-content">
- {% if representation.sample %}
- <div id="requesttab{{forloop.index}}sample" role="tabpanel" class="tab-pane snippet active">
- <pre><code class="{{representation.Brush}}">{{ representation.sample | escape }}</code></pre>
- </div>
- {% endif %}
-
- {% if representation.schema %}
- <div id="requesttab{{forloop.index}}schema" role="tabpanel" class="tab-pane snippet">
- <pre><code class="{{representation.Brush}}">{{ representation.schema | escape }}</code></pre>
- </div>
- {% endif %}
- </div>
- </div>
- {% endif %}
- </div>
- {% endfor %}
- </div>
- </div>
- {% endif %}
-
- <div class="clearfix"></div>
- </div>
- {% endif %}
-{% endif %}
-
-{% if operation.responses.size > 0 %}
- {% for response in operation.responses %}
- {% if response.description or response.representations.size > 0 %}
- <h4>{% localized "Documentation|SectionHeadingResponse" %} {{response.statusCode}}</h4>
-
- <div class="panel">
- {% if response.description %}
- <p>{{ response.description }}</p>
- {% endif %}
-
- {% if response.representations.size > 0 %}
- <div role="tabpanel">
- <ul class="nav nav-tabs" role="tablist">
- {% for representation in response.representations %}
- <li role="presentation" {% if forloop.first %}class="active"{% endif %}>
- <a href="#response{{response.statusCode}}tab{{forloop.index}}" role="tab" data-toggle="tab">
- {{representation.contentType}}
- </a>
- </li>
- {% endfor %}
- </ul>
- <div class="tab-content tab-content-boxed">
- {% for representation in response.representations %}
- <div id="response{{response.statusCode}}tab{{forloop.index}}" role="tabpanel" class="tab-pane snippet{% if forloop.first %} active{% endif %}">
-
- {% if representation.sample or representation.schema %}
- <div role="tabpanel">
-
- {% if representation.sample and representation.schema %}
- <ul class="nav nav-tabs-borderless" role="tablist">
- <li role="presentation" class="active">
- <a href="#response{{response.statusCode}}tab{{forloop.index}}sample" role="tab" data-toggle="tab">
- Sample
- </a>
- </li>
- <li role="presentation">
- <a href="#response{{response.statusCode}}tab{{forloop.index}}schema" role="tab" data-toggle="tab">
- Schema
- </a>
- </li>
- </ul>
- {% endif %}
-
- <div class="tab-content">
- {% if representation.sample %}
- <div id="response{{response.statusCode}}tab{{forloop.index}}sample" role="tabpanel" class="tab-pane snippet active">
- <pre><code class="{{representation.Brush}}">{{ representation.sample | escape }}</code></pre>
- </div>
- {% endif %}
-
- {% if representation.schema %}
- <div id="response{{response.statusCode}}tab{{forloop.index}}schema" role="tabpanel" class="tab-pane snippet">
- <pre><code class="{{representation.Brush}}">{{ representation.schema | escape }}</code></pre>
- </div>
- {% endif %}
- </div>
- </div>
- {% endif %}
- </div>
- {% endfor %}
- </div>
- </div>
- {% endif %}
-
- <div class="clearfix"></div>
- </div>
-
- {% endif %}
- {% endfor %}
-{% endif %}
-
-<h4>{% localized "Documentation|SectionHeadingCodeSamples" %}</h4>
-<div role="tabpanel">
- <ul class="nav nav-tabs" role="tablist">
- {% for sample in samples %}
- <li role="presentation" {% if forloop.first %}class="active"{% endif %}>
- <a href="#{{sample.brush}}" aria-controls="{{sample.brush}}" role="tab" data-toggle="tab">
- {{sample.title}}
- </a>
- </li>
- {% endfor %}
- </ul>
- <div class="tab-content tab-content-boxed" title="{% localized "Documentation|TooltipTextDoubleClickToSelectAll" %}">
- {% for sample in samples %}
- <div role="tabpanel" class="tab-pane tab-content-boxed {% if forloop.first %} active{% endif %} snippet snippet-resizable" id="{{sample.brush}}" >
- <pre><code class="{{sample.brush}}">{% partial sample.template for sample in samples %}</code></pre>
- </div>
- {% endfor %}
- </div>
- <div class="clearfix"></div>
-</div>
-```
-
-### Controls
- The `Operation` template does not allow the use of any [page controls](api-management-page-controls.md).
-
-### Data model
-
-|Property|Type|Description|
-|--|-|--|
-|`apiId`|string|The ID of the current API.|
-|`apiName`|string|The name of the API.|
-|`apiDescription`|string|A description of the API.|
-|`api`|[API summary](api-management-template-data-model-reference.md#APISummary) entity.|The current API.|
-|`operation`|[Operation](api-management-template-data-model-reference.md#Operation)|The currently displayed operation.|
-|`sampleUrl`|string|The URL for the current operation.|
-|`operationMenu`|[Operation menu](api-management-template-data-model-reference.md#Menu)|A menu of operations for this API.|
-|`consoleUrl`|URI|The URI for the **Try it** button.|
-|`samples`|Collection of [Code sample](api-management-template-data-model-reference.md#Sample) entities.|The code samples for the current operation.|
-
-### Sample template data
-
-```json
-{
- "apiId": "570275f1b16653124c8f9ba3",
- "apiName": "Basic Calculator",
- "apiDescription": "Arithmetics is just a call away!",
- "api": {
- "id": "570275f1b16653124c8f9ba3",
- "name": "Basic Calculator",
- "description": "Arithmetics is just a call away!"
- },
- "operation": {
- "id": "570275f2b1665305c811cf49",
- "name": "Add two integers",
- "description": "Produces a sum of two numbers.",
- "scheme": "https",
- "uriTemplate": "calc/add?a={a}&b={b}",
- "host": "sdcontoso5.azure-api.net",
- "httpMethod": "GET",
- "request": {
- "description": null,
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": [
- {
- "name": "a",
- "description": "First operand. Default value is <code>51</code>.",
- "value": "51",
- "options": [
- "51"
- ],
- "required": true,
- "kind": 1,
- "typeName": null
- },
- {
- "name": "b",
- "description": "Second operand. Default value is <code>49</code>.",
- "value": "49",
- "options": [
- "49"
- ],
- "required": true,
- "kind": 1,
- "typeName": null
- }
- ],
- "representations": []
- },
- "responses": []
- },
- "sampleUrl": "https://sdcontoso5.azure-api.net/calc/add?a={a}&b={b}",
- "operationMenu": {
- "ApiId": "570275f1b16653124c8f9ba3",
- "CurrentOperationId": "570275f2b1665305c811cf49",
- "Action": "Operation",
- "MenuItems": [
- {
- "Id": "570275f2b1665305c811cf49",
- "Title": "Add two integers",
- "HttpMethod": "GET"
- },
- {
- "Id": "570275f2b1665305c811cf4c",
- "Title": "Divide two integers",
- "HttpMethod": "GET"
- },
- {
- "Id": "570275f2b1665305c811cf4b",
- "Title": "Multiply two integers",
- "HttpMethod": "GET"
- },
- {
- "Id": "570275f2b1665305c811cf4a",
- "Title": "Subtract two integers",
- "HttpMethod": "GET"
- }
- ]
- },
- "consoleUrl": "/docs/services/570275f1b16653124c8f9ba3/operations/570275f2b1665305c811cf49/console",
- "samples": [
- {
- "title": "Curl",
- "snippet": null,
- "brush": "plain",
- "template": "DocumentationSamplesCurl",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
- },
- {
- "title": "C#",
- "snippet": null,
- "brush": "csharp",
- "template": "DocumentationSamplesCsharp",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
- },
- {
- "title": "Java",
- "snippet": null,
- "brush": "java",
- "template": "DocumentationSamplesJava",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
- },
- {
- "title": "JavaScript",
- "snippet": null,
- "brush": "xml",
- "template": "DocumentationSamplesJs",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
- },
- {
- "title": "ObjC",
- "snippet": null,
- "brush": "objc",
- "template": "DocumentationSamplesObjc",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
- },
- {
- "title": "PHP",
- "snippet": null,
- "brush": "php",
- "template": "DocumentationSamplesPhp",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
- },
- {
- "title": "Python",
- "snippet": null,
- "brush": "python",
- "template": "DocumentationSamplesPython",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
- },
- {
- "title": "Ruby",
- "snippet": null,
- "brush": "ruby",
- "template": "DocumentationSamplesRuby",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
- }
- ]
-}
-```
-
-## <a name="CodeSamples"></a> Code samples
- The following templates allow you to customize the body of the individual code samples on the operation page.
-
- ![Developer Portal Templates Code samples](./media/api-management-api-templates/APIM-Developer-Portal-Templates-Code-samples.png "APIM Developer Portal Templates Code samples")
-
-- [Curl](#Curl)
-
-- [C#](#CSharp)
-
-- [Java](#Stub)
-
-- [JavaScript](#JavaScript)
-
-- [Objective C](#ObjectiveC)
-
-- [PHP](#PHP)
-
-- [Python](#Python)
-
-- [Ruby](#Ruby)
-
-### <a name="Curl"></a> Curl
- The **DocumentationSamplesCurl** template allows you to customize that code sample in the code samples section of the operation page.
-
-#### Default template
-
-```xml
-@ECHO OFF
-
-curl -v -X {{method}} "{{scheme}}://{{host}}{{path}}{{query | escape }}"
-{% for header in headers -%}
--H "{{ header.name }}: {{ header.value }}"
-{% endfor -%}
-{% if body -%}
---data-ascii "{{ body | replace:'"','^"' }}"
-{% endif -%}
-
-```
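The template is plain string substitution over the code-sample data model; the expansion can be sketched in Python as follows (field names follow the sample template data shown below; this is an illustration, not the portal's renderer):

```python
def curl_snippet(sample: dict) -> str:
    """Expand the curl template's fields from a code-sample data model."""
    lines = [
        f'curl -v -X {sample["method"]} '
        f'"{sample["scheme"]}://{sample["host"]}{sample["path"]}{sample["query"]}"'
    ]
    for header in sample["headers"]:
        lines.append(f'-H "{header["name"]}: {header["value"]}"')
    if sample.get("body"):
        # The template escapes embedded quotes as ^" for Windows batch files.
        lines.append('--data-ascii "{}"'.format(sample["body"].replace('"', '^"')))
    return "\n".join(lines)

sample = {
    "method": "GET", "scheme": "https", "host": "sdcontoso5.azure-api.net",
    "path": "/calc/add?a={a}&b={b}", "query": "",
    "headers": [{"name": "Ocp-Apim-Subscription-Key",
                 "value": "{subscription key}"}],
    "body": None,
}
print(curl_snippet(sample))
```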
-
-#### Controls
- The code sample templates do not allow the use of any [page controls](api-management-page-controls.md).
-
-#### Data model
- [Code sample](api-management-template-data-model-reference.md#Sample) entity.
-
-#### Sample template data
-
-```json
-{
- "title": "Curl",
- "snippet": null,
- "brush": "plain",
- "template": "DocumentationSamplesCurl",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
-}
-```
-
-### <a name="CSharp"></a> C#
- The **DocumentationSamplesCsharp** template allows you to customize that code sample in the code samples section of the operation page.
-
-#### Default template
-
-```csharp
-using System;
-using System.Net.Http.Headers;
-using System.Text;
-using System.Net.Http;
-using System.Web;
-
-namespace CSHttpClientSample
-{
- static class Program
- {
- static void Main()
- {
- MakeRequest();
- Console.WriteLine("Hit ENTER to exit...");
- Console.ReadLine();
- }
-
- static async void MakeRequest()
- {
- var client = new HttpClient();
- var queryString = HttpUtility.ParseQueryString(string.Empty);
-
-{% if headers.size > 0 -%}
- // Request headers
-{% for header in headers -%}
-{% case header.Name -%}
-{% when "Accept"%}
- client.DefaultRequestHeaders.Accept.Add(MediaTypeWithQualityHeaderValue.Parse("{{header.value}}"));
-{% when "Accept-Charset" -%}
- client.DefaultRequestHeaders.AcceptCharset.Add(StringWithQualityHeaderValue.Parse("{{header.value}}"));
-{% when "Accept-Encoding" -%}
- client.DefaultRequestHeaders.AcceptEncoding.Add(StringWithQualityHeaderValue.Parse("{{header.value}}"));
-{% when "Accept-Language" -%}
- client.DefaultRequestHeaders.AcceptLanguage.Add(StringWithQualityHeaderValue.Parse("{{header.value}}"));
-{% when "Cache-Control" -%}
- client.DefaultRequestHeaders.CacheControl = CacheControlHeaderValue.Parse("{{header.value}}");
-{% when "Connection" -%}
- client.DefaultRequestHeaders.Connection.Add("{{header.value}}");
-{% when "Date" -%}
- client.DefaultRequestHeaders.Date = DateTimeOffset.Parse("{{header.value}}");
-{% when "Expect" -%}
- client.DefaultRequestHeaders.Expect.Add(NameValueWithParametersHeaderValue.Parse("{{header.value}}"));
-{% when "If-Match" -%}
- client.DefaultRequestHeaders.IfMatch.Add(EntityTagHeaderValue.Parse("{{header.value}}"));
-{% when "If-Modified-Since" -%}
- client.DefaultRequestHeaders.IfModifiedSince = DateTimeOffset.Parse("{{header.value}}");
-{% when "If-None-Match" -%}
- client.DefaultRequestHeaders.IfNoneMatch.Add(EntityTagHeaderValue.Parse("{{header.value}}"));
-{% when "If-Range" -%}
- client.DefaultRequestHeaders.IfRange = RangeConditionHeaderValue.Parse("{{header.value}}");
-{% when "If-Unmodified-Since" -%}
- client.DefaultRequestHeaders.IfUnmodifiedSince = DateTimeOffset.Parse("{{header.value}}");
-{% when "Max-Forwards" -%}
- client.DefaultRequestHeaders.MaxForwards = int.Parse("{{header.value}}");
-{% when "Pragma" -%}
- client.DefaultRequestHeaders.Pragma.Add(NameValueHeaderValue.Parse("{{header.value}}"));
-{% when "Range" -%}
- client.DefaultRequestHeaders.Range = RangeHeaderValue.Parse("{{header.value}}");
-{% when "Referer" -%}
- client.DefaultRequestHeaders.Referrer = new Uri("{{header.value}}");
-{% when "TE" -%}
- client.DefaultRequestHeaders.TE.Add(TransferCodingWithQualityHeaderValue.Parse("{{header.value}}"));
-{% when "Transfer-Encoding" -%}
- client.DefaultRequestHeaders.TransferEncoding.Add(TransferCodingHeaderValue.Parse("{{header.value}}"));
-{% when "Upgrade" -%}
- client.DefaultRequestHeaders.Upgrade.Add(ProductHeaderValue.Parse("{{header.value}}"));
-{% when "User-Agent" -%}
- client.DefaultRequestHeaders.UserAgent.Add(ProductInfoHeaderValue.Parse("{{header.value}}"));
-{% when "Via" -%}
- client.DefaultRequestHeaders.Via.Add(ViaHeaderValue.Parse("{{header.value}}"));
-{% when "Warning" -%}
- client.DefaultRequestHeaders.Warning.Add(WarningHeaderValue.Parse("{{header.value}}"));
-{% when "Content-Type" -%}
-{% else -%}
- client.DefaultRequestHeaders.Add("{{header.Name}}", "{{header.value}}");
-{% endcase -%}
-{% endfor -%}
-{% endif -%}
-
-{% if parameters.size > 0 -%}
- // Request parameters
-{% for parameter in parameters -%}
- queryString["{{parameter.Name}}"] = "{{parameter.Value}}";
-{% endfor -%}
-{% endif -%}
- var uri = "{{scheme}}://{{host}}{{path}}{% if path contains '?' %}&{% else %}?{% endif %}" + queryString;
-
-{% case method -%}
-
-{% when "POST" -%}
- HttpResponseMessage response;
-
- // Request body
- byte[] byteData = Encoding.UTF8.GetBytes("{{ body | replace:'"','\"'}}");
-
- using (var content = new ByteArrayContent(byteData))
- {
-{% if body -%}
- content.Headers.ContentType = new MediaTypeHeaderValue("< your content type, i.e. application/json >");
-{% endif -%}
- response = await client.PostAsync(uri, content);
- }
-
-{% when "GET" -%}
- var response = await client.GetAsync(uri);
-{% when "DELETE" -%}
- var response = await client.DeleteAsync(uri);
-{% when "PUT" -%}
- HttpResponseMessage response;
-
- // Request body
- byte[] byteData = Encoding.UTF8.GetBytes("{{ body | replace:'"','\"'}}");
-
- using (var content = new ByteArrayContent(byteData))
- {
-{% if body -%}
- content.Headers.ContentType = new MediaTypeHeaderValue("< your content type, i.e. application/json >");
-{% endif -%}
- response = await client.PutAsync(uri, content);
- }
-{% when "HEAD" -%}
- var response = await client.SendAsync(new HttpRequestMessage(HttpMethod.Head, uri));
-{% when "OPTIONS" -%}
- var response = await client.SendAsync(new HttpRequestMessage(HttpMethod.Options, uri));
-{% when "TRACE" -%}
- var response = await client.SendAsync(new HttpRequestMessage(HttpMethod.Trace, uri));
-
- if (response.Content != null)
- {
- var responseString = await response.Content.ReadAsStringAsync();
- Console.WriteLine(responseString);
- }
-{% endcase -%}
- }
- }
-}
-```
-
-#### Controls
- The code sample templates do not allow the use of any [page controls](api-management-page-controls.md).
-
-#### Data model
- [Code sample](api-management-template-data-model-reference.md#Sample) entity.
-
-#### Sample template data
-
-```json
-{
- "title": "C#",
- "snippet": null,
- "brush": "csharp",
- "template": "DocumentationSamplesCsharp",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
-}
-```
-
-### <a name="Stub"></a> Java
- The **DocumentationSamplesJava** template allows you to customize that code sample in the code samples section of the operation page.
-
-#### Default template
-
-```java
-// This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
-import java.net.URI;
-import org.apache.http.HttpEntity;
-import org.apache.http.HttpResponse;
-import org.apache.http.client.HttpClient;
-import org.apache.http.client.methods.HttpGet;
-import org.apache.http.client.utils.URIBuilder;
-import org.apache.http.impl.client.HttpClients;
-import org.apache.http.util.EntityUtils;
-
-public class JavaSample
-{
- public static void main(String[] args)
- {
- HttpClient httpclient = HttpClients.createDefault();
-
- try
- {
- URIBuilder builder = new URIBuilder("{{scheme}}://{{host}}{{path}}");
-
-{% if parameters.size > 0 -%}
-{% for parameter in parameters -%}
- builder.setParameter("{{parameter.name}}", "{{parameter.value}}");
-{% endfor -%}
-{% endif -%}
-
- URI uri = builder.build();
- Http{{ method | downcase | capitalize }} request = new Http{{ method | downcase | capitalize }}(uri);
-{% for header in headers -%}
- request.setHeader("{{header.Name}}", "{{header.value}}");
-{% endfor %}
-
-{% if body -%}
- // Request body
- StringEntity reqEntity = new StringEntity("{{ body | replace:'"','\"' }}");
- request.setEntity(reqEntity);
-{% endif -%}
-
- HttpResponse response = httpclient.execute(request);
- HttpEntity entity = response.getEntity();
-
- if (entity != null)
- {
- System.out.println(EntityUtils.toString(entity));
- }
- }
- catch (Exception e)
- {
- System.out.println(e.getMessage());
- }
- }
-}
-
-```
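One detail worth calling out: the template builds the Apache client request class name from the HTTP verb via the `{{ method | downcase | capitalize }}` filter chain. A quick Python illustration of that chain:

```python
def http_request_class(method: str) -> str:
    """Mimic {{ method | downcase | capitalize }}: "GET" -> "Get",
    which the template prefixes with "Http" to get HttpGet, etc."""
    return "Http" + method.lower().capitalize()

print(http_request_class("GET"))     # HttpGet
print(http_request_class("DELETE"))  # HttpDelete
```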
-
-#### Controls
- The code sample templates do not allow the use of any [page controls](api-management-page-controls.md).
-
-#### Data model
- [Code sample](api-management-template-data-model-reference.md#Sample) entity.
-
-#### Sample template data
-
-```json
-{
- "title": "Java",
- "snippet": null,
- "brush": "java",
- "template": "DocumentationSamplesJava",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
-}
-```
-
-### <a name="JavaScript"></a> JavaScript
- The **DocumentationSamplesJs** template allows you to customize that code sample in the code samples section of the operation page.
-
-#### Default template
-
-```html
-<!DOCTYPE html>
-<html>
-<head>
- <title>JSSample</title>
- <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
-</head>
-<body>
-
-<script type="text/javascript">
- $(function() {
- var params = {
-{% if parameters.size > 0 -%}
- // Request parameters
-{% for parameter in parameters -%}
- "{{parameter.name}}": "{{parameter.value}}",
-{% endfor -%}
-{% endif -%}
- };
-
- $.ajax({
- url: "{{scheme}}://{{host}}{{path}}{% if path contains '?' %}&{% else %}?{% endif %}" + $.param(params),
-{% if headers.size > 0 -%}
- beforeSend: function(xhrObj){
- // Request headers
-{% for header in headers -%}
- xhrObj.setRequestHeader("{{header.name}}","{{header.value}}");
-{% endfor -%}
- },
-{% endif -%}
- type: "{{method}}",
-{% if body -%}
- // Request body
- data: "{{ body | replace:'"','\"' }}",
-{% endif -%}
- })
- .done(function(data) {
- alert("success");
- })
- .fail(function() {
- alert("error");
- });
- });
-</script>
-</body>
-</html>
-
-```
-
-#### Controls
- The code sample templates do not allow the use of any [page controls](api-management-page-controls.md).
-
-#### Data model
- [Code sample](api-management-template-data-model-reference.md#Sample) entity.
-
-#### Sample template data
-
-```json
-{
- "title": "JavaScript",
- "snippet": null,
- "brush": "xml",
- "template": "DocumentationSamplesJs",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
-}
-```
-
-### <a name="ObjectiveC"></a> Objective C
- The **DocumentationSamplesObjc** template allows you to customize that code sample in the code samples section of the operation page.
-
-#### Default template
-
-```objective-c
-#import <Foundation/Foundation.h>
-
-int main(int argc, const char * argv[])
-{
- NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
-
- NSString* path = @"{{scheme}}://{{host}}{{path}}";
- NSArray* array = @[
- // Request parameters
- @"entities=true",
-{% if parameters.size > 0 -%}
-{% for parameter in parameters -%}
- @"{{parameter.name}}={{parameter.value}}",
-{% endfor -%}
-{% endif -%}
- ];
-
- NSString* string = [array componentsJoinedByString:@"&"];
- path = [path stringByAppendingFormat:@"?%@", string];
-
- NSLog(@"%@", path);
-
- NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]];
- [_request setHTTPMethod:@"{{method}}"];
-{% if headers.size > 0 -%}
- // Request headers
-{% for header in headers -%}
- [_request setValue:@"{{header.value}}" forHTTPHeaderField:@"{{header.name}}"];
-{% endfor -%}
-{% endif -%}
-{% if body -%}
- // Request body
- [_request setHTTPBody:[@"{{ body | replace:'"','\"' }}" dataUsingEncoding:NSUTF8StringEncoding]];
-{% endif -%}
-
- NSURLResponse *response = nil;
- NSError *error = nil;
- NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error];
-
- if (nil != error)
- {
- NSLog(@"Error: %@", error);
- }
- else
- {
- NSError* error = nil;
- NSMutableDictionary* json = nil;
- NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding];
- NSLog(@"%@", dataString);
-
- if (nil != _connectionData)
- {
- json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error];
- }
-
- if (error || !json)
- {
- NSLog(@"Could not parse loaded json with error:%@", error);
- }
-
- NSLog(@"%@", json);
- _connectionData = nil;
- }
-
- [pool drain];
-
- return 0;
-}
-
-```
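The request-body line in the template above uses the Liquid `replace` filter, `{{ body | replace:'"','\"' }}`, to escape double quotes so the body can be embedded in a string literal. A minimal Python sketch of the same transformation (the JSON body shown is a hypothetical value, not taken from the template data):

```python
# Escape double quotes in a request body before embedding it in a
# string literal, mirroring the Liquid filter: replace:'"','\"'
body = '{"a": 1, "b": 2}'           # hypothetical request body
escaped = body.replace('"', '\\"')  # each " becomes \"

print(escaped)  # {\"a\": 1, \"b\": 2}
```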
-
-#### Controls
- The code sample templates do not allow the use of any [page controls](api-management-page-controls.md).
-
-#### Data model
- [Code sample](api-management-template-data-model-reference.md#Sample) entity.
-
-#### Sample template data
-
-```json
-{
- "title": "ObjC",
- "snippet": null,
- "brush": "objc",
- "template": "DocumentationSamplesObjc",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
-}
-```
-
-### <a name="PHP"></a> PHP
- The **DocumentationSamplesPhp** template allows you to customize the PHP code sample in the code samples section of the operation page.
-
-#### Default template
-
-```php
-<?php
-// This sample uses the HTTP_Request2 PHP library (https://github.com/pear/HTTP_Request2)
-require_once 'HTTP/Request2.php';
-
-$request = new Http_Request2('{{scheme}}://{{host}}{{path}}');
-$url = $request->getUrl();
-
-{% if headers.size > 0 -%}
-$headers = array(
- // Request headers
-{% for header in headers -%}
- '{{header.name}}' => '{{header.value}}',
-{% endfor -%}
-);
-
-$request->setHeader($headers);
-{% endif -%}
-
-{% if parameters.size > 0 -%}
-$parameters = array(
- // Request parameters
-{% for parameter in parameters -%}
- '{{parameter.name}}' => '{{parameter.value}}',
-{% endfor -%}
-);
-
-$url->setQueryVariables($parameters);
-{% endif -%}
-
-$request->setMethod(HTTP_Request2::METHOD_{{method}});
-
-{% if body -%}
-// Request body
-$request->setBody("{{ body | replace:'"','\"' }}");
-{% endif -%}
-
-try
-{
- $response = $request->send();
- echo $response->getBody();
-}
-catch (HttpException $ex)
-{
- echo $ex;
-}
-
-?>
-```
-
-#### Controls
- The code sample templates do not allow the use of any [page controls](api-management-page-controls.md).
-
-#### Data model
- [Code sample](api-management-template-data-model-reference.md#Sample) entity.
-
-#### Sample template data
-
-```json
-{
- "title": "PHP",
- "snippet": null,
- "brush": "php",
- "template": "DocumentationSamplesPhp",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
-}
-```
-
-### <a name="Python"></a> Python
- The **DocumentationSamplesPython** template allows you to customize the Python code sample in the code samples section of the operation page.
-
-#### Default template
-
-```python
-########### Python 2.7 #############
-import httplib, urllib, base64
-
-headers = {
-{% if headers.size > 0 -%}
- # Request headers
-{% for header in headers -%}
- '{{header.name}}': '{{header.value}}',
-{% endfor -%}
-{% endif -%}
-}
-
-params = urllib.urlencode({
-{% if parameters.size > 0 -%}
- # Request parameters
-{% for parameter in parameters -%}
- '{{parameter.name}}': '{{parameter.value}}',
-{% endfor -%}
-{% endif -%}
-})
-
-try:
-{% case scheme -%}
-{% when "http" -%}
- conn = httplib.HTTPConnection('{{host}}')
-{% when "https" -%}
- conn = httplib.HTTPSConnection('{{host}}')
-{% endcase -%}
- conn.request("{{method}}", "{{path}}{% if path contains '?' %}&{% else %}?{% endif %}%s" % params{% if body %}, "{{ body | replace:'"','\"' }}"{% endif %}, headers)
- response = conn.getresponse()
- data = response.read()
- print(data)
- conn.close()
-except Exception as e:
- print("[Errno {0}] {1}".format(e.errno, e.strerror))
-
-####################################
-
-########### Python 3.2 #############
-import http.client, urllib.request, urllib.parse, urllib.error, base64
-
-headers = {
-{% if headers.size > 0 -%}
- # Request headers
-{% for header in headers -%}
- '{{header.name}}': '{{header.value}}',
-{% endfor -%}
-{% endif -%}
-}
-
-params = urllib.parse.urlencode({
-{% if parameters.size > 0 -%}
- # Request parameters
-{% for parameter in parameters -%}
- '{{parameter.name}}': '{{parameter.value}}',
-{% endfor -%}
-{% endif -%}
-})
-
-try:
-{% case scheme -%}
-{% when "http" -%}
- conn = http.client.HTTPConnection('{{host}}')
-{% when "https" -%}
- conn = http.client.HTTPSConnection('{{host}}')
-{% endcase -%}
- conn.request("{{method}}", "{{path}}{% if path contains '?' %}&{% else %}?{% endif %}%s" % params{% if body %}, "{{ body | replace:'"','\"' }}"{% endif %}, headers)
- response = conn.getresponse()
- data = response.read()
- print(data)
- conn.close()
-except Exception as e:
- print("[Errno {0}] {1}".format(e.errno, e.strerror))
-
-####################################
-```
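The `conn.request` lines choose the query-string separator with a Liquid conditional: if the path already contains `?`, parameters are appended with `&`; otherwise a `?` is added first. The same decision, sketched in plain Python (the sample path matches the template data that follows):

```python
# Choose the separator used when appending encoded parameters,
# mirroring: {% if path contains '?' %}&{% else %}?{% endif %}
def query_separator(path):
    return "&" if "?" in path else "?"

print(query_separator("/calc/add?a={a}&b={b}"))  # &
print(query_separator("/calc/add"))              # ?
```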
-
-#### Controls
- The code sample templates do not allow the use of any [page controls](api-management-page-controls.md).
-
-#### Data model
- [Code sample](api-management-template-data-model-reference.md#Sample) entity.
-
-#### Sample template data
-
-```json
-{
- "title": "Python",
- "snippet": null,
- "brush": "python",
- "template": "DocumentationSamplesPython",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
-}
-```
-
-### <a name="Ruby"></a> Ruby
- The **DocumentationSamplesRuby** template allows you to customize the Ruby code sample in the code samples section of the operation page.
-
-#### Default template
-
-```ruby
-require 'net/http'
-
-uri = URI('{{scheme}}://{{host}}{{path}}')
-uri.query = URI.encode_www_form({
-{% if parameters.size > 0 -%}
- # Request parameters
-{% for parameter in parameters -%}
- '{{parameter.name}}' => '{{parameter.value}}'{% unless forloop.last %},{% endunless %}
-{% endfor -%}
-{% endif -%}
-})
-
-request = Net::HTTP::{{ method | downcase | capitalize }}.new(uri.request_uri)
-{% for header in headers -%}
-# Request headers
-request['{{header.name}}'] = '{{header.value}}'
-{% endfor -%}
-{% if body -%}
-# Request body
-request.body = "{{ body | replace:'"','\"' }}"
-{% endif -%}
-
-response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
- http.request(request)
-end
-
-puts response.body
-
-```
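The `{{ method | downcase | capitalize }}` filter chain in the template maps an HTTP verb onto Ruby's `Net::HTTP` request-class naming convention, so `GET` becomes `Net::HTTP::Get`. A small Python sketch of that transformation:

```python
# The Liquid filter chain `downcase | capitalize` converts an HTTP
# verb into the Net::HTTP request-class suffix, e.g. GET -> Get.
def to_net_http_class(method):
    return method.lower().capitalize()

print(to_net_http_class("GET"))     # Get
print(to_net_http_class("DELETE"))  # Delete
```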
-
-#### Controls
- The code sample templates do not allow the use of any [page controls](api-management-page-controls.md).
-
-#### Data model
- [Code sample](api-management-template-data-model-reference.md#Sample) entity.
-
-#### Sample template data
-
-```json
-{
- "title": "Ruby",
- "snippet": null,
- "brush": "ruby",
- "template": "DocumentationSamplesRuby",
- "body": "{body}",
- "method": "GET",
- "scheme": "https",
- "path": "/calc/add?a={a}&b={b}",
- "query": "",
- "host": "sdcontoso5.azure-api.net",
- "headers": [
- {
- "name": "Ocp-Apim-Subscription-Key",
- "description": "Subscription key which provides access to this API. Found in your <a href='/developer'>Profile</a>.",
- "value": "{subscription key}",
- "typeName": "string",
- "options": null,
- "required": true,
- "readonly": false
- }
- ],
- "parameters": []
-}
-```
-
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management Application Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-application-templates.md
- Title: Application templates in Azure API Management | Microsoft Docs
-description: Learn how to customize the content of the Application pages in the developer portal in Azure API Management.
-Previously updated: 11/04/2019
-# Application templates in Azure API Management
-Azure API Management lets you customize the content of developer portal pages using a set of templates. Using DotLiquid syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), together with a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit.
-
- The templates in this section allow you to customize the content of the Application pages in the developer portal.
-
-- [Application list](#ProductList)
-
-- [Application](#Application)
-
-> [!NOTE]
-> Sample default templates are included in the following documentation, but are subject to change due to continuous improvements. You can view the live default templates in the developer portal by navigating to the desired individual templates. For more information about working with templates, see [How to customize the API Management developer portal using templates](./api-management-developer-portal-templates.md).
--
-
-## <a name="ProductList"></a> Application list
- The **Application list** template allows you to customize the body of the application list page in the developer portal.
-
- ![Application List Page Developer Portal Templates](./media/api-management-application-templates/APIM-Application-List-Page-Developer-Portal-Templates.png "APIM Application List Page Developer Portal Templates")
-
-### Default template
-
-```xml
-<div class="row">
- <div class="col-md-9">
- <h2>{% localized "AppStrings|WebApplicationsHeader" %}</h2>
- </div>
-</div>
-<div class="row">
- <div class="col-md-12">
- {% if applications.size > 0 %}
- <ul class="list-unstyled">
- {% for app in applications %}
- <li>
- {% if app.application.icon.url != "" %}
- <aside>
- <a href="/applications/details/{{app.application.id}}"><img src="{{app.application.icon.url}}" alt="App Icon"></a>
- </aside>
- {% endif %}
- <h3><a href="/applications/details/{{app.application.id}}">{{app.application.title}}</a></h3>
- {{app.application.description}}
- </li>
- {% endfor %}
- </ul>
- <paging-control></paging-control>
- {% else %}
- {% localized "CommonResources|NoItemsToDisplay" %}
- {% endif %}
- </div>
-</div>
-```
-
-### Controls
- The `Application list` template may use the following [page controls](api-management-page-controls.md).
-
-- [paging-control](api-management-page-controls.md#paging-control)
-
-### Data model
-
-|Property|Type|Description|
-|--|-|--|
-|`Paging`|[Paging](api-management-template-data-model-reference.md#Paging) entity.|The paging information for the applications collection.|
-|`Applications`|Collection of [Application](api-management-template-data-model-reference.md#Application) entities.|The applications visible to the current user.|
-|`CategoryName`|string|The name of the application's category.|
-
-### Sample template data
-
-```json
-{
- "Paging": {
- "Page": 1,
- "PageSize": 10,
- "TotalItemCount": 1,
- "ShowAll": false,
- "PageCount": 1
- },
- "Applications": [
- {
- "Application": {
- "Id": "5702b96fb16653124c8f9ba8",
- "Title": "Contoso Calculator",
- "Description": "A simple online calculator.",
- "Url": null,
- "Version": null,
- "Requirements": "Free application with no requirements.",
- "State": 2,
- "RegistrationDate": "2016-04-04T18:59:00",
- "CategoryId": 5,
- "DeveloperId": "5702b5b0b16653124c8f9ba4",
- "Attachments": [
- {
- "UniqueId": "a58af001-e6c3-45fd-8bc9-c60a1875c3f6",
- "Url": "https://apimgmtst65gdjvjrgdbfhr4.blob.core.windows.net/content/applications/a58af001-e6c3-45fd-8bc9-c60a1875c3f6.png",
- "Type": "Icon",
- "ContentType": "image/png"
- },
- {
- "UniqueId": "2b4fa5dd-00ff-4a8f-b1b7-51e715849ede",
- "Url": "https://apimgmtst65gdjvjrgdbfhr4.blob.core.windows.net/content/applications/2b4fa5dd-00ff-4a8f-b1b7-51e715849ede.png",
- "Type": "Screenshot",
- "ContentType": "image/png"
- }
- ],
- "Icon": {
- "UniqueId": "a58af001-e6c3-45fd-8bc9-c60a1875c3f6",
- "Url": "https://apimgmtst65gdjvjrgdbfhr4.blob.core.windows.net/content/applications/a58af001-e6c3-45fd-8bc9-c60a1875c3f6.png",
- "Type": "Icon",
- "ContentType": "image/png"
- }
- },
- "CategoryName": "Finance"
- }
- ]
-}
-```
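As a sketch of how the `Paging` fields in the sample data relate to one another (deriving `PageCount` as the ceiling of `TotalItemCount / PageSize` is an assumption based on the values shown, not documented behavior):

```python
import math

# Assumed relationship between the Paging entity fields:
# PageCount = ceil(TotalItemCount / PageSize), with a minimum of 1.
def page_count(total_item_count, page_size):
    return max(1, math.ceil(total_item_count / page_size))

print(page_count(1, 10))   # 1, matching the sample Paging entity
print(page_count(25, 10))  # 3
```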
-
-## <a name="Application"></a> Application
- The **Application** template allows you to customize the body of the application page in the developer portal.
-
- ![Application Page Developer Portal Templates](./media/api-management-application-templates/APIM-Application-Page-Developer-Portal-Templates.png "APIM Application Page Developer Portal Templates")
-
-### Default template
-
-```xml
-<h2>{{title}}</h2>
-{% if icon.url != "" %}
-<aside class="applications_aside">
- <div class="image-placeholder">
- <img src="{{icon.url}}" alt="Application Icon" />
- </div>
-</aside>
-{% endif %}
-
-<article>
- {% if url != "" %}
- <a target="_blank" href="{{url}}">{{url}}</a>
- {% endif %}
-
- <p>{{description}}</p>
-
- {% if requirements != null %}
- <h3>{% localized "AppDetailsStrings|WebApplicationsRequirementsHeader" %}</h3>
- <p>{{requirements}}</p>
- {% endif %}
-
- {% if attachments.size > 0 %}
- <h3>{% localized "AppDetailsStrings|WebApplicationsScreenshotsHeader" %}</h3>
- {% for screenshot in attachments %}
- {% if screenshot.type != "Icon" %}
- <a href="{{screenshot.url}}" data-lightbox="example-set">
- <img src="/Developer/Applications/Thumbnail?url={{screenshot.url}}" alt='{% localized "AppDetailsStrings|WebApplicationsScreenshotAlt" %}' />
- </a>
- {% endif %}
- {% endfor %}
- {% endif %}
-</article>
-
-```
-
-### Controls
- The `Application` template does not allow the use of any [page controls](api-management-page-controls.md).
-
-### Data model
- [Application](api-management-template-data-model-reference.md#Application) entity.
-
-### Sample template data
-
-```json
-{
- "Id": "5702b96fb16653124c8f9ba8",
- "Title": "Contoso Calculator",
- "Description": "A simple online calculator.",
- "Url": null,
- "Version": null,
- "Requirements": "Free application with no requirements.",
- "State": 2,
- "RegistrationDate": "2016-04-04T18:59:00",
- "CategoryId": 5,
- "DeveloperId": "5702b5b0b16653124c8f9ba4",
- "Attachments": [
- {
- "UniqueId": "a58af001-e6c3-45fd-8bc9-c60a1875c3f6",
- "Url": "https://apimgmtst3aybshdqqcqrle4.blob.core.windows.net/content/applications/a58af001-e6c3-45fd-8bc9-c60a1875c3f6.png",
- "Type": "Icon",
- "ContentType": "image/png"
- },
- {
- "UniqueId": "2b4fa5dd-00ff-4a8f-b1b7-51e715849ede",
- "Url": "https://apimgmtst3aybshdqqcqrle4.blob.core.windows.net/content/applications/2b4fa5dd-00ff-4a8f-b1b7-51e715849ede.png",
- "Type": "Screenshot",
- "ContentType": "image/png"
- }
- ],
- "Icon": {
- "UniqueId": "a58af001-e6c3-45fd-8bc9-c60a1875c3f6",
- "Url": "https://apimgmtst3aybshdqqcqrle4.blob.core.windows.net/content/applications/a58af001-e6c3-45fd-8bc9-c60a1875c3f6.png",
- "Type": "Icon",
- "ContentType": "image/png"
- }
-}
-```
-
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management Customize Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-customize-styles.md
- Title: Customize page style on API Management legacy developer portal
-description: Follow the steps of this quickstart to customize the styling of elements on the Azure API Management developer portal.
-Previously updated: 11/04/2019
-# Customize the style of the developer portal pages
-
-There are three common ways to customize the developer portal in Azure API Management:
-
-* [Edit the contents of static pages and page layout elements](api-management-modify-content-layout.md)
-* Update the styles used for page elements across the developer portal (explained in this guide)
-* [Modify the templates used for pages generated by the portal](api-management-developer-portal-templates.md) (for example, API docs, products, user authentication)
-
-In this article, you learn how to customize the style of elements on pages of the legacy **developer** portal and view your changes.
-
-![Screenshot that shows where you change your settings in the legacy Developer portal.](./media/modify-developer-portal-style/developer_portal.png)
---
-## Prerequisites
-
-+ Learn the [Azure API Management terminology](api-management-terminology.md).
-+ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
-+ Also, complete the following tutorial: [Import and publish your first API](import-and-publish.md).
-
-## Customize the developer portal
-
-1. In your API Management instance, select **Overview**.
-2. Click the **Developer portal (legacy)** button on the top of the **Overview** window.
-3. On the upper left side of the screen, you see an icon consisting of two paint brushes. Hover over this icon to open the portal customization menu.
-
- ![Screenshot that highlights the icon with two paint brushes.](./media/modify-developer-portal-style/modify-developer-portal-style01.png)
-4. Select **Styles** from the menu to open the styling customization pane.
-
- All elements that you can customize using **Styles** appear on the page.
-5. Enter "headings-color" in the **Change variable values to customize developer portal appearance:** field.
-
- The **\@headings-color** element appears on the page. This variable controls the color of heading text.
-
- ![customize style](./media/modify-developer-portal-style/modify-developer-portal-style02.png)
-
-6. Click on the field for the **\@headings-color** variable.
-
- A color picker drop-down opens.
-7. From the color picker drop-down, select a new color.
-
- > [!TIP]
- > Real-time preview is available for all changes. A progress indicator appears at the top of the customization pane. After a couple of seconds, the header text changes to the newly selected color.
-
-8. Select **Publish** from the lower left on the customization pane menu.
-9. Select **Publish customizations** to make the changes publicly available.
-
-## View your change
-
-1. Navigate to the developer portal.
-2. You can see the change that you made.
-
-## Next steps
-
-You might also be interested in learning [how to customize the Azure API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management Developer Portal Templates Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-developer-portal-templates-reference.md
- Title: Azure API Management Developer portal templates | Microsoft Docs
-description: Learn how to customize the content of developer portal pages using a set of templates in Azure API Management.
-Previously updated: 11/04/2019
-# Developer portal templates
-
-Azure API Management lets you customize the content of developer portal pages using a set of templates. Using [DotLiquid](https://github.com/dotliquid) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), together with a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit.
-
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
---
-## <a name="DeveloperPortalTemplates"></a> Developer portal templates
-
-- [APIs](api-management-api-templates.md)
- - [API list](api-management-api-templates.md#APIList)
- - [Operation](api-management-api-templates.md#Product)
- - [Code samples](api-management-api-templates.md#CodeSamples)
- - [Curl](api-management-api-templates.md#Curl)
- - [C#](api-management-api-templates.md#CSharp)
- - [Java](api-management-api-templates.md#Stub)
- - [JavaScript](api-management-api-templates.md#JavaScript)
- - [Objective C](api-management-api-templates.md#ObjectiveC)
- - [PHP](api-management-api-templates.md#PHP)
- - [Python](api-management-api-templates.md#Python)
- - [Ruby](api-management-api-templates.md#Ruby)
-- [Products](api-management-product-templates.md)
- - [Product list](api-management-product-templates.md#ProductList)
- - [Product](api-management-product-templates.md#Product)
-- [Applications](api-management-application-templates.md)
- - [Application list](api-management-application-templates.md#ProductList)
- - [Application](api-management-application-templates.md#Application)
-- [Issues](api-management-issue-templates.md)
- - [Issue list](api-management-issue-templates.md#IssueList)
-- [User Profile](api-management-user-profile-templates.md)
- - [Profile](api-management-user-profile-templates.md#Profile)
- - [Subscriptions](api-management-user-profile-templates.md#Subscriptions)
- - [Applications](api-management-user-profile-templates.md#Applications)
- - [Update account info](api-management-user-profile-templates.md#UpdateAccountInfo)
-- [Pages](api-management-page-templates.md)
- - [Sign in](api-management-page-templates.md#SignIn)
- - [Sign up](api-management-page-templates.md#SignUp)
- - [Page not found](api-management-page-templates.md#PageNotFound)
-
-## Next steps
-
-+ [Template reference](api-management-developer-portal-templates-reference.md)
-+ [Data model reference](api-management-template-data-model-reference.md)
-+ [Page controls](api-management-page-controls.md)
-+ [Template resources](api-management-template-resources.md)
api-management Api Management Developer Portal Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-developer-portal-templates.md
- Title: Customize the API Management developer portal using templates-
-description: Learn how to customize the Azure API Management developer portal using templates.
-Previously updated: 11/04/2019
-# How to customize the Azure API Management developer portal using templates
-
-There are three fundamental ways to customize the developer portal in Azure API Management:
-
-* [Edit the contents of static pages and page layout elements][modify-content-layout]
-* [Update the styles used for page elements across the developer portal][customize-styles]
-* [Modify the templates used for pages generated by the portal][portal-templates] (explained in this guide)
-
-Templates are used to customize the content of system-generated developer portal pages (for example, API docs, products, user authentication, etc.). Using [DotLiquid](https://github.com/dotliquid) syntax, and a provided set of localized string resources, icons, and page controls, you have great flexibility to configure the content of the pages as you see fit.
---
-## Developer portal templates overview
-
-You edit templates from the **Developer portal** while logged in as an administrator. To get there, first open the Azure portal and click **Developer portal** from the service toolbar of your API Management instance.
-
-To access the developer portal templates, click the customize icon on the left to display the customization menu, and click **Templates**.
-
-![Screenshot that highlights the customize icon to display the customization menu.][api-management-customize-menu]
-
-The templates list displays several categories of templates covering the different pages in the developer portal. Each template is different, but the steps to edit them and publish the changes are the same. To edit a template, click the name of the template.
-
-![Developer portal templates][api-management-templates-menu]
-
-Clicking a template takes you to the developer portal page that is customizable by that template. In this example, the **Product list** template is displayed. The **Product list** template controls the area of the screen indicated by the red rectangle.
-
-![Products list template][api-management-developer-portal-templates-overview]
-
-Some templates, like the **User Profile** templates, customize different parts of the same page.
-
-![User profile templates][api-management-user-profile-templates]
-
-The editor for each developer portal template has two sections displayed at the bottom of the page. The left-hand side displays the editing pane for the template, and the right-hand side displays the data model for the template.
-
-The template editing pane contains the markup that controls the appearance and behavior of the corresponding page in the developer portal. The markup in the template uses the [DotLiquid](https://github.com/dotliquid) syntax. One popular editor for DotLiquid is [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers). Any changes made to the template during editing are displayed in real-time in the browser, but are not visible to your customers until you [save](#to-save-a-template) and [publish](#to-publish-a-template) the template.
-
-![Template markup][api-management-template]
-
-The **Template data** pane provides a guide to the data model for the entities that are available for use in a particular template. It does so by showing the live data that is currently displayed in the developer portal. You can expand the template panes by clicking the rectangle in the upper-right corner of the **Template data** pane.
-
-![Template data model][api-management-template-data]
-
-In the previous example, the developer portal displays two products, which were retrieved from the data shown in the **Template data** pane, as in the following example:
-
-```json
-{
- "Paging": {
- "Page": 1,
- "PageSize": 10,
- "TotalItemCount": 2,
- "ShowAll": false,
- "PageCount": 1
- },
- "Filtering": {
- "Pattern": null,
- "Placeholder": "Search products"
- },
- "Products": [
- {
- "Id": "56ec64c380ed850042060001",
- "Title": "Starter",
- "Description": "Subscribers will be able to run 5 calls/minute up to a maximum of 100 calls/week.",
- "Terms": "",
- "ProductState": 1,
- "AllowMultipleSubscriptions": false,
- "MultipleSubscriptionsCount": 1
- },
- {
- "Id": "56ec64c380ed850042060002",
- "Title": "Unlimited",
- "Description": "Subscribers have completely unlimited access to the API. Administrator approval is required.",
- "Terms": null,
- "ProductState": 1,
- "AllowMultipleSubscriptions": false,
- "MultipleSubscriptionsCount": 1
- }
- ]
-}
-```
-
-The markup in the **Product list** template processes the data to provide the desired output by iterating through the collection of products to display information and a link to each individual product. Note the `<search-control>` and `<paging-control>` elements in the markup. These control the display of the searching and paging controls on the page. `ProductsStrings|PageTitleProducts` is a localized string reference that contains the `h2` header text for the page. For a list of string resources, page controls, and icons available for use in developer portal templates, see [API Management developer portal templates reference](api-management-developer-portal-templates-reference.md).
-
-```html
-<search-control></search-control>
-<div class="row">
- <div class="col-md-9">
- <h2>{% localized "ProductsStrings|PageTitleProducts" %}</h2>
- </div>
-</div>
-<div class="row">
- <div class="col-md-12">
- {% if products.size > 0 %}
- <ul class="list-unstyled">
- {% for product in products %}
- <li>
- <h3><a href="/products/{{product.id}}">{{product.title}}</a></h3>
- {{product.description}}
- </li>
- {% endfor %}
- </ul>
- <paging-control></paging-control>
- {% else %}
- {% localized "CommonResources|NoItemsToDisplay" %}
- {% endif %}
- </div>
-</div>
-```
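To make the data-to-markup mapping concrete, the following snippet simulates the `{% for product in products %}` loop over the two sample products (a plain-Python stand-in for illustration, not the actual DotLiquid engine):

```python
# Simulate the product loop from the Product list template:
# each product becomes an <li> with a link built from its id.
products = [
    {"id": "56ec64c380ed850042060001", "title": "Starter",
     "description": "Subscribers will be able to run 5 calls/minute up to a maximum of 100 calls/week."},
    {"id": "56ec64c380ed850042060002", "title": "Unlimited",
     "description": "Subscribers have completely unlimited access to the API. Administrator approval is required."},
]

items = "\n".join(
    '<li>\n  <h3><a href="/products/{id}">{title}</a></h3>\n  {description}\n</li>'.format(**p)
    for p in products
)
print(items)
```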
-
-## To save a template
-To save a template, click save in the template editor.
-
-![Save template][api-management-save-template]
-
-Saved changes are not live in the developer portal until they are published.
-
-## To publish a template
-Saved templates can be published either individually, or all together. To publish an individual template, click publish in the template editor.
-
-![Publish template][api-management-publish-template]
-
-Click **Yes** to confirm and make the template live on the developer portal.
-
-![Screenshot that shows where you select Yes to make the template live.][api-management-publish-template-confirm]
-
-To publish all currently unpublished template versions, click **Publish** in the templates list. Unpublished templates are designated by an asterisk following the template name. In this example, the **Product list** and **Product** templates are being published.
-
-![Publish templates][api-management-publish-templates]
-
-Click **Publish customizations** to confirm.
-
-![Confirm publish][api-management-publish-customizations]
-
-Newly published templates are effective immediately in the developer portal.
-
-## To revert a template to the previous version
-To revert a template to the previous published version, click revert in the template editor.
-
-![Screenshot that highlights the icon you use to revert a template.][api-management-revert-template]
-
-Click **Yes** to confirm.
-
-![Screenshot that shows where you select Yes to confirm the changes.][api-management-revert-template-confirm]
-
-The previously published version of a template is live in the developer portal once the revert operation is complete.
-
-## To restore a template to the default version
-Restoring templates to their default version is a two-step process. First the templates must be restored, and then the restored versions must be published.
-
-To restore a single template to the default version click restore in the template editor.
-
-![Restore template][api-management-reset-template]
-
-Click **Yes** to confirm.
-
-![Confirm][api-management-reset-template-confirm]
-
-To restore all templates to their default versions, click **Restore default templates** on the template list.
-
-![Restore templates][api-management-restore-templates]
-
-The restored templates must then be published individually or all at once by following the steps in [To publish a template](#to-publish-a-template).
-
-## Next steps
-For reference information for developer portal templates, string resources, icons, and page controls, see [API Management developer portal templates reference](api-management-developer-portal-templates-reference.md).
-
-[modify-content-layout]: api-management-modify-content-layout.md
-[customize-styles]: api-management-customize-styles.md
-[portal-templates]: api-management-developer-portal-templates.md
-
-[api-management-customize-menu]: ./media/api-management-developer-portal-templates/api-management-customize-menu.png
-[api-management-templates-menu]: ./media/api-management-developer-portal-templates/api-management-templates-menu.png
-[api-management-developer-portal-templates-overview]: ./media/api-management-developer-portal-templates/api-management-developer-portal-templates-overview.png
-[api-management-template]: ./media/api-management-developer-portal-templates/api-management-template.png
-[api-management-template-data]: ./media/api-management-developer-portal-templates/api-management-template-data.png
-[api-management-developer-portal-menu]: ./media/api-management-developer-portal-templates/api-management-developer-portal-menu.png
-[api-management-management-console]: ./media/api-management-developer-portal-templates/api-management-management-console.png
-[api-management-browse]: ./media/api-management-developer-portal-templates/api-management-browse.png
-[api-management-user-profile-templates]: ./media/api-management-developer-portal-templates/api-management-user-profile-templates.png
-[api-management-save-template]: ./media/api-management-developer-portal-templates/api-management-save-template.png
-[api-management-publish-template]: ./media/api-management-developer-portal-templates/api-management-publish-template.png
-[api-management-publish-template-confirm]: ./media/api-management-developer-portal-templates/api-management-publish-template-confirm.png
-[api-management-publish-templates]: ./media/api-management-developer-portal-templates/api-management-publish-templates.png
-[api-management-publish-customizations]: ./media/api-management-developer-portal-templates/api-management-publish-customizations.png
-[api-management-revert-template]: ./media/api-management-developer-portal-templates/api-management-revert-template.png
-[api-management-revert-template-confirm]: ./media/api-management-developer-portal-templates/api-management-revert-template-confirm.png
-[api-management-reset-template]: ./media/api-management-developer-portal-templates/api-management-reset-template.png
-[api-management-reset-template-confirm]: ./media/api-management-developer-portal-templates/api-management-reset-template-confirm.png
-[api-management-restore-templates]: ./media/api-management-developer-portal-templates/api-management-restore-templates.png
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
| [Pass-through WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes |
| [Pass-through GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes |
| [Synthetic GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes |
+| [Pass-through gRPC APIs](grpc-api.md) (preview) | No | Yes | No | No | Yes |
<sup>1</sup> Enables the use of Microsoft Entra ID (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/>
<sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/>
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
-Previously updated : 06/27/2023
+Last updated : 11/6/2023
The following table compares features available in the managed gateway versus th
| [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ |
| [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ |
| [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ |
-| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ❌ |
-| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️<sup>1</sup> | ❌ |
+| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ✔️ |
+| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> |
| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
+| [Pass-through gRPC](grpc-api.md) | ❌ | ❌ | ✔️ |
+| [Circuit Breaker](backends.md#circuit-breaker-preview) | ✔️ | ✔️ | ✔️ |
-<sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported in the Consumption tier.
+<sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported.
### Policies
api-management Api Management Issue Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-issue-templates.md
- Title: Issue templates in Azure API Management | Microsoft Docs
-description: Learn how to customize the content of the Issue pages in the developer portal in Azure API Management.
-Previously updated : 11/04/2019
-# Issue templates in Azure API Management
-Azure API Management lets you customize the content of developer portal pages by using a set of templates that configure their content. Using [DotLiquid](https://github.com/dotliquid) syntax, the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of these pages as you see fit.
-
- The templates in this section allow you to customize the content of the Issue pages in the developer portal.
-
-- [Issue list](#IssueList)
-
-> [!NOTE]
-> Sample default templates are included in the following documentation, but are subject to change due to continuous improvements. You can view the live default templates in the developer portal by navigating to the desired individual templates. For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
-
-## <a name="IssueList"></a> Issue list
- The **Issue list** template allows you to customize the body of the issue list page in the developer portal.
-
- ![Issue List Developer Portal](./media/api-management-issue-templates/APIM-Issue-List-Developer-Portal.png "APIM Issue List Developer Portal")
-
-### Default template
-
-```xml
-<div class="row">
- <div class="col-md-9">
- <h2>{% localized "IssuesStrings|WebIssuesIndexTitle" %}</h2>
- </div>
-</div>
-<div class="row">
- <div class="col-md-12">
- {% if issues.size > 0 %}
- <ul class="list-unstyled">
- {% capture reportedBy %}{% localized "IssuesStrings|WebIssuesStatusReportedBy" %}{% endcapture %}
- {% assign replaceString0 = '{0}' %}
- {% assign replaceString1 = '{1}' %}
- {% for issue in issues %}
- <li>
- <h3>
- <a href="/issues/{{issue.id}}">{{issue.title}}</a>
- </h3>
- <p>{{issue.description}}</p>
- <em>
- {% capture state %}{{issue.issueState}}{% endcapture %}
- {% capture devName %}{{issue.subscriptionDeveloperName}}{% endcapture %}
- {% capture str1 %}{{ reportedBy | replace : replaceString0, state }}{% endcapture %}
- {{ str1 | replace : replaceString1, devName }}
- <span class="UtcDateElement">{{ issue.reportedOn | date: "r" }}</span>
- </em>
- </li>
- {% endfor %}
- </ul>
- <paging-control></paging-control>
- {% else %}
- {% localized "CommonResources|NoItemsToDisplay" %}
- {% endif %}
- {% if canReportIssue %}
- <a class="btn btn-primary" id="createIssue" href="/Issues/Create">{% localized "IssuesStrings|WebIssuesReportIssueButton" %}</a>
- {% elsif isAuthenticated %}
- <hr />
- <p>{% localized "IssuesStrings|WebIssuesNoActiveSubscriptions" %}</p>
- {% else %}
- <hr />
- <p>
- {% capture signIntext %}{% localized "IssuesStrings|WebIssuesNotSignin" %}{% endcapture %}
- {% capture link %}<a href="/signin">{% localized "IssuesStrings|WebIssuesSignIn" %}</a>{% endcapture %}
- {{ signIntext | replace : replaceString0, link }}
- </p>
- {% endif %}
- </div>
-</div>
-```
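The chained `capture`/`replace` sequence in the template above substitutes positional placeholders (`{0}`, `{1}`) in a localized string with the issue state and developer name. The equivalent logic, sketched in Python with a hypothetical resource string (the real `IssuesStrings|WebIssuesStatusReportedBy` text may differ):

```python
# Hypothetical localized resource string, standing in for
# "IssuesStrings|WebIssuesStatusReportedBy" in the template above.
reported_by = "{0} by {1}"

def fill_placeholders(template: str, state: str, dev_name: str) -> str:
    # Same effect as the two chained `replace` filters in the DotLiquid template:
    # first substitute {0} with the issue state, then {1} with the developer name.
    return template.replace("{0}", state).replace("{1}", dev_name)

print(fill_placeholders(reported_by, "Proposed", "Clayton"))  # Proposed by Clayton
```

The two-step substitution is needed because DotLiquid's `replace` filter takes one search string at a time, so each placeholder is handled by its own filter invocation.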
-
-### Controls
- The `Issue list` template may use the following [page controls](api-management-page-controls.md).
-
-- [paging-control](api-management-page-controls.md#paging-control)
-
-### Data model
-
-|Property|Type|Description|
-|--|-|--|
-|`Issues`|Collection of [Issue](api-management-template-data-model-reference.md#Issue) entities.|The issues visible to the current user.|
-|`Paging`|[Paging](api-management-template-data-model-reference.md#Paging) entity.|The paging information for the applications collection.|
-|`IsAuthenticated`|boolean|Whether the current user is signed-in to the developer portal.|
-|`CanReportIssues`|boolean|Whether the current user has permissions to file an issue.|
-|`Search`|string|This property is deprecated and should not be used.|
-
-### Sample template data
-
-```json
-{
- "Issues": [
- {
- "Id": "5702b68bb16653124c8f9ba7",
- "ApiId": "570275f1b16653124c8f9ba3",
- "Title": "I couldn't figure out how to connect my application to the API",
- "Description": "I'm having trouble connecting my application to the backend API.",
- "SubscriptionDeveloperName": "Clayton",
- "IssueState": "Proposed",
- "ReportedOn": "2016-04-04T18:46:35.64",
- "Comments": null,
- "Attachments": null,
- "Services": null
- }
- ],
- "Paging": {
- "Page": 1,
- "PageSize": 10,
- "TotalItemCount": 1,
- "ShowAll": false,
- "PageCount": 1
- },
- "IsAuthenticated": true,
- "CanReportIssue": true,
- "Search": null
-}
-```
-
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management Modify Content Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-modify-content-layout.md
- Title: Modify page contents in developer portal in API Management-
-description: Learn how to edit page contents on the developer portal in Azure API Management.
-Previously updated : 02/09/2017
-# Modify the content and layout of pages on the developer portal in Azure API Management
-There are three fundamental ways to customize the developer portal in Azure API Management:
-
-* [Edit the contents of static pages and page layout elements][modify-content-layout] (explained in this guide)
-* [Update the styles used for page elements across the developer portal][customize-styles]
-* [Modify the templates used for pages generated by the portal][portal-templates] (for example, API docs, products, user authentication, etc.)
-## <a name="page-structure"> </a>Structure of developer portal pages
-
-The developer portal is based on a content management system. The layout of every page is built from a set of small page elements known as widgets:
-
-![Developer portal page structure][api-management-customization-widget-structure]
-
-All widgets are editable.
-* The core contents specific to each individual page reside in the "Contents" widget. Editing a page means editing the contents of this widget.
-* All page layout elements are contained within the remaining widgets. Changes made to these widgets are applied to all pages. They are referred to as "layout widgets."
-
-In day-to-day page editing, you would often modify just the Contents widget, which has different content for each individual page.
-
-## <a name="modify-layout-widget"> </a>Modifying the contents of a layout widget
-
-The developer portal is accessible from the Azure portal.
-
-1. Click **Developer Portal** from the toolbar of your API Management instance.
-2. To edit the contents of widgets, click the icon comprised of two paint brushes from the **Developer** portal menu on the left.
-3. To modify the contents of the header, scroll to the **Header** section in the list on the left.
-
- The widgets are editable from within the fields.
-4. Once you are ready to publish your changes, click **Publish** at the bottom of the page.
-
-Now you should be able to see the new header on every page within the developer portal.
-
-## <a name="next-steps"> </a>Next steps
-* [Update the styles used for page elements across the developer portal][customize-styles]
-* [Modify the templates used for pages generated by the portal][portal-templates] (for example, API docs, products, user authentication, etc.)
-
-[Structure of developer portal pages]: #page-structure
-[Modifying the contents of a layout widget]: #modify-layout-widget
-[Edit the contents of a page]: #edit-page-contents
-[Next steps]: #next-steps
-
-[modify-content-layout]: api-management-modify-content-layout.md
-[customize-styles]: api-management-customize-styles.md
-[portal-templates]: api-management-developer-portal-templates.md
-
-[api-management-customization-widget-structure]: ./media/api-management-modify-content-layout/portal-widget-structure.png
-[api-management-management-console]: ./media/api-management-modify-content-layout/api-management-management-console.png
-[api-management-widgets-header]: ./media/api-management-modify-content-layout/api-management-widgets-header.png
-[api-management-customization-manage-content]: ./media/api-management-modify-content-layout/api-management-customization-manage-content.png
api-management Api Management Page Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-page-controls.md
- Title: Azure API Management page controls | Microsoft Docs
-description: Learn about the page controls available for use in developer portal templates in Azure API Management.
-Previously updated : 11/04/2019
-# Azure API Management page controls
-Azure API Management provides the following controls for use in the developer portal templates.
-
-To use a control, place it in the desired location in the developer portal template. Some controls, such as the [app-actions](#app-actions) control, have parameters, as shown in the following example:
-
-```xml
-<app-actions params="{ appId: '{{app.id}}' }"></app-actions>
-```
-
-The values for the parameters are passed in as part of the data model for the template. In most cases, you can simply paste in the provided example for each control and it will work correctly. For more information on the parameter values, see the data model section for each template in which a control may be used.
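To make the parameter mechanism concrete: a token such as `{{app.id}}` in a control's `params` attribute is a dotted path resolved against the template's data model. A small Python sketch of that resolution, approximating (not reproducing) the DotLiquid behavior:

```python
import re

def render(template: str, model: dict) -> str:
    """Resolve {{dotted.path}} tokens against a nested data model,
    approximating how control parameter values are filled in."""
    def lookup(match: re.Match) -> str:
        value = model
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

markup = "<app-actions params=\"{ appId: '{{app.id}}' }\"></app-actions>"
print(render(markup, {"app": {"id": "5702b68bb16653124c8f9ba7"}}))
```

Here the `app.id` path is split on dots and walked through the nested model, which is why the data model section of each template documents exactly which keys the controls on that page can reference.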
-
-For more information about working with templates, see [How to customize the API Management developer portal using templates](./api-management-developer-portal-templates.md).
-
-## Developer portal template page controls
-
-- [app-actions](#app-actions)
-- [basic-signin](#basic-signin)
-- [paging-control](#paging-control)
-- [providers](#providers)
-- [search-control](#search-control)
-- [sign-up](#sign-up)
-- [subscribe-button](#subscribe-button)
-- [subscription-cancel](#subscription-cancel)
-
-## <a name="app-actions"></a> app-actions
- The `app-actions` control provides a user interface for interacting with applications on the user profile page in the developer portal.
-
- ![app&#45;actions control](./media/api-management-page-controls/APIM-app-actions-control.png "APIM app-actions control")
-
-### Usage
-
-```xml
-<app-actions params="{ appId: '{{app.id}}' }"></app-actions>
-```
-
-### Parameters
-
-|Parameter|Description|
-||--|
-|appId|The ID of the application.|
-
-### Developer portal templates
- The `app-actions` control may be used in the following developer portal templates:
-
-- [Applications](api-management-user-profile-templates.md#Applications)
-
-## <a name="basic-signin"></a> basic-signin
- The `basic-signin` control provides a control for collecting user sign-in information in the sign-in page in the developer portal.
-
- ![basic&#45;signin control](./media/api-management-page-controls/APIM-basic-signin-control.png "APIM basic-signin control")
-
-### Usage
-
-```xml
-<basic-SignIn></basic-SignIn>
-```
-
-### Parameters
- None.
-
-### Developer portal templates
- The `basic-signin` control may be used in the following developer portal templates:
-
-- [Sign in](api-management-page-templates.md#SignIn)
-
-## <a name="paging-control"></a> paging-control
- The `paging-control` provides paging functionality on developer portal pages that display a list of items.
-
- ![paging control](./media/api-management-page-controls/APIM-paging-control.png "APIM paging control")
-
-### Usage
-
-```xml
-<paging-control></paging-control>
-```
-
-### Parameters
- None.
-
-### Developer portal templates
- The `paging-control` control may be used in the following developer portal templates:
-
-- [API list](api-management-api-templates.md#APIList)
-
-- [Issue list](api-management-issue-templates.md#IssueList)
-
-- [Product list](api-management-product-templates.md#ProductList)
-
-## <a name="providers"></a> providers
- The `providers` control provides a control for selection of authentication providers in the sign-in page in the developer portal.
-
- ![providers control](./media/api-management-page-controls/APIM-providers-control.png "APIM providers control")
-
-### Usage
-
-```xml
-<providers></providers>
-```
-
-### Parameters
- None.
-
-### Developer portal templates
- The `providers` control may be used in the following developer portal templates:
-
-- [Sign in](api-management-page-templates.md#SignIn)
-
-## <a name="search-control"></a> search-control
- The `search-control` provides search functionality on developer portal pages that display a list of items.
-
- ![search control](./media/api-management-page-controls/APIM-search-control.png "APIM search control")
-
-### Usage
-
-```xml
-<search-control></search-control>
-```
-
-### Parameters
- None.
-
-### Developer portal templates
- The `search-control` control may be used in the following developer portal templates:
-
-- [API list](api-management-api-templates.md#APIList)
-
-- [Product list](api-management-product-templates.md#ProductList)
-
-## <a name="sign-up"></a> sign-up
- The `sign-up` control provides a control for collecting user profile information in the sign-up page in the developer portal.
-
- ![sign&#45;up control](./media/api-management-page-controls/APIM-sign-up-control.png "APIM sign-up control")
-
-### Usage
-
-```xml
-<sign-up></sign-up>
-```
-
-### Parameters
- None.
-
-### Developer portal templates
- The `sign-up` control may be used in the following developer portal templates:
-
-- [Sign up](api-management-page-templates.md#SignUp)
-
-## <a name="subscribe-button"></a> subscribe-button
- The `subscribe-button` provides a control for subscribing a user to a product.
-
- ![subscribe&#45;button control](./media/api-management-page-controls/APIM-subscribe-button-control.png "APIM subscribe-button control")
-
-### Usage
-
-```xml
-<subscribe-button></subscribe-button>
-```
-
-### Parameters
- None.
-
-### Developer portal templates
- The `subscribe-button` control may be used in the following developer portal templates:
-
-- [Product](api-management-product-templates.md#Product)
-
-## <a name="subscription-cancel"></a> subscription-cancel
- The `subscription-cancel` control provides a control for canceling a subscription to a product in the user profile page in the developer portal.
-
- ![subscription&#45;cancel control](./media/api-management-page-controls/APIM-subscription-cancel-control.png "APIM subscription-cancel control")
-
-### Usage
-
-```xml
-<subscription-cancel params="{ subscriptionId: '{{subscription.id}}', cancelUrl: '{{subscription.cancelUrl}}' }">
-</subscription-cancel>
-
-```
-
-### Parameters
-
-|Parameter|Description|
-||--|
-|subscriptionId|The ID of the subscription to cancel.|
-|cancelUrl|The URL used to cancel the subscription.|
-
-### Developer portal templates
- The `subscription-cancel` control may be used in the following developer portal templates:
-
-- [Product](api-management-product-templates.md#Product)
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management Page Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-page-templates.md
- Title: Page templates in Azure API Management | Microsoft Docs
-description: Learn how to customize the content of developer portal page templates in Azure API Management.
-Previously updated : 11/04/2019
-# Page templates in Azure API Management
-Azure API Management lets you customize the content of developer portal pages by using a set of templates that configure their content. Using [DotLiquid](https://github.com/dotliquid) syntax, the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of these pages as you see fit.
-
- The templates in this section allow you to customize the content of the sign in, sign up, and page not found pages in the developer portal.
-
-- [Sign in](#SignIn)
-
-- [Sign up](#SignUp)
-
-- [Page not found](#PageNotFound)
-
-> [!NOTE]
-> Sample default templates are included in the following documentation, but are subject to change due to continuous improvements. You can view the live default templates in the developer portal by navigating to the desired individual templates. For more information about working with templates, see [How to customize the API Management developer portal using templates](./api-management-developer-portal-templates.md).
-
-## <a name="SignIn"></a> Sign in
- The **sign in** template allows you to customize the sign in page in the developer portal.
-
- ![Sign In Page](./media/api-management-page-templates/APIM-Sign-In-Page-Developer-Portal-Templates.png "APIM Sign In Page Developer Portal Templates")
-
-### Default template
-
-```xml
-<h2 class="text-center">{% localized "SigninStrings|WebAuthenticationSigninTitle" %}</h2>
-{% if registrationEnabled == true %}
-<p class="text-center">{% localized "SigninStrings|WebAuthenticationNotAMember" %}</p>
-{% endif %}
-
-<div class="row center-block ap-idp-container">
- <div class="col-md-6">
- {% if registrationEnabled == true %}
- <p>{% localized "SigninStrings|WebAuthenticationSigininWithPassword" %}</p>
- <basic-SignIn></basic-SignIn>
- {% endif %}
- </div>
-
- {% if registrationEnabled != true and providers.size == 0 %}
- {% localized "ProviderInfoStrings|TextboxExternalIdentitiesDisabled" %}
- {% else %}
- {% if providers.size > 0 %}
- <div class="col-md-6">
- <div class="providers-list">
- <p class="text-left">
- {% if registrationEnabled == true %}
- {% localized "ProviderInfoStrings|TextboxExternalIdentitiesSigninInvitation" %}
- {% else %}
- {% localized "ProviderInfoStrings|TextboxExternalIdentitiesSigninInvitationPrimary" %}
- {% endif %}
- </p>
- <providers></providers>
- </div>
- </div>
- {% endif %}
- {% endif %}
-
- {% if userRegistrationTermsEnabled == true %}
- <div class="col-md-6">
- <div id="terms" class="modal" role="dialog" tabindex="-1">
- <div class="modal-dialog">
- <div class="modal-content">
- <div class="modal-header">
- <h4 class="modal-title">{% localized "SigninResources|DialogHeadingTermsOfUse" %}</h4>
- </div>
- <div class="modal-body break-all">{{userRegistrationTerms}}</div>
- <div class="modal-footer">
- <button type="button" class="btn btn-default" data-dismiss="modal">{% localized "CommonStrings|ButtonLabelClose" %}</button>
- </div>
- </div>
- </div>
- </div>
- <p>{% localized "SigninResources|TextblockUserRegistrationTermsProvided" %}</p>
- </div>
- {% endif %}
-</div>
-```
-
-### Controls
- This template may use the following [page controls](api-management-page-controls.md).
-
-- [basic-signin](api-management-page-controls.md#basic-signin)
-
-- [providers](api-management-page-controls.md#providers)
-
-### Data model
- [User sign in](api-management-template-data-model-reference.md#UseSignIn) entity.
-
-### Sample template data
-
-```json
-{
- "Email": null,
- "Password": null,
- "ReturnUrl": null,
- "RememberMe": false,
- "RegistrationEnabled": true,
- "DelegationEnabled": false,
- "DelegationUrl": null,
- "SsoSignUpUrl": null,
- "AuxServiceUrl": "https://portal.azure.com/#resource/subscriptions/{subscription ID}/resourceGroups/Api-Default-West-US/providers/Microsoft.ApiManagement/service/contoso5",
- "Providers": [
- {
- "Properties": {
- "AuthenticationType": "Aad",
- "Caption": "Azure Active Directory"
- },
- "AuthenticationType": "Aad",
- "Caption": "Azure Active Directory"
- }
- ],
- "UserRegistrationTerms": null,
- "UserRegistrationTermsEnabled": false
-}
-```
-
-## <a name="SignUp"></a> Sign up
- The **sign up** template allows you to customize the sign up page in the developer portal.
-
- ![Sign Up Page](./media/api-management-page-templates/APIM-Sign-Up-Page-Developer-Portal-Templates.png "APIM Sign Up Page Developer Portal Templates")
-
-### Default template
-
-```xml
-<h2 class="text-center">{% localized "SignupStrings|PageTitleSignup" %}</h2>
-<p class="text-center">
- {% localized "SignupStrings|WebAuthenticationAlreadyAMember" %} <a href="/signin">{% localized "SignupStrings|WebAuthenticationSigninNow" %}</a>
-</p>
-
-<div class="row center-block ap-idp-container">
- <div class="col-md-6">
- <p>{% localized "SignupStrings|WebAuthenticationCreateNewAccount" %}</p>
- <sign-up></sign-up>
- </div>
-</div>
-```
-
-### Controls
- This template may use the following [page controls](api-management-page-controls.md).
-
-- [sign-up](api-management-page-controls.md#sign-up)
-
-### Data model
- [User sign up](api-management-template-data-model-reference.md#UserSignUp) entity.
-
-### Sample template data
-
-```json
-{
- "PasswordConfirm": null,
- "Password": null,
- "PasswordVerdictLevel": 0,
- "UserRegistrationTerms": null,
- "UserRegistrationTermsOptions": 0,
- "ConsentAccepted": false,
- "Email": null,
- "FirstName": null,
- "LastName": null,
- "UserData": null,
- "NameIdentifier": null,
- "ProviderName": null
-}
-```
-
-## <a name="PageNotFound"></a> Page not found
- The **page not found** template allows you to customize the page not found page in the developer portal.
-
- ![Not Found Page](./media/api-management-page-templates/APIM-Not-Found-Page-Developer-Portal-Templates.png "APIM Not Found Page Developer Portal Templates")
-
-### Default template
-
-```xml
-<h2>{% localized "NotFoundStrings|PageTitleNotFound" %}</h2>
-
-<h3>{% localized "NotFoundStrings|TitlePotentialCause" %}</h3>
-<ul>
- <li>{% localized "NotFoundStrings|TextblockPotentialCauseOldLink" %}</li>
- <li>{% localized "NotFoundStrings|TextblockPotentialCauseMisspelledUrl" %}</li>
-</ul>
-
-<h3>{% localized "NotFoundStrings|TitlePotentialSolution" %}</h3>
-<ul>
- <li>{% localized "NotFoundStrings|TextblockPotentialSolutionRetype" %}</li>
- <li>
- {% capture textPotentialSolutionStartOver %}{% localized "NotFoundStrings|TextblockPotentialSolutionStartOver" %}{% endcapture %}
- {% capture homeLink %}<a href="/">{% localized "NotFoundStrings|LinkLabelHomePage" %}</a>{% endcapture %}
- {% assign replaceString = '{0}' %}
-
- {{ textPotentialSolutionStartOver | replace : replaceString, homeLink }}
- </li>
-</ul>
-
-<p>
- {% capture textReportProblem %}{% localized "NotFoundStrings|TextReportProblem" %}{% endcapture %}
- {% capture emailLink %}<a href="mailto:apimgmt@microsoft.com" target="_self" title="API Management Support">{% localized "NotFoundStrings|LinkLabelSendUsEmail" %}</a>{% endcapture %}
- {% assign replaceString = '{0}' %}
-
- {{ textReportProblem | replace : replaceString, emailLink }}
-</p>
-```
-
-### Controls
- This template may not use any [page controls](api-management-page-controls.md).
-
-### Data model
-
-|Property|Type|Description|
-|--|-|--|
-|referenceCode|string|Code generated if this page was displayed as the result of an internal error.|
-|errorCode|string|Code generated if this page was displayed as the result of an internal error.|
-|emailBody|string|Email body generated if this page was displayed as the result of an internal error.|
-|requestedUrl|string|The URL requested when the page was not found.|
-|referrerUrl|string|The referrer URL to the requested URL.|
-
-### Sample template data
-
-```json
-{
- "referenceCode": null,
- "errorCode": null,
- "emailBody": null,
- "requestedUrl": "https://contoso5.portal.azure-api.net:443/NotFoundPage?startEditTemplate=NotFoundPage",
- "referrerUrl": "https://contoso5.portal.azure-api.net/signup?startEditTemplate=SignUpTemplate"
-}
-```
-
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management Product Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-product-templates.md
- Title: Product templates in Azure API Management | Microsoft Docs
-description: Learn how to customize the content of the product pages in the Azure API Management developer portal.
-Previously updated : 11/04/2019
-# Product templates in Azure API Management
-
-Azure API Management lets you customize the content of developer portal pages by using a set of templates that configure their content. Using [DotLiquid](https://github.com/dotliquid) syntax, the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of these pages as you see fit.
-
- The templates in this section allow you to customize the content of the product pages in the developer portal.
-
-- [Product list](#ProductList)
-
-- [Product](#Product)
-
-> [!NOTE]
-> Sample default templates are included in the following documentation, but are subject to change due to continuous improvements. You can view the live default templates in the developer portal by navigating to the desired individual templates. For more information about working with templates, see [How to customize the API Management developer portal using templates](./api-management-developer-portal-templates.md).
-
-## <a name="ProductList"></a> Product list
- The **Product list** template allows you to customize the body of the product list page in the developer portal.
-
- ![Products list](./media/api-management-product-templates/APIM_ProductsListTemplatePage.png "APIM_ProductsListTemplatePage")
-
-### Default template
-
-```xml
-<search-control></search-control>
-<div class="row">
- <div class="col-md-9">
- <h2>{% localized "ProductsStrings|PageTitleProducts" %}</h2>
- </div>
-</div>
-<div class="row">
- <div class="col-md-12">
- {% if products.size > 0 %}
- <ul class="list-unstyled">
- {% for product in products %}
- <li>
- <h3><a href="/products/{{product.id}}">{{product.title}}</a></h3>
- {{product.description}}
- </li>
- {% endfor %}
- </ul>
- <paging-control></paging-control>
- {% else %}
- {% localized "CommonResources|NoItemsToDisplay" %}
- {% endif %}
- </div>
-</div>
-```
-
-### Controls
- The `Product list` template may use the following [page controls](api-management-page-controls.md).
-
-- [paging-control](api-management-page-controls.md#paging-control)
-
-- [search-control](api-management-page-controls.md#search-control)
-
-### Data model
-
-|Property|Type|Description|
-|--|-|--|
-|Paging|[Paging](api-management-template-data-model-reference.md#Paging) entity.|The paging information for the products collection.|
-|Filtering|[Filtering](api-management-template-data-model-reference.md#Filtering) entity.|The filtering information for the products list page.|
-|Products|Collection of [Product](api-management-template-data-model-reference.md#Product) entities.|The products visible to the current user.|
-
-### Sample template data
-
-```json
-{
- "Paging": {
- "Page": 1,
- "PageSize": 10,
- "TotalItemCount": 2,
- "ShowAll": false,
- "PageCount": 1
- },
- "Filtering": {
- "Pattern": null,
- "Placeholder": "Search products"
- },
- "Products": [
- {
- "Id": "56f9445ffaf7560049060001",
- "Title": "Starter",
- "Description": "Subscribers will be able to run 5 calls/minute up to a maximum of 100 calls/week.",
- "Terms": "",
- "ProductState": 1,
- "AllowMultipleSubscriptions": false,
- "MultipleSubscriptionsCount": 1
- },
- {
- "Id": "56f9445ffaf7560049060002",
- "Title": "Unlimited",
- "Description": "Subscribers have completely unlimited access to the API. Administrator approval is required.",
- "Terms": null,
- "ProductState": 1,
- "AllowMultipleSubscriptions": false,
- "MultipleSubscriptionsCount": 1
- }
- ]
-}
-```
-
-## <a name="Product"></a> Product
- The **Product** template allows you to customize the body of the product page in the developer portal.
-
- ![Developer portal product page](./media/api-management-product-templates/APIM_ProductPage.png "APIM_ProductPage")
-
-### Default template
-
-```xml
-<h2>{{Product.Title}}</h2>
-<p>{{Product.Description}}</p>
-
-{% assign replaceString0 = '{0}' %}
-
-{% if Limits and Limits.size > 0 %}
-<h3>{% localized "ProductDetailsStrings|WebProductsUsageLimitsHeader"%}</h3>
-<ul>
- {% for limit in Limits %}
- <li>{{limit.DisplayName}}</li>
- {% endfor %}
-</ul>
-{% endif %}
-
-{% if apis.size > 0 %}
-<p>
- <b>
- {% if apis.size == 1 %}
- {% capture apisCountText %}{% localized "ProductDetailsStrings|TextblockSingleApisCount" %}{% endcapture %}
- {% else %}
- {% capture apisCountText %}{% localized "ProductDetailsStrings|TextblockMultipleApisCount" %}{% endcapture %}
- {% endif %}
-
- {% capture apisCount %}{{apis.size}}{% endcapture %}
- {{ apisCountText | replace : replaceString0, apisCount }}
- </b>
-</p>
-
-<ul>
- {% for api in Apis %}
- <li>
- <a href="/docs/services/{{api.Id}}">{{api.Name}}</a>
- </li>
- {% endfor %}
-</ul>
-{% endif %}
-
-{% if subscriptions.size > 0 %}
-<p>
- <b>
- {% if subscriptions.size == 1 %}
- {% capture subscriptionsCountText %}{% localized "ProductDetailsStrings|TextblockSingleSubscriptionsCount" %}{% endcapture %}
- {% else %}
- {% capture subscriptionsCountText %}{% localized "ProductDetailsStrings|TextblockMultipleSubscriptionsCount" %}{% endcapture %}
- {% endif %}
-
- {% capture subscriptionsCount %}{{subscriptions.size}}{% endcapture %}
- {{ subscriptionsCountText | replace : replaceString0, subscriptionsCount }}
- </b>
-</p>
-
-<ul>
- {% for subscription in subscriptions %}
- <li>
- <a href="/developer#{{subscription.Id}}">{{subscription.DisplayName}}</a>
- </li>
- {% endfor %}
-</ul>
-{% endif %}
-{% if CannotAddBecauseSubscriptionNumberLimitReached %}
-<b>{% localized "ProductDetailsStrings|TextblockSubscriptionLimitReached" %}</b>
-{% elsif CannotAddBecauseMultipleSubscriptionsNotAllowed == false %}
-<subscribe-button></subscribe-button>
-{% endif %}
-```
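The count text above selects a singular or plural resource string and then substitutes the count through the `replace` filter. A minimal Python sketch of that pattern follows; the two default strings are hypothetical stand-ins, not the actual localized values of `ProductDetailsStrings|TextblockSingleApisCount` and `TextblockMultipleApisCount`:

```python
# Sketch of the singular/plural selection plus the "{0}" replace filter
# used in the Product template. The default strings are hypothetical
# placeholders for the real localized resources.
def apis_count_text(count,
                    single="This product contains {0} API.",
                    multiple="This product contains {0} APIs."):
    text = single if count == 1 else multiple
    return text.replace("{0}", str(count))
```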
-
-### Controls
- The `Product` template may use the following [page controls](api-management-page-controls.md).
-
-- [subscribe-button](api-management-page-controls.md#subscribe-button)
-
-### Data model
-
-|Property|Type|Description|
-|--|-|--|
-|Product|[Product](api-management-template-data-model-reference.md#Product)|The specified product.|
-|IsDeveloperSubscribed|boolean|Whether the current user is subscribed to this product.|
-|SubscriptionState|number|The state of the subscription. Possible states are:<br /><br /> - `0 - suspended` – the subscription is blocked, and the subscriber cannot call any APIs of the product.<br />- `1 - active` – the subscription is active.<br />- `2 - expired` – the subscription reached its expiration date and was deactivated.<br />- `3 - submitted` – the subscription request has been made by the developer, but has not yet been approved or rejected.<br />- `4 - rejected` – the subscription request has been denied by an administrator.<br />- `5 - cancelled` – the subscription has been canceled by the developer or administrator.|
-|Limits|array|This property is deprecated and should not be used.|
-|DelegatedSubscriptionEnabled|boolean|Whether [delegation](./api-management-howto-setup-delegation.md) is enabled for this subscription.|
-|DelegatedSubscriptionUrl|string|If delegation is enabled, the delegated subscription URL.|
-|IsAgreed|boolean|If the product has terms, whether the current user has agreed to the terms.|
-|Subscriptions|Collection of [Subscription summary](api-management-template-data-model-reference.md#SubscriptionSummary) entities.|The subscriptions to the product.|
-|Apis|Collection of [API](api-management-template-data-model-reference.md#API) entities.|The APIs in this product.|
-|CannotAddBecauseSubscriptionNumberLimitReached|boolean|Whether the current user can't subscribe to this product because the subscription limit has been reached.|
-|CannotAddBecauseMultipleSubscriptionsNotAllowed|boolean|Whether the current user can't subscribe to this product because multiple subscriptions aren't allowed.|
-
-### Sample template data
-
-```json
-{
- "Product": {
- "Id": "56f9445ffaf7560049060001",
- "Title": "Starter",
- "Description": "Subscribers will be able to run 5 calls/minute up to a maximum of 100 calls/week.",
- "Terms": "",
- "ProductState": 1,
- "AllowMultipleSubscriptions": false,
- "MultipleSubscriptionsCount": 1
- },
- "IsDeveloperSubscribed": true,
- "SubscriptionState": 1,
- "Limits": [],
- "DelegatedSubscriptionEnabled": false,
- "DelegatedSubscriptionUrl": null,
- "IsAgreed": false,
- "Subscriptions": [
- {
- "Id": "56f9445ffaf7560049070001",
- "DisplayName": "Starter (default)"
- }
- ],
- "Apis": [
- {
- "id": "56f9445ffaf7560049040001",
- "name": "Echo API",
- "description": null,
- "serviceUrl": "http://echoapi.cloudapp.net/api",
- "path": "echo",
- "protocols": [
- 2
- ],
- "authenticationSettings": null,
- "subscriptionKeyParameterNames": null
- }
- ],
- "CannotAddBecauseSubscriptionNumberLimitReached": false,
- "CannotAddBecauseMultipleSubscriptionsNotAllowed": true
-}
-```
-
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management Template Data Model Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-template-data-model-reference.md
- Title: Azure API Management template data model reference | Microsoft Docs
-description: Learn about the entity and type representations for common items used in the data models for the developer portal templates in Azure API Management.
- Previously updated: 11/04/2019
-# Azure API Management template data model reference
-This topic describes the entity and type representations for common items used in the data models for the developer portal templates in Azure API Management.
-
- For more information about working with templates, see [How to customize the API Management developer portal using templates](./api-management-developer-portal-templates.md).
-## Reference
-- [API](#API)
-- [API summary](#APISummary)
-- [Application](#Application)
-- [Attachment](#Attachment)
-- [Code sample](#Sample)
-- [Comment](#Comment)
-- [Filtering](#Filtering)
-- [Header](#Header)
-- [HTTP Request](#HTTPRequest)
-- [HTTP Response](#HTTPResponse)
-- [Issue](#Issue)
-- [Operation](#Operation)
-- [Operation menu](#Menu)
-- [Operation menu item](#MenuItem)
-- [Paging](#Paging)
-- [Parameter](#Parameter)
-- [Product](#Product)
-- [Provider](#Provider)
-- [Representation](#Representation)
-- [Subscription](#Subscription)
-- [Subscription summary](#SubscriptionSummary)
-- [User account info](#UserAccountInfo)
-- [User sign-in](#UseSignIn)
-- [User sign-up](#UserSignUp)
-
-## <a name="API"></a> API
- The `API` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`id`|string|Resource identifier. Uniquely identifies the API within the current API Management service instance. The value is a valid relative URL in the format of `apis/{id}` where `{id}` is an API identifier. This property is read-only.|
-|`name`|string|Name of the API. Must not be empty. Maximum length is 100 characters.|
-|`description`|string|Description of the API. Must not be empty. May include HTML formatting tags. Maximum length is 1000 characters.|
-|`serviceUrl`|string|Absolute URL of the backend service implementing this API.|
-|`path`|string|Relative URL uniquely identifying this API and all of its resource paths within the API Management service instance. It is appended to the API endpoint base URL specified during the service instance creation to form a public URL for this API.|
-|`protocols`|array of number|Describes on which protocols the operations in this API can be invoked. Allowed values are `1 - http` and `2 - https`, or both.|
-|`authenticationSettings`|[Authorization server authentication settings](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-contract-reference#AuthenticationSettings)|Collection of authentication settings included in this API.|
-|`subscriptionKeyParameterNames`|object|Optional property that can be used to specify custom names for query and/or header parameters containing the subscription key. When this property is present, it must contain at least one of the two following properties.<br /><br /> `{ "subscriptionKeyParameterNames": { "query": "customQueryParameterName", "header": "customHeaderParameterName" } }`|
-
-## <a name="APISummary"></a> API summary
- The `API summary` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`id`|string|Resource identifier. Uniquely identifies the API within the current API Management service instance. The value is a valid relative URL in the format of `apis/{id}` where `{id}` is an API identifier. This property is read-only.|
-|`name`|string|Name of the API. Must not be empty. Maximum length is 100 characters.|
-|`description`|string|Description of the API. Must not be empty. May include HTML formatting tags. Maximum length is 1000 characters.|
-
-## <a name="Application"></a> Application
- The `application` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Id`|string|The unique identifier of the application.|
-|`Title`|string|The title of the application.|
-|`Description`|string|The description of the application.|
-|`Url`|URI|The URI for the application.|
-|`Version`|string|Version information for the application.|
-|`Requirements`|string|A description of requirements for the application.|
-|`State`|number|The current state of the application.<br /><br /> - 0 - Registered<br /><br /> - 1 - Submitted<br /><br /> - 2 - Published<br /><br /> - 3 - Rejected<br /><br /> - 4 - Unpublished|
-|`RegistrationDate`|DateTime|The date and time the application was registered.|
-|`CategoryId`|number|The category of the application (Finance, entertainment, etc.)|
-|`DeveloperId`|string|The unique identifier of the developer that submitted the application.|
-|`Attachments`|Collection of [Attachment](#Attachment) entities.|Any attachments for the application such as screenshots or icons.|
-|`Icon`|[Attachment](#Attachment)|The icon for the application.|
-
-## <a name="Attachment"></a> Attachment
- The `attachment` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`UniqueId`|string|The unique identifier for the attachment.|
-|`Url`|string|The URL of the resource.|
-|`Type`|string|The type of attachment.|
-|`ContentType`|string|The media type of the attachment.|
-
-## <a name="Sample"></a> Code sample
-
-|Property|Type|Description|
-|--|-|--|
-|`title`|string|The name of the operation.|
-|`snippet`|string|This property is deprecated and should not be used.|
-|`brush`|string|The code syntax coloring template to use when displaying the code sample. Allowed values are `plain`, `php`, `java`, `xml`, `objc`, `python`, `ruby`, and `csharp`.|
-|`template`|string|The name of this code sample template.|
-|`body`|string|A placeholder for the code sample portion of the snippet.|
-|`method`|string|The HTTP method of the operation.|
-|`scheme`|string|The protocol to use for the operation request.|
-|`path`|string|The path of the operation.|
-|`query`|string|Query string example with defined parameters.|
-|`host`|string|The URL of the API Management service gateway for the API that contains this operation.|
-|`headers`|Collection of [Header](#Header) entities.|Headers for this operation.|
-|`parameters`|Collection of [Parameter](#Parameter) entities.|Parameters that are defined for this operation.|
-
-## <a name="Comment"></a> Comment
- The `comment` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Id`|number|The ID of the comment.|
-|`CommentText`|string|The body of the comment. May include HTML.|
-|`DeveloperCompany`|string|The company name of the developer.|
-|`PostedOn`|DateTime|The date and time the comment was posted.|
-
-## <a name="Issue"></a> Issue
- The `issue` entity has the following properties.
-
-|Property|Type|Description|
-|--|-|--|
-|`Id`|string|The unique identifier for the issue.|
-|`ApiID`|string|The ID for the API for which this issue was reported.|
-|`Title`|string|Title of the issue.|
-|`Description`|string|Description of the issue.|
-|`SubscriptionDeveloperName`|string|First name of the developer that reported the issue.|
-|`IssueState`|string|The current state of the issue. Possible values are Proposed, Opened, Closed.|
-|`ReportedOn`|DateTime|The date and time the issue was reported.|
-|`Comments`|Collection of [Comment](#Comment) entities.|Comments on this issue.|
-|`Attachments`|Collection of [Attachment](api-management-template-data-model-reference.md#Attachment) entities.|Any attachments to the issue.|
-|`Services`|Collection of [API](#API) entities.|The APIs subscribed to by the user that filed the issue.|
-
-## <a name="Filtering"></a> Filtering
- The `filtering` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Pattern`|string|The current search term; or `null` if there is no search term.|
-|`Placeholder`|string|The text to display in the search box when there is no search term specified.|
-
-## <a name="Header"></a> Header
- This section describes the `header` representation.
-
-|Property|Type|Description|
-|--|--|-|
-|`name`|string|Parameter name.|
-|`description`|string|Parameter description.|
-|`value`|string|Header value.|
-|`typeName`|string|Data type of header value.|
-|`options`|string|Options.|
-|`required`|boolean|Whether the header is required.|
-|`readOnly`|boolean|Whether the header is read-only.|
-
-## <a name="HTTPRequest"></a> HTTP Request
- This section describes the `request` representation.
-
-|Property|Type|Description|
-|--|-|--|
-|`description`|string|Operation request description.|
-|`headers`|array of [Header](#Header) entities.|Request headers.|
-|`parameters`|array of [Parameter](#Parameter)|Collection of operation request parameters.|
-|`representations`|array of [Representation](#Representation)|Collection of operation request representations.|
-
-## <a name="HTTPResponse"></a> HTTP Response
- This section describes the `response` representation.
-
-|Property|Type|Description|
-|--|-|--|
-|`statusCode`|positive integer|Operation response status code.|
-|`description`|string|Operation response description.|
-|`representations`|array of [Representation](#Representation)|Collection of operation response representations.|
-
-## <a name="Operation"></a> Operation
- The `operation` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`id`|string|Resource identifier. Uniquely identifies the operation within the current API Management service instance. The value is a valid relative URL in the format of `apis/{aid}/operations/{id}` where `{aid}` is an API identifier and `{id}` is an operation identifier. This property is read-only.|
-|`name`|string|Name of the operation. Must not be empty. Maximum length is 100 characters.|
-|`description`|string|Description of the operation. Must not be empty. May include HTML formatting tags. Maximum length is 1000 characters.|
-|`scheme`|string|Describes on which protocols the operations in this API can be invoked. Allowed values are `http`, `https`, or both `http` and `https`.|
-|`uriTemplate`|string|Relative URL template identifying the target resource for this operation. May include parameters. Example: `customers/{cid}/orders/{oid}/?date={date}`|
-|`host`|string|The API Management gateway URL that hosts the API.|
-|`httpMethod`|string|Operation HTTP method.|
-|`request`|[HTTP Request](#HTTPRequest)|An entity containing request details.|
-|`responses`|array of [HTTP Response](#HTTPResponse)|Array of operation [HTTP Response](#HTTPResponse) entities.|
-
-## <a name="Menu"></a> Operation menu
- The `operation menu` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`ApiId`|string|The ID of the current API.|
-|`CurrentOperationId`|string|The ID of the current operation.|
-|`Action`|string|The menu type.|
-|`MenuItems`|Collection of [Operation menu item](#MenuItem) entities.|The operations for the current API.|
-
-## <a name="MenuItem"></a> Operation menu item
- The `operation menu item` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Id`|string|The ID of the operation.|
-|`Title`|string|The description of the operation.|
-|`HttpMethod`|string|The HTTP method of the operation.|
-
-## <a name="Paging"></a> Paging
- The `paging` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Page`|number|The current page number.|
-|`PageSize`|number|The maximum results to be displayed on a single page.|
-|`TotalItemCount`|number|The total number of items to display.|
-|`ShowAll`|boolean|Whether to show all results on a single page.|
-|`PageCount`|number|The number of pages of results.|
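As a sanity check, `PageCount` is consistent with ceiling division of `TotalItemCount` by `PageSize`, as in the Product list sample data (2 items, page size 10, 1 page). A hedged sketch of that assumed relationship:

```python
import math

# Assumed relationship between the Paging fields; matches the sample data
# in this article (TotalItemCount=2, PageSize=10 -> PageCount=1).
def page_count(total_item_count, page_size, show_all=False):
    if show_all or page_size <= 0:
        return 1  # everything shown on a single page
    return max(1, math.ceil(total_item_count / page_size))
```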
-
-## <a name="Parameter"></a> Parameter
- This section describes the `parameter` representation.
-
-|Property|Type|Description|
-|--|--|-|
-|`name`|string|Parameter name.|
-|`description`|string|Parameter description.|
-|`value`|string|Parameter value.|
-|`options`|array of string|Values defined for query parameter values.|
-|`required`|boolean|Whether the parameter is required.|
-|`kind`|number|Whether this parameter is a path parameter (1), or a querystring parameter (2).|
-|`typeName`|string|Parameter type.|
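The `kind` codes distinguish where a parameter appears in the request URL. An illustrative helper (not part of any API Management SDK):

```python
# Illustrative interpretation of the `kind` codes from the table above:
# 1 = path parameter, 2 = querystring parameter.
def parameter_location(kind):
    return {1: "path", 2: "query"}.get(kind, "unknown")
```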
-
-## <a name="Product"></a> Product
- The `product` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Id`|string|Resource identifier. Uniquely identifies the product within the current API Management service instance. The value is a valid relative URL in the format of `products/{pid}` where `{pid}` is a product identifier. This property is read-only.|
-|`Title`|string|Name of the product. Must not be empty. Maximum length is 100 characters.|
-|`Description`|string|Description of the product. Must not be empty. May include HTML formatting tags. Maximum length is 1000 characters.|
-|`Terms`|string|Product terms of use. Developers trying to subscribe to the product are presented with these terms and must accept them before they can complete the subscription process.|
-|`ProductState`|number|Specifies whether the product is published or not. Published products are discoverable by developers on the developer portal. Non-published products are visible only to administrators.<br /><br /> The allowable values for product state are:<br /><br /> - `0 - Not Published`<br /><br /> - `1 - Published`<br /><br /> - `2 - Deleted`|
-|`AllowMultipleSubscriptions`|boolean|Specifies whether a user can have multiple subscriptions to this product at the same time.|
-|`MultipleSubscriptionsCount`|number|Maximum number of subscriptions to this product a user is allowed to have at the same time.|
-
-## <a name="Provider"></a> Provider
- The `provider` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Properties`|string dictionary|Properties for this authentication provider.|
-|`AuthenticationType`|string|The provider type. (Microsoft Entra ID, Facebook login, Google Account, Microsoft Account, Twitter).|
-|`Caption`|string|Display name of the provider.|
-
-## <a name="Representation"></a> Representation
- This section describes a `representation`.
-
-|Property|Type|Description|
-|--|-|--|
-|`contentType`|string|Specifies a registered or custom content type for this representation, for example, `application/xml`.|
-|`sample`|string|An example of the representation.|
-
-## <a name="Subscription"></a> Subscription
- The `subscription` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Id`|string|Resource identifier. Uniquely identifies the subscription within the current API Management service instance. The value is a valid relative URL in the format of `subscriptions/{sid}` where `{sid}` is a subscription identifier. This property is read-only.|
-|`ProductId`|string|The product resource identifier of the subscribed product. The value is a valid relative URL in the format of `products/{pid}` where `{pid}` is a product identifier.|
-|`ProductTitle`|string|Name of the product. Must not be empty. Maximum length is 100 characters.|
-|`ProductDescription`|string|Description of the product. Must not be empty. May include HTML formatting tags. Maximum length is 1000 characters.|
-|`ProductDetailsUrl`|string|Relative URL to the product details.|
-|`state`|string|The state of the subscription. Possible states are:<br /><br /> - `0 - suspended` – the subscription is blocked, and the subscriber cannot call any APIs of the product.<br /><br /> - `1 - active` – the subscription is active.<br /><br /> - `2 - expired` – the subscription reached its expiration date and was deactivated.<br /><br /> - `3 - submitted` – the subscription request has been made by the developer, but has not yet been approved or rejected.<br /><br /> - `4 - rejected` – the subscription request has been denied by an administrator.<br /><br /> - `5 - cancelled` – the subscription has been canceled by the developer or administrator.|
-|`DisplayName`|string|Display name of the subscription.|
-|`CreatedDate`|dateTime|The date the subscription was created, in ISO 8601 format: `2014-06-24T16:25:00Z`.|
-|`CanBeCancelled`|boolean|Whether the subscription can be canceled by the current user.|
-|`IsAwaitingApproval`|boolean|Whether the subscription is awaiting approval.|
-|`StartDate`|dateTime|The start date for the subscription, in ISO 8601 format: `2014-06-24T16:25:00Z`.|
-|`ExpirationDate`|dateTime|The expiration date for the subscription, in ISO 8601 format: `2014-06-24T16:25:00Z`.|
-|`NotificationDate`|dateTime|The notification date for the subscription, in ISO 8601 format: `2014-06-24T16:25:00Z`.|
-|`primaryKey`|string|The primary subscription key. Maximum length is 256 characters.|
-|`secondaryKey`|string|The secondary subscription key. Maximum length is 256 characters.|
-|`CanBeRenewed`|boolean|Whether the subscription can be renewed by the current user.|
-|`HasExpired`|boolean|Whether the subscription has expired.|
-|`IsRejected`|boolean|Whether the subscription request was denied.|
-|`CancelUrl`|string|The relative URL to cancel the subscription.|
-|`RenewUrl`|string|The relative URL to renew the subscription.|
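The numeric `state` codes map to the names listed above. A small lookup helper (illustrative only, not part of any API Management SDK):

```python
# Mapping taken directly from the subscription state table above;
# the helper itself is an illustrative sketch.
SUBSCRIPTION_STATES = {
    0: "suspended",
    1: "active",
    2: "expired",
    3: "submitted",
    4: "rejected",
    5: "cancelled",
}

def describe_subscription_state(code):
    return SUBSCRIPTION_STATES.get(code, "unknown")
```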
-
-## <a name="SubscriptionSummary"></a> Subscription summary
- The `subscription summary` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Id`|string|Resource identifier. Uniquely identifies the subscription within the current API Management service instance. The value is a valid relative URL in the format of `subscriptions/{sid}` where `{sid}` is a subscription identifier. This property is read-only.|
-|`DisplayName`|string|The display name of the subscription.|
-
-## <a name="UserAccountInfo"></a> User account info
- The `user account info` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`FirstName`|string|First name. Must not be empty. Maximum length is 100 characters.|
-|`LastName`|string|Last name. Must not be empty. Maximum length is 100 characters.|
-|`Email`|string|Email address. Must not be empty and must be unique within the service instance. Maximum length is 254 characters.|
-|`Password`|string|User account password.|
-|`NameIdentifier`|string|Account identifier, the same as the user email.|
-|`ProviderName`|string|Authentication provider name.|
-|`IsBasicAccount`|boolean|True if this account was registered using email and password; false if the account was registered using a provider.|
-
-## <a name="UseSignIn"></a> User sign in
- The `user sign in` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`Email`|string|Email address. Must not be empty and must be unique within the service instance. Maximum length is 254 characters.|
-|`Password`|string|User account password.|
-|`ReturnUrl`|string|The URL of the page where the user clicked sign in.|
-|`RememberMe`|boolean|Whether to save the current user's information.|
-|`RegistrationEnabled`|boolean|Whether registration is enabled.|
-|`DelegationEnabled`|boolean|Whether delegated sign-in is enabled.|
-|`DelegationUrl`|string|The delegated sign-in URL, if enabled.|
-|`SsoSignUpUrl`|string|The single sign-on URL for the user, if present.|
-|`AuxServiceUrl`|string|If the current user is an administrator, this is a link to the service instance in the Azure portal.|
-|`Providers`|Collection of [Provider](#Provider) entities|The authentication providers for this user.|
-|`UserRegistrationTerms`|string|Terms that a user must agree to before signing in.|
-|`UserRegistrationTermsEnabled`|boolean|Whether terms are enabled.|
-
-## <a name="UserSignUp"></a> User sign up
- The `user sign up` entity has the following properties:
-
-|Property|Type|Description|
-|--|-|--|
-|`PasswordConfirm`|boolean|Value used by the [sign-up](api-management-page-controls.md#sign-up) control.|
-|`Password`|string|User account password.|
-|`PasswordVerdictLevel`|number|Value used by the [sign-up](api-management-page-controls.md#sign-up) control.|
-|`UserRegistrationTerms`|string|Terms that a user must agree to before signing in.|
-|`UserRegistrationTermsOptions`|number|Value used by the [sign-up](api-management-page-controls.md#sign-up) control.|
-|`ConsentAccepted`|boolean|Value used by the [sign-up](api-management-page-controls.md#sign-up) control.|
-|`Email`|string|Email address. Must not be empty and must be unique within the service instance. Maximum length is 254 characters.|
-|`FirstName`|string|First name. Must not be empty. Maximum length is 100 characters.|
-|`LastName`|string|Last name. Must not be empty. Maximum length is 100 characters.|
-|`UserData`|string|Value used by the [sign-up](api-management-page-controls.md#sign-up) control.|
-|`NameIdentifier`|string|Value used by the [sign-up](api-management-page-controls.md#sign-up) control.|
-|`ProviderName`|string|Authentication provider name.|
-
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management Template Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-template-resources.md
- Title: Azure API Management template resources | Microsoft Docs
-description: Learn about the types of resources available for use in developer portal templates in Azure API Management.
- Previously updated: 11/04/2019
-# Azure API Management template resources
-Azure API Management provides the following types of resources for use in the developer portal templates.
-
-- [String resources](#strings)
-
-- [Glyph resources](#glyphs)
-
-## <a name="strings"></a> String resources
- API Management provides a comprehensive set of string resources for use in the developer portal. These resources are localized into all of the languages supported by API Management. The default set of templates uses these resources for page headers, labels, and any constant strings that are displayed in the developer portal. To use a string resource in your templates, provide the resource string prefix followed by the string name, as shown in the following example.
-
-```
-{% localized "Prefix|Name" %}
-
-```
-
- The following example is from the Product list template, and displays **Products** at the top of the page.
-
-```
-<h2>{% localized "ProductsStrings|PageTitleProducts" %}</h2>
-
-```
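Conceptually, a `{% localized "Prefix|Name" %}` tag is a two-level lookup: the prefix selects a resource table and the name selects a string within it. A sketch using two strings that appear in this article (the lookup function itself is illustrative, not the portal's implementation):

```python
# Two strings taken from resource tables in this article; the lookup
# function is an illustrative sketch of how the localized tag resolves keys.
STRING_RESOURCES = {
    "ProductsStrings": {"PageTitleProducts": "Products"},
    "CommonResources": {"NoItemsToDisplay": "No results found."},
}

def localized(key):
    prefix, name = key.split("|", 1)
    return STRING_RESOURCES[prefix][name]
```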
-
-The following localization options are supported:
-
-| Locale | Language |
-|--||
-| "en" | "English" |
-| "cs" | "Čeština" |
-| "de" | "Deutsch" |
| "es" | "Español" |
-| "fr" | "Français" |
-| "hu" | "Magyar" |
-| "it" | "Italiano" |
-| "ja-JP" | "日本語" |
-| "ko" | "한국어" |
-| "nl" | "Nederlands" |
-| "pl" | "Polski" |
-| "pt-br" | "Português (Brasil)" |
-| "pt-pt" | "Português (Portugal)" |
-| "ru" | "Русский" |
-| "sv" | "Svenska" |
-| "tr" | "Türkçe" |
-| "zh-hans" | "中文(简体)" |
-| "zh-hant" | "中文(繁體)" |
-
- Refer to the following tables for the string resources available for use in your developer portal templates. Use the table name as the prefix for the string resources in that table.
-
-- [ApisStrings](#ApisStrings)
-
-- [ApplicationListStrings](#ApplicationListStrings)
-
-- [AppDetailsStrings](#AppDetailsStrings)
-
-- [AppStrings](#AppStrings)
-
-- [CommonResources](#CommonResources)
-
-- [CommonStrings](#CommonStrings)
-
-- [Documentation](#Documentation)
-
-- [ErrorPageStrings](#ErrorPageStrings)
-
-- [IssuesStrings](#IssuesStrings)
-
-- [NotFoundStrings](#NotFoundStrings)
-
-- [ProductDetailsStrings](#ProductDetailsStrings)
-
-- [ProductsStrings](#ProductsStrings)
-
-- [ProviderInfoStrings](#ProviderInfoStrings)
-
-- [SigninResources](#SigninResources)
-
-- [SigninStrings](#SigninStrings)
-
-- [SignupStrings](#SignupStrings)
-
-- [SubscriptionListStrings](#SubscriptionListStrings)
-
-- [SubscriptionStrings](#SubscriptionStrings)
-
-- [UpdateProfileStrings](#UpdateProfileStrings)
-
-- [UserProfile](#UserProfile)
-
-### <a name="ApisStrings"></a> ApisStrings
-
-|Name|Text|
-|-|-|
-|PageTitleApis|APIs|
-
-### <a name="AppDetailsStrings"></a> AppDetailsStrings
-
-|Name|Text|
-|-|-|
-|WebApplicationsDetailsTitle|Application preview|
-|WebApplicationsRequirementsHeader|Requirements|
-|WebApplicationsScreenshotAlt|Screenshot|
-|WebApplicationsScreenshotsHeader|Screenshots|
-
-### <a name="ApplicationListStrings"></a> ApplicationListStrings
-
-|Name|Text|
-|-|-|
-|WebDevelopersAppDeleteConfirmation|Are you sure that you want to remove application?|
-|WebDevelopersAppNotPublished|Not published|
-|WebDevelopersAppNotSubmitted|Not submitted|
-|WebDevelopersAppTableCategoryHeader|Category|
-|WebDevelopersAppTableNameHeader|Name|
-|WebDevelopersAppTableStateHeader|State|
-|WebDevelopersEditLink|Edit|
-|WebDevelopersRegisterAppLink|Register application|
-|WebDevelopersRemoveLink|Remove|
-|WebDevelopersSubmitLink|Submit|
-|WebDevelopersYourApplicationsHeader|Your applications|
-
-### <a name="AppStrings"></a> AppStrings
-
-|Name|Text|
-|-|-|
-|WebApplicationsHeader|Applications|
-
-### <a name="CommonResources"></a> CommonResources
-
-|Name|Text|
-|-|-|
-|NoItemsToDisplay|No results found.|
-|GeneralExceptionMessage|Something is not right. It could be a temporary glitch or a bug. Please, try again.|
-|GeneralJsonExceptionMessage|Something is not right. It could be a temporary glitch or a bug. Please, reload the page and try again.|
-|ConfirmationMessageUnsavedChanges|There are some unsaved changes. Are you sure you want to cancel and discard the changes?|
-|AzureActiveDirectory|Microsoft Entra ID|
-|HttpLargeRequestMessage|Http Request Body too large.|
-
-### <a name="CommonStrings"></a> CommonStrings
-
-|Name|Text|
-|-|-|
-|ButtonLabelCancel|Cancel|
-|ButtonLabelSave|Save|
-|GeneralExceptionMessage|Something is not right. It could be a temporary glitch or a bug. Please, try again.|
-|NoItemsToDisplay|There are no items to display.|
-|PagerButtonLabelFirst|First|
-|PagerButtonLabelLast|Last|
-|PagerButtonLabelNext|Next|
-|PagerButtonLabelPrevious|Prev|
-|PagerLabelPageNOfM|Page {0} of {1}|
-|PasswordTooShort|The Password is too short|
-|EmailAsPassword|Do not use your email as your password|
-|PasswordSameAsUserName|Your password cannot contain your username|
-|PasswordTwoCharacterClasses|Use different character classes|
-|PasswordTooManyRepetitions|Too many repetitions|
-|PasswordSequenceFound|Your password contains sequences|
-|PagerLabelPageSize|Page size|
-|CurtainLabelLoading|Loading...|
-|TablePlaceholderNothingToDisplay|There is no data for the selected period and scope|
-|ButtonLabelClose|Close|
-
-### <a name="Documentation"></a> Documentation
-
-|Name|Text|
-|-|-|
-|WebDocumentationInvalidHeaderErrorMessage|Invalid header '{0}'|
-|WebDocumentationInvalidRequestErrorMessage|Invalid Request URL|
-|TextboxLabelAccessToken|Access token *|
-|DropdownOptionPrimaryKeyFormat|Primary-{0}|
-|DropdownOptionSecondaryKeyFormat|Secondary-{0}|
-|WebDocumentationSubscriptionKeyText|Your subscription key|
-|WebDocumentationTemplatesAddHeaders|Add required HTTP headers|
-|WebDocumentationTemplatesBasicAuthSample|Basic Authorization Sample|
-|WebDocumentationTemplatesCurlForBasicAuth|for Basic Authorization use: --user {username}:{password}|
-|WebDocumentationTemplatesCurlValuesForPath|Specify values for path parameters (shown as {...}), your subscription key and values for query parameters|
-|WebDocumentationTemplatesDeveloperKey|Specify your subscription key|
-|WebDocumentationTemplatesJavaApache|This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)|
-|WebDocumentationTemplatesOptionalParams|Specify values for optional parameters, as needed|
-|WebDocumentationTemplatesPhpPackage|This sample uses the HTTP_Request2 package. (for more information: https://pear.php.net/package/HTTP_Request2)|
-|WebDocumentationTemplatesPythonValuesForPath|Specify values for path parameters (shown as {...}) and request body if needed|
-|WebDocumentationTemplatesRequestBody|Specify request body|
-|WebDocumentationTemplatesRequiredParams|Specify values for the following required parameters|
-|WebDocumentationTemplatesValuesForPath|Specify values for path parameters (shown as {...})|
-|OAuth2AuthorizationEndpointDescription|The authorization endpoint is used to interact with the resource owner and obtain an authorization grant.|
-|OAuth2AuthorizationEndpointName|Authorization endpoint|
-|OAuth2TokenEndpointDescription|The token endpoint is used by the client to obtain an access token by presenting its authorization grant or refresh token.|
-|OAuth2TokenEndpointName|Token endpoint|
-|OAuth2Flow_AuthorizationCodeGrant_Step_AuthorizationRequest_Description|<p\> The client initiates the flow by directing the resource owner's user-agent to the authorization endpoint. The client includes its client identifier, requested scope, local state, and a redirection URI to which the authorization server will send the user-agent back once access is granted (or denied). </p\> <p\> The authorization server authenticates the resource owner (via the user-agent) and establishes whether the resource owner grants or denies the client's access request. </p\> <p\> Assuming the resource owner grants access, the authorization server redirects the user-agent back to the client using the redirection URI provided earlier (in the request or during client registration). The redirection URI includes an authorization code and any local state provided by the client earlier. </p\>|
-|OAuth2Flow_AuthorizationCodeGrant_Step_AuthorizationRequest_ErrorDescription|<p\> If the user denies the access request or if the request is invalid, the client will be informed using the following parameters added on to the redirect: </p\>|
-|OAuth2Flow_AuthorizationCodeGrant_Step_AuthorizationRequest_Name|Authorization request|
-|OAuth2Flow_AuthorizationCodeGrant_Step_AuthorizationRequest_RequestDescription|<p\> The client app must send the user to the authorization endpoint in order to initiate the OAuth process. At the authorization endpoint, the user authenticates and then grants or denies access to the app. </p\>|
-|OAuth2Flow_AuthorizationCodeGrant_Step_AuthorizationRequest_ResponseDescription|<p\> Assuming the resource owner grants access, authorization server redirects the user-agent back to the client using the redirection URI provided earlier (in the request or during client registration). The redirection URI includes an authorization code and any local state provided by the client earlier. </p\>|
-|OAuth2Flow_AuthorizationCodeGrant_Step_TokenRequest_Description|<p\> The client requests an access token from the authorization server's token endpoint by including the authorization code received in the previous step. When making the request, the client authenticates with the authorization server. The client includes the redirection URI used to obtain the authorization code for verification. </p\> <p\> The authorization server authenticates the client, validates the authorization code, and ensures that the redirection URI received matches the URI used to redirect the client in step (C). If valid, the authorization server responds back with an access token and, optionally, a refresh token. </p\>|
-|OAuth2Flow_AuthorizationCodeGrant_Step_TokenRequest_ErrorDescription|<p\> If the request client authentication failed or is invalid, the authorization server responds with an HTTP 400 (Bad Request) status code (unless specified otherwise) and includes the following parameters with the response. </p\>|
-|OAuth2Flow_AuthorizationCodeGrant_Step_TokenRequest_RequestDescription|<p\> The client makes a request to the token endpoint by sending the following parameters using the "application/x-www-form-urlencoded" format with a character encoding of UTF-8 in the HTTP request entity-body. </p\>|
-|OAuth2Flow_AuthorizationCodeGrant_Step_TokenRequest_ResponseDescription|<p\> The authorization server issues an access token and optional refresh token, and constructs the response by adding the following parameters to the entity-body of the HTTP response with a 200 (OK) status code. </p\>|
-|OAuth2Flow_ClientCredentialsGrant_Step_TokenRequest_Description|<p\> The client authenticates with the authorization server and requests an access token from the token endpoint. </p\> <p\> The authorization server authenticates the client, and if valid, issues an access token. </p\>|
-|OAuth2Flow_ClientCredentialsGrant_Step_TokenRequest_ErrorDescription|<p\> If the request failed client authentication or is invalid the authorization server responds with an HTTP 400 (Bad Request) status code (unless specified otherwise) and includes the following parameters with the response. </p\>|
-|OAuth2Flow_ClientCredentialsGrant_Step_TokenRequest_RequestDescription|<p\> The client makes a request to the token endpoint by adding the following parameters using the "application/x-www-form-urlencoded" format with a character encoding of UTF-8 in the HTTP request entity-body. </p\>|
-|OAuth2Flow_ClientCredentialsGrant_Step_TokenRequest_ResponseDescription|<p\> If the access token request is valid and authorized, the authorization server issues an access token and optional refresh token, and constructs the response by adding the following parameters to the entity-body of the HTTP response with a 200 (OK) status code. </p\>|
-|OAuth2Flow_ImplicitGrant_Step_AuthorizationRequest_Description|<p\> The client initiates the flow by directing the resource owner's user-agent to the authorization endpoint. The client includes its client identifier, requested scope, local state, and a redirection URI to which the authorization server will send the user-agent back once access is granted (or denied). </p\> <p\> The authorization server authenticates the resource owner (via the user-agent) and establishes whether the resource owner grants or denies the client's access request. </p\> <p\> Assuming the resource owner grants access, the authorization server redirects the user-agent back to the client using the redirection URI provided earlier. The redirection URI includes the access token in the URI fragment. </p\>|
-|OAuth2Flow_ImplicitGrant_Step_AuthorizationRequest_ErrorDescription|<p\> If the resource owner denies the access request or if the request fails for reasons other than a missing or invalid redirection URI, the authorization server informs the client by adding the following parameters to the fragment component of the redirection URI using the "application/x-www-form-urlencoded" format. </p\>|
-|OAuth2Flow_ImplicitGrant_Step_AuthorizationRequest_RequestDescription|<p\> The client app must send the user to the authorization endpoint in order to initiate the OAuth process. At the authorization endpoint, the user authenticates and then grants or denies access to the app. </p\>|
-|OAuth2Flow_ImplicitGrant_Step_AuthorizationRequest_ResponseDescription|<p\> If the resource owner grants the access request, the authorization server issues an access token and delivers it to the client by adding the following parameters to the fragment component of the redirection URI using the "application/x-www-form-urlencoded" format. </p\>|
-|OAuth2Flow_ObtainAuthorization_AuthorizationCodeGrant_Description|Authorization code flow is optimized for clients capable of maintaining the confidentiality of their credentials (e.g., web server applications implemented using PHP, Java, Python, Ruby, ASP.NET, etc.).|
-|OAuth2Flow_ObtainAuthorization_AuthorizationCodeGrant_Name|Authorization Code grant|
-|OAuth2Flow_ObtainAuthorization_ClientCredentialsGrant_Description|Client credentials flow is suitable in cases where the client (your application) is requesting access to the protected resources under its control. The client is considered as a resource owner, so no end-user interaction is required.|
-|OAuth2Flow_ObtainAuthorization_ClientCredentialsGrant_Name|Client Credentials grant|
-|OAuth2Flow_ObtainAuthorization_ImplicitGrant_Description|Implicit flow is optimized for clients incapable of maintaining the confidentiality of their credentials known to operate a particular redirection URI. These clients are typically implemented in a browser using a scripting language such as JavaScript.|
-|OAuth2Flow_ObtainAuthorization_ImplicitGrant_Name|Implicit grant|
-|OAuth2Flow_ObtainAuthorization_ResourceOwnerPasswordCredentialsGrant_Description|Resource owner password credentials flow is suitable in cases where the resource owner has a trust relationship with the client (your application), such as the device operating system or a highly privileged application. This flow is suitable for clients capable of obtaining the resource owner's credentials (username and password, typically using an interactive form).|
-|OAuth2Flow_ObtainAuthorization_ResourceOwnerPasswordCredentialsGrant_Name|Resource Owner Password Credentials grant|
-|OAuth2Flow_ResourceOwnerPasswordCredentialsGrant_Step_TokenRequest_Description|<p\> The resource owner provides the client with its username and password. </p\> <p\> The client requests an access token from the authorization server's token endpoint by including the credentials received from the resource owner. When making the request, the client authenticates with the authorization server. </p\> <p\> The authorization server authenticates the client and validates the resource owner credentials, and if valid, issues an access token. </p\>|
-|OAuth2Flow_ResourceOwnerPasswordCredentialsGrant_Step_TokenRequest_ErrorDescription|<p\> If the request failed client authentication or is invalid the authorization server responds with an HTTP 400 (Bad Request) status code (unless specified otherwise) and includes the following parameters with the response. </p\>|
-|OAuth2Flow_ResourceOwnerPasswordCredentialsGrant_Step_TokenRequest_RequestDescription|<p\> The client makes a request to the token endpoint by adding the following parameters using the "application/x-www-form-urlencoded" format with a character encoding of UTF-8 in the HTTP request entity-body. </p\>|
-|OAuth2Flow_ResourceOwnerPasswordCredentialsGrant_Step_TokenRequest_ResponseDescription|<p\> If the access token request is valid and authorized, the authorization server issues an access token and optional refresh token, and constructs the response by adding the following parameters to the entity-body of the HTTP response with a 200 (OK) status code. </p\>|
-|OAuth2Step_AccessTokenRequest_Name|Access token request|
-|OAuth2Step_AuthorizationRequest_Name|Authorization request|
-|OAuth2AccessToken_AuthorizationCodeGrant_TokenResponse|REQUIRED. The access token issued by the authorization server.|
-|OAuth2AccessToken_ClientCredentialsGrant_TokenResponse|REQUIRED. The access token issued by the authorization server.|
-|OAuth2AccessToken_ImplicitGrant_AuthorizationResponse|REQUIRED. The access token issued by the authorization server.|
-|OAuth2AccessToken_ResourceOwnerPasswordCredentialsGrant_TokenResponse|REQUIRED. The access token issued by the authorization server.|
-|OAuth2ClientId_AuthorizationCodeGrant_AuthorizationRequest|REQUIRED. Client identifier.|
-|OAuth2ClientId_AuthorizationCodeGrant_TokenRequest|REQUIRED if the client is not authenticating with the authorization server.|
-|OAuth2ClientId_ImplicitGrant_AuthorizationRequest|REQUIRED. The client identifier.|
-|OAuth2Code_AuthorizationCodeGrant_AuthorizationResponse|REQUIRED. The authorization code generated by the authorization server.|
-|OAuth2Code_AuthorizationCodeGrant_TokenRequest|REQUIRED. The authorization code received from the authorization server.|
-|OAuth2ErrorDescription_AuthorizationCodeGrant_AuthorizationErrorResponse|OPTIONAL. Human-readable ASCII text providing additional information.|
-|OAuth2ErrorDescription_AuthorizationCodeGrant_TokenErrorResponse|OPTIONAL. Human-readable ASCII text providing additional information.|
-|OAuth2ErrorDescription_ClientCredentialsGrant_TokenErrorResponse|OPTIONAL. Human-readable ASCII text providing additional information.|
-|OAuth2ErrorDescription_ImplicitGrant_AuthorizationErrorResponse|OPTIONAL. Human-readable ASCII text providing additional information.|
-|OAuth2ErrorDescription_ResourceOwnerPasswordCredentialsGrant_TokenErrorResponse|OPTIONAL. Human-readable ASCII text providing additional information.|
-|OAuth2ErrorUri_AuthorizationCodeGrant_AuthorizationErrorResponse|OPTIONAL. A URI identifying a human-readable web page with information about the error.|
-|OAuth2ErrorUri_AuthorizationCodeGrant_TokenErrorResponse|OPTIONAL. A URI identifying a human-readable web page with information about the error.|
-|OAuth2ErrorUri_ClientCredentialsGrant_TokenErrorResponse|OPTIONAL. A URI identifying a human-readable web page with information about the error.|
-|OAuth2ErrorUri_ImplicitGrant_AuthorizationErrorResponse|OPTIONAL. A URI identifying a human-readable web page with information about the error.|
-|OAuth2ErrorUri_ResourceOwnerPasswordCredentialsGrant_TokenErrorResponse|OPTIONAL. A URI identifying a human-readable web page with information about the error.|
-|OAuth2Error_AuthorizationCodeGrant_AuthorizationErrorResponse|REQUIRED. A single ASCII error code from the following: invalid_request, unauthorized_client, access_denied, unsupported_response_type, invalid_scope, server_error, temporarily_unavailable.|
-|OAuth2Error_AuthorizationCodeGrant_TokenErrorResponse|REQUIRED. A single ASCII error code from the following: invalid_request, invalid_client, invalid_grant, unauthorized_client, unsupported_grant_type, invalid_scope.|
-|OAuth2Error_ClientCredentialsGrant_TokenErrorResponse|REQUIRED. A single ASCII error code from the following: invalid_request, invalid_client, invalid_grant, unauthorized_client, unsupported_grant_type, invalid_scope.|
-|OAuth2Error_ImplicitGrant_AuthorizationErrorResponse|REQUIRED. A single ASCII error code from the following: invalid_request, unauthorized_client, access_denied, unsupported_response_type, invalid_scope, server_error, temporarily_unavailable.|
-|OAuth2Error_ResourceOwnerPasswordCredentialsGrant_TokenErrorResponse|REQUIRED. A single ASCII error code from the following: invalid_request, invalid_client, invalid_grant, unauthorized_client, unsupported_grant_type, invalid_scope.|
-|OAuth2ExpiresIn_AuthorizationCodeGrant_TokenResponse|RECOMMENDED. The lifetime in seconds of the access token.|
-|OAuth2ExpiresIn_ClientCredentialsGrant_TokenResponse|RECOMMENDED. The lifetime in seconds of the access token.|
-|OAuth2ExpiresIn_ImplicitGrant_AuthorizationResponse|RECOMMENDED. The lifetime in seconds of the access token.|
-|OAuth2ExpiresIn_ResourceOwnerPasswordCredentialsGrant_TokenResponse|RECOMMENDED. The lifetime in seconds of the access token.|
-|OAuth2GrantType_AuthorizationCodeGrant_TokenRequest|REQUIRED. Value MUST be set to "authorization_code".|
-|OAuth2GrantType_ClientCredentialsGrant_TokenRequest|REQUIRED. Value MUST be set to "client_credentials".|
-|OAuth2GrantType_ResourceOwnerPasswordCredentialsGrant_TokenRequest|REQUIRED. Value MUST be set to "password".|
-|OAuth2Password_ResourceOwnerPasswordCredentialsGrant_TokenRequest|REQUIRED. The resource owner password.|
-|OAuth2RedirectUri_AuthorizationCodeGrant_AuthorizationRequest|OPTIONAL. The redirection endpoint URI must be an absolute URI.|
-|OAuth2RedirectUri_AuthorizationCodeGrant_TokenRequest|REQUIRED if the "redirect_uri" parameter was included in the authorization request, and their values MUST be identical.|
-|OAuth2RedirectUri_ImplicitGrant_AuthorizationRequest|OPTIONAL. The redirection endpoint URI must be an absolute URI.|
-|OAuth2RefreshToken_AuthorizationCodeGrant_TokenResponse|OPTIONAL. The refresh token, which can be used to obtain new access tokens.|
-|OAuth2RefreshToken_ClientCredentialsGrant_TokenResponse|OPTIONAL. The refresh token, which can be used to obtain new access tokens.|
-|OAuth2RefreshToken_ResourceOwnerPasswordCredentialsGrant_TokenResponse|OPTIONAL. The refresh token, which can be used to obtain new access tokens.|
-|OAuth2ResponseType_AuthorizationCodeGrant_AuthorizationRequest|REQUIRED. Value MUST be set to "code".|
-|OAuth2ResponseType_ImplicitGrant_AuthorizationRequest|REQUIRED. Value MUST be set to "token".|
-|OAuth2Scope_AuthorizationCodeGrant_AuthorizationRequest|OPTIONAL. The scope of the access request.|
-|OAuth2Scope_AuthorizationCodeGrant_TokenResponse|OPTIONAL if identical to the scope requested by the client; otherwise, REQUIRED.|
-|OAuth2Scope_ClientCredentialsGrant_TokenRequest|OPTIONAL. The scope of the access request.|
-|OAuth2Scope_ClientCredentialsGrant_TokenResponse|OPTIONAL, if identical to the scope requested by the client; otherwise, REQUIRED.|
-|OAuth2Scope_ImplicitGrant_AuthorizationRequest|OPTIONAL. The scope of the access request.|
-|OAuth2Scope_ImplicitGrant_AuthorizationResponse|OPTIONAL if identical to the scope requested by the client; otherwise, REQUIRED.|
-|OAuth2Scope_ResourceOwnerPasswordCredentialsGrant_TokenRequest|OPTIONAL. The scope of the access request.|
-|OAuth2Scope_ResourceOwnerPasswordCredentialsGrant_TokenResponse|OPTIONAL, if identical to the scope requested by the client; otherwise, REQUIRED.|
-|OAuth2State_AuthorizationCodeGrant_AuthorizationErrorResponse|REQUIRED if the "state" parameter was present in the client authorization request. The exact value received from the client.|
-|OAuth2State_AuthorizationCodeGrant_AuthorizationRequest|RECOMMENDED. An opaque value used by the client to maintain state between the request and callback. The authorization server includes this value when redirecting the user-agent back to the client. The parameter SHOULD be used for preventing cross-site request forgery.|
-|OAuth2State_AuthorizationCodeGrant_AuthorizationResponse|REQUIRED if the "state" parameter was present in the client authorization request. The exact value received from the client.|
-|OAuth2State_ImplicitGrant_AuthorizationErrorResponse|REQUIRED if the "state" parameter was present in the client authorization request. The exact value received from the client.|
-|OAuth2State_ImplicitGrant_AuthorizationRequest|RECOMMENDED. An opaque value used by the client to maintain state between the request and callback. The authorization server includes this value when redirecting the user-agent back to the client. The parameter SHOULD be used for preventing cross-site request forgery.|
-|OAuth2State_ImplicitGrant_AuthorizationResponse|REQUIRED if the "state" parameter was present in the client authorization request. The exact value received from the client.|
-|OAuth2TokenType_AuthorizationCodeGrant_TokenResponse|REQUIRED. The type of the token issued.|
-|OAuth2TokenType_ClientCredentialsGrant_TokenResponse|REQUIRED. The type of the token issued.|
-|OAuth2TokenType_ImplicitGrant_AuthorizationResponse|REQUIRED. The type of the token issued.|
-|OAuth2TokenType_ResourceOwnerPasswordCredentialsGrant_TokenResponse|REQUIRED. The type of the token issued.|
-|OAuth2UserName_ResourceOwnerPasswordCredentialsGrant_TokenRequest|REQUIRED. The resource owner username.|
-|OAuth2UnsupportedTokenType|Token type '{0}' is not supported.|
-|OAuth2InvalidState|Invalid response from authorization server|
-|OAuth2GrantType_AuthorizationCode|Authorization code|
-|OAuth2GrantType_Implicit|Implicit|
-|OAuth2GrantType_ClientCredentials|Client credentials|
-|OAuth2GrantType_ResourceOwnerPassword|Resource owner password|
-|WebDocumentation302Code|302 Found|
-|WebDocumentation400Code|400 (Bad request)|
-|OAuth2SendingMethod_AuthHeader|Authorization header|
-|OAuth2SendingMethod_QueryParam|Query parameter|
-|OAuth2AuthorizationServerGeneralException|An error has occurred while authorizing access via {0}|
-|OAuth2AuthorizationServerCommunicationException|An HTTP connection to authorization server could not be established or it has been unexpectedly closed.|
-|WebDocumentationOAuth2GeneralErrorMessage|Unexpected error occurred.|
-|AuthorizationServerCommunicationException|Authorization server communication exception has happened. Please contact administrator.|
-|TextblockSubscriptionKeyHeaderDescription|Subscription key which provides access to this API. Found in your <a href='/developer'\>Profile</a\>.|
-|TextblockOAuthHeaderDescription|OAuth 2.0 access token obtained from <i\>{0}</i\>. Supported grant types: <i\>{1}</i\>.|
-|TextblockContentTypeHeaderDescription|Media type of the body sent to the API.|
-|ErrorMessageApiNotAccessible|The API you are trying to call is not accessible at this time. Please contact the API publisher <a href="/issues"\>here</a\>.|
-|ErrorMessageApiTimedout|The API you are trying to call is taking longer than normal to get response back. Please contact the API publisher <a href="/issues"\>here</a\>.|
-|BadRequestParameterExpected|"'{0}' parameter is expected"|
-|TooltipTextDoubleClickToSelectAll|Double click to select all.|
-|TooltipTextHideRevealSecret|Show/Hide|
-|ButtonLinkOpenConsole|Try it|
-|SectionHeadingRequestBody|Request body|
-|SectionHeadingRequestParameters|Request parameters|
-|SectionHeadingRequestUrl|Request URL|
-|SectionHeadingResponse|Response|
-|SectionHeadingRequestHeaders|Request headers|
-|FormLabelSubtextOptional|optional|
-|SectionHeadingCodeSamples|Code samples|
-|TextblockOpenidConnectHeaderDescription|OpenID Connect ID token obtained from <i\>{0}</i\>. Supported grant types: <i\>{1}</i\>.|
-
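Several of the token-request strings above (for example, `OAuth2Flow_AuthorizationCodeGrant_Step_TokenRequest_RequestDescription`) describe sending parameters in the HTTP request entity-body using the `application/x-www-form-urlencoded` format with UTF-8 encoding. As a minimal sketch of what such a body looks like for the authorization code grant, assuming invented placeholder values (the code, redirect URI, and client ID below are illustrative, not values from this article):

```python
from urllib.parse import urlencode

# Hypothetical values for illustration only; real values come from your
# authorization server and your client registration.
params = {
    "grant_type": "authorization_code",              # REQUIRED; fixed value for this grant
    "code": "SplxlOBeZQQYbYS6WxSbIA",                # authorization code from the redirect
    "redirect_uri": "https://client.example.com/cb", # must match the authorization request
    "client_id": "my-client-id",                     # REQUIRED if the client is not otherwise authenticating
}

# This string is sent as the HTTP request entity-body with
# Content-Type: application/x-www-form-urlencoded.
body = urlencode(params)
print(body)
```

The same encoding applies to the client credentials and resource owner password grants; only the `grant_type` value and the accompanying parameters differ, as described in the per-grant strings above.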
-### <a name="ErrorPageStrings"></a> ErrorPageStrings
-
-|Name|Text|
-|-|-|
-|LinkLabelBack|back|
-|LinkLabelHomePage|home page|
-|LinkLabelSendUsEmail|Send us an e-mail|
-|PageTitleError|Sorry, there was a problem serving the requested page|
-|TextblockPotentialCauseIntermittentIssue|This may be an intermittent data access issue that is already gone.|
-|TextblockPotentialCauseOldLink|The link you have clicked on may be old and not point to the correct location anymore.|
-|TextblockPotentialCauseTechnicalProblem|There may be a technical problem on our end.|
-|TextblockPotentialSolutionRefresh|Try refreshing the page.|
-|TextblockPotentialSolutionStartOver|Start over from our {0}.|
-|TextblockPotentialSolutionTryAgain|Go {0} and try the action you performed again.|
-|TextReportProblem|{0} describing what went wrong and we will look at it as soon as we can.|
-|TitlePotentialCause|Potential cause|
-|TitlePotentialSolution|It's possibly just a temporary issue, a few things to try|
-
-### <a name="IssuesStrings"></a> IssuesStrings
-
-|Name|Text|
-|-|-|
-|WebIssuesIndexTitle|Issues|
-|WebIssuesNoActiveSubscriptions|You have no active subscriptions. You need to subscribe for a product to report an issue.|
-|WebIssuesNotSignin|You're not signed in. Please {0} to report an issue or post a comment.|
-|WebIssuesReportIssueButton|Report Issue|
-|WebIssuesSignIn|sign in|
-|WebIssuesStatusReportedBy|Status: {0} &#124; Reported by {1}|
-
-### <a name="NotFoundStrings"></a> NotFoundStrings
-
-|Name|Text|
-|-|-|
-|LinkLabelHomePage|home page|
-|LinkLabelSendUsEmail|send us an e-mail|
-|PageTitleNotFound|Sorry, we can't find the page you are looking for|
-|TextblockPotentialCauseMisspelledUrl|You may have misspelled the URL if you typed it in.|
-|TextblockPotentialCauseOldLink|The link you have clicked on may be old and not point to the correct location anymore.|
-|TextblockPotentialSolutionRetype|Try retyping the URL.|
-|TextblockPotentialSolutionStartOver|Start over from our {0}.|
-|TextReportProblem|{0} describing what went wrong and we will look at it as soon as we can.|
-|TitlePotentialCause|Potential cause|
-|TitlePotentialSolution|Potential solution|
-
-### <a name="ProductDetailsStrings"></a> ProductDetailsStrings
-
-|Name|Text|
-|-|-|
-|WebProductsAgreement|By subscribing to {0} Product, I agree to the `<a data-toggle='modal' href='#legal-terms'\>Terms of Use</a\>`.|
-|WebProductsLegalTermsLink|Terms of Use|
-|WebProductsSubscribeButton|Subscribe|
-|WebProductsUsageLimitsHeader|Usage limits|
-|WebProductsYouAreNotSubscribed|You are subscribed to this product.|
-|WebProductsYouRequestedSubscription|You requested subscription to this product.|
-|ErrorYouNeedToAgreeWithLegalTerms|You must agree to the Terms of Use before you can proceed.|
-|ButtonLabelAddSubscription|Add subscription|
-|LinkLabelChangeSubscriptionName|change|
-|ButtonLabelConfirm|Confirm|
-|TextblockMultipleSubscriptionsCount|You have {0} subscriptions to this product:|
-|TextblockSingleSubscriptionsCount|You have {0} subscription to this product:|
-|TextblockSingleApisCount|This product contains {0} API:|
-|TextblockMultipleApisCount|This product contains {0} APIs:|
-|TextblockHeaderSubscribe|Subscribe to product|
-|TextblockSubscriptionDescription|A new subscription will be created as follows:|
-|TextblockSubscriptionLimitReached|Subscriptions limit reached.|
-
-### <a name="ProductsStrings"></a> ProductsStrings
-
-|Name|Text|
-|-|-|
-|PageTitleProducts|Products|
-
-### <a name="ProviderInfoStrings"></a> ProviderInfoStrings
-
-|Name|Text|
-|-|-|
-|TextboxExternalIdentitiesDisabled|Sign in is disabled by the administrators at the moment.|
-|TextboxExternalIdentitiesSigninInvitation|Alternatively, sign in with|
-|TextboxExternalIdentitiesSigninInvitationPrimary|Sign in with:|
-
-### <a name="SigninResources"></a> SigninResources
-
-|Name|Text|
-|-|-|
-|PrincipalNotFound|Principal is not found or signature is invalid|
-|ErrorSsoAuthenticationFailed|SSO authentication failed|
-|ErrorSsoAuthenticationFailedDetailed|Invalid token provided or signature cannot be verified.|
-|ErrorSsoTokenInvalid|SSO token is invalid|
-|ValidationErrorSpecificEmailAlreadyExists|Email '{0}' already registered|
-|ValidationErrorSpecificEmailInvalid|Email '{0}' is invalid|
-|ValidationErrorPasswordInvalid|Password is invalid. Please correct the errors and try again.|
-|PropertyTooShort|{0} is too short|
-|WebAuthenticationAddresserEmailInvalidErrorMessage|Invalid email address.|
-|ValidationMessageNewPasswordConfirmationRequired|Confirm new password|
-|ValidationErrorPasswordConfirmationRequired|Confirm password is empty|
-|WebAuthenticationEmailChangeNotice|Change confirmation email is on the way to {0}. Please follow instructions within it to confirm your new email address. If the email does not arrive to your inbox in the next few minutes, please check your junk email folder.|
-|WebAuthenticationEmailChangeNoticeHeader|Your email change request was successfully processed|
-|WebAuthenticationEmailChangeNoticeTitle|Email change requested|
-|WebAuthenticationEmailHasBeenRevertedNotice|Your email already exists. The request has been reverted.|
-|ValidationErrorEmailAlreadyExists|Email already exists|
-|ValidationErrorEmailInvalid|Invalid e-mail address|
-|TextboxLabelEmail|Email|
-|ValidationErrorEmailRequired|Email is required.|
-|WebAuthenticationErrorNoticeHeader|Error|
-|WebAuthenticationFieldLengthErrorMessage|{0} must be a maximum length of {1}|
-|TextboxLabelEmailFirstName|First name|
-|ValidationErrorFirstNameRequired|First name is required.|
-|ValidationErrorFirstNameInvalid|Invalid first name|
-|NoticeInvalidInvitationToken|Please note that confirmation links are valid for only 48 hours. If you are still within this timeframe, please make sure your link is correct. If your link has expired, then please repeat the action you're trying to confirm.|
-|NoticeHeaderInvalidInvitationToken|Invalid invitation token|
-|NoticeTitleInvalidInvitationToken|Confirmation error|
-|WebAuthenticationLastNameInvalidErrorMessage|Invalid last name|
-|TextboxLabelEmailLastName|Last name|
-|ValidationErrorLastNameRequired|Last name is required.|
-|WebAuthenticationLinkExpiredNotice|Confirmation link sent to you has expired. `<a href={0}?token={1}>Resend confirmation email.</a\>`|
-|NoticePasswordResetLinkInvalidOrExpired|Your password reset link is invalid or expired.|
-|WebAuthenticationLinkExpiredNoticeTitle|Link sent|
-|WebAuthenticationNewPasswordLabel|New password|
-|ValidationMessageNewPasswordRequired|New password is required.|
-|TextboxLabelNotificationsSenderEmail|Notifications sender email|
-|TextboxLabelOrganizationName|Organization name|
-|WebAuthenticationOrganizationRequiredErrorMessage|Organization name is empty|
-|WebAuthenticationPasswordChangedNotice|Your password was successfully updated|
-|WebAuthenticationPasswordChangedNoticeTitle|Password updated|
-|WebAuthenticationPasswordCompareErrorMessage|Passwords don't match|
-|WebAuthenticationPasswordConfirmLabel|Confirm password|
-|ValidationErrorPasswordInvalidDetailed|Password is too weak.|
-|WebAuthenticationPasswordLabel|Password|
-|ValidationErrorPasswordRequired|Password is required.|
-|WebAuthenticationPasswordResetSendNotice|Change password confirmation email is on the way to {0}. Please follow the instructions within the email to continue your password change process.|
-|WebAuthenticationPasswordResetSendNoticeHeader|Your password reset request was successfully processed|
-|WebAuthenticationPasswordResetSendNoticeTitle|Password reset requested|
-|WebAuthenticationRequestNotFoundNotice|Request not found|
-|WebAuthenticationSenderEmailRequiredErrorMessage|Notifications sender email is empty|
-|WebAuthenticationSigninPasswordLabel|Please confirm the change by entering a password|
-|WebAuthenticationSignupConfirmNotice|Registration confirmation email is on its way to {0}.<br /\> Please follow instructions within the email to activate your account.<br /\> If the email does not arrive in your inbox within the next few minutes, please check your junk email folder.|
-|WebAuthenticationSignupConfirmNoticeHeader|Your account was successfully created|
-|WebAuthenticationSignupConfirmNoticeRepeatHeader|Registration confirmation email was sent again|
-|WebAuthenticationSignupConfirmNoticeTitle|Account created|
-|WebAuthenticationTokenRequiredErrorMessage|Token is empty|
-|WebAuthenticationUserAlreadyRegisteredNotice|It seems a user with this email is already registered in the system. If you forgot your password, please try to restore it or contact our support team.|
-|WebAuthenticationUserAlreadyRegisteredNoticeHeader|User already registered|
-|WebAuthenticationUserAlreadyRegisteredNoticeTitle|Already registered|
-|ButtonLabelChangePassword|Change password|
-|ButtonLabelChangeAccountInfo|Change account information|
-|ButtonLabelCloseAccount|Close account|
-|WebAuthenticationInvalidCaptchaErrorMessage|Text entered doesn't match text on the picture. Please try again.|
-|ValidationErrorCredentialsInvalid|Email or password is invalid. Please correct the errors and try again.|
-|WebAuthenticationRequestIsNotValid|Request is not valid|
-|WebAuthenticationUserIsNotConfirm|Please confirm your registration before attempting to sign in.|
-|WebAuthenticationInvalidEmailFormatted|Email is invalid: {0}|
-|WebAuthenticationUserNotFound|User not found|
-|WebAuthenticationTenantNotRegistered|Your account belongs to a Microsoft Entra tenant which is not authorized to access this portal.|
-|WebAuthenticationAuthenticationFailed|Authentication has failed.|
-|WebAuthenticationGooglePlusNotEnabled|Authentication has failed. If you authorized the application then please contact the admin to make sure that Google authentication is configured correctly.|
-|ValidationErrorAllowedTenantIsRequired|Allowed Tenant is required|
-|ValidationErrorTenantIsNotValid|The Microsoft Entra tenant '{0}' is not valid.|
-|WebAuthenticationActiveDirectoryTitle|Microsoft Entra ID|
-|WebAuthenticationLoginUsingYourProvider|Log in using your {0} account|
-|WebAuthenticationUserLimitNotice|This service has reached the maximum number of allowed users. Please `<a href="mailto:{0}"\>contact the administrator</a\>` to upgrade their service and re-enable user registration.|
-|WebAuthenticationUserLimitNoticeHeader|User registration disabled|
-|WebAuthenticationUserLimitNoticeTitle|User registration disabled|
-|WebAuthenticationUserRegistrationDisabledNotice|Registration of users has been disabled by the administrator. Please login with external identity provider.|
-|WebAuthenticationUserRegistrationDisabledNoticeHeader|User registration disabled|
-|WebAuthenticationUserRegistrationDisabledNoticeTitle|User registration disabled|
-|WebAuthenticationSignupPendingConfirmationNotice|Before we can complete the creation of your account we need to verify your e-mail address. We've sent an e-mail to {0}. Please follow the instructions inside the e-mail to activate your account. If the e-mail doesn't arrive within the next few minutes, please check your junk email folder.|
-|WebAuthenticationSignupPendingConfirmationAccountFoundNotice|We found an unconfirmed account for the e-mail address {0}. To complete the creation of your account we need to verify your e-mail address. We've sent an e-mail to {0}. Please follow the instructions inside the e-mail to activate your account. If the e-mail doesn't arrive within the next few minutes, please check your junk email folder|
-|WebAuthenticationSignupConfirmationAlmostDone|Almost Done|
-|WebAuthenticationSignupConfirmationEmailSent|We've sent an e-mail to {0}. Please follow the instructions inside the e-mail to activate your account. If the e-mail doesn't arrive within the next few minutes, please check your junk email folder.|
-|WebAuthenticationEmailSentNotificationMessage|Email sent successfully to {0}|
-|WebAuthenticationNoAadTenantConfigured|No Microsoft Entra tenant configured for the service.|
-|CheckboxLabelUserRegistrationTermsConsentRequired|I agree to the `<a data-toggle="modal" href="#" data-target="#terms"\>Terms of Use</a\>`.|
-|TextblockUserRegistrationTermsProvided|Please review `<a data-toggle="modal" href="#" data-target="#terms"\>Terms of Use.</a\>`|
-|DialogHeadingTermsOfUse|Terms of Use|
-|ValidationMessageConsentNotAccepted|You must agree to the Terms of Use before you can proceed.|
-
-### <a name="SigninStrings"></a> SigninStrings
-
-|Name|Text|
-|-|-|
-|WebAuthenticationForgotPassword|Forgot your password?|
-|WebAuthenticationIfAdministrator|If you are an Administrator you must sign in `<a href="{0}"\>here</a\>`.|
-|WebAuthenticationNotAMember|Not a member yet? `<a href="/signup"\>Sign up now</a\>`|
-|WebAuthenticationRemember|Remember me on this computer|
-|WebAuthenticationSigininWithPassword|Sign in with your username and password|
-|WebAuthenticationSigninTitle|Sign in|
-|WebAuthenticationSignUpNow|Sign up now|
-
-### <a name="SignupStrings"></a> SignupStrings
-
-|Name|Text|
-|-|-|
-|PageTitleSignup|Sign up|
-|WebAuthenticationAlreadyAMember|Already a member?|
-|WebAuthenticationCreateNewAccount|Create a new API Management account|
-|WebAuthenticationSigninNow|Sign in now|
-|ButtonLabelSignup|Sign up|
-
-### <a name="SubscriptionListStrings"></a> SubscriptionListStrings
-
-|Name|Text|
-|-|-|
-|SubscriptionCancelConfirmation|Are you sure that you want to cancel this subscription?|
-|SubscriptionRenewConfirmation|Are you sure that you want to renew this subscription?|
-|WebDevelopersManageSubscriptions|Manage subscriptions|
-|WebDevelopersPrimaryKey|Primary key|
-|WebDevelopersRegenerateLink|Regenerate|
-|WebDevelopersSecondaryKey|Secondary key|
-|ButtonLabelShowKey|Show|
-|ButtonLabelRenewSubscription|Renew|
-|WebDevelopersSubscriptionRequested|Requested on {0}|
-|WebDevelopersSubscriptionRequestedState|Requested|
-|WebDevelopersSubscriptionTableNameHeader|Name|
-|WebDevelopersSubscriptionTableStateHeader|State|
-|WebDevelopersUsageStatisticsLink|Analytics reports|
-|WebDevelopersYourSubscriptions|Your subscriptions|
-|SubscriptionPropertyLabelRequestedDate|Requested on|
-|SubscriptionPropertyLabelStartedDate|Started on|
-|PageTitleRenameSubscription|Rename subscription|
-|SubscriptionPropertyLabelName|Subscription name|
-
-### <a name="SubscriptionStrings"></a> SubscriptionStrings
-
-|Name|Text|
-|-|-|
-|SectionHeadingCloseAccount|Looking to close your account?|
-|PageTitleDeveloperProfile|Profile|
-|ButtonLabelHideKey|Hide|
-|ButtonLabelRegenerateKey|Regenerate|
-|InformationMessageKeyWasRegenerated|Are you sure that you want to regenerate this key?|
-|ButtonLabelShowKey|Show|
-
-### <a name="UpdateProfileStrings"></a> UpdateProfileStrings
-
-|Name|Text|
-|-|-|
-|ButtonLabelUpdateProfile|Update profile|
-|PageTitleUpdateProfile|Update account information|
-
-### <a name="UserProfile"></a> UserProfile
-
-|Name|Text|
-|-|-|
-|ButtonLabelChangeAccountInfo|Change account information|
-|ButtonLabelChangePassword|Change password|
-|ButtonLabelCloseAccount|Close account|
-|TextboxLabelEmail|Email|
-|TextboxLabelEmailFirstName|First name|
-|TextboxLabelEmailLastName|Last name|
-|TextboxLabelNotificationsSenderEmail|Notifications sender email|
-|TextboxLabelOrganizationName|Organization name|
-|SubscriptionStateActive|Active|
-|SubscriptionStateCancelled|Cancelled|
-|SubscriptionStateExpired|Expired|
-|SubscriptionStateRejected|Rejected|
-|SubscriptionStateRequested|Requested|
-|SubscriptionStateSuspended|Suspended|
-|DefaultSubscriptionNameTemplate|{0} (default)|
-|SubscriptionNameTemplate|Developer access #{0}|
-|TextboxLabelSubscriptionName|Subscription name|
-|ValidationMessageSubscriptionNameRequired|Subscription name cannot be empty.|
-|ApiManagementUserLimitReached|This service has reached the maximum number of allowed users. Please upgrade to a higher pricing tier.|
-
-## <a name="glyphs"></a> Glyph resources
- API Management developer portal templates can use the glyphs from [Glyphicons from Bootstrap](https://getbootstrap.com/components/#glyphicons). This set of glyphs includes over 250 glyphs in font format from the [Glyphicon](https://glyphicons.com/) Halflings set. To use a glyph from this set, use the following syntax.
-
-```html
-<span class="glyphicon glyphicon-user"></span>
-```
-
- For the complete list of glyphs, see [Glyphicons from Bootstrap](https://getbootstrap.com/components/#glyphicons).
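-
-As used throughout the developer portal's default templates, a glyph is typically paired with a localized label inside a link or button. A minimal sketch (the resource name is taken from the [UserProfile](#UserProfile) string table above; the `href` value is illustrative):
-
-```html
-<a class="btn btn-default" href="/developer">
-    <span class="glyphicon glyphicon-user"></span>
-    <span>{% localized "UserProfile|ButtonLabelChangeAccountInfo" %}</span>
-</a>
-```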
-
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Api Management User Profile Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-user-profile-templates.md
- Title: "User profile templates in Azure API Management | Microsoft Docs"
-description: Learn how to customize the content of the User Profile pages in the developer portal in Azure API Management.
-Previously updated: 11/04/2019
-# User profile templates in Azure API Management
-Azure API Management lets you customize the content of developer portal pages by using a set of templates. Using [DotLiquid](https://github.com/dotliquid) syntax, the editor of your choice (such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers)), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you can configure the content of these pages as you see fit.
-
- The templates in this section allow you to customize the content of the User profile pages in the developer portal.
-
-- [Profile](#Profile)
-
-- [Subscriptions](#Subscriptions)
-
-- [Applications](#Applications)
-
-- [Update account info](#UpdateAccountInfo)
-
-> [!NOTE]
-> Sample default templates are included in the following documentation, but are subject to change due to continuous improvements. You can view the live default templates in the developer portal by navigating to the desired individual templates. For more information about working with templates, see [How to customize the API Management developer portal using templates](./api-management-developer-portal-templates.md).
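-
-Display strings in these templates are retrieved with the DotLiquid `localized` tag, using the `SectionName|StringName` format from the [String resources](api-management-template-resources.md#strings) tables. A minimal sketch of the pattern used in the default templates that follow:
-
-```xml
-<h2>{% localized "SigninStrings|WebAuthenticationSigninTitle" %}</h2>
-<a href="/signup">{% localized "SigninStrings|WebAuthenticationSignUpNow" %}</a>
-```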
-
-## <a name="Profile"></a> Profile
- The **profile** template allows you to customize the user profile section of the user profile page in the developer portal.
-
- ![User Profile Page](./media/api-management-user-profile-templates/APIM-User-Profile-Page.png "APIM User Profile Page")
-
-### Default template
-
-```xml
-<div class="pull-right">
- {% if canChangePassword == true %}
- <a class="btn btn-default" id="ChangePassword" role="button" href="{{changePasswordUrl}}">{% localized "UserProfile|ButtonLabelChangePassword" %}</a>
- {% endif %}
- <a id="changeAccountInfo" href="{{changeNameOrEmailUrl}}" class="btn btn-default">
- <span class="glyphicon glyphicon-user"></span>
- <span>{% localized "UserProfile|ButtonLabelChangeAccountInfo" %}</span>
- </a>
-</div>
-<h2>{% localized "SubscriptionStrings|PageTitleDeveloperProfile" %}</h2>
-<div class="container-fluid">
- <div class="row">
- <div class="col-sm-3">
- <label for="Email">{% localized "UserProfile|TextboxLabelEmail" %}</label>
- </div>
- <div class="col-sm-9" id="Email">{{email}}</div>
- </div>
-
- {% if isSystemUser != true %}
- <div class="row">
- <div class="col-sm-3">
- <label for="FirstName">{% localized "UserProfile|TextboxLabelEmailFirstName" %}</label>
- </div>
- <div class="col-sm-9" id="FirstName">{{FirstName}}</div>
- </div>
- <div class="row">
- <div class="col-sm-3">
- <label for="LastName">{% localized "UserProfile|TextboxLabelEmailLastName" %}</label>
- </div>
- <div class="col-sm-9" id="LastName">{{LastName}}</div>
- </div>
- {% else %}
- <div class="row">
- <div class="col-sm-3">
- <label for="CompanyName">{% localized "UserProfile|TextboxLabelOrganizationName" %}</label>
- </div>
- <div class="col-sm-9" id="CompanyName">{{CompanyName}}</div>
- </div>
- <div class="row">
- <div class="col-sm-3">
- <label for="AddresserEmail">{% localized "UserProfile|TextboxLabelNotificationsSenderEmail" %}</label>
- </div>
- <div class="col-sm-9" id="AddresserEmail">{{AddresserEmail}}</div>
- </div>
- {% endif %}
-
-</div>
-```
-
-### Controls
- This template may not use any [page controls](api-management-page-controls.md).
-
-### Data model
-
-> [!NOTE]
-> The [Profile](#Profile), [Applications](#Applications), and [Subscriptions](#Subscriptions) templates share the same data model and receive the same template data.
-
-|Property|Type|Description|
-|--|-|--|
-|`firstName`|string|First name of the current user.|
-|`lastName`|string|Last name of the current user.|
-|`companyName`|string|The company name of the current user.|
-|`addresserEmail`|string|Email address of the current user.|
-|`developersUsageStatisticsLink`|string|Relative URL to view analytics for the current user.|
-|`subscriptions`|Collection of [Subscription](api-management-template-data-model-reference.md#Subscription) entities.|The subscriptions for the current user.|
-|`applications`|Collection of [Application](api-management-template-data-model-reference.md#Application) entities.|The applications of the current user.|
-|`changePasswordUrl`|string|The relative URL to change the current user's password.|
-|`changeNameOrEmailUrl`|string|The relative URL to change the name and email for the current user.|
-|`canChangePassword`|boolean|Whether the current user can change their password.|
-|`isSystemUser`|boolean|Whether the current user is a member of one of the built-in [groups](api-management-key-concepts.md#groups).|
-
-### Sample template data
-
-```json
-{
- "firstName": "Administrator",
- "lastName": "",
- "companyName": "Contoso",
- "addresserEmail": "apimgmt-noreply@mail.windowsazure.com",
- "email": "admin@live.com",
- "developersUsageStatisticsLink": "/Developer/Analytics",
- "subscriptions": [
- {
- "Id": "57026e30de15d80041070001",
- "ProductId": "57026e30de15d80041060001",
- "ProductTitle": "Starter",
- "ProductDescription": "Subscribers will be able to run 5 calls/minute up to a maximum of 100 calls/week.",
- "ProductDetailsUrl": "/Products/57026e30de15d80041060001",
- "State": "Active",
- "DisplayName": "Starter (default)",
- "CreatedDate": "2016-04-04T13:37:52.847",
- "CanBeCancelled": true,
- "IsAwaitingApproval": false,
- "StartDate": null,
- "ExpirationDate": null,
- "NotificationDate": null,
- "PrimaryKey": "b6b2870953d04420a4e02c58f2c08e74",
- "SecondaryKey": "cfe28d5a1cd04d8abc93f48352076ea5",
- "UserId": 1,
- "CanBeRenewed": false,
- "HasExpired": false,
- "IsRejected": false,
- "CancelUrl": "/Subscriptions/57026e30de15d80041070001/Cancel",
- "RenewUrl": "/Subscriptions/57026e30de15d80041070001/Renew"
- },
- {
- "Id": "57026e30de15d80041070002",
- "ProductId": "57026e30de15d80041060002",
- "ProductTitle": "Unlimited",
- "ProductDescription": "Subscribers have completely unlimited access to the API. Administrator approval is required.",
- "ProductDetailsUrl": "/Products/57026e30de15d80041060002",
- "State": "Active",
- "DisplayName": "Unlimited (default)",
- "CreatedDate": "2016-04-04T13:37:52.923",
- "CanBeCancelled": true,
- "IsAwaitingApproval": false,
- "StartDate": null,
- "ExpirationDate": null,
- "NotificationDate": null,
- "PrimaryKey": "8fe7843c36de4cceb4728e6cae297336",
- "SecondaryKey": "96c850d217e74acf9b514ff8a5b38551",
- "UserId": 1,
- "CanBeRenewed": false,
- "HasExpired": false,
- "IsRejected": false,
- "CancelUrl": "/Subscriptions/57026e30de15d80041070002/Cancel",
- "RenewUrl": "/Subscriptions/57026e30de15d80041070002/Renew"
- }
- ],
- "applications": [],
- "changePasswordUrl": "/account/password/change",
- "changeNameOrEmailUrl": "/account/update",
- "canChangePassword": false,
- "isSystemUser": true
-}
-```
-
-## <a name="Subscriptions"></a> Subscriptions
- The **Subscriptions** template allows you to customize the subscriptions section of the user profile page in the developer portal.
-
- ![User Subscription Page](./media/api-management-user-profile-templates/APIM-User-Subscription-Page.png "APIM User Subscription Page")
-
-### Default template
-
-```xml
-<div class="ap-account-subscriptions">
- <a href="{{developersUsageStatisticsLink}}" id="UsageStatistics" class="btn btn-default pull-right">
- <span class="glyphicon glyphicon-stats"></span>
- <span>{% localized "SubscriptionListStrings|WebDevelopersUsageStatisticsLink" %}</span>
- </a>
-
- <h2>{% localized "SubscriptionListStrings|WebDevelopersYourSubscriptions" %}</h2>
-
- <table class="table">
- <thead>
- <tr>
- <th>Subscription details</th>
- <th>Product</th>
- <th>{% localized "SubscriptionListStrings|WebDevelopersSubscriptionTableStateHeader" %}</th>
- <th>Action</th>
- </tr>
- </thead>
- <tbody>
- {% if subscriptions.size == 0 %}
- <tr>
- <td class="text-center" colspan="4">
- {% localized "CommonResources|NoItemsToDisplay" %}
- </td>
- </tr>
- {% else %}
- {% for subscription in subscriptions %}
- <tr id="{{subscription.id}}" {% if subscription.hasExpired %} class="expired" {% endif %}>
- <td>
- <div class="row">
- <label class="col-lg-3">{% localized "SubscriptionListStrings|SubscriptionPropertyLabelName" %}</label>
- <div class="col-lg-6">
- {{ subscription.displayName }}
- </div>
- <div class="col-lg-2">
- <a class="btn-link" href="/Subscriptions/{{subscription.id}}/Rename">Rename</a>
- </div>
- <div class="clearfix"></div>
- </div>
- {% if subscription.isAwaitingApproval %}
- <div class="row">
- <label class="col-lg-3">{% localized "SubscriptionListStrings|SubscriptionPropertyLabelRequestedDate" %}</label>
- <div class="col-lg-6">
- {{ subscription.createdDate | date:"MM/dd/yyyy" }}
- </div>
- </div>
- {% else %}
- {% if subscription.isRejected == false %}
- {% if subscription.startDate %}
- <div class="row">
- <label class="col-lg-3">{% localized "SubscriptionListStrings|SubscriptionPropertyLabelStartedDate" %}</label>
- <div class="col-lg-6">
- {{ subscription.startDate | date:"MM/dd/yyyy" }}
- </div>
- </div>
- {% endif %}
-
- <!-- ko with: Developers.Account.Root.account.key('{{subscription.primaryKey}}', '{{subscription.id}}', true) -->
- <div class="row">
- <label class="col-lg-3">{% localized "SubscriptionListStrings|WebDevelopersPrimaryKey" %}</label>
- <div class="col-lg-6">
- <code data-bind="text: $data.displayKey()" id="primary_{{subscription.id}}"></code>
- </div>
- <div class="col-lg-2">
- <!-- ko if: !requestInProgress() -->
- <div class="nowrap">
- <a href="#" class="btn-link" id="togglePrimary_{{subscription.id}}" data-bind="click: toggleKeyDisplay, text: toggleKeyLabel"></a>
- |
- <a href="#" class="btn-link" id="regeneratePrimary_{{subscription.id}}" data-bind="click: regenerateKey, text: regenerateKeyLabel"></a>
- </div>
- <!-- /ko -->
- <!-- ko if: requestInProgress() -->
- <div class="progress progress-striped active">
- <div class="progress-bar" role="progressbar" aria-valuenow="100" aria-valuemin="0" aria-valuemax="100" style="width: 100%">
- <span class="sr-only"></span>
- </div>
- </div>
- <!-- /ko -->
- </div>
- <div class="clearfix"></div>
- </div>
- <!-- /ko -->
- <!-- ko with: Developers.Account.Root.account.key('{{subscription.secondaryKey}}', '{{subscription.id}}', false) -->
- <div class="row">
- <label class="col-lg-3">{% localized "SubscriptionListStrings|WebDevelopersSecondaryKey" %}</label>
- <div class="col-lg-6">
- <code data-bind="text: $data.displayKey()" id="secondary_{{subscription.id}}"></code>
- </div>
- <div class="col-lg-2">
- <div class="nowrap">
- <a href="#" class="btn-link" id="toggleSecondary_{{subscription.id}}" data-bind="click: toggleKeyDisplay, text: toggleKeyLabel">{% localized "SubscriptionListStrings|ButtonLabelShowKey" %}</a>
- |
- <a href="#" class="btn-link" id="regenerateSecondary_{{subscription.id}}" data-bind="click: regenerateKey, text: regenerateKeyLabel">{% localized "SubscriptionListStrings|WebDevelopersRegenerateLink" %}</a>
- </div>
- </div>
- <div class="clearfix"> </div>
- </div>
- <!-- /ko -->
- {% endif %}
- {% endif %}
- </td>
- <td>
- <a href="{{subscription.productDetailsUrl}}">{{subscription.productTitle}}</a>
- </td>
- <td>
- <strong>
- {{subscription.state}}
- </strong>
- </td>
- <td>
- <div class="nowrap">
- {% if subscription.canBeCancelled %}
- <subscription-cancel params="{ subscriptionId: '{{subscription.id}}', cancelUrl: '{{subscription.cancelUrl}}' }"></subscription-cancel>
- {% endif %}
- </div>
- </td>
- </tr>
- {% endfor %}
- {% endif %}
- </tbody>
- </table>
-</div>
-```
-
-### Controls
- This template may use the following [page controls](api-management-page-controls.md).
-
-- [subscription-cancel](api-management-page-controls.md#subscription-cancel)
-
-### Data model
-
-> [!NOTE]
-> The [Profile](#Profile), [Applications](#Applications), and [Subscriptions](#Subscriptions) templates share the same data model and receive the same template data.
-
-|Property|Type|Description|
-|--|-|--|
-|`firstName`|string|First name of the current user.|
-|`lastName`|string|Last name of the current user.|
-|`companyName`|string|The company name of the current user.|
-|`addresserEmail`|string|Email address of the current user.|
-|`developersUsageStatisticsLink`|string|Relative URL to view analytics for the current user.|
-|`subscriptions`|Collection of [Subscription](api-management-template-data-model-reference.md#Subscription) entities.|The subscriptions for the current user.|
-|`applications`|Collection of [Application](api-management-template-data-model-reference.md#Application) entities.|The applications of the current user.|
-|`changePasswordUrl`|string|The relative URL to change the current user's password.|
-|`changeNameOrEmailUrl`|string|The relative URL to change the name and email for the current user.|
-|`canChangePassword`|boolean|Whether the current user can change their password.|
-|`isSystemUser`|boolean|Whether the current user is a member of one of the built-in [groups](api-management-key-concepts.md#groups).|
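-
-As a minimal sketch, a custom template can iterate the `subscriptions` collection with a DotLiquid `for` loop (property names are from the [Subscription](api-management-template-data-model-reference.md#Subscription) entity, as used in the default template above):
-
-```xml
-{% for subscription in subscriptions %}
-    <p>{{ subscription.displayName }}: {{ subscription.state }}</p>
-{% endfor %}
-```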
-
-### Sample template data
-
-```json
-{
- "firstName": "Administrator",
- "lastName": "",
- "companyName": "Contoso",
- "addresserEmail": "apimgmt-noreply@mail.windowsazure.com",
- "email": "admin@live.com",
- "developersUsageStatisticsLink": "/Developer/Analytics",
- "subscriptions": [
- {
- "Id": "57026e30de15d80041070001",
- "ProductId": "57026e30de15d80041060001",
- "ProductTitle": "Starter",
- "ProductDescription": "Subscribers will be able to run 5 calls/minute up to a maximum of 100 calls/week.",
- "ProductDetailsUrl": "/Products/57026e30de15d80041060001",
- "State": "Active",
- "DisplayName": "Starter (default)",
- "CreatedDate": "2016-04-04T13:37:52.847",
- "CanBeCancelled": true,
- "IsAwaitingApproval": false,
- "StartDate": null,
- "ExpirationDate": null,
- "NotificationDate": null,
- "PrimaryKey": "b6b2870953d04420a4e02c58f2c08e74",
- "SecondaryKey": "cfe28d5a1cd04d8abc93f48352076ea5",
- "UserId": 1,
- "CanBeRenewed": false,
- "HasExpired": false,
- "IsRejected": false,
- "CancelUrl": "/Subscriptions/57026e30de15d80041070001/Cancel",
- "RenewUrl": "/Subscriptions/57026e30de15d80041070001/Renew"
- },
- {
- "Id": "57026e30de15d80041070002",
- "ProductId": "57026e30de15d80041060002",
- "ProductTitle": "Unlimited",
- "ProductDescription": "Subscribers have completely unlimited access to the API. Administrator approval is required.",
- "ProductDetailsUrl": "/Products/57026e30de15d80041060002",
- "State": "Active",
- "DisplayName": "Unlimited (default)",
- "CreatedDate": "2016-04-04T13:37:52.923",
- "CanBeCancelled": true,
- "IsAwaitingApproval": false,
- "StartDate": null,
- "ExpirationDate": null,
- "NotificationDate": null,
- "PrimaryKey": "8fe7843c36de4cceb4728e6cae297336",
- "SecondaryKey": "96c850d217e74acf9b514ff8a5b38551",
- "UserId": 1,
- "CanBeRenewed": false,
- "HasExpired": false,
- "IsRejected": false,
- "CancelUrl": "/Subscriptions/57026e30de15d80041070002/Cancel",
- "RenewUrl": "/Subscriptions/57026e30de15d80041070002/Renew"
- }
- ],
- "applications": [],
- "changePasswordUrl": "/account/password/change",
- "changeNameOrEmailUrl": "/account/update",
- "canChangePassword": false,
- "isSystemUser": true
-}
-```
-
-## <a name="Applications"></a> Applications
- The **Applications** template allows you to customize the applications section of the user profile page in the developer portal.
-
- ![User Account Applications Page](./media/api-management-user-profile-templates/APIM-User-Account-Applications-Page.png "APIM User Account Applications Page")
-
-### Default template
-
-```xml
-<div class="ap-account-applications">
- <a id="RegisterApplication" href="/Developer/Applications/Register" class="btn btn-success pull-right">
- <span class="glyphicon glyphicon-plus"></span>
- <span>{% localized "ApplicationListStrings|WebDevelopersRegisterAppLink" %}</span>
- </a>
- <h2>{% localized "ApplicationListStrings|WebDevelopersYourApplicationsHeader" %}</h2>
-
- <table class="table">
- <thead>
- <tr>
- <th class="col-md-8">{% localized "ApplicationListStrings|WebDevelopersAppTableNameHeader" %}</th>
- <th class="col-md-2">{% localized "ApplicationListStrings|WebDevelopersAppTableCategoryHeader" %}</th>
- <th class="col-md-2" colspan="2">{% localized "ApplicationListStrings|WebDevelopersAppTableStateHeader" %}</th>
- </tr>
- </thead>
- <tbody>
-
- {% if applications.size == 0 %}
-
- <tr>
- <td class="col-md-12 text-center" colspan="4">
- {% localized "CommonResources|NoItemsToDisplay" %}
- </td>
- </tr>
-
- {% else %}
-
- {% for app in applications %}
- <tr>
- <td class="col-md-8">
- {{app.title}}
- </td>
- <td class="col-md-2">
- {{app.categoryName}}
- </td>
- <td class="col-md-2">
- <strong>
- {% case app.state %}
- {% when ApplicationStateModel.Registered %}
- {% localized "ApplicationListStrings|WebDevelopersAppNotSubmitted" %}
-
- {% when ApplicationStateModel.Unpublished %}
- {% localized "ApplicationListStrings|WebDevelopersAppNotPublished" %}
-
- {% else %}
- {{ app.state }}
- {% endcase %}
- </strong>
- </td>
- <td class="col-md-1">
- <div class="nowrap">
- {% if app.state != ApplicationStateModel.Submitted and app.state != ApplicationStateModel.Published %}
- <app-actions params="{ appId: '{{app.id}}' }"></app-actions>
- {% endif %}
- </div>
- </td>
- </tr>
- {% endfor %}
-
- {% endif %}
- </tbody>
- </table>
-</div>
-```
-
-### Controls
- This template may use the following [page controls](api-management-page-controls.md).
-
-- [app-actions](api-management-page-controls.md#app-actions)
-
-### Data model
-
-> [!NOTE]
-> The [Profile](#Profile), [Applications](#Applications), and [Subscriptions](#Subscriptions) templates share the same data model and receive the same template data.
-
-|Property|Type|Description|
-|--|-|--|
-|`firstName`|string|First name of the current user.|
-|`lastName`|string|Last name of the current user.|
-|`companyName`|string|The company name of the current user.|
-|`addresserEmail`|string|Email address of the current user.|
-|`developersUsageStatisticsLink`|string|Relative URL to view analytics for the current user.|
-|`subscriptions`|Collection of [Subscription](api-management-template-data-model-reference.md#Subscription) entities.|The subscriptions for the current user.|
-|`applications`|Collection of [Application](api-management-template-data-model-reference.md#Application) entities.|The applications of the current user.|
-|`changePasswordUrl`|string|The relative URL to change the current user's password.|
-|`changeNameOrEmailUrl`|string|The relative URL to change the name and email for the current user.|
-|`canChangePassword`|boolean|Whether the current user can change their password.|
-|`isSystemUser`|boolean|Whether the current user is a member of one of the built-in [groups](api-management-key-concepts.md#groups).|
-
-### Sample template data
-
-```json
-{
- "firstName": "Administrator",
- "lastName": "",
- "companyName": "Contoso",
- "addresserEmail": "apimgmt-noreply@mail.windowsazure.com",
- "email": "admin@live.com",
- "developersUsageStatisticsLink": "/Developer/Analytics",
- "subscriptions": [
- {
- "Id": "57026e30de15d80041070001",
- "ProductId": "57026e30de15d80041060001",
- "ProductTitle": "Starter",
- "ProductDescription": "Subscribers will be able to run 5 calls/minute up to a maximum of 100 calls/week.",
- "ProductDetailsUrl": "/Products/57026e30de15d80041060001",
- "State": "Active",
- "DisplayName": "Starter (default)",
- "CreatedDate": "2016-04-04T13:37:52.847",
- "CanBeCancelled": true,
- "IsAwaitingApproval": false,
- "StartDate": null,
- "ExpirationDate": null,
- "NotificationDate": null,
- "PrimaryKey": "b6b2870953d04420a4e02c58f2c08e74",
- "SecondaryKey": "cfe28d5a1cd04d8abc93f48352076ea5",
- "UserId": 1,
- "CanBeRenewed": false,
- "HasExpired": false,
- "IsRejected": false,
- "CancelUrl": "/Subscriptions/57026e30de15d80041070001/Cancel",
- "RenewUrl": "/Subscriptions/57026e30de15d80041070001/Renew"
- },
- {
- "Id": "57026e30de15d80041070002",
- "ProductId": "57026e30de15d80041060002",
- "ProductTitle": "Unlimited",
- "ProductDescription": "Subscribers have completely unlimited access to the API. Administrator approval is required.",
- "ProductDetailsUrl": "/Products/57026e30de15d80041060002",
- "State": "Active",
- "DisplayName": "Unlimited (default)",
- "CreatedDate": "2016-04-04T13:37:52.923",
- "CanBeCancelled": true,
- "IsAwaitingApproval": false,
- "StartDate": null,
- "ExpirationDate": null,
- "NotificationDate": null,
- "PrimaryKey": "8fe7843c36de4cceb4728e6cae297336",
- "SecondaryKey": "96c850d217e74acf9b514ff8a5b38551",
- "UserId": 1,
- "CanBeRenewed": false,
- "HasExpired": false,
- "IsRejected": false,
- "CancelUrl": "/Subscriptions/57026e30de15d80041070002/Cancel",
- "RenewUrl": "/Subscriptions/57026e30de15d80041070002/Renew"
- }
- ],
- "applications": [],
- "changePasswordUrl": "/account/password/change",
- "changeNameOrEmailUrl": "/account/update",
- "canChangePassword": false,
- "isSystemUser": true
-}
-```
-
-## <a name="UpdateAccountInfo"></a> Update account info
- The **Update account info** template allows you to customize the **Update account information** page in the developer portal.
-
- ![User Account Info Page Developer Portal Templates](./media/api-management-user-profile-templates/APIM-User-Account-Info-Page-Developer-Portal-Templates.png "APIM User Account Info Page Developer Portal Templates")
-
-### Default template
-
-```xml
-<div class="row">
- <div class="col-sm-6 col-md-6">
- <div class="form-group">
- <label for="Email">{% localized "SigninResources|TextboxLabelEmail" %}</label>
- <input autofocus="autofocus" class="form-control" id="Email" name="Email" type="text" value="{{email}}">
- </div>
- <div class="form-group">
- <label for="FirstName">{% localized "SigninResources|TextboxLabelEmailFirstName" %}</label>
- <input class="form-control" id="FirstName" name="FirstName" type="text" value="{{firstName}}">
- </div>
- <div class="form-group">
- <label for="LastName">{% localized "SigninResources|TextboxLabelEmailLastName" %}</label>
- <input class="form-control" id="LastName" name="LastName" type="text" value="{{lastName}}">
- </div>
- <div class="form-group">
- <label for="Password">{% localized "SigninResources|WebAuthenticationSigninPasswordLabel" %}</label>
- <input class="form-control" id="Password" name="Password" type="password">
- </div>
- </div>
-</div>
-
-<button type="submit" class="btn btn-primary" id="UpdateProfile">
- {% localized "UpdateProfileStrings|ButtonLabelUpdateProfile" %}
-</button>
-<a class="btn btn-default" href="/developer" role="button">
- {% localized "CommonStrings|ButtonLabelCancel" %}
-</a>
-```
-
-### Controls
- This template may not use any [page controls](api-management-page-controls.md).
-
-### Data model
- [User account info](api-management-template-data-model-reference.md#UserAccountInfo) entity.
-
-### Sample template data
-
-```json
-{
- "FirstName": "Administrator",
- "LastName": "",
- "Email": "admin@live.com",
- "Password": null,
- "NameIdentifier": null,
- "ProviderName": null,
- "IsBasicAccount": false
-}
-```
-
-## Next steps
-For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
api-management Developer Portal Deprecated Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-deprecated-migration.md
- Title: Migrate to the new developer portal from the legacy developer portal-
-description: Learn how to migrate from the legacy developer portal to the new developer portal in API Management.
----- Previously updated : 04/15/2021---
-# Migrate to the new developer portal
-
-This article describes the steps you need to take to migrate from the deprecated legacy portal to the new developer portal in API Management.
-
-> [!IMPORTANT]
> The legacy developer portal is now deprecated and will receive security updates only. You can continue to use it as usual until its retirement in October 2023, when it will be removed from all API Management services.
-
-![API Management developer portal](media/api-management-howto-developer-portal/cover.png)
--
-## Improvements in new developer portal
-
-The new developer portal addresses many limitations of the deprecated portal. It features a [visual drag-and-drop editor for editing content](api-management-howto-developer-portal-customize.md) and a dedicated panel for designers to style the website. Pages, customizations, and configuration are saved as Azure Resource Manager resources in your API Management service, which lets you [automate portal deployments](automate-portal-deployments.md). Lastly, the portal's codebase is open-source, so [you can extend it with custom functionality](api-management-howto-developer-portal.md#managed-vs-self-hosted).
-
-## How to migrate to new developer portal
-
-The new developer portal is incompatible with the deprecated portal and automated migration isn't possible. You need to manually recreate the content (pages, text, media files) and customize the look of the new portal. Precise steps will vary depending on the customizations and complexity of your portal. Refer to [the developer portal tutorial](api-management-howto-developer-portal-customize.md) for guidance. The remaining configuration, like the list of APIs, products, users, and identity providers, is automatically shared across both portals.
-
-> [!IMPORTANT]
-> If you've launched the new developer portal before, but you haven't made any changes, reset the default content to update it to the latest version.
-
-When you migrate from the deprecated portal, keep in mind the following changes:
-
-- If you expose your developer portal via a custom domain, [assign a domain](configure-custom-domain.md) to the new developer portal. Use the **Developer portal** option from the dropdown in the Azure portal.
-- [Apply a CORS policy](developer-portal-faq.md#cors) on your APIs to enable the interactive test console.
-- If you inject custom CSS to style the portal, you need to [replicate the styling using the built-in design panel](api-management-howto-developer-portal-customize.md). CSS injection isn't allowed in the new portal.
-- You can inject custom JavaScript only in the [self-hosted version of the new portal](api-management-howto-developer-portal.md#managed-vs-self-hosted).
-- If your API Management is in a virtual network and is exposed to the Internet via Application Gateway, [refer to this documentation article](api-management-howto-integrate-internal-vnet-appgateway.md) for precise configuration steps. You need to:
-
- - Enable connectivity to the API Management's management endpoint.
- - Enable connectivity to the new portal endpoint.
- - Disable selected Web Application Firewall rules.
-
-- If you changed the default e-mail notification templates to include an explicitly defined deprecated portal URL, change them to either use the portal URL parameter or point to the new portal URL. If the templates use the built-in portal URL parameter instead, no changes are required.
-- *Issues* and *Applications* aren't supported in the new developer portal.
-- Direct integration with Facebook, Microsoft, Twitter, and Google as identity providers isn't supported in the new developer portal. You can integrate with those providers via Azure AD B2C.
-- If you use delegation, change the return URL in your applications and use the [*Get Shared Access Token* API endpoint](/rest/api/apimanagement/current-ga/user/get-shared-access-token) instead of the *Generate SSO URL* endpoint.
-- If you use Microsoft Entra ID as an identity provider:
-
- - Change the return URL in your application to point to the new developer portal domain.
- - Modify the suffix of the return URL in your application from `/signin-aad` to `/signin`.
-
-- If you use Azure AD B2C as an identity provider:
-
- - Change the return URL in your application to point to the new developer portal domain.
- - Modify the suffix of the return URL in your application from `/signin-aad` to `/signin`.
- - Include *Given Name*, *Surname*, and *User's Object ID* in the application claims.
-
-- If you use OAuth 2.0 in the interactive test console, change the return URL in your application to point to the new developer portal domain and modify the suffix:
-
- - From `/docs/services/[serverName]/console/oauth2/authorizationcode/callback` to `/signin-oauth/code/callback/[serverName]` for the authorization code grant flow.
- - From `/docs/services/[serverName]/console/oauth2/implicit/callback` to `/signin-oauth/implicit/callback` for the implicit grant flow.
-- If you use OpenID Connect in the interactive test console, change the return URL in your application to point to the new developer portal domain and modify the suffix:
-
- - From `/docs/services/[serverName]/console/openidconnect/authorizationcode/callback` to `/signin-oauth/code/callback/[serverName]` for the authorization code grant flow.
- - From `/docs/services/[serverName]/console/openidconnect/implicit/callback` to `/signin-oauth/implicit/callback` for the implicit grant flow.
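The suffix changes above amount to a mechanical path rewrite. The following sketch, for illustration only (the `migrate_callback_path` helper is hypothetical and not part of API Management), shows how the old console callback paths map to the new ones for both OAuth 2.0 and OpenID Connect:

```python
import re

# Hypothetical helper illustrating the callback-suffix changes described above.
# Both the old /oauth2/ and /openidconnect/ console paths map to the same new suffixes.
def migrate_callback_path(old_path: str) -> str:
    code = re.fullmatch(
        r"/docs/services/([^/]+)/console/(?:oauth2|openidconnect)/authorizationcode/callback",
        old_path,
    )
    if code:
        # The authorization code grant flow keeps the server name in the new path.
        return f"/signin-oauth/code/callback/{code.group(1)}"
    implicit = re.fullmatch(
        r"/docs/services/[^/]+/console/(?:oauth2|openidconnect)/implicit/callback",
        old_path,
    )
    if implicit:
        # The implicit grant flow uses a single shared callback path.
        return "/signin-oauth/implicit/callback"
    return old_path  # not a console callback; leave unchanged
```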
-
-## Next steps
-
-Learn more about the developer portal:
-
-- [Azure API Management developer portal overview](api-management-howto-developer-portal.md)
-- [Access and customize the developer portal](api-management-howto-developer-portal-customize.md)
api-management Graphql Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md
Previously updated : 05/31/2023 Last updated : 09/18/2023
API Management helps you import, manage, protect, test, publish, and monitor Gra
## Availability * GraphQL APIs are supported in all API Management service tiers
-* Pass-through and synthetic GraphQL APIs currently aren't supported in a self-hosted gateway
* Synthetic GraphQL APIs currently aren't supported in API Management [workspaces](workspaces-overview.md) * Support for GraphQL subscriptions in synthetic GraphQL APIs is currently in preview and isn't available in the Consumption tier
api-management Grpc Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/grpc-api.md
+
+ Title: Import a gRPC API to Azure API Management (preview) | Microsoft Docs
+description: Learn how to import a gRPC service definition as an API to an API Management instance using the Azure portal, ARM template, or bicep template.
+++++ Last updated : 10/04/2023+++
+# Import a gRPC API (preview)
+
+This article shows how to import a gRPC service definition as an API in API Management. You can then manage the API in API Management, secure access and apply other policies, and pass gRPC API requests through the gateway to the gRPC backend.
+
+To add a gRPC API to API Management, you need to:
+
+* Upload the API's Protobuf (protocol buffer) definition file to API Management
+* Specify the location of your gRPC service
+* Configure the API in API Management
+
+API Management supports pass-through with the following types of gRPC service methods: unary, server streaming, client streaming, and bidirectional streaming. For background about gRPC, see [Introduction to gRPC](https://grpc.io/docs/what-is-grpc/introduction/).
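The four method kinds differ only in whether the client, the server, or both send a stream of messages. A small illustrative classifier (not part of API Management, just a sketch of the distinction made in a .proto definition's `stream` keywords) makes this concrete:

```python
def grpc_method_kind(client_streaming: bool, server_streaming: bool) -> str:
    """Classify a gRPC method by its streaming flags, as declared in a .proto file."""
    if client_streaming and server_streaming:
        return "bidirectional streaming"
    if client_streaming:
        return "client streaming"
    if server_streaming:
        return "server streaming"
    return "unary"
```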
++
+> [!NOTE]
+> * Importing a gRPC API is in preview. Currently, gRPC APIs are only supported in the self-hosted gateway, not the managed gateway for your API Management instance.
+> * Currently, testing gRPC APIs isn't supported in the test console of the Azure portal or in the API Management developer portal.
++
+## Prerequisites
+
+* An API Management instance. If you don't already have one, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+
+* A gateway resource provisioned in your instance. If you don't already have one, see [Provision a self-hosted gateway in Azure API Management](api-management-howto-provision-self-hosted-gateway.md).
+
+* A gRPC Protobuf (.proto) file available locally and a gRPC service that's accessible over HTTPS.
+
+## Add a gRPC API
+
+#### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+
+1. In the left menu, select **APIs** > **+ Add API**.
+
+1. Under **Define a new API**, select **gRPC**.
+
+ :::image type="content" source="./media/grpc-api/grpc-api.png" alt-text="Screenshot of creating a gRPC API in the portal." :::
+
+1. In the **Create a gRPC API** window, select **Full**.
+
+1. For a gRPC API, you must specify the following settings:
+
+ 1. In **Upload schema**, select a local .proto file associated with the API to import.
+
+ 1. In **gRPC server URL**, enter the address of the gRPC service. The address must be accessible over HTTPS.
+
+ 1. In **Gateways**, select the gateway resource that you want to use to expose the API.
+
+ > [!IMPORTANT]
+ > In public preview, you can only select a self-hosted gateway. The **Managed** gateway isn't supported.
+
+1. Enter remaining settings to configure your API. These settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+
+1. Select **Create**.
+
+ The API is added to the **APIs** list. You can view and update your settings by going to the **Settings** tab of the API.
+++++
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
app-service Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/identity-scenarios.md
Title: 'App Service authentication recommendations'
-description: There are several different authentication solutions available for web apps or web APIs hosted on App Service. This article provides recommendations on which auth solution(s) can be used for specific scenarios such as quickly and simply limiting access to your web app, custom authorization, and incremental consent.
+description: Learn about the different authentication options available for web apps or web APIs hosted on App Service. This article provides recommendations on which auth solution(s) can be used for specific scenarios such as quickly and simply limiting access to your web app, custom authorization, and incremental consent. Learn about the benefits and drawbacks of using built-in authentication versus code implementation of authentication.
Previously updated : 08/10/2023 Last updated : 10/31/2023 # Authentication scenarios and recommendations
-If you have a web app or an API running in Azure App Service, you can restrict access to it based on the identity of the users or applications that request it. App Service offers several authentication solutions to help you achieve this goal. In this article, you will learn about the different authentication solutions, their benefits and drawbacks, and which authentication solution to use for specific scenarios.
+If you have a web app or an API running in Azure App Service, you can restrict access to it based on the identity of the users or applications that request it. App Service offers several authentication solutions to help you achieve this goal. In this article, you will learn about the different authentication options, their benefits and drawbacks, and which authentication solution to use for specific scenarios.
## Authentication solutions
app-service Overview Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-tls.md
+
+ Title: Transport Layer Security (TLS) overview
+description: Learn about Transport Layer Security (TLS) on App Service.
+keywords: app service, azure app service, tls, transport layer security, support, web app, troubleshooting,
+ Last updated : 11/06/2023++++
+# Azure App Service TLS overview
+
+## What does TLS do in App Service?
+
+Transport Layer Security (TLS) is a widely adopted security protocol designed to secure connections and communications between servers and clients. App Service allows customers to use TLS/SSL certificates to secure incoming requests to their web apps. App Service currently supports a set of TLS features that customers can use to secure their web apps.
+
+## What TLS options are available in App Service?
+
+For incoming requests to your web app, App Service supports TLS versions 1.0, 1.1, and 1.2. [In the next few months, App Service will begin supporting TLS version 1.3](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/upcoming-tls-1-3-on-azure-app-service-for-web-apps-functions-and/ba-p/3974138).
+
+### Minimum TLS Version and SCM Minimum TLS Version
+
+App Service also allows you to set the minimum TLS version for incoming requests to your web app and to the SCM site. By default, the minimum TLS version for incoming requests to your web app and to the SCM site is set to 1.2 in both the portal and the API.
+
+## TLS 1.0 and 1.1
+
+TLS 1.0 and 1.1 are considered legacy protocols and are no longer considered secure. It's generally recommended for customers to use TLS 1.2 as the minimum TLS version, which is also the default.
+
+To ensure backward compatibility for TLS 1.0 and TLS 1.1, App Service will continue to support TLS 1.0 and 1.1 for incoming requests to your web app. However, because the default minimum TLS version is set to TLS 1.2, you need to lower the minimum TLS version configuration on your web app to TLS 1.0 or 1.1 so that those requests aren't rejected.
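The effect of a minimum TLS version floor can be illustrated with Python's standard `ssl` module. This is a client-side analogue only, not how App Service enforces its setting server-side:

```python
import ssl

# Client-side analogue of a minimum-TLS-version floor.
# App Service enforces its floor on the server side; this just shows the concept:
# with a minimum of TLS 1.2, any TLS 1.0/1.1 handshake is refused.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```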
+
+> [!IMPORTANT]
+> Incoming requests to web apps and incoming requests to Azure are treated differently. App Service will continue to support TLS 1.0 and 1.1 for incoming requests to the web apps. For incoming requests directly to Azure, for example through ARM or API, it's not recommended to use TLS 1.0 or 1.1.
+>
+
+## Next steps
+* [Secure a custom DNS name with a TLS/SSL binding](configure-ssl-bindings.md)
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
application-gateway Alb Controller Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-release-notes.md
Previously updated : 10/23/2023 Last updated : 11/07/2023
Instructions for new or existing deployments of ALB Controller are found in the
- [Upgrade existing ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#for-existing-deployments) ## Latest Release (Recommended)
-September 25, 2023 - 0.5.024542 - Custom Health Probes, Controller HA, Multi-site support for Ingress, [helm_release via Terraform fix](https://github.com/Azure/AKS/issues/3857), Path rewrite for Gateway API, status for Ingress resources, quality improvements
+November 6, 2023 - 0.6.1 - Gateway / Ingress API - Header rewrite support, Ingress API - URL rewrite support, Ingress multiple-TLS listener bug fix,
+two certificates maximum per host, adopting [semantic versioning (semver)](https://semver.org/), quality improvements
## Release history
-July 25, 2023 - 0.4.023971 - Ingress + Gateway co-existence improvements
+September 25, 2023 - 0.5.024542 - Custom Health Probes, Controller HA, Multi-site support for Ingress, [helm_release via Terraform fix](https://github.com/Azure/AKS/issues/3857), Path rewrite for Gateway API, status for Ingress resources, quality improvements
+
+July 25, 2023 - 0.4.023971 - Ingress + Gateway coexistence improvements
July 24, 2023 - 0.4.023961 - Improved Ingress support
application-gateway Api Specification Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/api-specification-kubernetes.md
Previously updated : 9/25/2023 Last updated : 11/6/2023
has been accepted by the controller.</p>
<em>(Optional)</em> <p>Known condition types are:</p> <ul>
-<li>&ldquo;Accepted&rdquo;</li>
-<li>&ldquo;Ready&rdquo;</li>
+<li>"Accepted"</li>
+<li>"Ready"</li>
</ul> </td> </tr>
particular BackendTLSPolicy condition type has been raised.</p>
When the given BackendTLSPolicy is correctly configured</p> </td> </tr><tr><td><p>&#34;InvalidBackendTLSPolicy&#34;</p></td>
-<td><p>BackendTLSPolicyReasonInvalid is the reason when the BackendTLSPolicy isn't Accepted</p>
+<td><p>BackendTLSPolicyReasonInvalid is the reason when the BackendTLSPolicy isn&rsquo;t Accepted</p>
+</td>
+</tr><tr><td><p>&#34;InvalidGroup&#34;</p></td>
+<td><p>BackendTLSPolicyReasonInvalidGroup is used when the group is invalid</p>
</td> </tr><tr><td><p>&#34;InvalidKind&#34;</p></td> <td><p>BackendTLSPolicyReasonInvalidKind is used when the kind/group is invalid</p> </td>
+</tr><tr><td><p>&#34;InvalidName&#34;</p></td>
+<td><p>BackendTLSPolicyReasonInvalidName is used when the name is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidSecret&#34;</p></td>
+<td><p>BackendTLSPolicyReasonInvalidSecret is used when the Secret is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidService&#34;</p></td>
+<td><p>BackendTLSPolicyReasonInvalidService is used when the Service is invalid</p>
+</td>
</tr><tr><td><p>&#34;NoTargetReference&#34;</p></td> <td><p>BackendTLSPolicyReasonNoTargetReference is used when there is no target reference</p> </td> </tr><tr><td><p>&#34;RefNotPermitted&#34;</p></td>
-<td><p>BackendTLSPolicyReasonRefNotPermitted is used when the ref isn't permitted</p>
-</td>
-</tr><tr><td><p>&#34;ServiceNotFound&#34;</p></td>
-<td><p>BackendTLSPolicyReasonServiceNotFound is used when the ref service isn't found</p>
-</td>
-</tr><tr><td><p>&#34;Degraded&#34;</p></td>
-<td><p>ReasonDegraded is the backendTLSPolicyConditionReason when the backendTLSPolicy has been incorrectly programmed</p>
+<td><p>BackendTLSPolicyReasonRefNotPermitted is used when the ref isn&rsquo;t permitted</p>
</td> </tr></tbody> </table>
field.</p>
</tr> </thead> <tbody><tr><td><p>&#34;Accepted&#34;</p></td>
-<td><p>BackendTLSPolicyConditionAccepted is used to set the BackendTLSPolicyCondition to Accepted</p>
-</td>
-</tr><tr><td><p>&#34;Ready&#34;</p></td>
-<td><p>BackendTLSPolicyConditionReady is used to set the condition to Ready</p>
+<td><p>BackendTLSPolicyConditionAccepted is used to set the BackendTLSPolicyConditionType to Accepted</p>
</td> </tr><tr><td><p>&#34;ResolvedRefs&#34;</p></td>
-<td><p>BackendTLSPolicyConditionResolvedRefs is used to set the BackendTLSPolicyCondition to ResolvedRefs
-This is used with the following Reasons :
-*BackendTLSPolicyReasonRefNotPermitted
-*BackendTLSPolicyReasonInvalidKind
-*BackendTLSPolicyReasonServiceNotFound
-*BackendTLSPolicyInvalidCertificateRef
-*ReasonDegraded</p>
+<td><p>BackendTLSPolicyConditionResolvedRefs is used to set the BackendTLSPolicyCondition to ResolvedRefs</p>
</td> </tr></tbody> </table>
string
<td> <code>clientCertificateRef</code><br/> <em>
-<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.SecretObjectReference">
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.SecretObjectReference">
Gateway API .SecretObjectReference </a> </em>
constants so that operators and tools can converge on a common
vocabulary to describe BackendTLSPolicy state.</p> <p>Known condition types are:</p> <ul>
-<li>&ldquo;Accepted&rdquo;</li>
+<li>"Accepted"</li>
</ul> </td> </tr>
CommonTLSPolicyVerify
<td> <code>caCertificateRef</code><br/> <em>
-<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.SecretObjectReference">
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.SecretObjectReference">
Gateway API .SecretObjectReference </a> </em>
certificate of the backend.</p>
(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.FrontendTLSPolicySpec">FrontendTLSPolicySpec</a>) </p> <div>
-<p>CustomTargetRef is a reference to a custom resource that isn't part of the
+<p>CustomTargetRef is a reference to a custom resource that isn&rsquo;t part of the
Kubernetes core API.</p> </div> <table>
Kubernetes core API.</p>
<td> <code>name</code><br/> <em>
-<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.ObjectName">
+<a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.ObjectName">
Gateway API .ObjectName </a> </em>
Gateway API .ObjectName
<td> <code>kind</code><br/> <em>
-<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.Kind">
+<a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.Kind">
Gateway API .Kind </a> </em>
Gateway API .Kind
<td> <code>namespace</code><br/> <em>
-<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.Namespace">
+<a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.Namespace">
Gateway API .Namespace </a> </em>
Gateway API .Namespace
<td> <code>group</code><br/> <em>
-<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.Group">
+<a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.Group">
Gateway API .Group </a> </em>
FrontendTLSPolicyStatus
(<code>string</code> alias)</h3> <div> <p>FrontendTLSPolicyConditionReason defines the set of reasons that explain why a
-particular FrontTLSPolicy condition type has been raised.</p>
+particular FrontendTLSPolicy condition type has been raised.</p>
</div> <table> <thead>
particular FrontTLSPolicy condition type has been raised.</p>
<th>Description</th> </tr> </thead>
-<tbody><tr><td><p>&#34;InvalidGroup&#34;</p></td>
-<td><p>FrontTLSPolicyReasonInvalidGroup is used when the group is invalid</p>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>FrontendTLSPolicyReasonAccepted is used to set the FrontendTLSPolicyConditionReason to Accepted
+When the given FrontendTLSPolicy is correctly configured</p>
+</td>
+</tr><tr><td><p>&#34;InvalidFrontendTLSPolicy&#34;</p></td>
+<td><p>FrontendTLSPolicyReasonInvalid is the reason when the FrontendTLSPolicy isn&rsquo;t Accepted</p>
+</td>
+</tr><tr><td><p>&#34;InvalidGateway&#34;</p></td>
+<td><p>FrontendTLSPolicyReasonInvalidGateway is used when the gateway is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidGroup&#34;</p></td>
+<td><p>FrontendTLSPolicyReasonInvalidGroup is used when the group is invalid</p>
</td> </tr><tr><td><p>&#34;InvalidKind&#34;</p></td>
-<td><p>FrontTLSPolicyReasonInvalidKind is used when the kind/group is invalid</p>
+<td><p>FrontendTLSPolicyReasonInvalidKind is used when the kind/group is invalid</p>
</td> </tr><tr><td><p>&#34;InvalidName&#34;</p></td>
-<td><p>FrontTLSPolicyReasonInvalidName is used when the name is invalid</p>
+<td><p>FrontendTLSPolicyReasonInvalidName is used when the name is invalid</p>
</td> </tr><tr><td><p>&#34;InvalidPolicyName&#34;</p></td>
-<td><p>FrontTLSPolicyReasonInvalidPolicyName is used when the name is invalid</p>
+<td><p>FrontendTLSPolicyReasonInvalidPolicyName is used when the policy name is invalid</p>
</td> </tr><tr><td><p>&#34;InvalidPolicyType&#34;</p></td>
-<td><p>FrontTLSPolicyReasonInvalidPolicyType is used when the type is invalid</p>
+<td><p>FrontendTLSPolicyReasonInvalidPolicyType is used when the policy type is invalid</p>
</td> </tr><tr><td><p>&#34;NoTargetReference&#34;</p></td>
-<td><p>FrontTLSPolicyReasonNoTargetReference is used when there is no target reference</p>
+<td><p>FrontendTLSPolicyReasonNoTargetReference is used when there is no target reference</p>
</td> </tr><tr><td><p>&#34;RefNotPermitted&#34;</p></td>
-<td><p>FrontTLSPolicyReasonRefNotPermitted is used when the ref isn't permitted</p>
-</td>
-</tr><tr><td><p>&#34;Accepted&#34;</p></td>
-<td><p>FrontendTLSPolicyReasonAccepted is used to set the FrontTLSPolicyConditionReason to Accepted
-When the given FrontTLSPolicy is correctly configured</p>
-</td>
-</tr><tr><td><p>&#34;InvalidFrontendTLSPolicy&#34;</p></td>
-<td><p>FrontendTLSPolicyReasonInvalid is the reason when the FrontendTLSPolicy isn't Accepted</p>
-</td>
-</tr><tr><td><p>&#34;InvalidGateway&#34;</p></td>
-<td><p>FrontendTLSPolicyReasonInvalidGateway is used when the gateway is invalid</p>
+<td><p>FrontendTLSPolicyReasonRefNotPermitted is used when the ref isn&rsquo;t permitted</p>
</td> </tr></tbody> </table>
constants so that operators and tools can converge on a common
vocabulary to describe FrontendTLSPolicy state.</p> <p>Known condition types are:</p> <ul>
-<li>&ldquo;Accepted&rdquo;</li>
+<li>"Accepted"</li>
</ul> </td> </tr>
vocabulary to describe FrontendTLSPolicy state.</p>
</td> </tr></tbody> </table>
+<h3 id="alb.networking.azure.io/v1.HTTPHeader">HTTPHeader
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HeaderFilter">HeaderFilter</a>)
+</p>
+<div>
+<p>HTTPHeader represents an HTTP Header name and value as defined by RFC 7230.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>name</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HTTPHeaderName">
+HTTPHeaderName
+</a>
+</em>
+</td>
+<td>
+<p>Name is the name of the HTTP Header to be matched. Name matching MUST be
+case insensitive. (See <a href="https://tools.ietf.org/html/rfc7230#section-3.2">https://tools.ietf.org/html/rfc7230#section-3.2</a>).</p>
+<p>If multiple entries specify equivalent header names, the first entry with
+an equivalent name MUST be considered for a match. Subsequent entries
+with an equivalent header name MUST be ignored. Due to the
+case-insensitivity of header names, "foo" and "Foo" are considered
+equivalent.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>value</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Value is the value of HTTP Header to be matched.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.HTTPHeaderName">HTTPHeaderName
+(<code>string</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HTTPHeader">HTTPHeader</a>)
+</p>
+<div>
+<p>HTTPHeaderName is the name of an HTTP header.</p>
+<p>Valid values include:</p>
+<ul>
+<li>"Authorization"</li>
+<li>"Set-Cookie"</li>
+</ul>
+<p>Invalid values include:</p>
+<ul>
+<li>":method" - ":" is an invalid character. This means that HTTP/2 pseudo
+headers are not currently supported by this type.</li>
+<li>"/invalid" - "/ " is an invalid character</li>
+</ul>
+</div>
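The first-entry-wins, case-insensitive matching rule for header names described above can be sketched as follows (an illustrative helper, not part of the ALB Controller):

```python
def effective_headers(entries):
    """Keep the first entry per case-insensitive header name; ignore later duplicates.

    Per RFC 7230, "foo" and "Foo" are equivalent names, so only the first
    entry with an equivalent name is considered for a match.
    """
    seen = set()
    kept = []
    for name, value in entries:
        key = name.lower()
        if key in seen:
            continue  # subsequent entries with an equivalent name MUST be ignored
        seen.add(key)
        kept.append((name, value))
    return kept
```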
<h3 id="alb.networking.azure.io/v1.HTTPMatch">HTTPMatch </h3> <p>
string
</tr> </tbody> </table>
+<h3 id="alb.networking.azure.io/v1.HTTPPathModifier">HTTPPathModifier
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.Redirect">Redirect</a>, <a href="#alb.networking.azure.io/v1.URLRewriteFilter">URLRewriteFilter</a>)
+</p>
+<div>
+<p>HTTPPathModifier defines configuration for path modifiers.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>type</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HTTPPathModifierType">
+HTTPPathModifierType
+</a>
+</em>
+</td>
+<td>
+<p>Type defines the type of path modifier. Additional types may be
+added in a future release of the API.</p>
+<p>Note that values may be added to this enum, implementations
+must ensure that unknown values will not cause a crash.</p>
+<p>Unknown values here must result in the implementation setting the
+Accepted Condition for the rule to be false</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>replaceFullPath</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>ReplaceFullPath specifies the value with which to replace the full path
+of a request during a rewrite or redirect.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>replacePrefixMatch</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>ReplacePrefixMatch specifies the value with which to replace the prefix
+match of a request during a rewrite or redirect. For example, a request
+to "/foo/bar" with a prefix match of "/foo" and a ReplacePrefixMatch
+of "/xyz" would be modified to "/xyz/bar".</p>
+<p>Note that this matches the behavior of the PathPrefix match type. This
+matches full path elements. A path element refers to the list of labels
+in the path split by the <code>/</code> separator. When specified, a trailing <code>/</code> is
+ignored. For example, the paths <code>/abc</code>, <code>/abc/</code>, and <code>/abc/def</code> would all
+match the prefix <code>/abc</code>, but the path <code>/abcd</code> would not.</p>
+<p>ReplacePrefixMatch is only compatible with a <code>PathPrefix</code> HTTPRouteMatch.
+Using any other HTTPRouteMatch type on the same HTTPRouteRule will result in
+the implementation setting the Accepted Condition for the Route to <code>status: False</code>.</p>
+<table>
+<thead>
+<tr>
+<th>Request Path</th>
+<th>Prefix Match</th>
+<th>Replace Prefix</th>
+<th>Modified Path</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>/foo/bar</td>
+<td>/foo</td>
+<td>/xyz</td>
+<td>/xyz/bar</td>
+</tr>
+<tr>
+<td>/foo/bar</td>
+<td>/foo</td>
+<td>/xyz/</td>
+<td>/xyz/bar</td>
+</tr>
+<tr>
+<td>/foo/bar</td>
+<td>/foo/</td>
+<td>/xyz</td>
+<td>/xyz/bar</td>
+</tr>
+<tr>
+<td>/foo/bar</td>
+<td>/foo/</td>
+<td>/xyz/</td>
+<td>/xyz/bar</td>
+</tr>
+<tr>
+<td>/foo</td>
+<td>/foo</td>
+<td>/xyz</td>
+<td>/xyz</td>
+</tr>
+<tr>
+<td>/foo/</td>
+<td>/foo</td>
+<td>/xyz</td>
+<td>/xyz/</td>
+</tr>
+<tr>
+<td>/foo/bar</td>
+<td>/foo</td>
+<td></td>
+<td>/bar</td>
+</tr>
+<tr>
+<td>/foo/</td>
+<td>/foo</td>
+<td></td>
+<td>/</td>
+</tr>
+<tr>
+<td>/foo</td>
+<td>/foo</td>
+<td></td>
+<td>/</td>
+</tr>
+<tr>
+<td>/foo/</td>
+<td>/foo</td>
+<td>/</td>
+<td>/</td>
+</tr>
+<tr>
+<td>/foo</td>
+<td>/foo</td>
+<td>/</td>
+<td>/</td>
+</tr>
+</tbody>
+</table>
+</td>
+</tr>
+</tbody>
+</table>
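The table above fully determines the ReplacePrefixMatch behavior. The following Python sketch (an illustration of the specified semantics, not the gateway's implementation) reproduces every row:

```python
def replace_prefix_match(path: str, prefix: str, replacement: str) -> str:
    """Sketch of ReplacePrefixMatch semantics for the cases in the table above."""
    prefix = prefix.rstrip("/") or "/"          # a trailing "/" on the prefix is ignored
    # PathPrefix matches whole path elements only: /abc matches /abc and /abc/def,
    # but not /abcd.
    if path != prefix and not path.startswith(prefix + "/"):
        return path                              # no match; the rule does not apply
    remainder = path[len(prefix):]               # "", "/", or "/rest/of/path"
    rewritten = replacement.rstrip("/") + remainder
    return rewritten or "/"                      # never emit an empty path
```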
+<h3 id="alb.networking.azure.io/v1.HTTPPathModifierType">HTTPPathModifierType
+(<code>string</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HTTPPathModifier">HTTPPathModifier</a>)
+</p>
+<div>
+<p>HTTPPathModifierType defines the type of path redirect or rewrite.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;ReplaceFullPath&#34;</p></td>
+<td><p>FullPathHTTPPathModifier indicates that the full path will be replaced
+by the specified value.</p>
+</td>
+</tr><tr><td><p>&#34;ReplacePrefixMatch&#34;</p></td>
+<td><p>PrefixMatchHTTPPathModifier indicates that any prefix path matches will be
+replaced by the substitution value. For example, a path with a prefix
+match of "/foo" and a ReplacePrefixMatch substitution of "/bar" will have
+the "/foo" prefix replaced with "/bar" in matching requests.</p>
+<p>Note that this matches the behavior of the PathPrefix match type. This
+matches full path elements. A path element refers to the list of labels
+in the path split by the <code>/</code> separator. When specified, a trailing <code>/</code> is
+ignored. For example, the paths <code>/abc</code>, <code>/abc/</code>, and <code>/abc/def</code> would all
+match the prefix <code>/abc</code>, but the path <code>/abcd</code> would not.</p>
+</td>
+</tr></tbody>
+</table>
<h3 id="alb.networking.azure.io/v1.HTTPSpecifiers">HTTPSpecifiers </h3> <p>
HTTPMatch
</tr> </tbody> </table>
+<h3 id="alb.networking.azure.io/v1.HeaderFilter">HeaderFilter
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressRewrites">IngressRewrites</a>)
+</p>
+<div>
+<p>HeaderFilter defines a filter that modifies the headers of an HTTP
+request or response. Only one action for a given header name is permitted.
+Filters specifying multiple actions of the same or different type for any one
+header name are invalid and will be rejected.
+Configuration to set or add multiple values for a header must use RFC 7230
+header value formatting, separating each value with a comma.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>set</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HTTPHeader">
+[]HTTPHeader
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Set overwrites the request with the given header (name, value)
+before the action.</p>
+<p>Input:
+GET /foo HTTP/1.1
+my-header: foo</p>
+<p>Config:
+set:
+- name: "my-header"
+value: "bar"</p>
+<p>Output:
+GET /foo HTTP/1.1
+my-header: bar</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>add</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HTTPHeader">
+[]HTTPHeader
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Add adds the given header(s) (name, value) to the request
+before the action. It appends to any existing values associated
+with the header name.</p>
+<p>Input:
+GET /foo HTTP/1.1
+my-header: foo</p>
+<p>Config:
+add:
+- name: "my-header"
+value: "bar,baz"</p>
+<p>Output:
+GET /foo HTTP/1.1
+my-header: foo,bar,baz</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>remove</code><br/>
+<em>
+[]string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Remove the given header(s) from the HTTP request before the action. The
+value of Remove is a list of HTTP header names. Note that the header
+names are case-insensitive (see
+<a href="https://datatracker.ietf.org/doc/html/rfc2616#section-4.2">https://datatracker.ietf.org/doc/html/rfc2616#section-4.2</a>).</p>
+<p>Input:
+GET /foo HTTP/1.1
+my-header1: foo
+my-header2: bar
+my-header3: baz</p>
+<p>Config:
+remove: ["my-header1", "my-header3"]</p>
+<p>Output:
+GET /foo HTTP/1.1
+my-header2: bar</p>
+</td>
+</tr>
+</tbody>
+</table>
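The Input/Config/Output examples above each exercise one action. As a rough sketch only (the function and argument shape are hypothetical, and real header sets are lists keyed by case-insensitive names), the three actions on a single header value behave like this:

```bash
# Hypothetical sketch of HeaderFilter actions applied to one header value.
# $1 = action (set|add|remove), $2 = existing value, $3 = configured value
apply_header_filter() {
  case "$1" in
    set)    echo "$3" ;;   # set overwrites any existing value
    add)    if [ -n "$2" ]; then echo "$2,$3"; else echo "$3"; fi ;;  # add appends, comma-separated per RFC 7230
    remove) echo "" ;;     # remove drops the header entirely
  esac
}
```

With an existing value `foo` and a configured value `bar,baz`, `add` yields `foo,bar,baz`, matching the example above.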
+<h3 id="alb.networking.azure.io/v1.HeaderName">HeaderName
+(<code>string</code> alias)</h3>
+<div>
+<p>HeaderName is the name of a header or query parameter.</p>
+</div>
<h3 id="alb.networking.azure.io/v1.HealthCheckPolicy">HealthCheckPolicy </h3> <div>
particular HealthCheckPolicy condition type has been raised.</p>
<th>Description</th> </tr> </thead>
-<tbody><tr><td><p>&#34;InvalidReference&#34;</p></td>
-<td><p>HealthCheckPolicyInvalidReference is used when the reference is invalid</p>
-</td>
-</tr><tr><td><p>&#34;Accepted&#34;</p></td>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
<td><p>HealthCheckPolicyReasonAccepted is used to set the HealthCheckPolicyConditionReason to Accepted When the given HealthCheckPolicy is correctly configured</p> </td> </tr><tr><td><p>&#34;InvalidHealthCheckPolicy&#34;</p></td>
-<td><p>HealthCheckPolicyReasonInvalid is the reason when the HealthCheckPolicy isn't Accepted</p>
+<td><p>HealthCheckPolicyReasonInvalid is the reason when the HealthCheckPolicy isn&rsquo;t Accepted</p>
+</td>
+</tr><tr><td><p>&#34;InvalidGroup&#34;</p></td>
+<td><p>HealthCheckPolicyReasonInvalidGroup is used when the group is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidKind&#34;</p></td>
+<td><p>HealthCheckPolicyReasonInvalidKind is used when the kind/group is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidName&#34;</p></td>
+<td><p>HealthCheckPolicyReasonInvalidName is used when the name is invalid</p>
</td> </tr><tr><td><p>&#34;InvalidPort&#34;</p></td> <td><p>HealthCheckPolicyReasonInvalidPort is used when the port is invalid</p> </td>
-</tr><tr><td><p>&#34;InvalidServiceReference&#34;</p></td>
-<td><p>HealthCheckPolicyReasonInvalidServiceReference is used when the service is invalid</p>
-</td>
-</tr><tr><td><p>&#34;InvalidTargetReference&#34;</p></td>
-<td><p>HealthCheckPolicyReasonInvalidTargetReference is used when the target is invalid</p>
+</tr><tr><td><p>&#34;InvalidService&#34;</p></td>
+<td><p>HealthCheckPolicyReasonInvalidService is used when the Service is invalid</p>
</td> </tr><tr><td><p>&#34;NoTargetReference&#34;</p></td>
-<td><p>HealthCheckPolicyReasonNoTargetReference is used when the target isn't found</p>
+<td><p>HealthCheckPolicyReasonNoTargetReference is used when there is no target reference</p>
+</td>
+</tr><tr><td><p>&#34;RefNotPermitted&#34;</p></td>
+<td><p>HealthCheckPolicyReasonRefNotPermitted is used when the ref isn&rsquo;t permitted</p>
</td> </tr></tbody> </table>
constants so that operators and tools can converge on a common
vocabulary to describe HealthCheckPolicy state.</p> <p>Known condition types are:</p> <ul>
-<li>&ldquo;Accepted&rdquo;</li>
+<li>"Accepted"</li>
</ul> </td> </tr> </tbody> </table>
-<h3 id="alb.networking.azure.io/v1.IngressBackendOverride">IngressBackendOverride
-</h3>
-<p>
-(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerSetting">IngressListenerSetting</a>)
-</p>
-<div>
-<p>IngressBackendOverride allows a user to change the hostname on a request before it is sent to a backend service</p>
-</div>
-<table>
-<thead>
-<tr>
-<th>Field</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td>
-<code>service</code><br/>
-<em>
-string
-</em>
-</td>
-<td>
-<p>Service is the name of a backend service that this override refers to.</p>
-</td>
-</tr>
-<tr>
-<td>
-<code>backendHost</code><br/>
-<em>
-string
-</em>
-</td>
-<td>
-<p>BackendHost is the hostname that an incoming request will be mutated to use before being forwarded to the backend</p>
-</td>
-</tr>
-</tbody>
-</table>
<h3 id="alb.networking.azure.io/v1.IngressBackendPort">IngressBackendPort </h3> <p>
Only one of Name/Number should be defined.</p>
<tbody> <tr> <td>
-<code>number</code><br/>
+<code>port</code><br/>
<em> int32 </em> </td> <td> <em>(Optional)</em>
-<p>Number indicates the TCP port number being referred to</p>
+<p>Port indicates the port on the backend service</p>
</td> </tr> <tr>
Protocol
</em> </td> <td>
-<p>Protocol should be one of &ldquo;HTTP&rdquo;, &ldquo;HTTPS&rdquo;</p>
+<p>Protocol should be one of "HTTP", "HTTPS"</p>
</td> </tr> </tbody>
backend on a port specified as https</p>
</tr> <tr> <td>
-<code>pathPrefixOverride</code><br/>
-<em>
-string
-</em>
-</td>
-<td>
-<em>(Optional)</em>
-<p>PathPrefixOverride will mutate requests going to the backend to be prefixed with this value</p>
-</td>
-</tr>
-<tr>
-<td>
<code>sessionAffinity</code><br/> <em> <a href="#alb.networking.azure.io/v1.SessionAffinity">
IngressTimeouts
<h3 id="alb.networking.azure.io/v1.IngressCertificate">IngressCertificate </h3> <p>
-(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerTLS">IngressListenerTLS</a>)
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressRuleTLS">IngressRuleTLS</a>)
</p> <div> <p>IngressCertificate defines a certificate and private key to be used with TLS.</p>
IngressExtensionSpec
<table> <tr> <td>
-<code>listenerSettings</code><br/>
+<code>rules</code><br/>
<em>
-<a href="#alb.networking.azure.io/v1.IngressListenerSetting">
-[]IngressListenerSetting
+<a href="#alb.networking.azure.io/v1.IngressRuleSetting">
+[]IngressRuleSetting
</a> </em> </td> <td>
-<p>Listeners defines a list of listeners to configure</p>
+<em>(Optional)</em>
+<p>Rules defines the rules per host</p>
</td> </tr> <tr>
IngressExtensionStatus
</em> </td> <td>
-<em>(Optional)</em>
-<p>Status describes the current state of the IngressExtension as enacted by the ALB controller</p>
</td> </tr> </tbody>
field.</p>
<tbody> <tr> <td>
-<code>listenerSettings</code><br/>
+<code>rules</code><br/>
<em>
-<a href="#alb.networking.azure.io/v1.IngressListenerSetting">
-[]IngressListenerSetting
+<a href="#alb.networking.azure.io/v1.IngressRuleSetting">
+[]IngressRuleSetting
</a> </em> </td> <td>
-<p>Listeners defines a list of listeners to configure</p>
+<em>(Optional)</em>
+<p>Rules defines the rules per host</p>
</td> </tr> <tr>
field.</p>
<tbody> <tr> <td>
-<code>listenerSettings</code><br/>
+<code>rules</code><br/>
<em>
-<a href="#alb.networking.azure.io/v1.IngressListenerSettingStatus">
-[]IngressListenerSettingStatus
+<a href="#alb.networking.azure.io/v1.IngressRuleStatus">
+[]IngressRuleStatus
</a> </em> </td> <td> <em>(Optional)</em>
-<p>ListenerSettings has detailed status information regarding each ListenerSetting</p>
+<p>Rules has detailed status information regarding each Rule</p>
</td> </tr> <tr>
field.</p>
<p>Conditions describe the current conditions of the IngressExtension. Known condition types are:</p> <ul>
-<li>&ldquo;Accepted&rdquo;</li>
-<li>&ldquo;Errors&rdquo;</li>
+<li>"Accepted"</li>
+<li>"Errors"</li>
</ul> </td> </tr> </tbody> </table>
-<h3 id="alb.networking.azure.io/v1.IngressListenerPort">IngressListenerPort
+<h3 id="alb.networking.azure.io/v1.IngressRewrites">IngressRewrites
</h3> <p>
-(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerSetting">IngressListenerSetting</a>)
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressRuleSetting">IngressRuleSetting</a>)
</p> <div>
-<p>IngressListenerPort describes a port a listener will listen on.</p>
+<p>IngressRewrites provides the various rewrites supported on a rule</p>
</div> <table> <thead>
Known condition types are:</p>
<tbody> <tr> <td>
-<code>port</code><br/>
+<code>type</code><br/>
<em>
-int32
+<a href="#alb.networking.azure.io/v1.RewriteType">
+RewriteType
+</a>
</em> </td> <td>
-<p>Port defines what TCP port the listener will listen on</p>
+<p>Type identifies the type of rewrite</p>
</td> </tr> <tr> <td>
-<code>protocol</code><br/>
+<code>requestHeaderModifier</code><br/>
<em>
-<a href="#alb.networking.azure.io/v1.Protocol">
-Protocol
+<a href="#alb.networking.azure.io/v1.HeaderFilter">
+HeaderFilter
</a> </em> </td> <td> <em>(Optional)</em>
-<p>Protocol indicates if the port will be used for HTTP or HTTPS traffic.</p>
+<p>RequestHeaderModifier defines a schema that modifies request headers.</p>
</td> </tr> <tr> <td>
-<code>sslRedirectTo</code><br/>
+<code>responseHeaderModifier</code><br/>
<em>
-int32
+<a href="#alb.networking.azure.io/v1.HeaderFilter">
+HeaderFilter
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>ResponseHeaderModifier defines a schema that modifies response headers.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>urlRewrite</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.URLRewriteFilter">
+URLRewriteFilter
+</a>
</em> </td> <td> <em>(Optional)</em>
-<p>SSLRedirectTo can be used to redirect HTTP traffic to HTTPS on the indicated port</p>
+<p>URLRewrite defines a schema that modifies a request during forwarding.</p>
</td> </tr> </tbody> </table>
-<h3 id="alb.networking.azure.io/v1.IngressListenerSetting">IngressListenerSetting
+<h3 id="alb.networking.azure.io/v1.IngressRuleSetting">IngressRuleSetting
</h3> <p> (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressExtensionSpec">IngressExtensionSpec</a>) </p> <div>
-<p>IngressListenerSetting provides configuration options for listeners</p>
+<p>IngressRuleSetting provides configuration options for rules</p>
</div> <table> <thead>
string
<td> <code>tls</code><br/> <em>
-<a href="#alb.networking.azure.io/v1.IngressListenerTLS">
-IngressListenerTLS
+<a href="#alb.networking.azure.io/v1.IngressRuleTLS">
+IngressRuleTLS
</a> </em> </td> <td> <em>(Optional)</em>
-<p>TLS defines TLS settings for the Listener</p>
+<p>TLS defines TLS settings for the rule</p>
</td> </tr> <tr>
IngressListenerTLS
</tr> <tr> <td>
-<code>ports</code><br/>
+<code>rewrites</code><br/>
<em>
-<a href="#alb.networking.azure.io/v1.IngressListenerPort">
-[]IngressListenerPort
+<a href="#alb.networking.azure.io/v1.IngressRewrites">
+[]IngressRewrites
</a> </em> </td> <td> <em>(Optional)</em>
-<p>Defines what ports and protocols a listener should listen on</p>
+<p>Rewrites defines the rewrites for the rule</p>
</td> </tr> <tr> <td>
-<code>overrideBackendHostnames</code><br/>
+<code>requestRedirect</code><br/>
<em>
-<a href="#alb.networking.azure.io/v1.IngressBackendOverride">
-[]IngressBackendOverride
+<a href="#alb.networking.azure.io/v1.Redirect">
+Redirect
</a> </em> </td> <td> <em>(Optional)</em>
-<p>OverrideBackendHostnames is a list of services on which incoming requests will have the value of the host header changed</p>
+<p>RequestRedirect defines the redirect behavior for the rule</p>
</td> </tr> </tbody> </table>
-<h3 id="alb.networking.azure.io/v1.IngressListenerSettingStatus">IngressListenerSettingStatus
+<h3 id="alb.networking.azure.io/v1.IngressRuleStatus">IngressRuleStatus
</h3> <p> (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressExtensionStatus">IngressExtensionStatus</a>) </p> <div>
-<p>IngressListenerSettingStatus describes the state of a listener setting</p>
+<p>IngressRuleStatus describes the state of a rule</p>
</div> <table> <thead>
string
</em> </td> <td>
-<p>Host identifies the listenerSetting this status describes</p>
+<p>Host identifies the rule this status describes</p>
</td> </tr> <tr>
bool
</td> <td> <em>(Optional)</em>
-<p>Valid indicates that there are no validation errors present on this listenerSetting</p>
+<p>Valid indicates that there are no validation errors present on this rule</p>
</td> </tr> </tbody> </table>
-<h3 id="alb.networking.azure.io/v1.IngressListenerTLS">IngressListenerTLS
+<h3 id="alb.networking.azure.io/v1.IngressRuleTLS">IngressRuleTLS
</h3> <p>
-(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerSetting">IngressListenerSetting</a>)
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressRuleSetting">IngressRuleSetting</a>)
</p> <div>
-<p>IngressListenerTLS provides options for configuring TLS settings on a listener</p>
+<p>IngressRuleTLS provides options for configuring TLS settings on a rule</p>
</div> <table> <thead>
IngressCertificate
</td> <td> <em>(Optional)</em>
-<p>Certificate specifies a TLS Certificate to configure a Listener with</p>
-</td>
-</tr>
-<tr>
-<td>
-<code>policy</code><br/>
-<em>
-<a href="#alb.networking.azure.io/v1.IngressTLSPolicy">
-IngressTLSPolicy
-</a>
-</em>
-</td>
-<td>
-<em>(Optional)</em>
-<p>Policy configures a particular TLS Policy</p>
+<p>Certificate specifies a TLS Certificate to configure a rule with</p>
</td> </tr> </tbody> </table>
-<h3 id="alb.networking.azure.io/v1.IngressTLSPolicy">IngressTLSPolicy
-</h3>
-<p>
-(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerTLS">IngressListenerTLS</a>)
-</p>
-<div>
-<p>IngressTLSPolicy describes cipher suites and related TLS configuration options</p>
-</div>
<h3 id="alb.networking.azure.io/v1.IngressTimeouts">IngressTimeouts </h3> <p>
FrontendTLSPolicyType
</tr> </tbody> </table>
+<h3 id="alb.networking.azure.io/v1.PortNumber">PortNumber
+(<code>int32</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.Redirect">Redirect</a>)
+</p>
+<div>
+<p>PortNumber defines a network port.</p>
+</div>
+<h3 id="alb.networking.azure.io/v1.PreciseHostname">PreciseHostname
+(<code>string</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.Redirect">Redirect</a>, <a href="#alb.networking.azure.io/v1.URLRewriteFilter">URLRewriteFilter</a>)
+</p>
+<div>
+<p>PreciseHostname is the fully qualified domain name of a network host. This
+matches the RFC 1123 definition of a hostname with one notable exception:
+numeric IP addresses are not allowed.</p>
+<p>Note that as per RFC1035 and RFC1123, a <em>label</em> must consist of lower case
+alphanumeric characters or &lsquo;-&rsquo;, and must start and end with an alphanumeric
+character. No other punctuation is allowed.</p>
+</div>
<h3 id="alb.networking.azure.io/v1.Protocol">Protocol (<code>string</code> alias)</h3> <p>
-(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">HealthCheckPolicyConfig</a>, <a href="#alb.networking.azure.io/v1.IngressBackendPort">IngressBackendPort</a>, <a href="#alb.networking.azure.io/v1.IngressListenerPort">IngressListenerPort</a>)
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">HealthCheckPolicyConfig</a>, <a href="#alb.networking.azure.io/v1.IngressBackendPort">IngressBackendPort</a>)
</p> <div> <p>Protocol defines the protocol used for certain properties.
Valid Protocol values are:</p>
</td> </tr></tbody> </table>
+<h3 id="alb.networking.azure.io/v1.Redirect">Redirect
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressRuleSetting">IngressRuleSetting</a>)
+</p>
+<div>
+<p>Redirect defines a filter that redirects a request. This
+MUST NOT be used on the same rule that also has a URLRewriteFilter.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>scheme</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Scheme is the scheme to be used in the value of the <code>Location</code> header in
+the response. When empty, the scheme of the request is used.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>hostname</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.PreciseHostname">
+PreciseHostname
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Hostname is the hostname to be used in the value of the <code>Location</code>
+header in the response.
+When empty, the hostname in the <code>Host</code> header of the request is used.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>path</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HTTPPathModifier">
+HTTPPathModifier
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Path defines parameters used to modify the path of the incoming request.
+The modified path is then used to construct the <code>Location</code> header. When
+empty, the request path is used as-is.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>port</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.PortNumber">
+PortNumber
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Port is the port to be used in the value of the <code>Location</code>
+header in the response.</p>
+<p>If no port is specified, the redirect port MUST be derived using the
+following rules:</p>
+<ul>
+<li>If redirect scheme is not-empty, the redirect port MUST be the well-known
+port associated with the redirect scheme. Specifically "http" to port 80
+and "https" to port 443. If the redirect scheme does not have a
+well-known port, the listener port of the Gateway SHOULD be used.</li>
+<li>If redirect scheme is empty, the redirect port MUST be the Gateway
+Listener port.</li>
+</ul>
+<p>Implementations SHOULD NOT add the port number in the &lsquo;Location&rsquo;
+header in the following cases:</p>
+<ul>
+<li>A Location header that will use HTTP (whether that is determined via
+the Listener protocol or the Scheme field) <em>and</em> use port 80.</li>
+<li>A Location header that will use HTTPS (whether that is determined via
+the Listener protocol or the Scheme field) <em>and</em> use port 443.</li>
+</ul>
+</td>
+</tr>
+<tr>
+<td>
+<code>statusCode</code><br/>
+<em>
+int
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>StatusCode is the HTTP status code to be used in response.</p>
+<p>Note that values may be added to this enum; implementations
+must ensure that unknown values will not cause a crash.</p>
+</td>
+</tr>
+</tbody>
+</table>
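The port-derivation rules above can be condensed into a small helper for illustration; the function name and argument layout are assumptions, and this sketch only models the well-known-port and port-omission rules stated in the field description (it does not consult the Listener protocol for an empty scheme):

```bash
# Hypothetical model of the redirect-port rules for the Location header.
# $1 = redirect scheme (may be empty), $2 = explicit port (may be empty),
# $3 = listener port. Prints the ":port" suffix for Location, or nothing.
redirect_location_port() {
  local scheme="$1" port="$2" listener_port="$3"
  if [ -z "$port" ]; then
    case "$scheme" in
      http)  port=80 ;;
      https) port=443 ;;
      *)     port="$listener_port" ;;  # no well-known port: use the listener port
    esac
  fi
  # SHOULD NOT emit the port when it is the scheme's default
  if { [ "$scheme" = "http" ] && [ "$port" = "80" ]; } || \
     { [ "$scheme" = "https" ] && [ "$port" = "443" ]; }; then
    echo ""
  else
    echo ":$port"
  fi
}
```

For instance, a redirect to `https` with no explicit port produces no `:port` suffix, while an explicit port `8080` on `http` produces `:8080`.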
+<h3 id="alb.networking.azure.io/v1.RewriteType">RewriteType
+(<code>string</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressRewrites">IngressRewrites</a>)
+</p>
+<div>
+<p>RewriteType identifies the rewrite type</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;RequestHeaderModifier&#34;</p></td>
+<td><p>RequestHeaderModifier can be used to add or remove an HTTP
+header from an HTTP request before it is sent to the upstream target.</p>
+</td>
+</tr><tr><td><p>&#34;ResponseHeaderModifier&#34;</p></td>
+<td><p>ResponseHeaderModifier can be used to add or remove an HTTP
+header from an HTTP response before it is sent to the client.</p>
+</td>
+</tr><tr><td><p>&#34;URLRewrite&#34;</p></td>
+<td><p>URLRewrite can be used to modify a request during forwarding.</p>
+</td>
+</tr></tbody>
+</table>
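Tying these values back to the types above: an IngressRewrites entry names one RewriteType and supplies the matching filter. The following manifest is only an illustrative sketch assembled from the field tables in this reference; the host, header names, and values are invented for illustration and not taken from a verified example:

```bash
kubectl apply -f - <<EOF
apiVersion: alb.networking.azure.io/v1
kind: IngressExtension
metadata:
  name: rewrite-example
  namespace: test-infra
spec:
  rules:
  - host: contoso.com        # hypothetical host for this rule
    rewrites:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: x-env        # hypothetical header name
          value: prod
        remove: ["x-internal-debug"]
EOF
```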
<h3 id="alb.networking.azure.io/v1.RoutePolicy">RoutePolicy </h3> <div>
particular RoutePolicy condition type has been raised.</p>
When the given RoutePolicy is correctly configured</p> </td> </tr><tr><td><p>&#34;InvalidRoutePolicy&#34;</p></td>
-<td><p>RoutePolicyReasonInvalid is the reason when the RoutePolicy isn't Accepted</p>
+<td><p>RoutePolicyReasonInvalid is the reason when the RoutePolicy isn&rsquo;t Accepted</p>
+</td>
+</tr><tr><td><p>&#34;InvalidGroup&#34;</p></td>
+<td><p>RoutePolicyReasonInvalidGroup is used when the group is invalid</p>
</td> </tr><tr><td><p>&#34;InvalidHTTPRoute&#34;</p></td> <td><p>RoutePolicyReasonInvalidHTTPRoute is used when the HTTPRoute is invalid</p> </td>
-</tr><tr><td><p>&#34;InvalidTargetReference&#34;</p></td>
-<td><p>RoutePolicyReasonInvalidTargetReference is used when there is no target reference</p>
+</tr><tr><td><p>&#34;InvalidKind&#34;</p></td>
+<td><p>RoutePolicyReasonInvalidKind is used when the kind/group is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidName&#34;</p></td>
+<td><p>RoutePolicyReasonInvalidName is used when the name is invalid</p>
+</td>
+</tr><tr><td><p>&#34;NoTargetReference&#34;</p></td>
+<td><p>RoutePolicyReasonNoTargetReference is used when there is no target reference</p>
+</td>
+</tr><tr><td><p>&#34;RefNotPermitted&#34;</p></td>
+<td><p>RoutePolicyReasonRefNotPermitted is used when the ref isn&rsquo;t permitted</p>
</td> </tr></tbody> </table>
field.</p>
(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.RoutePolicySpec">RoutePolicySpec</a>) </p> <div>
-<p>RoutePolicyConfig defines the schema for RoutePolicy specification.
+<p>RoutePolicyConfig defines the schema for RoutePolicy specification.
This allows the specification of the following attributes: * Timeouts * Session Affinity</p>
constants so that operators and tools can converge on a common
vocabulary to describe RoutePolicy state.</p> <p>Known condition types are:</p> <ul>
-<li>&ldquo;Accepted&rdquo;</li>
+<li>"Accepted"</li>
</ul> </td> </tr>
This is inclusive.</p>
</tr> </tbody> </table>
+<h3 id="alb.networking.azure.io/v1.URLRewriteFilter">URLRewriteFilter
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressRewrites">IngressRewrites</a>)
+</p>
+<div>
+<p>URLRewriteFilter defines a filter that modifies a request during
+forwarding. At most one of these filters may be used on a rule. This
+MUST NOT be used on the same rule that also has an sslRedirect.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>hostname</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.PreciseHostname">
+PreciseHostname
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Hostname is the value to be used to replace the Host header value during
+forwarding.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>path</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HTTPPathModifier">
+HTTPPathModifier
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Path defines a path rewrite.</p>
+</td>
+</tr>
+</tbody>
+</table>
application-gateway Custom Health Probe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/custom-health-probe.md
Previously updated : 09/25/2023 Last updated : 11/07/2023
The following properties make up custom health probes:
| -- | - | | port | the port number to initiate health probes to. Valid port values are 1-65535. | | interval | how often in seconds health probes should be sent to the backend target. The minimum interval must be > 0 seconds. |
-| timeout | how long in seconds the request should wait until it's deemed a failure The minimum interval must be > 0 seconds. |
+| timeout | how long in seconds the request should wait until it's marked as a failure. The minimum timeout must be > 0 seconds. |
| healthyThreshold | number of health probes before marking the target endpoint healthy. The minimum value must be > 0. | | unhealthyThreshold | number of health probes to fail before the backend target should be labeled unhealthy. The minimum value must be > 0. |
-| protocol| specifies either non-encrypted `HTTP` traffic or encrypted traffic via TLS as `HTTPS` |
+| protocol| specifies either nonencrypted `HTTP` traffic or encrypted traffic via TLS as `HTTPS` |
| (http) host | the hostname specified in the request to the backend target. |
-| (http) path | the specific path of the request. If a single file should be loaded, the path may be /https://docsupdatetracker.net/index.html as an example. |
+| (http) path | the specific path of the request. If a single file should be loaded, the path might be /index.html. |
| (http -> match) statusCodes | Contains two properties, `start` and `end`, that define the range of valid HTTP status codes returned from the backend. |
+[ ![A diagram showing the Application Gateway for Containers using custom health probes to determine backend health.](./media/custom-health-probe/custom-health-probe.png) ](./media/custom-health-probe/custom-health-probe.png#lightbox)
+ ## Default health probe Application Gateway for Containers automatically configures a default health probe when you don't define a custom probe configuration or configure a readiness probe. The monitoring behavior works by making an HTTP GET request to the IP addresses of configured backend targets. For default probes, if the backend target is configured for HTTPS, the probe uses HTTPS to test health of the backend targets.
When the default health probe is used, the following values for each health prob
## Custom health probe
-In both Gateway API and Ingress API, a custom health probe can be defined by defining a [_HealthCheckPolicyPolicy_ resource](api-specification-kubernetes.md#alb.networking.azure.io/v1.HealthCheckPolicy) and referencing a service the health probes should check against. As the service is referenced by an HTTPRoute or Ingress resource with a class reference to Application Gateway for Containers, the custom health probe will be used for each reference.
+In both Gateway API and Ingress API, a custom health probe can be configured by defining a [_HealthCheckPolicy_ resource](api-specification-kubernetes.md#alb.networking.azure.io/v1.HealthCheckPolicy) and referencing a service the health probes should check against. As the service is referenced by an HTTPRoute or Ingress resource with a class reference to Application Gateway for Containers, the custom health probe is used for each reference.
-In this example, the health probe emitted by Application Gateway for Containers will send the hostname contoso.com to the pods that make up _test-service_. The request path will be `/`, a probe will be emitted every 5 seconds and wait 3 seconds before determining the connection has timed out. If a response is received, an HTTP response code between 200 and 299 (inclusive of 200 and 299) will be considered healthy, all other responses will be considered unhealthy.
+In this example, the health probe emitted by Application Gateway for Containers sends the hostname contoso.com to the pods that make up _test-service_. The request path is `/`; a probe is emitted every 5 seconds and waits 3 seconds before determining the connection has timed out. If a response is received, an HTTP response code between 200 and 299 (inclusive of 200 and 299) is considered healthy; all other responses are considered unhealthy.
```bash kubectl apply -f - <<EOF
application-gateway How To Backend Mtls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-backend-mtls-gateway-api.md
See the following figure:
This command creates the following on your cluster: - a namespace called `test-infra`
- - 1 service called `mtls-app` in the `test-infra` namespace
- - 1 deployment called `mtls-app` in the `test-infra` namespace
- - 1 config map called `mtls-app-nginx-cm` in the `test-infra` namespace
- - 4 secrets called `backend.com`, `frontend.com`, `gateway-client-cert`, and `ca.bundle` in the `test-infra` namespace
+ - one service called `mtls-app` in the `test-infra` namespace
+ - one deployment called `mtls-app` in the `test-infra` namespace
+ - one config map called `mtls-app-nginx-cm` in the `test-infra` namespace
+ - four secrets called `backend.com`, `frontend.com`, `gateway-client-cert`, and `ca.bundle` in the `test-infra` namespace
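Before moving on, you can optionally confirm those objects exist; this is a standard verification step, not part of the referenced script:

```bash
kubectl get deployment,service,configmap,secret -n test-infra
```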
## Deploy the required Gateway API resources
spec:
EOF ```
-Once the BackendTLSPolicy object has been create check the status on the object to ensure that the policy is valid.
+Once the BackendTLSPolicy object has been created, check the status on the object to ensure that the policy is valid.
```bash kubectl get backendtlspolicy -n test-infra mtls-app-tls-policy -o yaml
application-gateway How To Header Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-header-rewrite-gateway-api.md
+
+ Title: Header rewrite for Azure Application Gateway for Containers - Gateway API
+description: Learn how to rewrite headers in Gateway API for Application Gateway for Containers.
+Last updated : 11/07/2023
+# Header rewrite for Azure Application Gateway for Containers - Gateway API (preview)
+
+Application Gateway for Containers allows you to rewrite HTTP headers of client requests and responses from backend targets.
+
+## Usage details
+
+Header rewrites take advantage of [filters](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPURLRewriteFilter) as defined by Kubernetes Gateway API.
+
+## Background
+Header rewrites enable you to modify the request and response headers to and from your backend targets.
+
+The following figure illustrates a request with a specific user agent being rewritten to a simplified value called SearchEngine-BingBot when the request is initiated to the backend target by Application Gateway for Containers:
+
+[ ![A diagram showing the Application Gateway for Containers rewriting a request header to the backend.](./media/how-to-header-rewrite-gateway-api/header-rewrite.png) ](./media/how-to-header-rewrite-gateway-api/header-rewrite.png#lightbox)
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If you're following the ALB managed deployment strategy, ensure provisioning of the [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate the header rewrite.
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+
+## Deploy the required Gateway API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+Create a gateway:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+ alb.networking.azure.io/alb-name: alb-test
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: http-listener
+ port: 80
+ protocol: HTTP
+ allowedRoutes:
+ namespaces:
+ from: Same
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create a Gateway
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: http-listener
+ port: 80
+ protocol: HTTP
+ allowedRoutes:
+ namespaces:
+ from: Same
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+EOF
+```
+++
+Once the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+```bash
+kubectl get gateway gateway-01 -n test-infra -o yaml
+```
+
+Example output of successful gateway creation.
+```yaml
+status:
+ addresses:
+ - type: IPAddress
+ value: xxxx.yyyy.alb.azure.com
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Valid Gateway
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ listeners:
+ - attachedRoutes: 0
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Listener is accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+    name: http-listener
+ supportedKinds:
+ - group: gateway.networking.k8s.io
+ kind: HTTPRoute
+```
+
+Once the gateway is created, create an HTTPRoute that listens for the hostname `contoso.com` and overrides the user-agent value to `SearchEngine-BingBot`.
+
+In this example, we look for the user agent used by the Bing search engine and simplify the header to SearchEngine-BingBot for easier backend parsing.
+
+This example also demonstrates addition of a new header called `AGC-Header-Add` with a value of `agc-value` and removes a request header called `client-custom-header`.
+
+> [!TIP]
+> For this example, while we could use an HTTPHeaderMatch type of "Exact" for a string match, a regular expression is used to illustrate further capabilities.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+ name: header-rewrite-route
+ namespace: test-infra
+spec:
+ parentRefs:
+ - name: gateway-01
+ namespace: test-infra
+ hostnames:
+ - "contoso.com"
+ rules:
+ - matches:
+ - headers:
+ - name: user-agent
+ value: Mozilla/5\.0 AppleWebKit/537\.36 \(KHTML, like Gecko; compatible; bingbot/2\.0; \+http://www\.bing\.com/bingbot\.htm\) Chrome/
+ type: RegularExpression
+ filters:
+ - type: RequestHeaderModifier
+ requestHeaderModifier:
+ set:
+ - name: user-agent
+ value: SearchEngine-BingBot
+ add:
+ - name: AGC-Header-Add
+ value: agc-value
+ remove: ["client-custom-header"]
+ backendRefs:
+ - name: backend-v2
+ port: 8080
+ - backendRefs:
+ - name: backend-v1
+ port: 8080
+EOF
+```
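As a quick local sanity check, you can confirm that the regular expression in the route's header match actually matches the Bing bot user-agent string sent later with curl (a sketch using Python's `re` module; unanchored matching via `re.search` mirrors typical Gateway API `RegularExpression` semantics):

```python
import re

# Header match pattern copied from the HTTPRoute above (type: RegularExpression)
pattern = (r"Mozilla/5\.0 AppleWebKit/537\.36 \(KHTML, like Gecko; compatible; "
           r"bingbot/2\.0; \+http://www\.bing\.com/bingbot\.htm\) Chrome/")

# User-agent value sent by the curl test later in this article
ua = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
      "bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/")

# The pattern should match, so a request with this user-agent hits the rewrite rule
assert re.search(pattern, ua) is not None
```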
+
+Once the HTTPRoute resource is created, ensure the route is _Accepted_ and the Application Gateway for Containers resource is _Programmed_.
+```bash
+kubectl get httproute header-rewrite-route -n test-infra -o yaml
+```
+
+Verify the status of the Application Gateway for Containers resource has been successfully updated.
+
+```yaml
+status:
+ parents:
+ - conditions:
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Route is Accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ controllerName: alb.networking.azure.io/alb-controller
+ parentRef:
+ group: gateway.networking.k8s.io
+ kind: Gateway
+ name: gateway-01
+ namespace: test-infra
+ ```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN:
+
+```bash
+fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
+```
+
+If you specify the server name indicator `contoso.com` for the frontend FQDN using the curl command, the output should show a response from the backend-v1 service.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com
+```
+
+Via the response we should see:
+```json
+{
+ "path": "/",
+ "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+ "curl/7.81.0"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "dcd4bcad-ea43-4fb6-948e-a906380dcd6d"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v1-5b8fd96959-f59mm"
+}
+```
+
+Specifying a user-agent header that matches the regular expression defined in the HTTPRoute should return a response showing the rewritten value `SearchEngine-BingBot`:
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com -H "user-agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/"
+```
+
+Via the response we should see:
+```json
+{
+ "path": "/",
+  "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+      "SearchEngine-BingBot"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "adae8cc1-8030-4d95-9e05-237dd4e3941b"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v2-5b8fd96959-f59mm"
+}
+```
+
+Specifying a `client-custom-header` header with the value `moo` should be stripped from the request when AGC initiates the connection to the backend service:
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com -H "client-custom-header: moo"
+```
+
+Via the response we should see:
+```json
+{
+ "path": "/",
+  "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+ "curl/7.81.0"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "kd83nc84-4325-5d22-3d23-237dd4e3941b"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+  "pod": "backend-v1-5b8fd96959-f59mm"
+}
+```
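Rather than inspecting the JSON by eye, the three rewrite effects can be asserted with a short script (a sketch; `response_body` is a hypothetical stand-in for the echo service's output, trimmed to the fields the rewrite rules touch):

```python
import json

# Hypothetical stand-in for the JSON body returned by the curl tests above
response_body = """
{
  "host": "contoso.com",
  "headers": {
    "User-Agent": ["SearchEngine-BingBot"],
    "AGC-Header-Add": ["agc-value"]
  }
}
"""

headers = json.loads(response_body)["headers"]

# The route sets user-agent, adds AGC-Header-Add, and removes client-custom-header
assert headers["User-Agent"] == ["SearchEngine-BingBot"]
assert headers["AGC-Header-Add"] == ["agc-value"]
assert "client-custom-header" not in headers
```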
+
+Congratulations, you have installed ALB Controller, deployed a backend application and modified header values via Gateway API on Application Gateway for Containers.
application-gateway How To Header Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-header-rewrite-ingress-api.md
+
+ Title: Header rewrite for Azure Application Gateway for Containers - Ingress API
+description: Learn how to rewrite headers in Ingress API for Application Gateway for Containers.
+Last updated: 11/6/2023
+# Header rewrite for Azure Application Gateway for Containers - Ingress API (preview)
+
+Application Gateway for Containers allows you to rewrite HTTP headers of client requests and responses from backend targets.
+
+## Usage details
+
+Header rewrites take advantage of the IngressExtension custom resource of Application Gateway for Containers.
+
+## Background
+Header rewrites enable you to modify the request and response headers to and from your backend targets.
+
+The following figure illustrates an example of a request with a specific user agent being rewritten to a simplified value called `rewritten-user-agent` when the request is initiated to the backend target by Application Gateway for Containers:
+
+[ ![A diagram showing the Application Gateway for Containers rewriting a request header to the backend.](./media/how-to-header-rewrite-ingress-api/header-rewrite.png) ](./media/how-to-header-rewrite-ingress-api/header-rewrite.png#lightbox)
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate the header rewrite.
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+
+## Deploy the required Ingress resource
+
+# [ALB managed deployment](#tab/alb-managed)
+
+Create an Ingress resource to listen for requests to `contoso.com`:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: ingress-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-name: alb-test
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+ alb.networking.azure.io/alb-ingress-extension: header-rewrite
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: backend-v1
+ port:
+ number: 8080
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create an Ingress resource to listen for requests to `contoso.com`
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: ingress-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ alb.networking.azure.io/alb-frontend: $FRONTEND_NAME
+ alb.networking.azure.io/alb-ingress-extension: header-rewrite
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: backend-v1
+ port:
+ number: 8080
+EOF
+```
+++
+Once the ingress resource is created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests.
+
+```bash
+kubectl get ingress ingress-01 -n test-infra -o yaml
+```
+
+Example output of successful Ingress creation.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ annotations:
+ alb.networking.azure.io/alb-frontend: FRONTEND_NAME
+ alb.networking.azure.io/alb-id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz", "alb.networking.azure.io/alb-ingress-extension":"header-rewrite"},"name"
+:"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"contoso.com","http":{"paths":[{"backend":{"service":{"name":"backend-v1","port":{"number":8080}}},"path":"/","pathType":"Prefix"}]}}]}}
+ creationTimestamp: "2023-07-22T18:02:13Z"
+ generation: 2
+ name: ingress-01
+ namespace: test-infra
+ resourceVersion: "278238"
+ uid: 17c34774-1d92-413e-85ec-c5a8da45989d
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: backend-v1
+ port:
+ number: 8080
+status:
+ loadBalancer:
+ ingress:
+ - hostname: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.fzyy.alb.azure.com
+ ports:
+ - port: 80
+ protocol: TCP
+```
++
+Once the Ingress is created, define an IngressExtension with the header rewrite rules.
+
+In this example, we set a static user-agent with a value of `rewritten-user-agent`.
+
+This example also demonstrates addition of a new header called `AGC-Header-Add` with a value of `agc-value` and removes a request header called `client-custom-header`.
+
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: IngressExtension
+metadata:
+ name: header-rewrite
+ namespace: test-infra
+spec:
+ rules:
+ - host: contoso.com
+ httpPort: 80
+ rewrites:
+ - type: RequestHeaderModifier
+ requestHeaderModifier:
+ set:
+ - name: "user-agent"
+ value: "rewritten-user-agent"
+ add:
+ - name: "AGC-Header-Add"
+ value: "agc-value"
+ remove:
+ - "client-custom-header"
+EOF
+```
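The set, add, and remove semantics the IngressExtension applies can be modeled locally in a few lines (a sketch of the header-modifier behavior, not AGC's actual implementation; header names are treated case-sensitively here for brevity, whereas real HTTP header names are case-insensitive):

```python
def apply_rewrite(headers, set_headers=None, add_headers=None, remove=None):
    """Model set/add/remove header-rewrite semantics on a dict of
    header-name -> list-of-values."""
    headers = {name: list(values) for name, values in headers.items()}
    for name, value in (set_headers or {}).items():
        headers[name] = [value]                      # set replaces existing values
    for name, value in (add_headers or {}).items():
        headers.setdefault(name, []).append(value)   # add appends a value
    for name in remove or []:
        headers.pop(name, None)                      # remove deletes the header
    return headers

incoming = {"user-agent": ["my-user-agent"], "client-custom-header": ["moo"]}
rewritten = apply_rewrite(
    incoming,
    set_headers={"user-agent": "rewritten-user-agent"},
    add_headers={"AGC-Header-Add": "agc-value"},
    remove=["client-custom-header"],
)

assert rewritten == {
    "user-agent": ["rewritten-user-agent"],
    "AGC-Header-Add": ["agc-value"],
}
```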
+
+Once the IngressExtension resource is created, ensure the resource has been accepted and the Application Gateway for Containers resource has been _Programmed_.
+```bash
+kubectl get IngressExtension header-rewrite -n test-infra -o yaml
+```
+
+Verify the status of the Application Gateway for Containers resource has been successfully updated.
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+```
+
+If you specify the server name indicator using the curl command, `contoso.com` for the frontend FQDN, a response from the backend-v1 service is returned.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com
+```
+
+Via the response we should see:
+```json
+{
+ "path": "/",
+ "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+ "curl/7.81.0"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "dcd4bcad-ea43-4fb6-948e-a906380dcd6d"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v1-5b8fd96959-f59mm"
+}
+```
+
+Specifying a user-agent header with the value `my-user-agent` should return a response from the backend service of `rewritten-user-agent`:
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com -H "user-agent: my-user-agent"
+```
+
+Via the response we should see:
+```json
+{
+ "path": "/",
+  "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+      "rewritten-user-agent"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "adae8cc1-8030-4d95-9e05-237dd4e3941b"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v1-5b8fd96959-f59mm"
+}
+```
+
+Specifying a `client-custom-header` header with the value `moo` should be stripped from the request when AGC initiates the connection to the backend service:
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com -H "client-custom-header: moo"
+```
+
+Via the response we should see:
+```json
+{
+ "path": "/",
+  "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+ "curl/7.81.0"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "kd83nc84-4325-5d22-3d23-237dd4e3941b"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v1-5b8fd96959-f59mm"
+}
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application and modified header values via Ingress API on Application Gateway for Containers.
application-gateway How To Multiple Site Hosting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-gateway-api.md
Previously updated : 09/20/2023 Last updated : 11/07/2023
Application Gateway for Containers enables multi-site hosting by allowing you to
> Application Gateway for Containers is currently in PREVIEW.<br> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-1. If you follow the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If you follow the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+1. If you follow the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If you follow the ALB managed deployment strategy, ensure provisioning of your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
3. Deploy sample HTTP application Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing. ```bash
Application Gateway for Containers enables multi-site hosting by allowing you to
This command creates the following on your cluster: - a namespace called `test-infra`
- - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Gateway API resources
EOF
-Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+Once the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
```bash kubectl get gateway gateway-01 -n test-infra -o yaml ```
status:
kind: HTTPRoute ```
-Once the gateway has been created, create two HTTPRoute resources for `contoso.com` and `fabrikam.com` domain names. Each domain forwards traffic to a different backend service.
+Once the gateway is created, create two HTTPRoute resources for `contoso.com` and `fabrikam.com` domain names. Each domain forwards traffic to a different backend service.
```bash kubectl apply -f - <<EOF apiVersion: gateway.networking.k8s.io/v1beta1
spec:
EOF ```
-Once the HTTPRoute resource has been created, ensure both HTTPRoute resources show _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+Once the HTTPRoute resource is created, ensure both HTTPRoute resources show _Accepted_ and the Application Gateway for Containers resource is _Programmed_.
```bash kubectl get httproute contoso-route -n test-infra -o yaml kubectl get httproute fabrikam-route -n test-infra -o yaml ```
-Verify the status of the Application Gateway for Containers resource has been successfully updated for each HTTPRoute.
+Verify the status of the Application Gateway for Containers resource is successfully updated for each HTTPRoute.
```yaml status:
Now we're ready to send some traffic to our sample application, via the FQDN ass
fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}') ```
-Specifying server name indicator using the curl command, `contoso.com` for the frontend FQDN should return a response from the backend-v1 service.
+If you specify the server name indicator using the curl command, `contoso.com` for the frontend FQDN, it returns a response from the backend-v1 service.
```bash fqdnIp=$(dig +short $fqdn)
Via the response we should see:
} ```
-Specifying server name indicator using the curl command, `contoso.com` for the frontend FQDN should return a response from the backend-v1 service.
+If you specify the server name indicator using the curl command, `fabrikam.com` for the frontend FQDN, it returns a response from the backend-v2 service.
```bash fqdnIp=$(dig +short $fqdn)
application-gateway How To Multiple Site Hosting Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-ingress-api.md
Previously updated : 09/25/2023 Last updated : 11/07/2023
Application Gateway for Containers enables multi-site hosting by allowing you to
> Application Gateway for Containers is currently in PREVIEW.<br> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-1. If you follow the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If you follow the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+1. If you follow the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If you follow the ALB managed deployment strategy, ensure provisioning of your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
3. Deploy sample HTTP application Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing. ```bash
Application Gateway for Containers enables multi-site hosting by allowing you to
This command creates the following on your cluster: - a namespace called `test-infra`
- - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Ingress resource
EOF
-Once the ingress resource has been created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests.
+Once the ingress resource is created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests.
```bash kubectl get ingress ingress-01 -n test-infra -o yaml ```
-Example output of successful gateway creation.
+Example output of successful Ingress creation.
```yaml apiVersion: networking.k8s.io/v1 kind: Ingress
Via the response we should see:
} ```
-Next, specify server name indicator using the curl command, `contoso.com` for the frontend FQDN should return a response from the backend-v1 service.
+Next, specifying the server name indicator `fabrikam.com` for the frontend FQDN using the curl command should return a response from the backend-v2 service.
```bash fqdnIp=$(dig +short $fqdn)
application-gateway How To Path Header Query String Routing Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-path-header-query-string-routing-gateway-api.md
Application Gateway for Containers enables traffic routing based on URL path, qu
This command creates the following on your cluster: - a namespace called `test-infra`
- - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Gateway API resources
application-gateway How To Ssl Offloading Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-gateway-api.md
Previously updated : 09/20/2023 Last updated : 11/07/2023
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
> Application Gateway for Containers is currently in PREVIEW.<br> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+1. If following the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure that you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
3. Deploy sample HTTPS application Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
This command creates the following on your cluster: - a namespace called `test-infra`
- - 1 service called `echo` in the `test-infra` namespace
- - 1 deployment called `echo` in the `test-infra` namespace
- - 1 secret called `listener-tls-secret` in the `test-infra` namespace
+ - one service called `echo` in the `test-infra` namespace
+ - one deployment called `echo` in the `test-infra` namespace
+ - one secret called `listener-tls-secret` in the `test-infra` namespace
## Deploy the required Gateway API resources
EOF
-Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+When the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
```bash kubectl get gateway gateway-01 -n test-infra -o yaml ```
status:
kind: HTTPRoute ```
-Once the gateway has been created, create an HTTPRoute
+Once the gateway is created, create an HTTPRoute
```bash kubectl apply -f - <<EOF apiVersion: gateway.networking.k8s.io/v1beta1
spec:
EOF ```
-Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+Once the HTTPRoute resource is created, ensure the route is _Accepted_ and the Application Gateway for Containers resource is _Programmed_.
```bash
kubectl get httproute https-route -n test-infra -o yaml
```
-Verify the status of the Application Gateway for Containers resource has been successfully updated.
+Verify the Application Gateway for Containers resource is successfully updated.
```yaml status:
application-gateway How To Ssl Offloading Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-ingress-api.md
Previously updated : 09/20/2023 Last updated : 11/07/2023
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
> Application Gateway for Containers is currently in PREVIEW.<br>
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-1. If you follow the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If you follow the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+1. If you follow the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If you follow the ALB managed deployment strategy, ensure that you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
3. Deploy a sample HTTPS application: Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
This command creates the following on your cluster:
- a namespace called `test-infra`
- - 1 service called `echo` in the `test-infra` namespace
- - 1 deployment called `echo` in the `test-infra` namespace
- - 1 secret called `listener-tls-secret` in the `test-infra` namespace
+ - one service called `echo` in the `test-infra` namespace
+ - one deployment called `echo` in the `test-infra` namespace
+ - one secret called `listener-tls-secret` in the `test-infra` namespace
## Deploy the required Ingress API resources
EOF
-Once the ingress resource has been created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests.
+When the ingress resource is created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests.
```bash
kubectl get ingress ingress-01 -n test-infra -o yaml
```
-Example output of successful gateway creation.
+Example output of successful Ingress creation.
```yaml apiVersion: networking.k8s.io/v1 kind: Ingress
application-gateway How To Traffic Splitting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-traffic-splitting-gateway-api.md
Application Gateway for Containers enables you to set weights and shift traffic
This command creates the following on your cluster:
- a namespace called `test-infra`
- - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Gateway API resources
application-gateway How To Url Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md
Previously updated : 10/13/2023 Last updated : 11/07/2023
URL Rewrites take advantage of [filters](https://gateway-api.sigs.k8s.io/referen
## Background URL rewrite enables you to translate an incoming request to a different URL when initiated to a backend target.
-See the following figure, which illustrates an example of a request destined for _contoso.com/shop_ being rewritten to _contoso.com/ecommerce_ when the request is initiated to the backend target by Application Gateway for Containers:
+The following figure illustrates an example of a request destined for _contoso.com/shop_ being rewritten to _contoso.com/ecommerce_. The request is initiated to the backend target by Application Gateway for Containers:
[ ![A diagram showing the Application Gateway for Containers rewriting a URL to the backend.](./media/how-to-url-rewrite-gateway-api/url-rewrite.png) ](./media/how-to-url-rewrite-gateway-api/url-rewrite.png#lightbox)
See the following figure, which illustrates an example of a request destined for
This command creates the following on your cluster:
- a namespace called `test-infra`
- - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Gateway API resources
EOF
-Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+Once the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
```bash
kubectl get gateway gateway-01 -n test-infra -o yaml
```
status:
kind: HTTPRoute ```
-Once the gateway has been created, create an HTTPRoute resources for `contoso.com`. This example ensures traffic sent to `contoso.com/shop` is initiated as `contoso.com/ecommerce` to the backend target.
+Once the gateway is created, create an HTTPRoute resource for `contoso.com`. This example ensures traffic sent to `contoso.com/shop` is initiated as `contoso.com/ecommerce` to the backend target.
```bash kubectl apply -f - <<EOF
spec:
EOF ```
-Once the HTTPRoute resource has been created, ensure the HTTPRoute resource shows _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+When the HTTPRoute resource is created, ensure the HTTPRoute resource shows _Accepted_ and the Application Gateway for Containers resource is _Programmed_.
```bash
kubectl get httproute rewrite-example -n test-infra -o yaml
```
-Verify the status of the Application Gateway for Containers resource has been successfully updated for each HTTPRoute.
+Verify the Application Gateway for Containers resource is successfully updated for each HTTPRoute.
```yaml status:
Now we're ready to send some traffic to our sample application, via the FQDN ass
fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}') ```
-Specifying server name indicator using the curl command, `contoso.com/shop` should return a response from the backend-v1 service with the requested path to the backend target showing `contoso.com/ecommerce`.
+When you specify the server name indicator using the curl command, `contoso.com/shop` should return a response from the backend-v1 service with the requested path to the backend target showing `contoso.com/ecommerce`.
```bash fqdnIp=$(dig +short $fqdn)
Via the response we should see:
} ```
-Specifying server name indicator using the curl command, `contoso.com` should return a response from the backend-v2 service.
+When you specify the server name indicator using the curl command, `contoso.com` should return a response from the backend-v2 service.
```bash fqdnIp=$(dig +short $fqdn)
application-gateway How To Url Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-ingress-api.md
+
+ Title: URL Rewrite for Azure Application Gateway for Containers - Ingress API
+description: Learn how to rewrite URLs in Ingress API for Application Gateway for Containers.
+++++ Last updated : 11/07/2023+++
+# URL Rewrite for Azure Application Gateway for Containers - Ingress API (preview)
+
+Application Gateway for Containers allows you to rewrite the URL of a client request, including the request's hostname and/or path. When Application Gateway for Containers initiates the request to the backend target, the request carries the newly rewritten URL.
++
+## Usage details
+
+URL Rewrites take advantage of Application Gateway for Containers' IngressExtension custom resource.
+
+## Background
+URL rewrite enables you to translate an incoming request to a different URL when initiated to a backend target.
+
+The following figure illustrates a request destined for _contoso.com/shop_ being rewritten to _contoso.com/ecommerce_ when the request is initiated to the backend target by Application Gateway for Containers:
+
+[ ![A diagram showing the Application Gateway for Containers rewriting a URL to the backend.](./media/how-to-url-rewrite-gateway-api/url-rewrite.png) ](./media/how-to-url-rewrite-gateway-api/url-rewrite.png#lightbox)
++
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure that you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+
+## Deploy the required Ingress API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+1. Create an Ingress that catches all traffic and routes to backend-v2
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: ingress-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-name: alb-test
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: backend-v2
+ port:
+ number: 8080
+EOF
+```
+
+2. Create an Ingress that matches the /shop prefix that routes to backend-v1
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: ingress-02
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-name: alb-test
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+ alb.networking.azure.io/alb-ingress-extension: url-rewrite
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /shop
+ pathType: Prefix
+ backend:
+ service:
+ name: backend-v1
+ port:
+ number: 8080
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create an Ingress that catches all traffic and routes to backend-v2
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: ingress-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ alb.networking.azure.io/alb-frontend: $FRONTEND_NAME
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: backend-v2
+ port:
+ number: 8080
+EOF
+```
+
+3. Create an Ingress that matches the /shop prefix and routes to backend-v1
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: ingress-02
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ alb.networking.azure.io/alb-frontend: $FRONTEND_NAME
+ alb.networking.azure.io/alb-ingress-extension: url-rewrite
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /shop
+ pathType: Prefix
+ backend:
+ service:
+ name: backend-v1
+ port:
+ number: 8080
+EOF
+```
+++
+When each Ingress resource is created, ensure the status is valid and shows the hostname of your load balancer.
+```bash
+kubectl get ingress ingress-01 -n test-infra -o yaml
+kubectl get ingress ingress-02 -n test-infra -o yaml
+```
+
+Example output of one of the Ingress resources.
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ annotations:
+ alb.networking.azure.io/alb-frontend: FRONTEND_NAME
+ alb.networking.azure.io/alb-id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name"
+:"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"contoso.com","http":{"paths":[{"backend":{"service":{"name":"backend-v2","port":{"number":8080}}},"path":"/","pathType":"Prefix"}]}}]}}
+ creationTimestamp: "2023-07-22T18:02:13Z"
+ generation: 2
+ name: ingress-01
+ namespace: test-infra
+ resourceVersion: "278238"
+ uid: 17c34774-1d92-413e-85ec-c5a8da45989d
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - backend:
+ service:
+ name: backend-v2
+ port:
+ number: 8080
+ path: /
+ pathType: Prefix
+status:
+ loadBalancer:
+ ingress:
+ - hostname: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.fzyy.alb.azure.com
+ ports:
+ - port: 80
+ protocol: TCP
+```
+
+When the Ingress is created, create an IngressExtension resource for `contoso.com`. This example ensures traffic sent to `contoso.com/shop` is initiated as `contoso.com/ecommerce` to the backend target.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: IngressExtension
+metadata:
+ name: url-rewrite
+ namespace: test-infra
+spec:
+ rules:
+ - host: contoso.com
+ httpPort: 80
+ rewrites:
+ - type: URLRewrite
+ urlRewrite:
+ path:
+ type: ReplacePrefixMatch
+ replacePrefixMatch: /ecommerce
+EOF
+```
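
As an informal mental model (not from the official documentation), the `ReplacePrefixMatch` rewrite above swaps the matched `/shop` prefix for `/ecommerce` while preserving the remainder of the path. A shell sketch of that behavior:

```shell
# Illustrative sketch of ReplacePrefixMatch semantics (assumed prefix /shop, replacement /ecommerce)
rewrite_prefix() {
  path="$1"; prefix="$2"; replacement="$3"
  case "$path" in
    "$prefix"*) printf '%s%s\n' "$replacement" "${path#"$prefix"}" ;;
    *) printf '%s\n' "$path" ;;   # non-matching paths pass through unchanged
  esac
}

rewrite_prefix "/shop" "/shop" "/ecommerce"           # -> /ecommerce
rewrite_prefix "/shop/cart/42" "/shop" "/ecommerce"   # -> /ecommerce/cart/42
```

The actual rewrite is performed by Application Gateway for Containers; this sketch only illustrates the prefix-replacement rule.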
+
+When the IngressExtension resource is created, ensure the IngressExtension resource shows _Accepted_ and the Application Gateway for Containers resource is _Programmed_.
+```bash
+kubectl get IngressExtension url-rewrite -n test-infra -o yaml
+```
+
+Verify the Application Gateway for Containers resource is successfully updated for the IngressExtension.
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+```
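
For illustration only, the jsonpath query above reads `.status.loadBalancer.ingress[0].hostname` from the Ingress status. The same field can be pulled from saved JSON output with plain shell (the hostname value here is made up):

```shell
# Illustrative only: extract the hostname field from saved status JSON without a cluster
cat <<'EOF' > ingress-status.json
{"status":{"loadBalancer":{"ingress":[{"hostname":"demo1234.fzyy.alb.azure.com"}]}}}
EOF
fqdn=$(sed -n 's/.*"hostname":"\([^"]*\)".*/\1/p' ingress-status.json)
echo "$fqdn"   # -> demo1234.fzyy.alb.azure.com
```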
+
+If you specify the server name indicator `contoso.com` using the curl command and request the `/shop` path, a response from the backend-v1 service is returned with the requested path to the backend target showing `contoso.com/ecommerce`.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com/shop
+```
+
+The following response should be displayed:
+```json
+{
+ "path": "/ecommerce",
+ "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+ "curl/7.81.0"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "dcd4bcad-ea43-4fb6-948e-a906380dcd6d"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v1-5b8fd96959-f59mm"
+}
+```
+
+If you specify the server name indicator `contoso.com` using the curl command, a response is returned from the backend-v2 service as shown.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com
+```
+
+The following response should be displayed:
+```json
+{
+ "path": "/",
+ "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+ "curl/7.81.0"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "adae8cc1-8030-4d95-9e05-237dd4e3941b"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v2-594bd59865-ppv9w"
+}
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application, and used the IngressExtension to rewrite the client-requested URL before traffic is sent to the target on Application Gateway for Containers.
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/overview.md
Previously updated : 09/07/2023 Last updated : 11/06/2023
Application Gateway for Containers offers an elastic and scalable ingress to AKS
### Load balancing features Application Gateway for Containers supports the following features for traffic management:
+- Automatic retries
+- Autoscaling
+- Availability zone resiliency
+- Default and custom health probes
+- Header rewrite
+- HTTPS traffic management:
+ - SSL termination
+ - End to End SSL
+- Ingress and Gateway API support
- Layer 7 HTTP/HTTPS request forwarding based on prefix/exact match on:
 - Hostname
 - Path
- - Headers
- - Query string match
+ - Header
+ - Query string
 - Methods
 - Ports (80/443)
-- HTTPS traffic management:
- - SSL termination
- - End to End SSL
-- Ingress and Gateway API support
-- Traffic Splitting / weighted round robin
- Mutual Authentication (mTLS) to backend target
-- Health checks: Application Gateway for Containers determines the health of a backend before it registers it as healthy and capable of handling traffic
-- Automatic retries
+- Traffic Splitting / weighted round robin
- TLS Policies
-- Autoscaling
-- Availability zone resiliency
+- URL rewrite
### Deployment strategies
Application Gateway for Containers is currently offered in the following regions
### Implementation of Gateway API
-ALB Controller implements version [v1beta1](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1) of the [Gateway API](https://gateway-api.sigs.k8s.io/)
+ALB Controller implements version [v1beta1](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1) of the [Gateway API](https://gateway-api.sigs.k8s.io/)
| Gateway API Resource | Support | Comments | | - | - | |
-| [GatewayClass](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io%2fv1beta1.GatewayClass) | Yes | |
-| [Gateway](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io%2fv1beta1.Gateway) | Yes | Support for HTTP and HTTPS protocol on the listener. The only ports allowed on the listener are 80 and 443. |
-| [HTTPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io%2fv1beta1.HTTPRoute) | Yes | Currently doesn't support [HTTPRouteFilter](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPRouteFilter) |
-| [ReferenceGrant](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io%2fv1alpha2.ReferenceGrant) | Yes | Currently supports version v1alpha1 of this api |
+| [GatewayClass](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.GatewayClass) | Yes | |
+| [Gateway](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.Gateway) | Yes | Support for HTTP and HTTPS protocol on the listener. The only ports allowed on the listener are 80 and 443. |
+| [HTTPRoute](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.HTTPRoute) | Yes | |
+| [ReferenceGrant](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.ReferenceGrant) | Yes | Currently supports version v1alpha1 of this API |
+
+> [!Note]
+> The v1beta1 documentation has been removed from the official Gateway API documentation; however, the links to the v1 documentation are still highly relevant.
### Implementation of Ingress API
ALB Controller implements support for [Ingress](https://kubernetes.io/docs/conce
| Ingress API Resource | Support | Comments | | - | - | |
-| [Ingress](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#ingress-v1-networking-k8s-io) | Yes | Support for HTTP and HTTPS protocol on the listener. |
+| [Ingress](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#ingress-v1-networking-k8s-io) | Yes | Support for HTTP and HTTPS protocol on the listener. |
## Report issues and provide feedback
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
Previously updated : 09/25/2023 Last updated : 11/07/2023
You need to complete the following tasks prior to deploying Application Gateway
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \ --namespace <helm-resource-namespace> \
- --version 0.5.024542 \
+ --version 0.6.1 \
--set albController.namespace=<alb-controller-namespace> \ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ```
You need to complete the following tasks prior to deploying Application Gateway
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \ --namespace <helm-resource-namespace> \
- --version 0.5.024542 \
+ --version 0.6.1 \
--set albController.namespace=<alb-controller-namespace> \ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ```
application-gateway Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/troubleshooting-guide.md
Previously updated : 09/25/2023 Last updated : 11/07/2023
Example output:
| NAME | READY | UP-TO-DATE | AVAILABLE | AGE | CONTAINERS | IMAGES | SELECTOR | | | -- | - | | - | -- | - | -- |
-| alb-controller | 2/2 | 2 | 2 | 18d | alb-controller | mcr.microsoft.com/application-lb/images/alb-controller:**0.5.024542** | app=alb-controller |
-| alb-controller-bootstrap | 1/1 | 1 | 1 | 18d | alb-controller-bootstrap | mcr.microsoft.com/application-lb/images/alb-controller-bootstrap:**0.5.024542** | app=alb-controller-bootstrap |
+| alb-controller | 2/2 | 2 | 2 | 18d | alb-controller | mcr.microsoft.com/application-lb/images/alb-controller:**0.6.1** | app=alb-controller |
+| alb-controller-bootstrap | 1/1 | 1 | 1 | 18d | alb-controller-bootstrap | mcr.microsoft.com/application-lb/images/alb-controller-bootstrap:**0.6.1** | app=alb-controller-bootstrap |
-In this example, the ALB controller version is **0.5.024542**.
+In this example, the ALB controller version is **0.6.1**.
The ALB Controller version can be upgraded by running the `helm upgrade alb-controller` command. For more information, see [Install the ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#install-the-alb-controller).
application-gateway Ipv6 Application Gateway Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ipv6-application-gateway-arm-template.md
+
+ Title: Deploy an Azure Application Gateway with an IPv6 frontend (Preview)
+
+description: This template helps you deploy an Azure Application Gateway with an IPv6 frontend (Preview) in a dual-stack virtual network with two load-balanced VMs.
+++ Last updated : 11/06/2022+++++
+# Deploy an Azure Application Gateway with an IPv6 frontend - ARM template (Preview)
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fapplication-gateway-ipv6-create%2Fazuredeploy.json)
+
+> [!IMPORTANT]
+> Application Gateway IPv6 frontend is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- You must [Register to the preview](ipv6-application-gateway-portal.md#register-to-the-preview) for Application Gateway IPv6 frontend.
+
+## Review the template
+
+This template creates a simple setup with a dual-stack public frontend IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+
+ > [!NOTE]
+ > Application Gateway's dual-stack frontend (Preview) supports up to four frontend IP addresses: Two IPv4 addresses (public and private) and two IPv6 addresses (public and private).
+
+The template used in this quickstart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/application-gateway-ipv6-create/).
++
+Multiple Azure resources are defined in the template:
+
+- [**Microsoft.Network/applicationgateways**](/azure/templates/microsoft.network/applicationgateways)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses) : one for the application gateway, and two for the virtual machines.
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines) : two virtual machines
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces) : two for the virtual machines
+- [**Microsoft.Compute/virtualMachine/extensions**](/azure/templates/microsoft.compute/virtualmachines/extensions) : to configure IIS and the web pages
+
+## Deploy the template
+
+Deploy the ARM template to Azure:
+
+1. Select **Deploy to Azure** to sign in to Azure and open the template. The template creates an application gateway, the network infrastructure, and two virtual machines in the backend pool running IIS.
+
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fapplication-gateway-ipv6-create%2Fazuredeploy.json)
+
+2. Select or create your resource group, type the virtual machine **Admin Username** and **Admin Password**.
+
+ ![A screenshot of create new application gateway: Basics.](./media/ipv6-application-gateway-arm-template/template-basics.png)
+
+ > [!NOTE]
+ > Select a region that is the same as your resource group. If the region does not support the Standard DS1 v2 virtual machine SKU, this SKU is not displayed and you must choose a different size.
+
+3. Select **Review + Create** and then select **Create**.
+
+ The deployment can take 20 minutes or longer to complete.
+
+## Validate the deployment
+
+Although IIS isn't required to create the application gateway, it can be used to verify that Azure successfully created the application gateway.
+
+To use IIS to test the application gateway:
+
+1. Find the public IP address and DNS name for the application gateway on its **Overview** page. In the following example, the DNS name is **dualipv611061903310.eastus.cloudapp.azure.com**.
+
+ [ ![A screenshot showing the application gateway's public IP address and DNS name.](./media/ipv6-application-gateway-arm-template/ipv6-address.png) ](./media/ipv6-application-gateway-arm-template/ipv6-address.png#lightbox)
+
+2. Copy the public IP address or DNS name, and then paste it into the address bar of your browser to browse that IP address.
+
+3. Check the response. A valid response verifies that the application gateway was successfully created and can successfully connect with the backend.
+
+ ![A screenshot showing a successful test of application gateway.](./media/ipv6-application-gateway-arm-template/connection-test.png)
+
+ Refresh the browser multiple times and you should see connections to both myVM1 and myVM2.
+
+## Clean up resources
+
+When you no longer need the resources that you created with the application gateway, delete the resource group. This process removes the application gateway and all the related resources.
+
+To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name <your resource group name>
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 10/06/2023 Last updated : 11/07/2023
The following are the current limitations and known issues with PowerShell runbo
* Runbooks taking dependency on internal file paths such as `C:\modules` might fail due to changes in service backend infrastructure. Change runbook code to ensure there are no dependencies on internal file paths and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to get the required directory.
* `Get-AzStorageAccount` cmdlet might fail with an error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
-* Completed jobs might show a warning message: *Both Az and AzureRM modules were detected on this machine. Az and AzureRM modules cannot be imported in the same session or used in the same script or runbook*. This is just a warning message and does not impact job execution.
* PowerShell runbooks can't retrieve an unencrypted [variable asset](./shared-resources/variables.md) with a null value.
* PowerShell runbooks can't retrieve a variable asset with `*~*` in the name.
* A [Get-Process](/powershell/module/microsoft.powershell.management/get-process) operation in a loop in a PowerShell runbook can crash after about 80 iterations.
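As a sketch of the workaround for the internal file path limitation (`Az.Accounts` is only an example module name, not prescribed by the article), you can resolve a module's directory at runtime with `Get-ChildItem` instead of hardcoding a path such as `C:\modules`:

```azurepowershell
# Sketch: locate a module folder dynamically rather than depending on
# an internal path. 'Az.Accounts' is a hypothetical example name.
$moduleDir = Get-ChildItem -Path ($env:PSModulePath -split ';') -Directory -ErrorAction SilentlyContinue |
    Where-Object Name -eq 'Az.Accounts' |
    Select-Object -First 1 -ExpandProperty FullName
```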
The following are the current limitations and known issues with PowerShell runbo
> Currently, PowerShell 7.2 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central2, Korea South, Sweden South, Jio India Central, Brazil Southeast, Central India, West India, UAE Central, and Gov clouds.

- For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules.
-- *PSCredential* runbook parameter type isn't supported in PowerShell 7 runtime version.
- PowerShell 7.x doesn't support workflows. For more information, see [PowerShell workflow](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow).
- PowerShell 7.x currently doesn't support signed runbooks.
- Source control integration doesn't support PowerShell 7.2 (preview). Also, PowerShell 7.2 (preview) runbooks in source control get created in Automation account as Runtime 5.1.
- Currently, PowerShell 7.2 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell aren't supported.
-- Az module 8.3.0 is installed by default and can't be managed at the automation account level. Use custom modules to override the Az module to the desired version.
+- Az module 8.3.0 is installed by default and can't be managed at the automation account level for PowerShell 7.2 (preview). Use custom modules to override the Az module to the desired version.
- The imported PowerShell 7.2 (preview) module would be validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution.
- PowerShell 7.2 module management is not supported through `Get-AzAutomationModule` cmdlets.
- Azure runbook doesn't support `Start-Job` with `-credential`.
The following are the current limitations and known issues with PowerShell runbo
- `Get-AzStorageAccount` cmdlet might fail with an error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
- Executing child scripts using `.\child-runbook.ps1` is not supported in this preview. **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook.
-- Runbook properties defining logging preference isn't supported in PowerShell 7 runtime.
- **Workaround**: Explicitly set the preference at the start of the runbook as following -
- ```
- $VerbosePreference = "Continue"
- $ProgressPreference = "Continue"
- ```
- When you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true) module version 3.0.0 or higher, you might experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](/powershell/module/powershellget/) and [PackageManagement](/powershell/module/packagemanagement/) modules.
automation Guidance Migration Log Analytics Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Azure Arc-enabled servers.
-This article provides guidance to move from Change Tracking and Inventory using Log Analytics version to the Azure Monitoring Agent version.
+This article provides guidance to move from Change Tracking and Inventory using Log Analytics (LA) version to the Azure Monitoring Agent (AMA) version.
+
+Using the Azure portal, you can migrate from Change Tracking & Inventory with the LA agent to Change Tracking & Inventory with AMA. There are two ways to do this migration:
+
+- Migrate single/multiple VMs from the Virtual Machines page.
+- Migrate multiple VMs on the LA version solution within a particular Automation account.
## Onboarding to Change tracking and inventory using Azure Monitoring Agent
This article provides guidance to move from Change Tracking and Inventory using
:::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/onboarding-at-scale-inline.png" alt-text="Screenshot of onboarding at scale to Change tracking and inventory using Azure monitoring agent." lightbox="media/guidance-migration-log-analytics-monitoring-agent/onboarding-at-scale-expanded.png":::

1. On the **Onboarding to Change Tracking with Azure Monitoring** page, you can view your Automation account and the list of machines that are currently on Log Analytics and ready to be onboarded to the Azure Monitoring Agent version of Change Tracking and Inventory.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/onboarding-from-log-analytics-inline.png" alt-text="Screenshot of onboarding multiple virtual machines to Change tracking and inventory from log analytics to Azure monitoring agent." lightbox="media/guidance-migration-log-analytics-monitoring-agent/onboarding-from-log-analytics-expanded.png":::
+
1. On the **Assess virtual machines** tab, select the machines and then select **Next**.
1. On the **Assign workspace** tab, assign a new [Log Analytics workspace resource ID](#obtain-log-analytics-workspace-resource-id) in which the settings of the AMA-based solution should be stored, and then select **Next**.
Use the [script](https://github.com/mayguptMSFT/AzureMonitorCommunity/blob/maste
+### Compare data across the Log Analytics agent and Azure Monitoring Agent versions
+
+After you complete the onboarding to Change Tracking with the AMA version, select **Switch to CT with AMA** on the landing page to switch between the two versions and compare the following events.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/data-compare-log-analytics-monitoring-agent-inline.png" alt-text="Screenshot of data comparison from log analytics to Azure monitoring agent." lightbox="media/guidance-migration-log-analytics-monitoring-agent/data-compare-log-analytics-monitoring-agent-expanded.png":::
+
+For example, if the onboarding to the AMA version of the service takes place after 3rd November at 6:00 a.m., you can compare the data by keeping consistent filters across parameters like **Change Types** and **Time Range**. You can compare incoming logs in the **Changes** section and in the graphical section to be assured of data consistency.
+
+> [!NOTE]
+> You must compare the incoming data and logs after the onboarding to the AMA version is done.
+
### Obtain Log Analytics Workspace Resource ID

To obtain the Log Analytics Workspace resource ID, follow these steps:
To obtain the Log Analytics Workspace resource ID, follow these steps:
1. In **Log Analytics Workspace**, select the specific workspace and select **JSON View**.
1. Copy the **Resource ID**.
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/workspace-resource-inline.png" alt-text="Screenshot that shows the log analytics workspace ID." lightbox="media/guidance-migration-log-analytics-monitoring-agent/workspace-resource-expanded.png":::
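As an alternative to the portal steps, a sketch with Azure PowerShell (the resource group and workspace names below are hypothetical placeholders) returns the same resource ID:

```azurepowershell
# Sketch: read the workspace resource ID with the Az.OperationalInsights
# module. Both names below are hypothetical.
$workspace = Get-AzOperationalInsightsWorkspace `
    -ResourceGroupName "myResourceGroup" -Name "myWorkspace"
$workspace.ResourceId
```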
+ ## Limitations
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Arc resource bridge supports the following Azure regions:
* Canada Central * Australia East * Southeast Asia
+* East Asia
### Regional resiliency
Arc resource bridge typically releases a new version on a monthly cadence, at th
* Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md).
* Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
azure-arc Azure Arc Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md
When you [enable guest management](enable-guest-management-at-scale.md) on VMwar
## Agent components The Azure Connected Machine agent package contains several logical components bundled together:
azure-arc Browse And Enable Vcenter Resources In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md
After you've connected your VMware vCenter to Azure, you can browse your vCenter inventory from the Azure portal. Visit the VMware vCenter blade in Azure Arc center to view all the connected vCenters. From there, you'll browse your virtual machines (VMs), resource pools, templates, and networks. From the inventory of your vCenter resources, you can select and enable one or more resources in Azure. When you enable a vCenter resource in Azure, it creates an Azure resource that represents your vCenter resource. You can use this Azure resource to assign permissions or conduct management operations.
In this section, you will enable resource pools, networks, and other non-VM reso
1. From your browser, go to the vCenters blade on [Azure Arc Center](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) and navigate to your vCenter.
- :::image type="content" source="media/enable-guest-management.png" alt-text="Screenshot of how to enable an existing virtual machine in the Azure portal." lightbox="media/enable-guest-management.png":::
+ :::image type="content" source="media/browse-and-enable-vcenter-resources-in-azure/enable-guest-management.png" alt-text="Screenshot of how to enable an existing virtual machine in the Azure portal." lightbox="media/browse-and-enable-vcenter-resources-in-azure/enable-guest-management.png":::
1. Navigate to the VM inventory resource blade, select the VMs you want to enable, and then select **Enable in Azure**.
azure-arc Perform Vm Ops Through Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/perform-vm-ops-through-azure.md
In this article, you learn how to perform various operations on the Azure Arc-en
- Install extensions (enabling guest management is required). All the [extensions](../servers/manage-vm-extensions.md#extensions) that are available with Arc-enabled Servers are supported. To perform guest OS operations on Arc-enabled VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM.
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
You need a Windows or Linux machine that can access both your vCenter Server ins
4. In the **Platform** section, select **Add** under **VMware vCenter**.
- :::image type="content" source="media/add-vmware-vcenter.png" alt-text="Screenshot that shows how to add VMware vCenter through Azure Arc.":::
+ :::image type="content" source="media/quick-start-connect-vcenter-to-arc-using-script/add-vmware-vcenter.png" alt-text="Screenshot that shows how to add VMware vCenter through Azure Arc.":::
5. Select **Create a new resource bridge**, and then select **Next**.
You need a Windows or Linux machine that can access both your vCenter Server ins
13. If your subscription isn't registered with all the required resource providers, a **Register** button will appear. Select the button before you proceed to the next step.
- :::image type="content" source="media/register-arc-vmware-providers.png" alt-text="Screenshot that shows the button to register required resource providers during vCenter onboarding to Azure Arc.":::
+ :::image type="content" source="media/quick-start-connect-vcenter-to-arc-using-script/register-arc-vmware-providers.png" alt-text="Screenshot that shows the button to register required resource providers during vCenter onboarding to Azure Arc.":::
14. Based on the operating system of your workstation, download the PowerShell or Bash script and copy it to the [workstation](#prerequisites).
azure-arc Quick Start Create A Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md
Once your administrator has connected a VMware vCenter to Azure, represented VMw
1. From your browser, go to the [Azure portal](https://portal.azure.com). Navigate to virtual machines browse view. You'll see a unified browse experience for Azure and Arc virtual machines.
- :::image type="content" source="media/browse-virtual-machines.png" alt-text="Screenshot showing the unified browse experience for Azure and Arc virtual machines.":::
+ :::image type="content" source="media/quick-start-create-a-vm/browse-virtual-machines.png" alt-text="Screenshot showing the unified browse experience for Azure and Arc virtual machines." lightbox="media/quick-start-create-a-vm/browse-virtual-machines.png":::
2. Select **Add** and then select **Azure Arc machine** from the drop-down.
- :::image type="content" source="media/create-azure-arc-virtual-machine-1.png" alt-text="Screenshot showing the Basic tab for creating an Azure Arc virtual machine.":::
+ :::image type="content" source="media/quick-start-create-a-vm/create-azure-arc-virtual-machine.png" alt-text="Screenshot showing the Basic tab for creating an Azure Arc virtual machine." lightbox="media/quick-start-create-a-vm/create-azure-arc-virtual-machine.png":::
3. Select the **Subscription** and **Resource group** where you want to deploy the VM.
azure-arc Remove Vcenter From Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md
Uninstall extensions using the following steps:
3. Search and select the vCenter you want to remove from Azure Arc.
- ![Browse your VMware Inventory ](./media/browse-vmware-inventory.png)
+ :::image type="content" source="media/remove-vcenter-from-arc-vmware/browse-vmware-inventory.png" alt-text="Screenshot of where to browse your VMware Inventory from Azure portal." lightbox="media/remove-vcenter-from-arc-vmware/browse-vmware-inventory.png":::
4. Select **Virtual machines** under **vCenter inventory**.
azure-cache-for-redis Cache Remove Tls 10 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-remove-tls-10-11.md
The following sections provide guidance about how to detect dependencies on thes
You can find out whether your application works with TLS 1.2 by setting the **Minimum TLS version** value to TLS 1.2 on a test or staging cache, then running tests. The **Minimum TLS version** setting is in the [Advanced settings](cache-configure.md#advanced-settings) of your cache instance in the Azure portal. If the application continues to function as expected after this change, it's probably compliant. You also need to configure the Redis client library used by your application to enable TLS 1.2 to connect to Azure Cache for Redis.
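As a sketch of that test (assuming the `Az.RedisCache` module; the resource names are hypothetical), you can also set the minimum TLS version on a staging cache from Azure PowerShell:

```azurepowershell
# Sketch: require TLS 1.2 on a test cache before changing production.
# Resource group and cache names are hypothetical placeholders.
Set-AzRedisCache -ResourceGroupName "myResourceGroup" `
    -Name "myTestCache" -MinimumTlsVersion "1.2"
```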
+> [!NOTE]
+> With your cache open in the portal, select **Advanced** in the resource menu. If the **Minimum TLS version** for your cache instance is set to **Default**, it's set to TLS 1.2, the default value assigned when no explicit value is chosen.
+>
+
## Configure your application to use TLS 1.2

Most applications use Redis client libraries to handle communication with their caches. Here are instructions for configuring some of the popular client libraries, in various programming languages and frameworks, to use TLS 1.2.
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
azure-functions Deployment Zip Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/deployment-zip-push.md
Zip deployment is also an easy way to run your functions from the deployment pac
Azure Functions has the full range of continuous deployment and integration options that are provided by Azure App Service. For more information, see [Continuous deployment for Azure Functions](functions-continuous-deployment.md).
-To speed up development, you may find it easier to deploy your function app project files directly from a .zip file. The .zip deployment API takes the contents of a .zip file and extracts the contents into the `wwwroot` folder of your function app. This .zip file deployment uses the same Kudu service that powers continuous integration-based deployments, including:
+To speed up development, you might find it easier to deploy your function app project files directly from a .zip file. The .zip deployment API takes the contents of a .zip file and extracts the contents into the `wwwroot` folder of your function app. This .zip file deployment uses the same Kudu service that powers continuous integration-based deployments, including:
+ Deletion of files that were left over from earlier deployments.
+ Deployment customization, including running deployment scripts.
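As one way to invoke the .zip deployment API (a sketch, assuming the `Az.Websites` module; the resource group, app name, and archive path are hypothetical placeholders), you can push a package with `Publish-AzWebApp`:

```azurepowershell
# Sketch: deploy a local .zip package to a function app.
# All names and paths below are hypothetical placeholders.
Publish-AzWebApp -ResourceGroupName "myResourceGroup" `
    -Name "myFunctionApp" -ArchivePath "C:\project\functionapp.zip" -Force
```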
When you're using Azure CLI on your local computer, `<zip_file_path>` is the pat
[!INCLUDE [app-service-deploy-zip-push-rest](../../includes/app-service-deploy-zip-push-rest.md)]
+## <a name="arm"></a>Deploy by using ARM Template
+
+You can use [ZipDeploy ARM template extension](https://github.com/projectkudu/kudu/wiki/MSDeploy-VS.-ZipDeploy#zipdeploy) to push your .zip file to your function app.
+
+### Example ZipDeploy ARM Template
+
+This template includes both a production and staging slot and deploys to one or the other. Typically, you would use this template to deploy to the staging slot and then swap to get your new zip package running on the production slot.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "appServiceName": {
+ "type": "string"
+ },
+ "deployToProduction": {
+ "type": "bool",
+ "defaultValue": false
+ },
+ "slot": {
+ "type": "string",
+ "defaultValue": "staging"
+ },
+ "packageUri": {
+ "type": "secureString"
+ }
+ },
+ "resources": [
+ {
+ "condition": "[parameters('deployToProduction')]",
+ "type": "Microsoft.Web/sites/extensions",
+ "apiVersion": "2021-02-01",
+ "name": "[format('{0}/ZipDeploy', parameters('appServiceName'))]",
+ "properties": {
+ "packageUri": "[parameters('packageUri')]",
+ "appOffline": true
+ }
+ },
+ {
+ "condition": "[not(parameters('deployToProduction'))]",
+ "type": "Microsoft.Web/sites/slots/extensions",
+ "apiVersion": "2021-02-01",
+ "name": "[format('{0}/{1}/ZipDeploy', parameters('appServiceName'), parameters('slot'))]",
+ "properties": {
+ "packageUri": "[parameters('packageUri')]",
+ "appOffline": true
+ }
+ }
+ ]
+}
+```
+
+For the initial deployment, you would deploy directly to the production slot. For more information, see [Slot deployments](functions-infrastructure-as-code.md#slot-deployments).
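One way to run the template (a sketch; the resource group, template file name, and package URL are hypothetical placeholders) is with `New-AzResourceGroupDeployment`, which deploys to the staging slot unless `deployToProduction` is set:

```azurepowershell
# Sketch: deploy the ZipDeploy template. All names and the package URL
# below are hypothetical placeholders.
$packageUri = ConvertTo-SecureString -AsPlainText -Force `
    "https://example.blob.core.windows.net/packages/app.zip"
New-AzResourceGroupDeployment -ResourceGroupName "myResourceGroup" `
    -TemplateFile "./zipdeploy.json" `
    -appServiceName "myFunctionApp" `
    -packageUri $packageUri
```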
+
## Run functions from the deployment package

You can also choose to run your functions directly from the deployment package file. This method skips the deployment step of copying files from the package to the `wwwroot` directory of your function app. Instead, the package file is mounted by the Functions runtime, and the contents of the `wwwroot` directory become read-only.
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md
public static Task Run(
[DurableClient] IDurableOrchestrationClient starter) { // Orchestration input comes from the queue message content.
- return starter.StartNewAsync("HelloWorld", input);
+ return starter.StartNewAsync<string>("HelloWorld", input);
} ```
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
When using app settings, you should be aware of the following considerations:
+ You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values).
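As a sketch of the override naming convention (the `logLevel` value here is only illustrative), a host.json path maps to an application setting name by prefixing `AzureFunctionsJobHost__` and joining the JSON levels with double underscores:

```
# host.json fragment: { "logging": { "logLevel": { "default": "Information" } } }
# equivalent application setting name and value:
AzureFunctionsJobHost__logging__logLevel__default = Information
```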
-+ This article documents the settings that are most relevant to your function apps. Because Azure Functions runs on App Service, other application settings may also be supported. For more information, see [Environment variables and app settings in Azure App Service](../app-service/reference-app-settings.md).
++ This article documents the settings that are most relevant to your function apps. Because Azure Functions runs on App Service, other application settings might also be supported. For more information, see [Environment variables and app settings in Azure App Service](../app-service/reference-app-settings.md).
+ Some scenarios also require you to work with settings documented in [App Service site settings](#app-service-site-settings).
To learn more, see [Secret repositories](security-concepts.md#secret-repositorie
## AzureWebJobsStorage
-The Azure Functions runtime uses this storage account connection string for normal operation. Some uses of this storage account include key management, timer trigger management, and Event Hubs checkpoints. The storage account must be a general-purpose one that supports blobs, queues, and tables. See [Storage account](functions-infrastructure-as-code.md#storage-account) and [Storage account requirements](storage-considerations.md#storage-account-requirements).
+The Azure Functions runtime uses this storage account connection string for normal operation. Some uses of this storage account include key management, timer trigger management, and Event Hubs checkpoints. The storage account must be a general-purpose one that supports blobs, queues, and tables. For more information, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
|Key|Sample value|
|||
Used to customize the Java virtual machine (JVM) used to run your Java functions
Controls the managed dependencies background update period for PowerShell function apps, with a default value of `7.00:00:00` (weekly).
-Each PowerShell worker process initiates checking for module upgrades on the PowerShell Gallery on process start and every `MDMaxBackgroundUpgradePeriod` after that. When a new module version is available in the PowerShell Gallery, it's installed to the file system and made available to PowerShell workers. Decreasing this value lets your function app get newer module versions sooner, but it also increases the app resource usage (network I/O, CPU, storage). Increasing this value decreases the app's resource usage, but it may also delay delivering new module versions to your app.
+Each PowerShell worker process initiates checking for module upgrades on the PowerShell Gallery on process start and every `MDMaxBackgroundUpgradePeriod` after that. When a new module version is available in the PowerShell Gallery, it's installed to the file system and made available to PowerShell workers. Decreasing this value lets your function app get newer module versions sooner, but it also increases the app resource usage (network I/O, CPU, storage). Increasing this value decreases the app's resource usage, but it can also delay delivering new module versions to your app.
|Key|Sample value|
|||
To learn more, see [Dependency management](functions-reference-powershell.md#dep
Specifies how often each PowerShell worker checks whether managed dependency upgrades have been installed. The default frequency is `01:00:00` (hourly).
-After new module versions are installed to the file system, every PowerShell worker process must be restarted. Restarting PowerShell workers affects your app availability as it can interrupt current function execution. Until all PowerShell worker processes are restarted, function invocations may use either the old or the new module versions. Restarting all PowerShell workers completes within `MDNewSnapshotCheckPeriod`.
+After new module versions are installed to the file system, every PowerShell worker process must be restarted. Restarting PowerShell workers affects your app availability as it can interrupt current function execution. Until all PowerShell worker processes are restarted, function invocations can use either the old or the new module versions. Restarting all PowerShell workers completes within `MDNewSnapshotCheckPeriod`.
-Within every `MDNewSnapshotCheckPeriod`, the PowerShell worker checks whether or not managed dependency upgrades have been installed. When upgrades have been installed, a restart is initiated. Increasing this value decreases the frequency of interruptions because of restarts. However, the increase might also increase the time during which function invocations could use either the old or the new module versions, non-deterministically.
+Within every `MDNewSnapshotCheckPeriod`, the PowerShell worker checks whether or not managed dependency upgrades have been installed. When upgrades have been installed, a restart is initiated. Increasing this value decreases the frequency of interruptions because of restarts. However, the increase might also increase the time during which function invocations could use either the old or the new module versions, nondeterministically.
|Key|Sample value|
|||
The value for this setting indicates an extra index URL for custom packages for
To learn more, see [`pip` documentation for `--extra-index-url`](https://pip.pypa.io/en/stable/cli/pip_wheel/?highlight=index%20url#cmdoption-extra-index-url) and [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
+## PROJECT
+
+A [continuous deployment](./functions-continuous-deployment.md) setting that tells the Kudu deployment service which folder in a connected repository contains the deployable project.
+
+|Key|Sample value|
+|||
+|PROJECT |`WebProject/WebProject.csproj` |
+
## PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES

The configuration is specific to Python function apps. It defines the prioritization of module loading order. By default, this value is set to `0`.

|Key|Value|Description|
||--|--|
-|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`0`| Prioritize loading the Python libraries from internal Python worker's dependencies, which is the default behavior. Third-party libraries defined in requirements.txt may be shadowed. |
+|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`0`| Prioritize loading the Python libraries from internal Python worker's dependencies, which is the default behavior. Third-party libraries defined in requirements.txt might be shadowed. |
|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`1`| Prioritize loading the Python libraries from application's package defined in requirements.txt. This prevents your libraries from colliding with internal Python worker's libraries. |

## PYTHON_ENABLE_DEBUG_LOGGING
When debugging Python functions, make sure to also set a debug or trace [logging
## PYTHON\_ENABLE\_WORKER\_EXTENSIONS
-The configuration is specific to Python function apps. Setting this to `1` allows the worker to load in [Python worker extensions](functions-reference-python.md#python-worker-extensions) defined in requirements.txt. It enables your function app to access new features provided by third-party packages. It may also change the behavior of function load and invocation in your app. Ensure the extension you choose is trustworthy as you bear the risk of using it. Azure Functions gives no express warranties to any extensions. For how to use an extension, visit the extension's manual page or readme doc. By default, this value sets to `0`.
+The configuration is specific to Python function apps. Setting this to `1` allows the worker to load in [Python worker extensions](functions-reference-python.md#python-worker-extensions) defined in requirements.txt. It enables your function app to access new features provided by third-party packages. It can also change the behavior of function load and invocation in your app. Ensure the extension you choose is trustworthy as you bear the risk of using it. Azure Functions gives no express warranties to any extensions. For how to use an extension, visit the extension's manual page or readme doc. By default, this value sets to `0`.
|Key|Value|Description|
||--|--|
The above sample value of `1800` sets a timeout of 30 minutes. For more informat
## WEBSITE\_CONTENTAZUREFILECONNECTIONSTRING
-Connection string for storage account where the function app code and configuration are stored in event-driven scaling plans. For more information, see [Create a function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
+Connection string for storage account where the function app code and configuration are stored in event-driven scaling plans. For more information, see [Storage account connection setting](storage-considerations.md#storage-account-connection-setting).
|Key|Sample value|
|||
Connection string for storage account where the function app code and configurat
This setting is required for Consumption plan apps on Windows and for Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
-Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Changing or removing this setting can cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
## WEBSITE\_CONTENTOVERVNET
Supported on [Premium](functions-premium-plan.md) and [Dedicated (App Service) p
## WEBSITE\_CONTENTSHARE
-The file path to the function app code and configuration in an event-driven scaling plans. Used with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. Default is a unique string generated by the runtime that begins with the function app name. See [Create a function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
+The file path to the function app code and configuration in an event-driven scaling plans. Used with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. Default is a unique string generated by the runtime that begins with the function app name. For more information, see [Storage account connection setting](storage-considerations.md#storage-account-connection-setting).
|Key|Sample value|
|||
The file path to the function app code and configuration in an event-driven scal
This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
-Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Changing or removing this setting can cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
The following considerations apply when using an Azure Resource Manager (ARM) template to create a function app during deployment:
-+ When you don't set a `WEBSITE_CONTENTSHARE` value for the main function app or any apps in slots, unique share values are generated for you. Not setting `WEBSITE_CONTENTSHARE` is the recommended approach for an ARM template deployment.
++ When you don't set a `WEBSITE_CONTENTSHARE` value for the main function app or any apps in slots, unique share values are generated for you. Not setting `WEBSITE_CONTENTSHARE` _is the recommended approach_ for an ARM template deployment.
+ There are scenarios where you must set the `WEBSITE_CONTENTSHARE` value to a predefined share, such as when you [use a secured storage account in a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). In this case, you must set a unique share name for the main function app and the app for each deployment slot.
+ Don't make `WEBSITE_CONTENTSHARE` a slot setting.
+ When you specify `WEBSITE_CONTENTSHARE`, the value must follow [this guidance for share names](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#share-names).
-To learn more, see [Automate resource deployment for your function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
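For illustration only (the share name and storage account references here are placeholders, not values from this article), the two related settings might appear together in an ARM template's `appSettings` collection like this:

```json
"appSettings": [
  {
    "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
    "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1};EndpointSuffix=core.windows.net', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2022-05-01').keys[0].value)]"
  },
  {
    "name": "WEBSITE_CONTENTSHARE",
    "value": "my-predefined-share"
  }
]
```

Setting an explicit share value like this is only needed for scenarios such as a secured storage account in a virtual network; otherwise, omit `WEBSITE_CONTENTSHARE` and let a unique value be generated for you.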
-
## WEBSITE\_DNS\_SERVER

Sets the DNS server used by an app when resolving IP addresses. This setting is often required when using certain networking functionality, such as [Azure DNS private zones](functions-networking-options.md#azure-dns-private-zones) and [private endpoints](functions-networking-options.md#restrict-your-storage-account-to-a-virtual-network).
Valid values are either a URL that resolves to the location of a deployment pack
## WEBSITE\_SKIP\_CONTENTSHARE\_VALIDATION
-The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnectionstring) and [WEBSITE_CONTENTSHARE](#website_contentshare) settings have extra validation checks to ensure that the app can be properly started. Creation of application settings will fail if the function app can't properly call out to the downstream Storage Account or Key Vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise the value defaults to `0` and the validation will take place.
+The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnectionstring) and [WEBSITE_CONTENTSHARE](#website_contentshare) settings have extra validation checks to ensure that the app can be properly started. Creation of application settings fails when the function app can't properly call out to the downstream Storage Account or Key Vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise, the value defaults to `0` and the validation takes place.
|Key|Sample value|
|||
Indicates whether all outbound traffic from the app is routed through the virtua
Some configurations must be maintained at the App Service level as site settings, such as language versions. These settings are managed in the portal, by using REST APIs, or by using Azure CLI or Azure PowerShell. The following are site settings that could be required, depending on your runtime language, OS, and versions:
+### alwaysOn
+
+On a function app running in a [Dedicated (App Service) plan](./dedicated-plan.md), the functions runtime goes idle after a few minutes of inactivity, at which point only requests to an HTTP trigger _wake up_ your functions. To make sure that your non-HTTP triggered functions run correctly, including timer triggered functions, enable Always On for the function app by setting the `alwaysOn` site setting to a value of `true`.
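As a minimal sketch (resource names and other properties abridged with `...`, as in the template fragments elsewhere in this article), the site setting can be applied in an ARM template like this:

```json
{
  "type": "Microsoft.Web/sites",
  ...
  "properties": {
    ...
    "siteConfig": {
      "alwaysOn": true
    }
  }
}
```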
+ ### linuxFxVersion
-For function apps running on Linux, `linuxFxVersion` indicates the language and version for the language-specific worker process. This information is used, along with [`FUNCTIONS_EXTENSION_VERSION`](#functions_extension_version), to determine which specific Linux container image is installed to run your function app. This setting can be set to a pre-defined value or a custom image URI.
+For function apps running on Linux, `linuxFxVersion` indicates the language and version for the language-specific worker process. This information is used, along with [`FUNCTIONS_EXTENSION_VERSION`](#functions_extension_version), to determine which specific Linux container image is installed to run your function app. This setting can be set to a predefined value or a custom image URI.
-This value is set for you when you create your Linux function app. You may need to set it for ARM template and Bicep deployments and in certain upgrade scenarios.
+This value is set for you when you create your Linux function app. You might need to set it for ARM template and Bicep deployments and in certain upgrade scenarios.
#### Valid linuxFxVersion values
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
The following table explains the properties that you can set on the `options` ob
|||-|
|**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
|**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. When setting the `connection` property, the `topicEndpointUri` and `topicKeySetting` properties shouldn't be set. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
# [Model v3](#tab/nodejs-v3)
The following table explains the binding configuration properties that you set i
|**name** | The variable name used in function code that represents the event. |
|**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
|**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
-|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
+|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. When setting the `connection` property, the `topicEndpointUri` and `topicKeySetting` properties shouldn't be set. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
Use the following steps to configure a topic key:
When using version 3.3.x or higher of the extension, you can connect to an Event Grid topic using a [Microsoft Entra identity](../active-directory/fundamentals/active-directory-whatis.md) to avoid having to obtain and work with topic keys.
-To do this, create an application setting that returns the topic endpoint URI, where the name of the setting combines a unique _common prefix_, such as `myawesometopic`, with the value `__topicEndpointUri`. You then use the common prefix `myawesometopic` when you define the `Connection` property in the binding.
+You need to create an application setting that returns the topic endpoint URI. The name of the setting should combine a _unique common prefix_ (for example, `myawesometopic`) with the value `__topicEndpointUri`. Then, you must use that common prefix (in this case, `myawesometopic`) when you define the `Connection` property in the binding.
In this mode, the extension requires the following properties:
In this mode, the extension requires the following properties:
|--|-|||
| Topic Endpoint URI | `<CONNECTION_NAME_PREFIX>__topicEndpointUri` | The topic endpoint. | `https://<topic-name>.centralus-1.eventgrid.azure.net/api/events` |
-More properties may be set to customize the connection. See [Common properties for identity-based connections](functions-reference.md#common-properties-for-identity-based-connections).
+More properties can be used to customize the connection. See [Common properties for identity-based connections](functions-reference.md#common-properties-for-identity-based-connections).
> [!NOTE] > When using [Azure App Configuration](../azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for managed identity-based connections, setting names should use a valid key separator such as `:` or `/` in place of the `__` to ensure names are resolved correctly.
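As an illustrative sketch (`myawesometopic` is the sample prefix used above, and the endpoint URI follows the sample format shown earlier), the application setting would look like this:

```json
{
  "myawesometopic__topicEndpointUri": "https://<topic-name>.centralus-1.eventgrid.azure.net/api/events"
}
```

In the binding definition, you would then set `"connection": "myawesometopic"` and leave the `topicEndpointUri` and `topicKeySetting` properties unset.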
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
ms.devlang: csharp, java, javascript, python Last updated 09/04/2023
-zone_pivot_groups: programming-languages-set-functions-lang-workers
+zone_pivot_groups: programming-languages-set-functions
# Azure Functions warmup trigger
-This article explains how to work with the warmup trigger in Azure Functions. A warmup trigger is invoked when an instance is added to scale a running function app. The warmup trigger lets you define a function that's run when a new instance of your function app is started. You can use a warmup trigger to pre-load custom dependencies during the pre-warming process so your functions are ready to start processing requests immediately. Some actions for a warmup trigger might include opening connections, loading dependencies, or running any other custom logic before your app begins receiving traffic.
+This article explains how to work with the warmup trigger in Azure Functions. A warmup trigger is invoked when an instance is added to scale a running function app. The warmup trigger lets you define a function that runs when a new instance of your function app is started. You can use a warmup trigger to preload custom dependencies so your functions are ready to start processing requests immediately. Some actions for a warmup trigger might include opening connections, loading dependencies, or running any other custom logic before your app begins receiving traffic.
The following considerations apply when using a warmup trigger:
The following considerations apply when using a warmup trigger:
* The warmup trigger isn't supported on version 1.x of the Functions runtime.
* Support for the warmup trigger is provided by default in all development environments. You don't have to manually install the package or register the extension.
* There can be only one warmup trigger function per function app, and it can't be invoked after the instance is already running.
-* The warmup trigger is only called during scale-out operations, not during restarts or other non-scale startups. Make sure your logic can load all required dependencies without relying on the warmup trigger. Lazy loading is a good pattern to achieve this goal.
+* The warmup trigger is only called during scale-out operations, not during restarts or other nonscaling startups. Make sure your logic can load all required dependencies without relying on the warmup trigger. Lazy loading is a good pattern to achieve this goal.
* Dependencies created by warmup trigger should be shared with other functions in your app. To learn more, see [Static clients](manage-connections.md#static-clients).
-* If the [built-in authentication](../app-service/overview-authentication-authorization.md) (aka Easy Auth) is used, [HTTPS Only](../app-service/configure-ssl-bindings.md#enforce-https) should be enabled for the warmup trigger to get invoked.
+* If the [built-in authentication](../app-service/overview-authentication-authorization.md) (also known as Easy Auth) is used, [HTTPS Only](../app-service/configure-ssl-bindings.md#enforce-https) should be enabled for the warmup trigger to get invoked.
## Example
The following considerations apply when using a warmup trigger:
# [Isolated worker model](#tab/isolated-process)
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when it's added to your app.
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when added to your app.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Warmup/Warmup.cs" range="4-18":::

# [In-process model](#tab/in-process)
-The following example shows a [C# function](functions-dotnet-class-library.md) that runs on each new instance when it's added to your app.
+The following example shows a [C# function](functions-dotnet-class-library.md) that runs on each new instance when added to your app.
```cs
using Microsoft.Azure.WebJobs;
public void warmup( @WarmupTrigger Object warmupContext, ExecutionContext contex
```

::: zone-end
-The following example shows a warmup trigger in a *function.json* file and a [JavaScript function](functions-reference-node.md) that runs on each new instance when it's added to your app.
+# [Model v4](#tab/nodejs-v4)
+
+The following example shows a warmup trigger [JavaScript function](functions-reference-node.md) that runs on each new instance when added to your app.
+
+```javascript
+import { app } from "@azure/functions";
+
+app.warmup('warmupTrigger1', {
+ handler: (warmupContext, context) => {
+ context.log('Function App instance is warm.');
+ },
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+The following example shows a warmup trigger in a *function.json* file and a [JavaScript function](functions-reference-node.md) that runs on each new instance when added to your app.
Here's the *function.json* file:
The [configuration](#configuration) section explains these properties.
Here's the JavaScript code:
-```javascript
-module.exports = async function (context, warmupContext) {
+```JavaScript
+module.exports = async function (warmupContext, context) {
  context.log('Function App instance is warm.');
};
```
+# [Model v4](#tab/nodejs-v4)
+
+The following example shows a warmup trigger [JavaScript function](functions-reference-node.md) that runs on each new instance when added to your app.
+
+```typescript
+import { app, InvocationContext, WarmupContextOptions } from "@azure/functions";
+
+export async function warmupFunction(warmupContext: WarmupContextOptions, context: InvocationContext): Promise<void> {
+ context.log('Function App instance is warm.');
+}
+
+app.warmup('warmup', {
+ handler: warmupFunction,
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+TypeScript samples aren't documented for model v3.
+ ::: zone pivot="programming-language-powershell" Here's the *function.json* file:
PowerShell example code pending.
The following example shows a warmup trigger in a *function.json* file and a [Python function](functions-reference-python.md) that runs on each new instance when it's added to your app.
-Your function must be named `warmup` (case-insensitive) and there may only be one warmup function per app.
+Your function must be named `warmup` (case-insensitive) and there can only be one warmup function per app.
Here's the *function.json* file:
Use the `WarmupTrigger` attribute to define the function. This attribute has no
::: zone pivot="programming-language-java" ## Annotations
-Annotations aren't required by a warmup trigger. Just use a name of `warmup` (case-insensitive) for the `FunctionName` annotation.
+Warmup triggers don't require annotations. Just use a name of `warmup` (case-insensitive) for the `FunctionName` annotation.
::: zone-end ## Configuration
+# [Model v4](#tab/nodejs-v4)
+
+There are no properties that need to be set on the `options` object passed to the `app.warmup()` method.
+
+# [Model v3](#tab/nodejs-v3)
+ The following table explains the binding configuration properties that you set in the *function.json* file. |function.json property |Description|
The following table explains the binding configuration properties that you set i
| **direction** | Required - must be set to `in`. |
| **name** | Required - the variable name used in function code. A `name` of `warmupContext` is recommended for the binding parameter.|
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property |Description|
+||-|
+| **type** | Required - must be set to `warmupTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code. A `name` of `warmupContext` is recommended for the binding parameter.|
+ See the [Example section](#example) for complete examples.
The following considerations apply to using a warmup function in C#:
::: zone-end ::: zone pivot="programming-language-java" Your function must be named `warmup` (case-insensitive) using the `FunctionName` annotation.
+# [Model v4](#tab/nodejs-v4)
+See the list of considerations at the top of the page for general usage advice.
+# [Model v3](#tab/nodejs-v3)
+The function type in _function.json_ must be set to `warmupTrigger`.
The function type in function.json must be set to `warmupTrigger`.

::: zone-end
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Title: Automate function app resource deployment to Azure
-description: Learn how to build a Bicep file or an Azure Resource Manager template that deploys your function app.
-
+description: Learn how to build, validate, and use a Bicep file or an Azure Resource Manager template to deploy your function app and related Azure resources.
ms.assetid: d20743e3-aab6-442c-a836-9bcea09bfd32 Previously updated : 08/30/2022 Last updated : 10/26/2023
+zone_pivot_groups: functions-hosting-plan
# Automate resource deployment for your function app in Azure Functions
-You can use a Bicep file or an Azure Resource Manager template to deploy a function app. This article outlines the required resources and parameters for doing so. You might need to deploy other resources, depending on the [triggers and bindings](functions-triggers-bindings.md) in your function app. For more information about creating Bicep files, see [Understand the structure and syntax of Bicep files](../azure-resource-manager/bicep/file.md). For more information about creating templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
-
-For sample Bicep files and ARM templates, see:
-- [ARM templates for function app deployment](https://github.com/Azure-Samples/function-app-arm-templates)
-- [Function app on Consumption plan]
-- [Function app on Azure App Service plan]
-
-## Required resources
-
-An Azure Functions deployment typically consists of these resources:
-
-# [Bicep](#tab/bicep)
-
-| Resource | Requirement | Syntax and properties reference |
-||-|--|
-| A function app | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites?pivots=deployment-language-bicep) |
-| A [storage account](../storage/index.yml) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts?pivots=deployment-language-bicep) |
-| An [Application Insights](../azure-monitor/app/app-insights-overview.md) component | Optional | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components?pivots=deployment-language-bicep) |
-| A [hosting plan](./functions-scale.md) | Optional<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?pivots=deployment-language-bicep)
-
-# [JSON](#tab/json)
-
-| Resource | Requirement | Syntax and properties reference |
-||-|--|
-| A function app | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites?pivots=deployment-language-arm-template) |
-| A [storage account](../storage/index.yml) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts?pivots=deployment-language-arm-template) |
-| An [Application Insights](../azure-monitor/app/app-insights-overview.md) component | Optional | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components?pivots=deployment-language-arm-template) |
-| A [hosting plan](./functions-scale.md) | Optional<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?pivots=deployment-language-arm-template) |
---
-<sup>1</sup>A hosting plan is only required when you choose to run your function app on a [Premium plan](./functions-premium-plan.md) or on an [App Service plan](../app-service/overview-hosting-plans.md).
-
-> [!TIP]
-> While not required, it is strongly recommended that you configure Application Insights for your app.
-
+You can use a Bicep file or an Azure Resource Manager template to automate the process of deploying a function app to new or existing Azure resources. Such automation provides a great way to integrate your resource deployments with your source code in DevOps, restore a function app and related resources from a backup, or deploy an app topology multiple times.
+
+This article shows you how to automate the creation of resources and deployment for Azure Functions. Depending on the [triggers and bindings](functions-triggers-bindings.md) used by your functions, you might need to deploy other resources, which is beyond the scope of this article.
+
+The specific template code depends on how your function app is hosted, whether you're deploying code or a containerized function app, and the operating system used by your app. This article supports the following hosting options:
+
+| Hosting option | Deployment type | To learn more, see... |
+| -- | -- | -- |
+| [Azure Functions Consumption plan](functions-infrastructure-as-code.md?pivots=consumption-plan) | Code-only | [Consumption plan](./consumption-plan.md) |
+| [Azure Functions Elastic Premium plan](functions-infrastructure-as-code.md?pivots=premium-plan) | Code \| Container | [Premium plan](./functions-premium-plan.md)|
+| [Azure Functions Dedicated (App Service) plan](functions-infrastructure-as-code.md?pivots=dedicated-plan) | Code \| Container | [Dedicated plan](./dedicated-plan.md)|
+| [Azure Container Apps](functions-infrastructure-as-code.md?pivots=premium-plan) | Container-only | [Container Apps hosting of Azure Functions](functions-container-apps-hosting.md)|
+| [Azure Arc](functions-infrastructure-as-code.md?pivots=premium-plan) | Code \| Container | [App Service, Functions, and Logic Apps on Azure Arc (Preview)](../app-service/overview-arc-integration.md)|
+
+## Required resources
+An Azure Functions-hosted deployment typically consists of these resources:
+
+| Resource | Requirement | Syntax and properties reference |
+||-|-|
+| A [storage account](#create-storage-account) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts) |
+| An [Application Insights](#create-application-insights) component | Recommended | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components)|
+| A [hosting plan](#create-the-hosting-plan)| Required<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms) |
+| A [function app](#create-the-function-app) | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) |
+An Azure Functions deployment for a Consumption plan typically consists of these resources:
+
+| Resource | Requirement | Syntax and properties reference |
+||-|-|
+| A [storage account](#create-storage-account) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts) |
+| An [Application Insights](#create-application-insights) component | Recommended | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components)|
+| A [function app](#create-the-function-app) | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) |
+An Azure Container Apps-hosted deployment typically consists of these resources:
+
+| Resource | Requirement | Syntax and properties reference |
+||-|-|
+| A [storage account](#create-storage-account) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts) |
+| An [Application Insights](#create-application-insights) component | Recommended | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components)|
+| A [managed environment](./functions-container-apps-hosting.md) | Required | [Microsoft.App/managedEnvironments](/azure/templates/microsoft.app/managedenvironments) |
+| A [function app](#create-the-function-app) | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) |
+An Azure Arc-hosted deployment typically consists of these resources:
+
+| Resource | Requirement | Syntax and properties reference |
+||-|-|
+| A [storage account](#create-storage-account) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts) |
+| An [Application Insights](#create-application-insights) component | Recommended | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components)|
+| An [App Service Kubernetes environment](../app-service/overview-arc-integration.md#app-service-kubernetes-environment) | Required | [Microsoft.ExtendedLocation/customLocations](/azure/templates/microsoft.extendedlocation/customlocations) |
+| A [function app](#create-the-function-app) | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) |
+<sup>1</sup>An explicit hosting plan isn't required when you choose to host your function app in a [Consumption plan](./consumption-plan.md).
+
+When you deploy multiple resources in a single Bicep file or ARM template, the order in which resources are created is important. This requirement is a result of dependencies between resources. For such dependencies, make sure to use the `dependsOn` element to define the dependency in the dependent resource. For more information, see either [Define the order for deploying resources in ARM templates](../azure-resource-manager/templates/resource-dependency.md) or [Resource dependencies in Bicep](../azure-resource-manager/bicep/resource-dependencies.md).
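As a minimal sketch of the `dependsOn` element described above (parameter names are illustrative), a function app resource that must wait for the storage account declares the dependency like this in an ARM template:

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-03-01",
  "name": "[parameters('functionAppName')]",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
  ]
}
```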
+
+This article assumes that you have a basic understanding about [creating Bicep files](../azure-resource-manager/bicep/file.md) or [authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md), and examples are shown as individual sections for specific resources. For a broad set of complete Bicep file and ARM template examples, see [these function app deployment examples](/samples/browse/?expanded=azure&terms=%22azure%20functions%22&products=azure-resource-manager).
+## Prerequisites
+This article assumes that you have already created a [managed environment](../container-apps/environment.md) in Azure Container Apps. You need both the name and the ID of the managed environment to create a function app hosted on Container Apps.
+This article assumes that you have already created an [App Service-enabled custom location](../app-service/overview-arc-integration.md) on an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md). You need both the custom location ID and the Kubernetes environment ID to create a function app hosted in an Azure Arc custom location.
<a name="storage"></a>
-### Storage account
+## Create storage account
-A storage account is required for a function app. You need a general purpose account that supports blobs, tables, queues, and files. For more information, see [Azure Functions storage account requirements](storage-considerations.md#storage-account-requirements).
+All function apps require an Azure storage account. You need a general purpose account that supports blobs, tables, queues, and files. For more information, see [Azure Functions storage account requirements](storage-considerations.md#storage-account-requirements).
[!INCLUDE [functions-storage-access-note](../../includes/functions-storage-access-note.md)]
-# [Bicep](#tab/bicep)
-
-```bicep
-resource storageAccountName 'Microsoft.Storage/storageAccounts@2022-05-01' = {
- name: storageAccountName
- location: location
- kind: 'StorageV2'
- sku: {
- name: storageAccountType
- }
- properties: {
- supportsHttpsTrafficOnly: true
- defaultToOAuthAuthentication: true
- }
-}
-```
+This example section creates a Standard general-purpose v2 storage account:
-# [JSON](#tab/json)
+### [ARM template](#tab/json)
```json
"resources": [
resource storageAccountName 'Microsoft.Storage/storageAccounts@2022-05-01' = {
] ``` --
-You must also specify the `AzureWebJobsStorage` connection in the site configuration. This can be set in the `appSettings` collection in the `siteConfig` object:
+For more context, see the complete [azuredeploy.json](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-linux-consumption/azuredeploy.json#L77) file in the templates repository.
-# [Bicep](#tab/bicep)
+### [Bicep](#tab/bicep)
```bicep
-resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
- ...
+resource storageAccountName 'Microsoft.Storage/storageAccounts@2022-05-01' = {
+ name: storageAccountName
+ location: location
+ kind: 'StorageV2'
+ sku: {
+ name: storageAccountType
+ }
properties: {
- ...
- siteConfig: {
- ...
- appSettings: [
- {
- name: 'AzureWebJobsStorage'
- value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
- }
- ...
- ]
- }
+ supportsHttpsTrafficOnly: true
+ defaultToOAuthAuthentication: true
} } ```
-# [JSON](#tab/json)
+For more context, see the complete [main.bicep](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-linux-consumption/main.bicep#L37) file in the templates repository.
+++
+You must set the connection string of this storage account as the `AzureWebJobsStorage` app setting, which Functions requires. The templates in this article construct this connection string value from the created storage account, which is a best practice. For more information, see [Application configuration](#application-configuration).
+
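+This value can be constructed in Bicep from the created storage account, as in this illustrative sketch (the `storageAccount` resource symbol and `storageAccountName` parameter are assumptions based on the other examples in this article):
+
+```bicep
+// Hypothetical app setting entry inside the function app's siteConfig.appSettings collection.
+{
+  name: 'AzureWebJobsStorage'
+  // listKeys() requires the storage account to exist, which creates an implicit dependency.
+  value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+}
+```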
+### Enable storage logs
+
+Because the storage account is used for important function app data, you should monitor the account for modifications to that content. To do so, configure Azure Monitor resource logs for Azure Storage. In this example section, a Log Analytics workspace named `myLogAnalytics` is used as the destination for these logs.
+
+#### [ARM template](#tab/json)
```json "resources": [ {
- "type": "Microsoft.Web/sites",
- ...
+ "type": "Microsoft.Insights/diagnosticSettings",
+ "apiVersion": "2021-05-01-preview",
+ "scope": "[format('Microsoft.Storage/storageAccounts/{0}/blobServices/default', parameters('storageAccountName'))]",
+ "name": "[parameters('storageDataPlaneLogsName')]",
"properties": {
- ...
- "siteConfig": {
- ...
- "appSettings": [
+ "workspaceId": "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('myLogAnalytics'))]",
+ "logs": [
{
- "name": "AzureWebJobsStorage",
- "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
- },
- ...
- ]
- }
+ "category": "StorageWrite",
+ "enabled": true
+ }
+ ],
+ "metrics": [
+ {
+ "category": "Transaction",
+ "enabled": true
+ }
+ ]
} } ] ``` --
-In some hosting plan options, function apps should also have an Azure Files content share, and they will need additional app settings referencing this storage account. These are covered later in this article as a part of the hosting plan options to which this applies.
-
-#### Storage logs
-
-Because the storage account is used for important function app data, you may want to monitor for modification of that content. To do this, you need to configure Azure Monitor resource logs for Azure Storage. In the following example, a Log Analytics workspace named `myLogAnalytics` is used as the destination for these logs. This same workspace can be used for the Application Insights resource defined later.
-
-# [Bicep](#tab/bicep)
+#### [Bicep](#tab/bicep)
```bicep resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2021-09-01' existing = {
resource storageDataPlaneLogs 'Microsoft.Insights/diagnosticSettings@2021-05-01-
} ```
-# [JSON](#tab/json)
+
-```json
-"resources": [
- {
- "type": "Microsoft.Insights/diagnosticSettings",
- "apiVersion": "2021-05-01-preview",
- "scope": "[format('Microsoft.Storage/storageAccounts/{0}/blobServices/default', parameters('storageAccountName'))]",
- "name": "[parameters('storageDataPlaneLogsName')]",
- "properties": {
- "workspaceId": "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('myLogAnalytics'))]",
- "logs": [
- {
- "category": "StorageWrite",
- "enabled": true
- }
- ],
- "metrics": [
- {
- "category": "Transaction",
- "enabled": true
- }
- ]
- }
- }
-]
-```
+This same workspace can be used for the Application Insights resource defined later. For more information, including how to work with these logs, see [Monitoring Azure Storage](../storage/blobs/monitor-blob-storage.md).
+
+## Create Application Insights
+
+Application Insights is recommended for monitoring your function app executions. In this example section, the Application Insights resource is defined with the type `Microsoft.Insights/components` and the kind `web`:
+
+### [ARM template](#tab/json)
++
+For more context, see the complete [azuredeploy.json](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-linux-consumption/azuredeploy.json#L102) file in the templates repository.
+
+### [Bicep](#tab/bicep)
++
+For more context, see the complete [main.bicep](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-linux-consumption/main.bicep#L60) file in the templates repository.
-See [Monitoring Azure Storage](../storage/blobs/monitor-blob-storage.md) for instructions on how to work with these logs.
+The connection string must be provided to the function app using the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) application setting. For more information, see [Application settings](#application-configuration).
+
+The examples in this article obtain the connection string value from the created instance. Older versions instead used [`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey) to set the instrumentation key, which is no longer recommended.
+
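+As an illustrative sketch, the connection string can be read from the created resource in Bicep (the `applicationInsights` resource symbol is an assumption based on the other examples in this article):
+
+```bicep
+// Hypothetical app setting entry inside the function app's siteConfig.appSettings collection.
+{
+  name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+  // Reads the connection string directly from the Application Insights resource.
+  value: applicationInsights.properties.ConnectionString
+}
+```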
+## Create the hosting plan
-### Application Insights
+Apps hosted in an Azure Functions [Premium plan](./functions-premium-plan.md) or [Dedicated (App Service) plan](./dedicated-plan.md) must have the hosting plan explicitly defined.
+The Premium plan offers the same scaling as the Consumption plan but includes dedicated resources and extra capabilities. To learn more, see [Azure Functions Premium Plan](functions-premium-plan.md).
-Application Insights is recommended for monitoring your function apps. The Application Insights resource is defined with the type `Microsoft.Insights/components` and the kind **web**:
+A Premium plan is a special type of `serverfarm` resource. You can specify it by setting the `name` value in the `sku` object to `EP1`, `EP2`, or `EP3`. The way that you define the Functions hosting plan depends on whether your function app runs on Windows or on Linux. This example section creates an `EP1` plan:
-# [Bicep](#tab/bicep)
+### [Windows](#tab/windows/bicep)
```bicep
-resource applicationInsights 'Microsoft.Insights/components@2020-02-02' = {
- name: applicationInsightsName
- location: appInsightsLocation
- kind: 'web'
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'EP1'
+ tier: 'ElasticPremium'
+ family: 'EP'
+ }
+ kind: 'elastic'
properties: {
- Application_Type: 'web'
- Request_Source: 'IbizaWebAppExtensionCreate'
+ maximumElasticWorkerCount: 20
} } ```
-# [JSON](#tab/json)
+For more context, see the complete [main.bicep](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-premium-plan/main.bicep#L62) file in the templates repository.
+
+### [Windows](#tab/windows/json)
```json "resources": [ {
- "type": "Microsoft.Insights/components",
- "apiVersion": "2020-02-02",
- "name": "[parameters('applicationInsightsName')]",
- "location": "[parameters('appInsightsLocation')]",
- "kind": "web",
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "EP1",
+ "tier": "ElasticPremium",
+ "family": "EP"
+ },
+ "kind": "elastic",
"properties": {
- "Application_Type": "web",
- "Request_Source": "IbizaWebAppExtensionCreate"
+ "maximumElasticWorkerCount": 20
} } ] ``` -
+For more context, see the complete [azuredeploy.json](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-premium-plan/azuredeploy.json#L113) file in the templates repository.
-In addition, the instrumentation key needs to be provided to the function app using the `APPINSIGHTS_INSTRUMENTATIONKEY` application setting. This property is specified in the `appSettings` collection in the `siteConfig` object:
+### [Linux](#tab/linux/bicep)
-# [Bicep](#tab/bicep)
+To run your app on Linux, you must also set the property `"reserved": true` for the `serverfarms` resource:
```bicep
-resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
- ...
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'EP1'
+ tier: 'ElasticPremium'
+ family: 'EP'
+ }
+ kind: 'elastic'
properties: {
- ...
- siteConfig: {
- ...
- appSettings: [
- {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: appInsights.properties.InstrumentationKey
- }
- ...
- ]
- }
+ maximumElasticWorkerCount: 20
+ reserved: true
} } ```
-# [JSON](#tab/json)
+For more context, see the complete [main.bicep](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-premium-plan/main.bicep#L62) file in the templates repository.
+
+### [Linux](#tab/linux/json)
+
+To run your app on Linux, you must also set the property `"reserved": true` for the `serverfarms` resource:
```json "resources": [ {
- "type": "Microsoft.Web/sites",
- ...
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "EP1",
+ "tier": "ElasticPremium",
+    "family": "EP"
+ },
+ "kind": "elastic",
"properties": {
- ...
- "siteConfig": {
- ...
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
- },
- ...
- ]
- }
+ "maximumElasticWorkerCount": 20,
+ "reserved": true
} } ] ``` -
+For more context, see the complete [azuredeploy.json](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-premium-plan/azuredeploy.json#L113) file in the templates repository.
-### Hosting plan
+
-The definition of the hosting plan varies, and can be one of the following plans:
+For more information about the `sku` object, see [`SkuDescription`](/azure/templates/microsoft.web/serverfarms#skudescription) or review the example templates.
+In the Dedicated (App Service) plan, your function app runs on dedicated VMs on Basic, Standard, and Premium SKUs in App Service plans, similar to web apps. For more information, see [Dedicated plan](./dedicated-plan.md).
-- [Consumption plan](#consumption) (default)-- [Premium plan](#premium)-- [App Service plan](#app-service-plan)
+For a sample Bicep file/Azure Resource Manager template, see [Function app on Azure App Service plan].
-### Function app
+In Functions, the Dedicated plan is just a regular App Service plan, which is defined by a `serverfarm` resource. You must provide at least the `name` value. For the current list of supported plan names, see the `--sku` setting in [`az appservice plan create`](/cli/azure/appservice/plan#az-appservice-plan-create).
-The function app resource is defined by using a resource of type **Microsoft.Web/sites** and kind **functionapp**:
+The way that you define the hosting plan depends on whether your function app runs on Windows or on Linux:
-# [Bicep](#tab/bicep)
+### [Windows](#tab/windows/bicep)
```bicep
-resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
- name: functionAppName
+resource hostingPlanName 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
location: location
- kind: 'functionapp'
- identity:{
- type:'SystemAssigned'
- }
- properties: {
- serverFarmId: hostingPlan.id
- clientAffinityEnabled: false
- siteConfig: {
- alwaysOn: true
- }
- httpsOnly: true
+ sku: {
+ tier: 'Standard'
+ name: 'S1'
+ size: 'S1'
+ family: 'S'
+ capacity: 1
}
- dependsOn: [
- storageAccount
- ]
} ```
-# [JSON](#tab/json)
+For more context, see the complete [main.bicep](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-dedicated-plan/main.bicep#L62) file in the templates repository.
+
+### [Windows](#tab/windows/json)
```json
-"resources:": [
+"resources": [
{
- "type": "Microsoft.Web/sites",
+ "type": "Microsoft.Web/serverfarms",
"apiVersion": "2022-03-01",
- "name": "[parameters('functionAppName')]",
+ "name": "[parameters('hostingPlanName')]",
"location": "[parameters('location')]",
- "kind": "functionapp",
- "identity": {
- "type": "SystemAssigned"
- },
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
- "clientAffinityEnabled": false,
- "siteConfig": {
- "alwaysOn": true
- },
- "httpsOnly": true
- },
- "dependsOn": [
- "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]"
- ]
+ "sku": {
+ "tier": "Standard",
+ "name": "S1",
+ "size": "S1",
+ "family": "S",
+ "capacity": 1
+ }
} ] ``` --
-> [!IMPORTANT]
-> If you're explicitly defining a hosting plan, an additional item would be needed in the dependsOn array: `"[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"`
+For more context, see the complete [azuredeploy.json](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-dedicated-plan/azuredeploy.json#L112) file in the templates repository.
-A function app must include these application settings:
+### [Linux](#tab/linux/bicep)
-| Setting name | Description | Example values |
-||-||
-| AzureWebJobsStorage | A connection string to a storage account that the Functions runtime uses for internal queueing | See [Storage account](#storage) |
-| FUNCTIONS_EXTENSION_VERSION | The version of the Azure Functions runtime | `~4` |
-| FUNCTIONS_WORKER_RUNTIME | The language stack to be used for functions in this app | `dotnet`, `node`, `java`, `python`, or `powershell` |
-| WEBSITE_NODE_DEFAULT_VERSION | Only needed if using the `node` language stack on **Windows**, specifies the [version](./functions-reference-node.md#node-version) to use | `~14` |
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ tier: 'Standard'
+ name: 'S1'
+ size: 'S1'
+ family: 'S'
+ capacity: 1
+ }
+ properties: {
+ reserved: true
+ }
+}
+```
-These properties are specified in the `appSettings` collection in the `siteConfig` property:
+For more context, see the complete [main.bicep](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-dedicated-plan/main.bicep#L62) file in the templates repository.
-# [Bicep](#tab/bicep)
-
-```bicep
-resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
- ...
- properties: {
- ...
- siteConfig: {
- ...
- appSettings: [
- {
- name: 'AzureWebJobsStorage'
- value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${listKeys(storageAccountName, '2021-09-01').keys[0].value}'
- }
- {
- name: 'FUNCTIONS_WORKER_RUNTIME'
- value: 'node'
- }
- {
- name: 'WEBSITE_NODE_DEFAULT_VERSION'
- value: '~14'
- }
- {
- name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~4'
- }
- ...
- ]
- }
- }
-}
-
-```
-
-# [JSON](#tab/json)
+### [Linux](#tab/linux/json)
```json "resources": [ {
- "type": "Microsoft.Web/sites",
- ...
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "tier": "Standard",
+ "name": "S1",
+ "size": "S1",
+ "family": "S",
+ "capacity": 1
+ },
"properties": {
- ...
- "siteConfig": {
- ...
- "appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- ...
- ]
- }
+ "reserved": true
} } ] ``` --
-<a name="consumption"></a>
-## Deploy on Consumption plan
-
-The Consumption plan automatically allocates compute power when your code is running, scales out as necessary to handle load, and then scales in when code isn't running. You don't have to pay for idle VMs, and you don't have to reserve capacity in advance. To learn more, see [Azure Functions scale and hosting](consumption-plan.md).
-
-For a sample Bicep file/Azure Resource Manager template, see [Function app on Consumption plan].
+For more context, see the complete [azuredeploy.json](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-dedicated-plan/azuredeploy.json#L112) file in the templates repository.
-### Create a Consumption plan
+
-A Consumption plan doesn't need to be defined. When not defined, a plan is automatically be created or selected on a per-region basis when you create the function app resource itself.
+## Create the hosting plan
-The Consumption plan is a special type of `serverfarm` resource. You can specify it by using the `Dynamic` value for the `computeMode` and `sku` properties, as follows:
+You don't need to explicitly define a Consumption hosting plan resource. When you skip this resource definition, a plan is automatically either created or selected on a per-region basis when you create the function app resource itself.
-#### Windows
+You can explicitly define a Consumption plan as a special type of `serverfarm` resource, which you specify by using the value `Dynamic` for the `computeMode` and `sku` properties. This example section shows you how to explicitly define a Consumption plan. The way that you define a hosting plan depends on whether your function app runs on Windows or on Linux.
-# [Bicep](#tab/bicep)
+### [Windows](#tab/windows/bicep)
```bicep resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
} ```
-# [JSON](#tab/json)
+For more context, see the complete [main.bicep](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-windows-consumption/main.bicep#L40) file in the templates repository.
+
+### [Windows](#tab/windows/json)
```json "resources": [
resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
] ``` -
+For more context, see the complete [azuredeploy.json](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-windows-consumption/azuredeploy.json#L67) file in the templates repository.
-#### Linux
-To run your app on Linux, you must also set the property `"reserved": true` for the `serverfarms` resource:
+### [Linux](#tab/linux/bicep)
-# [Bicep](#tab/bicep)
+To run your app on Linux, you must also set the property `"reserved": true` for the `serverfarms` resource:
```bicep resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
} ```
-# [JSON](#tab/json)
+For more context, see the complete [main.bicep](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-linux-consumption/main.bicep#L46) file in the templates repository.
+
+### [Linux](#tab/linux/json)
+
+To run your app on Linux, you must also set the property `"reserved": true` for the `serverfarms` resource:
```json "resources": [
resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
] ``` -
-### Create a function app
+For more context, see the complete [azuredeploy.json](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-linux-consumption/azuredeploy.json#L115) file in the templates repository.
-When you explicitly define your Consumption plan, you must set the `serverFarmId` property on the app so that it points to the resource ID of the plan. Make sure that the function app has a `dependsOn` setting that also references the plan.
-
-The settings required by a function app running in Consumption plan differ between Windows and Linux.
-
-#### Windows
-
-On Windows, a Consumption plan requires another two other settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). This property configures the storage account where the function app code and configuration are stored.
+
+
+## Kubernetes environment
+Azure Functions can be deployed to [Azure Arc-enabled Kubernetes](../app-service/overview-arc-integration.md) either as a code project or a containerized function app.
-For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Windows Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-windows-consumption).
+To create the app and plan resources, you must have already [created an App Service Kubernetes environment](../app-service/manage-create-arc-environment.md) for an Azure Arc-enabled Kubernetes cluster. The examples in this article assume you have the resource IDs of the custom location (`customLocationId`) and the App Service Kubernetes environment (`kubeEnvironmentId`) to which you're deploying, which are defined as parameters in this example:
-# [Bicep](#tab/bicep)
+### [ARM template](#tab/json)
-```bicep
-resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
- name: functionAppName
- location: location
- kind: 'functionapp'
- properties: {
- serverFarmId: hostingPlan.id
- siteConfig: {
- appSettings: [
- {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: applicationInsights.properties.InstrumentationKey
- }
- {
- name: 'AzureWebJobsStorage'
- value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
- }
- {
- name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
- value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
- }
- {
- name: 'WEBSITE_CONTENTSHARE'
- value: toLower(functionAppName)
- }
- {
- name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~4'
- }
- {
- name: 'FUNCTIONS_WORKER_RUNTIME'
- value: 'node'
- }
- {
- name: 'WEBSITE_NODE_DEFAULT_VERSION'
- value: '~14'
- }
- ]
- }
+```json
+"parameters": {
+ "kubeEnvironmentId" : {
+ "type": "string"
+ },
+ "customLocationId" : {
+ "type": "string"
} } ```
-# [JSON](#tab/json)
+### [Bicep](#tab/bicep)
-```json
-"resources": [
- {
- "type": "Microsoft.Web/sites",
- "apiVersion": "2022-03-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
- "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
- "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTSHARE",
- "value": "[toLower(parameters('functionAppName'))]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- }
- ]
- }
- }
- }
-]
+```bicep
+param kubeEnvironmentId string
+param customLocationId string
```
-> [!IMPORTANT]
-> Don't set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting in a new deployment slot. This setting is generated for you when the app is created in the deployment slot.
-
-#### Linux
+Both sites and plans must reference the custom location through an `extendedLocation` field. As shown in this truncated example, `extendedLocation` sits outside of `properties`, as a peer to `kind` and `location`:
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you're just deploying code, the value for this property is determined by your desired runtime stack in the format of runtime|runtimeVersion. For example: `python|3.7`, `node|14` and `dotnet|3.1`.
+### [ARM template](#tab/json)
-For Linux Consumption plan it is also required to add the two other settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare).
-
-For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Linux Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption).
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
- name: functionAppName
- location: location
- kind: 'functionapp,linux'
- properties: {
- reserved: true
- serverFarmId: hostingPlan.id
- siteConfig: {
- linuxFxVersion: 'node|14'
- appSettings: [
- {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: applicationInsights.properties.InstrumentationKey
- }
- {
- name: 'AzureWebJobsStorage'
- value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
- }
- {
- name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~4'
- }
- {
- name: 'FUNCTIONS_WORKER_RUNTIME'
- value: 'node'
- }
- ]
- }
+```json
+{
+ "type": "Microsoft.Web/serverfarms",
+ ...
+  "extendedLocation": {
+    "name": "[parameters('customLocationId')]"
+  },
+  "properties": {
+    ...
} } ```
-# [JSON](#tab/json)
+### [Bicep](#tab/bicep)
-```json
-"resources": [
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ ...
{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2022-03-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp,linux",
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
- "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
- ],
- "properties": {
- "reserved": true,
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
- "siteConfig": {
- "linuxFxVersion": "node|14",
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02).InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- }
- ]
- }
+ extendedLocation: {
+ name: customLocationId
} }
-]
+}
```
-<a name="premium"></a>
-## Deploy on Premium plan
-
-The Premium plan offers the same scaling as the Consumption plan but includes dedicated resources and extra capabilities. To learn more, see [Azure Functions Premium Plan](./functions-premium-plan.md).
-
-### Create a Premium plan
+The plan resource should use the Kubernetes (`K1`) value for `SKU`, the `kind` field should be `linux,kubernetes`, and the `reserved` property should be `true`, since it's a Linux deployment. You must also set the `extendedLocation` and `kubeEnvironmentProfile.id` fields to the custom location ID and the Kubernetes environment ID, respectively, as shown in this example section:
-A Premium plan is a special type of `serverfarm` resource. You can specify it by using either `EP1`, `EP2`, or `EP3` for the `Name` property value in the `sku` as shown in the following samples:
-
-#### Windows
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
- name: hostingPlanName
- location: location
- sku: {
- name: 'EP1'
- tier: 'ElasticPremium'
- family: 'EP'
- }
- kind: 'elastic'
- properties: {
- maximumElasticWorkerCount: 20
- }
-}
-```
-
-# [JSON](#tab/json)
+### [ARM template](#tab/json)
```json "resources": [
resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
"apiVersion": "2022-03-01", "name": "[parameters('hostingPlanName')]", "location": "[parameters('location')]",
+ "kind": "linux,kubernetes",
"sku": {
- "name": "EP1",
- "tier": "ElasticPremium",
- "family": "EP"
+ "name": "K1",
+ "tier": "Kubernetes"
+ },
+ "extendedLocation": {
+ "name": "[parameters('customLocationId')]"
},
- "kind": "elastic",
"properties": {
- "maximumElasticWorkerCount": 20
+ "kubeEnvironmentProfile": {
+ "id": "[parameters('kubeEnvironmentId')]"
+ },
+ "reserved": true
} } ] ``` --
-#### Linux
-
-To run your app on Linux, you must also set property `"reserved": true` for the serverfarms resource:
-
-# [Bicep](#tab/bicep)
+### [Bicep](#tab/bicep)
```bicep resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = { name: hostingPlanName location: location
+ kind: 'linux,kubernetes'
sku: {
- name: 'EP1'
- tier: 'ElasticPremium'
- family: 'EP'
+ name: 'K1'
+ tier: 'Kubernetes'
+ }
+ extendedLocation: {
+ name: customLocationId
}
- kind: 'elastic'
properties: {
- maximumElasticWorkerCount: 20
+ kubeEnvironmentProfile: {
+ id: kubeEnvironmentId
+ }
reserved: true } } ```
-# [JSON](#tab/json)
+
-```json
-"resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2022-03-01",
- "name": "[parameters('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "EP1",
- "tier": "ElasticPremium",
- "family": "EP",
- },
- "kind": "elastic",
- "properties": {
- "maximumElasticWorkerCount": 20,
- "reserved": true
- }
- }
-]
-```
-
-### Create a function app
+## Create the function app
-For function app on a Premium plan, you'll need to set the `serverFarmId` property on the app so that it points to the resource ID of the plan. You should ensure that the function app has a `dependsOn` setting for the plan as well.
+The function app resource is defined by using a resource of type `Microsoft.Web/sites` with a `kind` value that includes `functionapp`, at a minimum.
+The way that you define a function app resource depends on whether you're hosting on Linux or on Windows:
+### [Windows](#tab/windows)
-A Premium plan requires another settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). This property configures the storage account where the function app code and configuration are stored, which are used for dynamic scale.
+For a list of application settings required when running on Windows, see [Application configuration](#application-configuration). For a sample Bicep file/Azure Resource Manager template, see the [function app hosted on Windows in a Consumption plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-windows-consumption) template.
-For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Premium Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-premium-plan).
+### [Linux](#tab/linux)
-The settings required by a function app running in Premium plan differ between Windows and Linux.
+
+For a sample Bicep file or ARM template, see the [function app hosted on Linux Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption) template.
-#### Windows
+
+### [Windows](#tab/windows)
+
+For a list of application settings required when running on Windows, see [Application configuration](#application-configuration).
+
+### [Linux](#tab/linux)
+
-# [Bicep](#tab/bicep)
+
+>[!NOTE]
+>If you choose to optionally define your Consumption plan, you must set the `serverFarmId` property on the app so that it points to the resource ID of the plan. Make sure that the function app has a `dependsOn` setting that also references the plan. If you didn't explicitly define a plan, one gets created for you.
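As a minimal sketch of an explicitly defined Consumption plan referenced from a function app (parameter names here are illustrative; a Consumption plan uses the `Y1`/`Dynamic` SKU):

```bicep
// Illustrative sketch: an explicit Consumption plan and a function app that references it.
resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: hostingPlanName
  location: location
  sku: {
    name: 'Y1'
    tier: 'Dynamic'
  }
}

resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
  name: functionAppName
  location: location
  kind: 'functionapp'
  properties: {
    // Point the app at the plan's resource ID.
    serverFarmId: hostingPlan.id
  }
  // In Bicep, the symbolic reference above creates the dependency implicitly;
  // an explicit dependsOn entry is only needed when no direct reference exists.
}
```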
+### [Windows](#tab/windows/bicep)
```bicep
resource functionAppName_resource 'Microsoft.Web/sites@2022-03-01' = {
  siteConfig: {
    appSettings: [
      {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: applicationInsightsName.properties.InstrumentationKey
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: applicationInsightsName.properties.ConnectionString
      }
      {
        name: 'AzureWebJobsStorage'
}
```
-# [JSON](#tab/json)
+For a complete end-to-end example, see this [main.bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-windows-consumption/main.bicep).
+
+### [Windows](#tab/windows/json)
```json
"resources": [
  {
    "siteConfig": {
      "appSettings": [
        {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').ConnectionString]"
        },
        {
          "name": "AzureWebJobsStorage",
]
```

---
-> [!IMPORTANT]
-> You don't need to set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting because it's generated for you when the site is first created.
+For a complete end-to-end example, see this [azuredeploy.json template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-windows-consumption/azuredeploy.json).
-#### Linux
-
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you're just deploying code, the value for this property is determined by your desired runtime stack in the format of runtime|runtimeVersion. For example: `python|3.7`, `node|14` and `dotnet|3.1`.
-
-# [Bicep](#tab/bicep)
+# [Linux](#tab/linux/bicep)
```bicep
resource functionApp 'Microsoft.Web/sites@2021-02-01' = {
      linuxFxVersion: 'node|14'
      appSettings: [
        {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: applicationInsightsName.properties.InstrumentationKey
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: applicationInsightsName.properties.ConnectionString
        }
        {
          name: 'AzureWebJobsStorage'
}
```
-# [JSON](#tab/json)
+For a complete end-to-end example, see this [main.bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-linux-consumption/main.bicep).
+
+# [Linux](#tab/linux/json)
```json
"resources": [
  {
    "linuxFxVersion": "node|14",
    "appSettings": [
      {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').ConnectionString]"
      },
      {
        "name": "AzureWebJobsStorage",
]
```

---
-<a name="app-service-plan"></a>
-## Deploy on App Service plan
-
-In the App Service plan, your function app runs on dedicated VMs on Basic, Standard, and Premium SKUs, similar to web apps. For details about how the App Service plan works, see the [Azure App Service plans in-depth overview](../app-service/overview-hosting-plans.md).
-
-For a sample Bicep file/Azure Resource Manager template, see [Function app on Azure App Service plan].
-
-### Create a Dedicated plan
-
-In Functions, the Dedicated plan is just a regular App Service plan, which is defined by a `serverfarm` resource. You can specify the SKU as follows:
-
-#### Windows
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource hostingPlanName 'Microsoft.Web/serverfarms@2022-03-01' = {
- name: hostingPlanName
- location: location
- sku: {
- tier: 'Standard'
- name: 'S1'
- size: 'S1'
- family: 'S'
- capacity: 1
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-"resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2022-03-01",
- "name": "[parameters('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "Standard",
- "name": "S1",
- "size": "S1",
- "family": "S",
- "capacity": 1
- }
- }
-]
-```
---
-#### Linux
-
-To run your app on Linux, you must also set property `"reserved": true` for the serverfarms resource:
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
- name: hostingPlanName
- location: location
- sku: {
- tier: 'Standard'
- name: 'S1'
- size: 'S1'
- family: 'S'
- capacity: 1
- }
- properties: {
- reserved: true
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-"resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2022-03-01",
- "name": "[parameters('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "Standard",
- "name": "S1",
- "size": "S1",
- "family": "S",
- "capacity": 1
- },
- "properties": {
- "reserved": true
- }
- }
-]
-```
+For a complete end-to-end example, see this [azuredeploy.json template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-linux-consumption/azuredeploy.json).
-### Create a function app
-
-For function app on a Dedicated plan, you must set the `serverFarmId` property on the app so that it points to the resource ID of the plan. Make sure that the function app has a `dependsOn` setting that also references the plan.
-
-On App Service plan, you should enable the `"alwaysOn": true` setting under site config so that your function app runs correctly. On an App Service plan, the functions runtime goes idle after a few minutes of inactivity, so only HTTP triggers will "wake up" your functions.
-
-The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on Dedicated plan.
-
-For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Dedicated Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-dedicated-plan).
-
-The settings required by a function app running in Dedicated plan differ between Windows and Linux.
-
-#### Windows
-
-# [Bicep](#tab/bicep)
+### [Windows](#tab/windows/bicep)
```bicep
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
      alwaysOn: true
      appSettings: [
        {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: applicationInsightsName.properties.InstrumentationKey
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: applicationInsightsName.properties.ConnectionString
        }
        {
          name: 'AzureWebJobsStorage'
}
```
-# [JSON](#tab/json)
+For a complete end-to-end example, see this [main.bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-dedicated-plan/main.bicep).
+
+### [Windows](#tab/windows/json)
```json
"resources": [
  {
    "alwaysOn": true,
    "appSettings": [
      {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').ConnectionString]"
      },
      {
        "name": "AzureWebJobsStorage",
]
```

---
-#### Linux
+For a complete end-to-end example, see this [azuredeploy.json template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-dedicated-plan/azuredeploy.json).
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you're just deploying code, the value for this property is determined by your desired runtime stack in the format of runtime|runtimeVersion. Examples of `linuxFxVersion` property include: `python|3.7`, `node|14` and `dotnet|3.1`.
+### [Linux](#tab/linux/bicep)
-# [Bicep](#tab/bicep)
```bicep
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
      linuxFxVersion: 'node|14'
      appSettings: [
        {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: applicationInsightsName.properties.InstrumentationKey
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: applicationInsightsName.properties.ConnectionString
        }
        {
          name: 'AzureWebJobsStorage'
}
```
-# [JSON](#tab/json)
+For a complete end-to-end example, see this [main.bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-dedicated-plan/main.bicep).
+
+### [Linux](#tab/linux/json)
```json
"resources": [
  {
    "linuxFxVersion": "node|14",
    "appSettings": [
      {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').ConnectionString]"
      },
      {
        "name": "AzureWebJobsStorage",
        "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
      },
      {
        "name": "FUNCTIONS_EXTENSION_VERSION",
        "value": "~4"
      },
      {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+
+For a complete end-to-end example, see this [azuredeploy.json template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-dedicated-plan/azuredeploy.json).
+++
+## Deployment sources
+
+Your Bicep file or ARM template can optionally also define a deployment for your function code, which could include these methods:
+
++ [Zip deployment package](./deployment-zip-push.md)
++ [Linux container](./functions-how-to-custom-container.md)
+To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are deployed in Azure. In most examples, top-level configurations are applied by using `siteConfig`. It's important to set these configurations at a top level, because they convey information to the Functions runtime and deployment engine. Top-level information is required before the child `sourcecontrols/web` resource is applied. Although it's possible to configure these settings in the child-level `config/appSettings` resource, in some cases your function app must be deployed *before* `config/appSettings` is applied.
+
+## Zip deployment package
+
+Zip deployment is a recommended way to deploy your function app code. By default, functions that use zip deployment run in the deployment package itself. For more information, including the requirements for a deployment package, see [Zip deployment for Azure Functions](deployment-zip-push.md). When using resource deployment automation, you can reference the .zip deployment package in your Bicep or ARM template.
+
+To use zip deployment in your template, set the `WEBSITE_RUN_FROM_PACKAGE` setting in the app to `1` and include the `/zipDeploy` resource definition.
+For a Consumption plan on Linux, instead set the URI of the deployment package directly in the `WEBSITE_RUN_FROM_PACKAGE` setting, as shown in [this example template](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption#L152).
+This example adds a zip deployment source to an existing app:
+
+### [ARM template](#tab/json)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "functionAppName": {
+ "type": "string",
+ "metadata": {
+ "description": "The name of the Azure Function app."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "The location into which the resources should be deployed."
+ }
+ },
+ "packageUri": {
+ "type": "string",
+ "metadata": {
+ "description": "The zip content url."
+ }
+ }
+ },
+ "resources": [
+ {
+ "name": "[concat(parameters('functionAppName'), '/ZipDeploy')]",
+ "type": "Microsoft.Web/sites/extensions",
+ "apiVersion": "2021-02-01",
+ "location": "[parameters('location')]",
+ "properties": {
+ "packageUri": "[parameters('packageUri')]"
+ }
+ }
+ ]
+}
+```
+### [Bicep](#tab/bicep)
+
+```bicep
+@description('The name of the function app.')
+param functionAppName string
+
+@description('The location into which the resources should be deployed.')
+param location string = resourceGroup().location
+
+@description('The zip content url.')
+param packageUri string
+
+resource functionAppName_ZipDeploy 'Microsoft.Web/sites/extensions@2021-02-01' = {
+ name: '${functionAppName}/ZipDeploy'
+ location: location
+ properties: {
+ packageUri: packageUri
+ }
+}
+```
++
+Keep the following things in mind when including zip deployment resources in your template:
+
++ Consumption plans on Linux don't support `WEBSITE_RUN_FROM_PACKAGE = 1`. You must instead set the URI of the deployment package directly in the `WEBSITE_RUN_FROM_PACKAGE` setting. For more information, see [WEBSITE\_RUN\_FROM\_PACKAGE](functions-app-settings.md#website_run_from_package). For an example template, see [Function app hosted on Linux in a Consumption plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption).
+
++ The `packageUri` must be a location that can be accessed by Functions. Consider using Azure blob storage with a shared access signature (SAS). After the SAS expires, Functions can no longer access the share for deployments. When you regenerate your SAS, remember to update the `WEBSITE_RUN_FROM_PACKAGE` setting with the new URI value.
+
++ Make sure to always set all required application settings in the `appSettings` collection when adding or updating settings. Existing settings not explicitly set are removed by the update. For more information, see [Application configuration](#application-configuration).
+
++ Functions doesn't support Web Deploy (msdeploy) for package deployments. You must instead use zip deployment in your deployment pipelines and automation. For more information, see [Zip deployment for Azure Functions](deployment-zip-push.md).
+
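As a sketch of the Linux Consumption case above (the storage account, container, blob name, and SAS token are all placeholder values), the application setting carries the full SAS-protected package URL instead of `1`:

```json
{
  "name": "WEBSITE_RUN_FROM_PACKAGE",
  "value": "https://mystorageaccount.blob.core.windows.net/deployments/functionapp.zip?sv=2022-11-02&sr=b&sig=<placeholder-sas-token>"
}
```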
+## Remote builds
+
+The deployment process assumes that the .zip file that you use for a zip deployment contains a ready-to-run app. This means that by default no customizations are run.
+
+However, there are scenarios that require you to rebuild your app remotely, such as when you need to pull Linux-specific packages in Python or Node.js apps that you developed on a Windows computer. In this case, you can configure Functions to perform a remote build on your code after the zip deployment.
+
+The way that you request a remote build depends on the operating system to which you are deploying:
+
+### [Windows](#tab/windows)
+
+When an app is deployed to Windows, language-specific commands (like `dotnet restore` for C# apps or `npm install` for Node.js apps) are run.
+
+To enable the same build processes that you get with continuous integration, add `SCM_DO_BUILD_DURING_DEPLOYMENT=true` to your application settings in your deployment code and remove the `WEBSITE_RUN_FROM_PACKAGE` setting entirely.
+
+### [Linux](#tab/linux)
+
+To enable the same build processes that you get with continuous integration, add `SCM_DO_BUILD_DURING_DEPLOYMENT=true` to your application settings in your deployment code and remove the `WEBSITE_RUN_FROM_PACKAGE` setting entirely.
+
+The `ENABLE_ORYX_BUILD` setting is set to `true` by default. If you have issues building a .NET or Java function app, instead set it to `false`.
+
+Function apps that are built remotely on Linux can run from a package.
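As a minimal sketch, the remote-build configuration described above amounts to these entries in the site's `appSettings` collection (illustrative fragment; note that no `WEBSITE_RUN_FROM_PACKAGE` entry is present):

```bicep
// Fragment of a siteConfig.appSettings collection requesting a remote build.
{
  name: 'SCM_DO_BUILD_DURING_DEPLOYMENT' // run the build during zip deployment
  value: 'true'
}
{
  name: 'ENABLE_ORYX_BUILD' // defaults to true on Linux; set 'false' if .NET or Java builds fail
  value: 'true'
}
```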
+
+
+
+## Linux containers
+
+If you're deploying a [containerized function app](./functions-how-to-custom-container.md) to an Azure Functions Premium or Dedicated plan, you must:
+
++ Set the [`linuxFxVersion`](functions-app-settings.md#linuxfxversion) site setting with the identifier of your container image.
++ Set any required [`DOCKER_REGISTRY_SERVER_*`](#application-configuration) settings when obtaining the container from a private registry.
++ Set [`WEBSITES_ENABLE_APP_SERVICE_STORAGE`](../app-service/reference-app-settings.md#custom-containers) application setting to `false`.
+
+For more information, see [Application configuration](#application-configuration).
+
+### [ARM template](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_URL",
+ "value": "[parameters('dockerRegistryUrl')]"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_USERNAME",
+ "value": "[parameters('dockerRegistryUsername')]"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
+ "value": "[parameters('dockerRegistryPassword')]"
+ },
+ {
+ "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
+ "value": "false"
}
- ]
+ ],
+ "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag"
      }
    }
  }
]
```

---
-### Custom Container Image
-
-If you're [deploying a custom container image](./functions-how-to-custom-container.md), you must specify it with `linuxFxVersion` and include configuration that allows your image to be pulled, as in [Web App for Containers](../app-service/index.yml). Also, set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `false`, since your app content is provided in the container itself:
-
-# [Bicep](#tab/bicep)
+### [Bicep](#tab/bicep)
```bicep
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
}
```
-# [JSON](#tab/json)
++
+When deploying [containerized functions to Azure Container Apps](./functions-container-apps-hosting.md), your template must:
+
++ Set the `kind` field to a value of `functionapp,linux,container,azurecontainerapps`.
++ Set the `managedEnvironmentId` site property to the fully qualified URI of the Container Apps environment.
++ Add a resource link in the site's `dependsOn` collection when creating a `Microsoft.App/managedEnvironments` resource at the same time as the site.
+
+The definition of a containerized function app deployed from a private container registry to an existing Container Apps environment might look like this example:
+
+### [ARM template](#tab/json)
```json
"resources": [
  {
    "type": "Microsoft.Web/sites",
    "apiVersion": "2022-03-01",
    "name": "[parameters('functionAppName')]",
+ "kind": "functionapp,linux,container,azurecontainerapps",
"location": "[parameters('location')]",
- "kind": "functionapp",
"dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
      "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
    ],
    "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "serverFarmId": "[parameters('hostingPlanName')]",
"siteConfig": {
+ "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag",
"appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- },
          {
            "name": "FUNCTIONS_EXTENSION_VERSION",
            "value": "~4"
          },
          {
- "name": "DOCKER_REGISTRY_SERVER_URL",
- "value": "[parameters('dockerRegistryUrl')]"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_USERNAME",
- "value": "[parameters('dockerRegistryUsername')]"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
- "value": "[parameters('dockerRegistryPassword')]"
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
            },
            {
- "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
- "value": "false"
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').ConnectionString]"
            }
          ],
- "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag"
- }
- }
- }
-]
-```
---
-## Deploy to Azure Arc
-
-Azure Functions can be deployed to [Azure Arc-enabled Kubernetes](../app-service/overview-arc-integration.md). This process largely follows [deploying to an App Service plan](#deploy-on-app-service-plan), with a few differences to note.
-
-To create the app and plan resources, you must have already [created an App Service Kubernetes environment](../app-service/manage-create-arc-environment.md) for an Azure Arc-enabled Kubernetes cluster. These examples assume you have the resource ID of the custom location and App Service Kubernetes environment that you're deploying to. For most Bicep files/ARM templates, you can supply these values as parameters.
-
-# [Bicep](#tab/bicep)
-
-```bicep
-param kubeEnvironmentId string
-param customLocationId string
-```
-
-# [JSON](#tab/json)
-
-```json
-"parameters": {
- "kubeEnvironmentId" : {
- "type": "string"
- },
- "customLocationId" : {
- "type": "string"
- }
-}
-```
---
-Both sites and plans must reference the custom location through an `extendedLocation` field. This block sits outside of `properties`, peer to `kind` and `location`:
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
- ...
- {
- extendedLocation: {
- name: customLocationId
- }
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- ...
- {
- "extendedLocation": {
- "name": "[parameters('customLocationId')]"
- },
- }
-}
-```
---
-The plan resource should use the Kubernetes (K1) SKU, and its `kind` field should be `linux,kubernetes`. Within `properties`, `reserved` should be `true`, and `kubeEnvironmentProfile.id` should be set to the App Service Kubernetes environment resource ID. An example plan might look like:
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
- name: hostingPlanName
- location: location
- kind: 'linux,kubernetes'
- sku: {
- name: 'K1'
- tier: 'Kubernetes'
- }
- extendedLocation: {
- name: customLocationId
- }
- properties: {
- kubeEnvironmentProfile: {
- id: kubeEnvironmentId
- }
- reserved: true
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-"resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2022-03-01",
- "name": "[parameters('hostingPlanName')]",
- "location": "[parameters('location')]",
- "kind": "linux,kubernetes",
- "sku": {
- "name": "K1",
- "tier": "Kubernetes"
- },
- "extendedLocation": {
- "name": "[parameters('customLocationId')]"
- },
- "properties": {
- "kubeEnvironmentProfile": {
- "id": "[parameters('kubeEnvironmentId')]"
},
- "reserved": true
+ "managedEnvironmentId": "[parameters('managedEnvironmentId')]"
    }
  }
]
```

---
-The function app resource should have its `kind` field set to **functionapp,linux,kubernetes** or **functionapp,linux,kubernetes,container** depending on if you intend to deploy via code or container. An example .NET 6.0 function app might look like:
-
-# [Bicep](#tab/bicep)
+### [Bicep](#tab/bicep)
```bicep
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
  name: functionAppName
- kind: 'kubernetes,functionapp,linux,container'
+ kind: 'functionapp,linux,container,azurecontainerapps'
location: location
- extendedLocation: {
- name: customLocationId
- }
  properties: {
    serverFarmId: hostingPlanName
    siteConfig: {
- linuxFxVersion: 'DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart'
+ linuxFxVersion: 'DOCKER|myacr.azurecr.io/myimage:mytag'
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~4'
        }
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
        }
        {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: applicationInsightsName.properties.InstrumentationKey
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: applicationInsightsName.properties.ConnectionString
        }
      ]
- alwaysOn: true
}
+ managedEnvironmentId: managedEnvironmentId
  }
  dependsOn: [
    storageAccount
    hostingPlan
  ]
}
```
-# [JSON](#tab/json)
++
+When deploying functions to Azure Arc, the value you set for the `kind` field of the function app resource depends on the type of deployment:
+
+| Deployment type | `kind` field value |
+|-|-|
+| Code-only deployment | `functionapp,linux,kubernetes` |
+| Container deployment | `functionapp,linux,kubernetes,container` |
+
+You must also set the `customLocationId` as you did for the [hosting plan resource](#create-the-hosting-plan).
+
+The definition of a containerized function app, using a .NET 6 quickstart image, might look like this example:
+
+### [ARM template](#tab/json)
```json
"resources": [
  {
    "properties": {
      "serverFarmId": "[parameters('hostingPlanName')]",
      "siteConfig": {
- "linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart",
+ "linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/4-dotnet-isolated6.0-appservice-quickstart",
        "appSettings": [
          {
            "name": "FUNCTIONS_EXTENSION_VERSION",
            "value": "~4"
          },
          {
            "name": "AzureWebJobsStorage",
            "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
          },
          {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').ConnectionString]"
          }
        ],
        "alwaysOn": true
]
```
+### [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ kind: 'kubernetes,functionapp,linux,container'
+ location: location
+ extendedLocation: {
+ name: customLocationId
+ }
+ properties: {
+ serverFarmId: hostingPlanName
+ siteConfig: {
+ linuxFxVersion: 'DOCKER|mcr.microsoft.com/azure-functions/4-dotnet-isolated6.0-appservice-quickstart'
+ appSettings: [
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: applicationInsightsName.properties.ConnectionString
+ }
+ ]
+ alwaysOn: true
+ }
+ }
+ dependsOn: [
+ storageAccount
+ hostingPlan
+ ]
+}
+```
+
-## Customizing a deployment
+## Application configuration
+
+Functions provides the following options for configuring your function app in Azure:
+
+| Configuration | `Microsoft.Web/sites` property |
+| - | - |
+| Site settings | `siteConfig` |
+| Application settings | `siteConfig.appSettings` collection |
+
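The split between the two locations can be sketched like this (a minimal illustrative fragment; the specific settings shown are examples, not a complete list):

```bicep
// Sketch: site settings go directly under siteConfig; application settings
// go in the siteConfig.appSettings collection.
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
  name: functionAppName
  location: location
  kind: 'functionapp'
  properties: {
    siteConfig: {
      netFrameworkVersion: 'v6.0' // site setting
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION' // application setting
          value: '~4'
        }
      ]
    }
  }
}
```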
+The following site settings are required on the `siteConfig` property:
+### [Windows](#tab/windows)
+
++ [`alwaysOn`](functions-app-settings.md#alwayson)
++ [`netFrameworkVersion`](functions-app-settings.md#netframeworkversion)
+
+### [Linux](#tab/linux)
+
++ [`alwaysOn`](functions-app-settings.md#alwayson)
++ [`linuxFxVersion`](functions-app-settings.md#linuxfxversion)
++ [`netFrameworkVersion`](functions-app-settings.md#netframeworkversion)
+
+
+
+### [Windows](#tab/windows)
+
++ [`netFrameworkVersion`](functions-app-settings.md#netframeworkversion)
+
+### [Linux](#tab/linux)
+
++ [`linuxFxVersion`](functions-app-settings.md#linuxfxversion)
++ [`netFrameworkVersion`](functions-app-settings.md#netframeworkversion) (C#/PowerShell-only)
+
+
+
++ [`linuxFxVersion`](functions-app-settings.md#linuxfxversion)
++ [`alwaysOn`](functions-app-settings.md#alwayson)
+These application settings are required (or recommended) for a specific operating system and hosting option:
+
+### [Windows](#tab/windows)
+
++ [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string)
++ [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage)
++ [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version)
++ [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime)
++ [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring)
++ [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare)
++ [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) (recommended)
++ [`WEBSITE_NODE_DEFAULT_VERSION`](functions-app-settings.md#website_node_default_version) (Node.js-only)
+
+### [Linux](#tab/linux)
+
++ [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string)
++ [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage)
++ [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version)
++ [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime)
++ [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring)
++ [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare)
+
+### [Windows](#tab/windows)
+
++ [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string)
++ [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage)
++ [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version)
++ [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime)
++ [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring)
++ [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare)
++ [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) (recommended)
++ [`WEBSITE_NODE_DEFAULT_VERSION`](functions-app-settings.md#website_node_default_version) (Node.js-only)
+
+### [Linux](#tab/linux)
+
++ [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string)
++ [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage)
++ [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version)
++ [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime)
++ [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring)
++ [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare)
++ [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) (recommended)
+
+### [Windows](#tab/windows)
+
++ [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string)
++ [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage)
++ [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version)
++ [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime)
++ [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) (recommended)
++ [`WEBSITE_NODE_DEFAULT_VERSION`](functions-app-settings.md#website_node_default_version) (Node.js-only)
+
+### [Linux](#tab/linux)
+
++ [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string)
++ [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage)
++ [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version)
++ [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime)
++ [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) (recommended)
+
+
+
+
+These application settings are required for container deployments:
+
++ [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string)
++ [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage)
++ [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version)
+
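For illustration, the required container-deployment settings could be expressed as a `siteConfig.appSettings` collection. This is a partial, hedged sketch rather than a complete site definition; the `storageAccount` and `applicationInsights` symbolic names are assumed to be resources declared elsewhere in the same template:

```bicep
// Partial sketch: assumes storageAccount and applicationInsights are
// declared elsewhere in the same Bicep file.
siteConfig: {
  appSettings: [
    {
      name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
      value: applicationInsights.properties.ConnectionString
    }
    {
      name: 'AzureWebJobsStorage'
      value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccount.name};AccountKey=${storageAccount.listKeys().keys[0].value}'
    }
    {
      name: 'FUNCTIONS_EXTENSION_VERSION'
      value: '~4'
    }
  ]
}
```

Defining these settings inline on the `Microsoft.Web/sites` resource, as shown earlier in the article, makes sure they're present on initial startup.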
+
+Keep these considerations in mind when working with site and application settings using Bicep files or ARM templates:
+ :::zone pivot="consumption-plan,premium-plan,dedicated-plan"
++ There are important considerations for using [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) in an automated deployment.
++ For container deployments, also set [`WEBSITES_ENABLE_APP_SERVICE_STORAGE`](../app-service/reference-app-settings.md#custom-containers) to `false`, since your app content is provided in the container itself.
++ You should always define your application settings as a `siteConfig/appSettings` collection of the `Microsoft.Web/sites` resource being created, as is done in the examples in this article. This makes sure that the settings that your function app needs to run are available on initial startup.
++ When adding or updating application settings using templates, make sure that you include all existing settings with the update. You must do this because the underlying update REST API calls replace the entire `/config/appsettings` resource. If you remove the existing settings, your function app won't run. To programmatically update individual application settings, you can instead use the Azure CLI, Azure PowerShell, or the Azure portal to make these changes. For more information, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+
+## Slot deployments
+
+Functions lets you deploy different versions of your code to unique endpoints in your function app. This makes it easier to develop, validate, and deploy function updates without impacting functions running in production. Deployment slots are a feature of Azure App Service. The number of slots available [depends on your hosting plan](./functions-scale.md#service-limits). For more information, see [Azure Functions deployment slots](functions-deployment-slots.md).
+
+A slot resource is defined in the same way as a function app resource (`Microsoft.Web/sites`), but instead you use the `Microsoft.Web/sites/slots` resource identifier. For an example deployment (in both Bicep and ARM templates) that creates both a production and a staging slot in a Premium plan, see [Azure Function App with a Deployment Slot](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-deployment-slot).
+
+To learn about how to perform the swap by using templates, see [Automate with Resource Manager templates](../app-service/deploy-staging-slots.md#automate-with-resource-manager-templates).
+
+Keep the following considerations in mind when working with slot deployments:
+
++ Don't explicitly set the `WEBSITE_CONTENTSHARE` setting in the deployment slot definition. This setting is generated for you when the app is created in the deployment slot.
++ When you swap slots, some application settings are considered "sticky," in that they stay with the slot and not with the code being swapped. For more information, see [Manage settings](functions-deployment-slots.md#manage-settings).
+
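As a hedged sketch, a staging slot might be declared as a child of an existing site like this. The `functionApp` and `location` symbols are assumed to be defined elsewhere in your template, and the slot name `staging` is just an example:

```bicep
// Illustrative only: a staging slot for an existing function app.
// WEBSITE_CONTENTSHARE is intentionally omitted, per the considerations above.
resource stagingSlot 'Microsoft.Web/sites/slots@2022-03-01' = {
  parent: functionApp
  name: 'staging'
  location: location
  properties: {
    siteConfig: {
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~4'
        }
      ]
    }
  }
}
```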
+## Secured deployments
+
+You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. Your function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md).
+
+These projects provide both Bicep and ARM template examples of how to deploy your function apps in a virtual network, including with network access restrictions:
+
+| Restricted scenario | Description |
+| - | - |
+| [Create a function app with virtual network integration](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-vnet-integration) | Your function app is created in a virtual network with full access to resources in that network. Inbound and outbound access to your function app isn't restricted. For more information, see [Virtual network integration](functions-networking-options.md#virtual-network-integration). |
+| [Create a function app that accesses a secured storage account](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-storage-private-endpoints) | Your created function app uses a secured storage account, which Functions accesses by using private endpoints. For more information, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). |
+| [Create a function app and storage account that both use private endpoints](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-private-endpoints-storage-private-endpoints) | Your created function app can only be accessed by using private endpoints, and it uses private endpoints to access storage resources. For more information, see [Private endpoints](functions-networking-options.md#private-endpoints). |
+
+### Restricted network settings
+
+You might also need to use these settings when your function app has network restrictions:
+
+| Setting | Value | Description |
+| - | - | - |
+| [`WEBSITE_CONTENTOVERVNET`](functions-app-settings.md#website_contentovervnet) | `1` | Application setting that enables your function app to scale when the storage account is restricted to a virtual network. For more information, see [Restrict your storage account to a virtual network](functions-networking-options.md#restrict-your-storage-account-to-a-virtual-network).|
+| [`vnetrouteallenabled`](functions-app-settings.md#vnetrouteallenabled) | `1` | Site setting that forces all traffic from the function app to use the virtual network. For more information, see [Regional virtual network integration](functions-networking-options.md#regional-virtual-network-integration). This site setting supersedes the application setting [`WEBSITE_VNET_ROUTE_ALL`](./functions-app-settings.md#website_vnet_route_all). |
+
+### Considerations for network restrictions
+
+When you restrict access to the storage account through private endpoints, you can't access the storage account through the portal or from any device outside the virtual network. You can grant access to your secured IP address or virtual network in the storage account by [managing the default network access rule](../storage/common/storage-network-security.md#change-the-default-network-access-rule).
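As an assumption-laden sketch, granting an integrated subnet access in the storage account definition might look like the following. The `subnetId` symbol is hypothetical and stands for the resource ID of your integrated subnet:

```bicep
// Illustrative only: a storage account that denies public access by default
// and allows traffic from one virtual network subnet (subnetId is assumed).
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    networkAcls: {
      defaultAction: 'Deny'
      virtualNetworkRules: [
        {
          id: subnetId
        }
      ]
    }
  }
}
```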
+## Create your template
-A function app has many child resources that you can use in your deployment, including app settings and source control options. You also might choose to remove the **sourcecontrols** child resource, and use a different [deployment option](functions-continuous-deployment.md) instead.
+Experts with Bicep or ARM templates can manually code their deployments using a simple text editor. For the rest of us, there are several ways to make the development process easier:
-Considerations for custom deployments:
++ **Visual Studio Code**: There are extensions available to help you work with both [Bicep files](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) and [ARM templates](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). You can use these tools to help make sure that your code is correct, and they provide some [basic validation](functions-infrastructure-as-code.md?tabs=vs-code#validate-your-template).
-+ To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are deployed in Azure. In the following example, top-level configurations are applied by using `siteConfig`. It's important to set these configurations at a top level, because they convey information to the Functions runtime and deployment engine. Top-level information is required before the child **sourcecontrols/web** resource is applied. Although it's possible to configure these settings in the child-level **config/appSettings** resource, in some cases your function app must be deployed *before* **config/appSettings** is applied. For example, when you're using functions with [Logic Apps](../logic-apps/index.yml), your functions are a dependency of another resource.
++ **Azure portal**: When you [create your function app and related resources in the portal](./functions-create-function-app-portal.md), the final **Review + create** screen has a **Download a template for automation** link.
- # [Bicep](#tab/bicep)
-
- ```bicep
- resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
- name: functionAppName
- location: location
- kind: 'functionapp'
- properties: {
- serverFarmId: hostingPlan.id
- siteConfig: {
- alwaysOn: true
- appSettings: [
- {
- name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~4'
- }
- {
- name: 'Project'
- value: 'src'
- }
- ]
- }
- }
- dependsOn: [
- storageAccount
- ]
- }
-
- resource config 'Microsoft.Web/sites/config@2022-03-01' = {
- parent: functionApp
- name: 'appsettings'
- properties: {
- AzureWebJobsStorage: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
- AzureWebJobsDashboard: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
- FUNCTIONS_EXTENSION_VERSION: '~4'
- FUNCTIONS_WORKER_RUNTIME: 'dotnet'
- Project: 'src'
- }
- dependsOn: [
- sourcecontrol
- storageAccount
- ]
- }
-
- resource sourcecontrol 'Microsoft.Web/sites/sourcecontrols@2022-03-01' = {
- parent: functionApp
- name: 'web'
- properties: {
- repoUrl: repoUrl
- branch: branch
- isManualIntegration: true
- }
- }
- ```
-
- # [JSON](#tab/json)
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Web/sites",
- "apiVersion": "2022-03-01",
- "name": "[variables('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "alwaysOn": true,
- "appSettings": [
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "Project",
- "value": "src"
- }
- ]
- }
- },
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ]
- },
- {
- "type": "Microsoft.Web/sites/config",
- "apiVersion": "2022-03-01",
- "name": "[format('{0}/{1}', variables('functionAppName'), 'appsettings')]",
- "properties": {
- "AzureWebJobsStorage": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
- "AzureWebJobsDashboard": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
- "FUNCTIONS_EXTENSION_VERSION": "~4",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
- "Project": "src"
- },
- "dependsOn": [
- "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
- "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('functionAppName'), 'web')]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ]
- },
- {
- "type": "Microsoft.Web/sites/sourcecontrols",
- "apiVersion": "2022-03-01",
- "name": "[format('{0}/{1}', variables('functionAppName'), 'web')]",
- "properties": {
- "repoUrl": "[parameters('repoURL')]",
- "branch": "[parameters('branch')]",
- "isManualIntegration": true
- },
- "dependsOn": [
- "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
- ]
- }
- ]
- ```
+ :::image type="content" source="media/functions-infrastructure-as-code/portal-download-template.png" alt-text="Download template link from the Azure Functions creation process in the Azure portal.":::
-
+ This link shows you the ARM template generated based on the options you chose in the portal. While this template can be a bit complex when you're creating a function app with many new resources, it can provide a good reference for how your ARM template might look.
+
+## Validate your template
-+ The previous Bicep file and ARM template use the [Project](https://github.com/projectkudu/kudu/wiki/Customizing-deployments#using-app-settings-instead-of-a-deployment-file) application settings value, which sets the base directory in which the Functions deployment engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the **src** folder. So, in the preceding example, we set the app settings value to `src`. If your functions are in the root of your repository, or if you're not deploying from source control, you can remove this app settings value.
+When you manually create your deployment template file, it's important to validate your template before deployment. All deployment methods validate your template syntax and raise a `validation failed` error message, as shown in the following JSON-formatted example:
-+ When updating application settings using Bicep or ARM, make sure that you include all existing settings. You must do this because the underlying REST APIs calls replace the existing application settings when the update APIs are called.
+```json
+{"error":{"code":"InvalidTemplate","message":"Deployment template validation failed: 'The resource 'Microsoft.Web/sites/func-xyz' is not defined in the template. Please see https://aka.ms/arm-template for usage details.'.","additionalInfo":[{"type":"TemplateViolation","info":{"lineNumber":0,"linePosition":0,"path":""}}]}}
+```
-## Deploy your template
+The following methods can be used to validate your template before deployment:
-You can use any of the following ways to deploy your Bicep file and template:
+### [Azure Pipelines](#tab/devops)
-# [Bicep](#tab/bicep)
+The following [Azure resource group deployment v2 task](/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops&preserve-view=true) with `deploymentMode: 'Validation'` instructs Azure Pipelines to validate the template.
-- [Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
-- [PowerShell](../azure-resource-manager/bicep/deploy-powershell.md)
+```yml
+- task: AzureResourceManagerTemplateDeployment@3
+ inputs:
+ deploymentScope: 'Resource Group'
+ subscriptionId: # Required subscription ID
+ action: 'Create Or Update Resource Group'
+ resourceGroupName: # Required resource group name
+ location: # Required when action == Create Or Update Resource Group
+ templateLocation: 'Linked artifact'
+ csmFile: # Required when TemplateLocation == Linked Artifact
+ csmParametersFile: # Optional
+ deploymentMode: 'Validation'
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+You can use the [`az deployment group validate`](/cli/azure/deployment/group#az-deployment-group-validate) command to validate your template, as shown in the following example:
+
+```azurecli-interactive
+az deployment group validate --resource-group <resource-group-name> --template-file <template-file-location> --parameters functionAppName='<function-app-name>' packageUri='<zip-package-location>'
+```
+
+### [Visual Studio Code](#tab/vs-code)
+
+In [Visual Studio Code](https://code.visualstudio.com/), install the latest [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) or [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
-# [JSON](#tab/json)
+These extensions report syntactic errors in your code before deployment. For some examples of errors, see the [Fix validation error](../azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md#fix-validation-error) section of the troubleshooting article.
+++
+You can also create a test resource group to find [preflight](../azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md?tabs=azure-cli#fix-preflight-error) and [deployment](../azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md?tabs=azure-cli#fix-deployment-error) errors.
+
+## Deploy your template
+
+You can use any of the following ways to deploy your Bicep file and template:
+
+### [ARM template](#tab/json)
- [Azure portal](../azure-resource-manager/templates/deploy-portal.md)
- [Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
- [PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
+### [Bicep](#tab/bicep)
+
+- [Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
+- [PowerShell](../azure-resource-manager/bicep/deploy-powershell.md)
+ ### Deploy to Azure button
Here's an example that uses HTML:
The following PowerShell commands create a resource group and deploy a Bicep file/ARM template that creates a function app with its required resources. To run locally, you must have [Azure PowerShell](/powershell/azure/install-azure-powershell) installed. Run [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount) to sign in.
-# [Bicep](#tab/bicep)
+#### [ARM template](#tab/json)
```powershell
# Register Resource Providers if they're not already registered
Register-AzResourceProvider -ProviderNamespace "microsoft.storage"
New-AzResourceGroup -Name "MyResourceGroup" -Location 'West Europe'

# Deploy the template
-New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile main.bicep -Verbose
+New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile azuredeploy.json -Verbose
```
-# [JSON](#tab/json)
+#### [Bicep](#tab/bicep)
```powershell
# Register Resource Providers if they're not already registered
Register-AzResourceProvider -ProviderNamespace "microsoft.storage"
New-AzResourceGroup -Name "MyResourceGroup" -Location 'West Europe'

# Deploy the template
-New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile azuredeploy.json -Verbose
+New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile main.bicep -Verbose
```
Learn more about how to develop and configure Azure Functions.
- [Azure Functions developer reference](functions-reference.md)
- [How to configure Azure function app settings](functions-how-to-use-azure-function-app-settings.md)
-- [Create your first Azure function](./functions-get-started.md)
+- [Create your first Azure function](functions-get-started.md)
<!-- LINKS -->
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
Title: Azure Functions Premium plan
-description: Details and configuration options (VNet, no cold start, unlimited execution duration) for the Azure Functions Premium plan.
+description: Details and configuration options (virtual network, no cold start, unlimited execution duration) for the Azure Functions Premium plan.
Previously updated : 08/08/2022 Last updated : 11/07/2023
The following articles show you how to create a function app with a Premium plan
+ [Azure portal](create-premium-plan-function-app-portal.md) + [Azure CLI](scripts/functions-cli-create-premium-plan.md)
-+ [Azure Resource Manager template](functions-infrastructure-as-code.md#deploy-on-premium-plan)
++ [Azure Resource Manager template](functions-infrastructure-as-code.md?pivots=premium-plan)

## Eliminate cold starts
-When events or executions don't occur in the Consumption plan, your app may scale to zero instances. When new events come in, a new instance with your app running on it must be specialized. Specializing new instances may take some time depending on the app. This extra latency on the first call is often called app _cold start_.
+When events or executions don't occur in the Consumption plan, your app might scale to zero instances. When new events come in, a new instance with your app running on it must be specialized. Specializing new instances takes time, depending on the app. This extra latency on the first call is often called app [_cold start_](./event-driven-scaling.md#cold-start).
-Premium plan provides two features that work together to effectively eliminate cold starts in your functions: _always ready instances_ and _pre-warmed instances_. Always ready instances are a number of pre-allocated instances unaffected by scaling, and the pre-warmed ones are a buffer as you scale due to HTTP events.
+Premium plan provides two features that work together to effectively eliminate cold starts in your functions: _always ready instances_ and _prewarmed instances_. Always ready instances are a category of preallocated instances unaffected by scaling, and the prewarmed ones are a buffer as you scale due to HTTP events.
-When events begin to trigger the app, they're first routed to the always ready instances. As the function becomes active due to HTTP events, additional instances will be warmed as a buffer. These buffered instances are called pre-warmed instances. This buffer reduces cold start for new instances required during scale.
+When events begin to trigger the app, they're first routed to the always ready instances. As the function becomes active due to HTTP events, other instances are warmed as a buffer. These buffered instances are called prewarmed instances. This buffer reduces cold start for new instances required during scale.
### Always ready instances
-In the Premium plan, you can have your app always ready on a specified number of instances. Your app runs continuously on those instances, regardless of load. If load exceeds what your always ready instances can handle, additional instances are added as necessary, up to your specified maximum.
+In the Premium plan, you can have your app always ready on a specified number of instances. Your app runs continuously on those instances, regardless of load. If load exceeds what your always ready instances can handle, more instances are added as necessary, up to your specified maximum.
-This app-level setting also controls your plan's minimum instances. For example, consider having three function apps in the same Premium plan. When two of your apps have always ready set to one and the third has it set to five, the minimum for your whole plan is five. This also reflects the minimum number of instances for which your plan is billed. The maximum number of always ready instances we support per app is 20.
+This app-level setting also controls your plan's minimum instances. For example, consider having three function apps in the same Premium plan. When two of your apps have the always ready instance count set to one and the third has it set to five, the minimum for your whole plan is five. This also reflects the minimum number of instances for which your plan is billed. The maximum number of always ready instances we support per app is 20.
# [Portal](#tab/portal)
$Resource | Set-AzResource -Force
-### Pre-warmed instances
+### Prewarmed instances
-The pre-warmed instance count setting provides warmed instances as a buffer during HTTP scale and activation events. Pre-warmed instances continue to buffer until the maximum scale-out limit is reached. The default pre-warmed instance count is 1 and, for most scenarios, this value should remain as 1.
+The prewarmed instance count setting provides warmed instances as a buffer during HTTP scale and activation events. Prewarmed instances continue to buffer until the maximum scale-out limit is reached. The default prewarmed instance count is 1 and, for most scenarios, this value should remain as 1.
-Consider a less-common scenario, such as an app running in a custom container. Because custom containers have a long warm-up, you may consider increasing this buffer of pre-warmed instances. A pre-warmed instance becomes active only after all active instances are in use.
+Consider a less-common scenario, such as an app running in a custom container. Because custom containers have a long warm-up, you could consider increasing this buffer of prewarmed instances. A prewarmed instance becomes active only after all active instances are in use.
-You can also define a warmup trigger that is run during the pre-warming process. You can use a warmup trigger to pre-load custom dependencies during the pre-warming process so your functions are ready to start processing requests immediately. To learn more, see [Azure Functions warmup trigger](functions-bindings-warmup.md).
+You can also define a warmup trigger that is run during the prewarming process. You can use a warmup trigger to preload custom dependencies during the prewarming process so your functions are ready to start processing requests immediately. To learn more, see [Azure Functions warmup trigger](functions-bindings-warmup.md).
-Consider this example of how always-ready instances and pre-warmed instances work together. A premium function app has two always ready instances configured, and the default of one pre-warmed instance.
+Consider this example of how always-ready instances and prewarmed instances work together. A premium function app has two always ready instances configured, and the default of one prewarmed instance.
![Scale out graph](./media/functions-premium-plan/scale-graph.png)
-1. When the app is idle and no events are triggering, the app is provisioned and running with two instances. At this time, you're billed for the two always ready instances but aren't billed for a pre-warmed instance as no pre-warmed instance is allocated.
-2. As your application starts receiving HTTP traffic, requests will be load balanced across the two always-ready instances. As soon as those two instances start processing events, an instance gets added to fill the pre-warmed buffer. The app is now running with three provisioned instances: the two always ready instances, and the third pre-warmed and inactive buffer. You're billed for the three instances.
-3. As load increases and your app needs more instances to handle HTTP traffic, that prewarmed instance is swapped to an active instance. HTTP load is now routed to all three instances, and a fourth instance is instantly provisioned to fill the pre-warmed buffer.
-4. This sequence of scaling and pre-warming continues until the maximum instance count for the app is reached or load decreases causing the platform to scale back in after a period. No instances are pre-warmed or activated beyond the maximum.
+1. When the app is idle and no events are triggering, the app is provisioned and running with two instances. At this time, you're billed for the two always ready instances but aren't billed for a prewarmed instance as no prewarmed instance is allocated.
+2. As your application starts receiving HTTP traffic, requests are load balanced across the two always-ready instances. As soon as those two instances start processing events, an instance gets added to fill the prewarmed buffer. The app is now running with three provisioned instances: the two always ready instances, and the third prewarmed and inactive buffer. You're billed for the three instances.
+3. As load increases and your app needs more instances to handle HTTP traffic, that prewarmed instance is swapped to an active instance. HTTP load is now routed to all three instances, and a fourth instance is instantly provisioned to fill the prewarmed buffer.
+4. This sequence of scaling and prewarming continues until the maximum instance count for the app is reached or load decreases causing the platform to scale back in after a period. No instances are prewarmed or activated beyond the maximum.
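The scaling sequence above can be expressed as a rough model (a hypothetical sketch, not the platform's actual algorithm; the function name and defaults are illustrative):

```python
def provisioned_instances(active, always_ready=2, prewarmed_buffer=1, max_instances=100):
    """Model of Premium plan provisioning: always-ready instances are
    always allocated; once instances are processing events, one extra
    prewarmed instance is kept as a buffer, up to the plan maximum."""
    base = max(active, always_ready)                 # load never drops below always-ready
    buffer = prewarmed_buffer if active > 0 else 0   # buffer exists only under load
    return min(base + buffer, max_instances)

# Idle app: only the two always-ready instances are billed.
print(provisioned_instances(active=0))   # 2
# Both always-ready instances busy: a third, prewarmed instance is added.
print(provisioned_instances(active=2))   # 3
# Load grows to three active instances: a fourth fills the buffer.
print(provisioned_instances(active=3))   # 4
```

Note how the instance count never exceeds `max_instances`, matching step 4 of the sequence.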
# [Portal](#tab/portal)
-You can't change the pre-warmed instance count setting in the portal, you must instead use the Azure CLI or Azure PowerShell.
+You can't change the prewarmed instance count setting in the portal; instead, use the Azure CLI or Azure PowerShell.
# [Azure CLI](#tab/azurecli)
-You can modify the number of pre-warmed instances for an app using the Azure CLI.
+You can modify the number of prewarmed instances for an app using the Azure CLI.
```azurecli-interactive
az functionapp update -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME> --set siteConfig.preWarmedInstanceCount=<YOUR_PREWARMED_COUNT>
```
# [Azure PowerShell](#tab/azure-powershell)
-You can modify the number of pre-warmed instances for an app using the Azure PowerShell.
+You can modify the number of prewarmed instances for an app using Azure PowerShell.
```azurepowershell-interactive
$Resource = Get-AzResource -ResourceGroupName <RESOURCE_GROUP> -ResourceName <FUNCTION_APP_NAME>/config/web -ResourceType Microsoft.Web/sites
```
In addition to the [plan maximum instance count](#plan-and-sku-settings), you ca
## Private network connectivity
-Function apps deployed to a Premium plan can take advantage of [VNet integration for web apps](../app-service/overview-vnet-integration.md). When configured, your app can communicate with resources within your VNet or secured via service endpoints. IP restrictions are also available on the app to restrict incoming traffic.
+Function apps deployed to a Premium plan can take advantage of [virtual network integration for web apps](../app-service/overview-vnet-integration.md). When configured, your app can communicate with resources within your virtual network or secured via service endpoints. IP restrictions are also available on the app to restrict incoming traffic.
When assigning a subnet to your function app in a Premium plan, you need a subnet with enough IP addresses for each potential instance. We require an IP block with at least 100 available addresses.
-For more information, see [integrate your function app with a VNet](functions-create-vnet.md).
+For more information, see [integrate your function app with a virtual network](functions-create-vnet.md).
## Rapid elastic scale
-Additional compute instances are automatically added for your app using the same rapid scaling logic as the Consumption plan. Apps in the same App Service Plan scale independently from one another based on the needs of an individual app. However, Functions apps in the same App Service Plan share VM resources to help reduce costs, when possible. The number of apps associated with a VM depends on the footprint of each app and the size of the VM.
+More compute instances are automatically added for your app using the same rapid scaling logic as the Consumption plan. Apps in the same App Service Plan scale independently from one another based on the needs of an individual app. However, Functions apps in the same App Service Plan share VM resources to help reduce costs, when possible. The number of apps associated with a VM depends on the footprint of each app and the size of the VM.
To learn more about how scaling works, see [Event-driven scaling in Azure Functions](event-driven-scaling.md).
If you have an existing function app, you can use Azure CLI commands to migrate
This migration isn't supported on Linux.
-## Plan and SKU settings
+## <a name="plan-and-sku-settings"></a>Premium plan settings
When you create the plan, there are two plan size settings: the minimum number of instances (or plan size) and the maximum burst limit. If your app requires instances beyond the always-ready instances, it can continue to scale out until the number of instances hits the maximum burst limit. You're billed for instances beyond your plan size only while they're running and allocated to you, on a per-second basis. The platform makes its best effort at scaling your app out to the defined maximum limit.
-# [Portal](#tab/portal)
+### [Portal](#tab/portal)
You can configure the plan size and maximums in the Azure portal by selecting the **Scale Out** options under **Settings** of a function app deployed to that plan.

![Elastic plan size settings in the portal](./media/functions-premium-plan/scale-out.png)
-# [Azure CLI](#tab/azurecli)
+### [Azure CLI](#tab/azurecli)
You can also increase the maximum burst limit from the Azure CLI:
```azurecli-interactive
az functionapp plan update -g <RESOURCE_GROUP> -n <PREMIUM_PLAN_NAME> --max-burst <YOUR_MAX_BURST>
```
-# [Azure PowerShell](#tab/azure-powershell)
+### [Azure PowerShell](#tab/azure-powershell)
You can also increase the maximum burst limit from Azure PowerShell:
Update-AzFunctionAppPlan -ResourceGroupName <RESOURCE_GROUP> -Name <PREMIUM_PLAN
```
-The minimum for every plan will be at least one instance. The actual minimum number of instances will be autoconfigured for you based on the always ready instances requested by apps in the plan. For example, if app A requests five always ready instances, and app B requests two always ready instances in the same plan, the minimum plan size will be calculated as five. App A will be running on all 5, and app B will only be running on 2.
+The minimum for every plan is at least one instance. The actual minimum number of instances is determined for you based on the always ready instances requested by apps in the plan. For example, if app A requests five always ready instances, and app B requests two always ready instances in the same plan, the minimum plan size is determined as five. App A runs on all five instances, and app B runs on only two.
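The minimum-size calculation described here amounts to taking the largest always-ready request across the apps in the plan; a hypothetical sketch (the function name is illustrative):

```python
def plan_minimum(always_ready_requests):
    """The plan minimum is the largest always-ready instance count
    requested by any app in the plan, and never less than one."""
    return max(always_ready_requests, default=0) or 1

print(plan_minimum([5, 2]))  # app A requests 5, app B requests 2 -> 5
print(plan_minimum([]))      # an empty plan still has at least one instance -> 1
```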
> [!IMPORTANT]
> You're charged for each instance allocated in the minimum instance count, regardless of whether functions are executing.
-In most circumstances, this autocalculated minimum is sufficient. However, scaling beyond the minimum occurs at a best effort. It's possible, though unlikely, that at a specific time scale-out could be delayed if additional instances are unavailable. By setting a minimum higher than the autocalculated minimum, you reserve instances in advance of scale-out.
+In most circumstances, this autocalculated minimum is sufficient. However, scaling beyond the minimum occurs at a best effort. It's possible, though unlikely, that at a specific time scale-out could be delayed if other instances are unavailable. By setting a minimum higher than the autocalculated minimum, you reserve instances in advance of scale-out.
-# [Portal](#tab/portal)
+### [Portal](#tab/portal)
You can configure the minimum instances in the Azure portal by selecting the **Scale Out** options under **Settings** of a function app deployed to that plan.

![Minimum instance settings in the portal](./media/functions-premium-plan/scale-out.png)
-# [Azure CLI](#tab/azurecli)
+### [Azure CLI](#tab/azurecli)
Increasing the calculated minimum for a plan can be done using the Azure CLI.
```azurecli-interactive
az functionapp plan update -g <RESOURCE_GROUP> -n <PREMIUM_PLAN_NAME> --min-instances <YOUR_MIN_INSTANCES>
```
-# [Azure PowerShell](#tab/azure-powershell)
+### [Azure PowerShell](#tab/azure-powershell)
Increasing the calculated minimum for a plan can be done using Azure PowerShell.
Update-AzFunctionAppPlan -ResourceGroupName <RESOURCE_GROUP> -Name <PREMIUM_PLAN
### Available instance SKUs
-When creating or scaling your plan, you can choose between three instance sizes. You'll be billed for the total number of cores and memory provisioned, per second that each instance is allocated to you. Your app can automatically scale out to multiple instances as needed.
+When creating or scaling your plan, you can choose between three instance sizes. You're billed for the total number of cores and memory provisioned, per second that each instance is allocated to you. Your app can automatically scale out to multiple instances as needed.
|SKU|Cores|Memory|Storage|
|--|--|--|--|
Running on a machine with more memory doesn't always mean that your function app
For example, a JavaScript function app is constrained by the default memory limit in Node.js. To increase this fixed memory limit, add the app setting `languageWorkers:node:arguments` with a value of `--max-old-space-size=<max memory in MB>`.
-And for plans with more than 4GB memory, ensure the Bitness Platform Setting is set to `64 Bit` under [General Settings](../app-service/configure-common.md#configure-general-settings).
+And for plans with more than 4 GB of memory, ensure the Bitness Platform Setting is set to `64 Bit` under [General Settings](../app-service/configure-common.md#configure-general-settings).
## Region max scale-out
-Below are the currently supported maximum scale-out values for a single plan in each region and OS configuration.
-
-See the complete regional availability of Functions on the [Azure web site](https://azure.microsoft.com/global-infrastructure/services/?products=functions).
+These are the currently supported maximum scale-out values for a single plan in each region and OS configuration:
|Region| Windows | Linux |
|--| -- | -- |
See the complete regional availability of Functions on the [Azure web site](http
|West US 2| 100 | 20 |
|West US 3| 100 | 20 |
+For more information, see the [complete regional availability of Azure Functions](https://azure.microsoft.com/global-infrastructure/services/?products=functions).
+
## Next steps

* [Understand Azure Functions hosting options](functions-scale.md)
+* [Event-driven scaling in Azure Functions](event-driven-scaling.md)
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Last updated 06/13/2023
# Storage considerations for Azure Functions
-Azure Functions requires an Azure Storage account when you create a function app instance. The following storage services may be used by your function app:
+Azure Functions requires an Azure Storage account when you create a function app instance. The following storage services could be used by your function app:
|Storage service | Functions usage |
|---|---|
-| [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) | Maintain bindings state and function keys<sup>1</sup>. <br/>Used by default for [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). <br/>May be used to store function app code for [Linux Consumption remote build](functions-deployment-technologies.md#remote-build) or as part of [external package URL deployments](functions-deployment-technologies.md#external-package-url). |
+| [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) | Maintain bindings state and function keys<sup>1</sup>. <br/>Used by default for [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). <br/>Can be used to store function app code for [Linux Consumption remote build](functions-deployment-technologies.md#remote-build) or as part of [external package URL deployments](functions-deployment-technologies.md#external-package-url). |
| [Azure Files](../storage/files/storage-files-introduction.md)<sup>2</sup> | File share used to store and run your function app code in a [Consumption Plan](consumption-plan.md) and [Premium Plan](functions-premium-plan.md). |
| [Azure Queue Storage](../storage/queues/storage-queues-introduction.md) | Used by default for [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). Used for failure and retry handling in [specific Azure Functions triggers](./functions-bindings-storage-blob-trigger.md). Used for object tracking by the [Blob Storage trigger](functions-bindings-storage-blob-trigger.md). |
| [Azure Table Storage](../storage/tables/table-storage-overview.md) | Used by default for [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
You must strongly consider the following facts regarding the storage accounts us
+ When your function app is hosted on the Consumption plan or Premium plan, your function code and configuration files are stored in Azure Files in the linked storage account. When you delete this storage account, the content is deleted and can't be recovered. For more information, see [Storage account was deleted](functions-recover-storage-account.md#storage-account-was-deleted)
-+ Important data, such as function code, [access keys](functions-bindings-http-webhook-trigger.md#authorization-keys), and other important service-related data, may be persisted in the storage account. You must carefully manage access to the storage accounts used by function apps in the following ways:
++ Important data, such as function code, [access keys](functions-bindings-http-webhook-trigger.md#authorization-keys), and other important service-related data, can be persisted in the storage account. You must carefully manage access to the storage accounts used by function apps in the following ways:
  + Audit and limit the access of apps and users to the storage account based on a least-privilege model. Permissions to the storage account can come from [data actions in the assigned role](../role-based-access-control/role-definitions.md#control-and-data-actions) or through permission to perform the [listKeys operation].
The storage account must be accessible to the function app. If you need to use a
By default, function apps configure the `AzureWebJobsStorage` connection as a connection string stored in the [AzureWebJobsStorage application setting](./functions-app-settings.md#azurewebjobsstorage), but you can also [configure AzureWebJobsStorage to use an identity-based connection](functions-reference.md#connecting-to-host-storage-with-an-identity) without a secret.
-Function apps are configured to use Azure Files by storing a connection string in the [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING application setting](./functions-app-settings.md#website_contentazurefileconnectionstring) and providing the name of the file share in the [WEBSITE_CONTENTSHARE application setting](./functions-app-settings.md#website_contentshare).
+Function apps running in a Consumption plan (Windows only) or an Elastic Premium plan (Windows or Linux) can use Azure Files to store the images required to enable dynamic scaling. For these plans, set the connection string for the storage account in the [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](./functions-app-settings.md#website_contentazurefileconnectionstring) setting and the name of the file share in the [WEBSITE_CONTENTSHARE](./functions-app-settings.md#website_contentshare) setting. This is usually the same account used for `AzureWebJobsStorage`. You can also [create a function app that doesn't use Azure Files](#create-an-app-without-azure-files), but scaling might be limited.
> [!NOTE] > A storage account connection string must be updated when you regenerate storage keys. [Read more about storage key management here](../storage/common/storage-account-create.md).
Function apps are configured to use Azure Files by storing a connection string i
It's possible for multiple function apps to share the same storage account without any issues. For example, in Visual Studio you can develop multiple apps using the [Azurite storage emulator](functions-develop-local.md#local-storage-emulator). In this case, the emulator acts like a single storage account. The same storage account used by your function app can also be used to store your application data. However, this approach isn't always a good idea in a production environment.
-You may need to use separate storage accounts to [avoid host ID collisions](#avoiding-host-id-collisions).
+You might need to use separate storage accounts to [avoid host ID collisions](#avoiding-host-id-collisions).
### Lifecycle management policy considerations
-You shouldn't apply [lifecycle management policies](../storage/blobs/lifecycle-management-overview.md) to your Blob Storage account used by your function app. Functions uses Blob storage to persist important information, such as [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys), and policies may remove blobs (such as keys) needed by the Functions host. If you must use policies, exclude containers used by Functions, which are prefixed with `azure-webjobs` or `scm`.
+You shouldn't apply [lifecycle management policies](../storage/blobs/lifecycle-management-overview.md) to your Blob Storage account used by your function app. Functions uses Blob storage to persist important information, such as [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys), and policies could remove blobs (such as keys) needed by the Functions host. If you must use policies, exclude containers used by Functions, which are prefixed with `azure-webjobs` or `scm`.
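A guard that honors this exclusion before a policy touches a container could look like the following sketch (the helper name is illustrative; the prefixes come from the paragraph above):

```python
RESERVED_PREFIXES = ("azure-webjobs", "scm")

def safe_for_lifecycle_policy(container_name):
    """Return True only for containers a lifecycle policy may touch;
    containers used by the Functions host must be excluded."""
    return not container_name.startswith(RESERVED_PREFIXES)

print(safe_for_lifecycle_policy("azure-webjobs-secrets"))  # False
print(safe_for_lifecycle_policy("app-data"))               # True
```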
### Storage logs
-Because function code and keys may be persisted in the storage account, logging of activity against the storage account is a good way to monitor for unauthorized access. Azure Monitor resource logs can be used to track events against the storage data plane. See [Monitoring Azure Storage](../storage/blobs/monitor-blob-storage.md) for details on how to configure and examine these logs.
+Because function code and keys might be persisted in the storage account, logging of activity against the storage account is a good way to monitor for unauthorized access. Azure Monitor resource logs can be used to track events against the storage data plane. See [Monitoring Azure Storage](../storage/blobs/monitor-blob-storage.md) for details on how to configure and examine these logs.
The [Azure Monitor activity log](../azure-monitor/essentials/activity-log.md) shows control plane events, including the [listKeys operation]. However, you should also configure resource logs for the storage account to track subsequent use of keys or other identity-based data plane operations. You should have at least the [StorageWrite log category](../storage/blobs/monitor-blob-storage.md#collection-and-routing) enabled to be able to identify modifications to the data outside of normal Functions operations.
You can use the following strategies to avoid host ID collisions:
+ Set an explicit host ID for one or more of the colliding apps. To learn more, see [Host ID override](#override-the-host-id).

> [!IMPORTANT]
-> Changing the storage account associated with an existing function app or changing the app's host ID can impact the behavior of existing functions. For example, a Blob Storage trigger tracks whether it's processed individual blobs by writing receipts under a specific host ID path in storage. When the host ID changes or you point to a new storage account, previously processed blobs may be reprocessed.
+> Changing the storage account associated with an existing function app or changing the app's host ID can impact the behavior of existing functions. For example, a Blob Storage trigger tracks whether it's processed individual blobs by writing receipts under a specific host ID path in storage. When the host ID changes or you point to a new storage account, previously processed blobs could be reprocessed.
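Collisions stem from how the default host ID is derived: per the documented default, the app name is lowercased and truncated to 32 characters, so long names that share a prefix map to the same ID. A sketch of that derivation:

```python
def default_host_id(app_name):
    """Default Functions host ID: the app name, lowercased and
    truncated to 32 characters (the documented default behavior)."""
    return app_name.lower()[:32]

# Hypothetical app names whose first 32 characters match:
a = default_host_id("contoso-functions-production-app-eastus")
b = default_host_id("contoso-functions-production-app-westus")
print(a == b)  # True: the derived host IDs collide
```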
### Override the host ID
When the collision occurs between slots, you must set a specific host ID for eac
## Azure Arc-enabled clusters
-When your function app is deployed to an Azure Arc-enabled Kubernetes cluster, a storage account may not be required by your function app. In this case, a storage account is only required by Functions when your function app uses a trigger that requires storage. The following table indicates which triggers may require a storage account and which don't.
+When your function app is deployed to an Azure Arc-enabled Kubernetes cluster, a storage account might not be required by your function app. In this case, a storage account is only required by Functions when your function app uses a trigger that requires storage. The following table indicates which triggers might require a storage account and which don't.
-| Not required | May require storage |
+| Not required | Might require storage |
| --- | --- |
| • [Azure Cosmos DB](functions-bindings-cosmosdb-v2.md)<br/>• [HTTP](functions-bindings-http-webhook.md)<br/>• [Kafka](functions-bindings-kafka.md)<br/>• [RabbitMQ](functions-bindings-rabbitmq.md)<br/>• [Service Bus](functions-bindings-service-bus.md) | • [Azure SQL](functions-bindings-azure-sql.md)<br/>• [Blob storage](functions-bindings-storage-blob.md)<br/>• [Event Grid](functions-bindings-event-grid.md)<br/>• [Event Hubs](functions-bindings-event-hubs.md)<br/>• [IoT Hub](functions-bindings-event-iot.md)<br/>• [Queue storage](functions-bindings-storage-queue.md)<br/>• [SendGrid](functions-bindings-sendgrid.md)<br/>• [SignalR](functions-bindings-signalr-service.md)<br/>• [Table storage](functions-bindings-storage-table.md)<br/>• [Timer](functions-bindings-timer.md)<br/>• [Twilio](functions-bindings-twilio.md) |
Creating your function app resources using methods other than the Azure CLI requ
## Create an app without Azure Files
-Azure Files is set up by default for Elastic Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is [deployed using an external package URL](./run-functions-from-deployment-package.md), the app content is served from a separate read-only file system. This means that you can create your function app without Azure Files. If you create your function app with Azure Files, a writeable file system is still provided. However, this file system may not be available for all function app instances.
+Azure Files is set up by default for Elastic Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is [deployed using an external package URL](./run-functions-from-deployment-package.md), the app content is served from a separate read-only file system. This means that you can create your function app without Azure Files. If you create your function app with Azure Files, a writeable file system is still provided. However, this file system might not be available for all function app instances.
When Azure Files isn't used, you must meet the following requirements:
When Azure Files isn't used, you must meet the following requirements:
* The app can't use version 1.x of the Functions runtime.
* Log streaming experiences in clients such as the Azure portal default to file system logs. You should instead rely on Application Insights logs.
-If the above are properly accounted for, you may create the app without Azure Files. Create the function app without specifying the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE` application settings. You can avoid these settings by generating an ARM template for a standard deployment, removing the two settings, and then deploying the template.
+If these requirements are properly accounted for, you can create the app without Azure Files. Create the function app without specifying the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE` application settings. You can avoid these settings by generating an ARM template for a standard deployment, removing the two settings, and then deploying the template.
Because Functions use Azure Files during parts of the dynamic scale-out process, scaling could be limited when running without Azure Files on Consumption and Elastic Premium plans.
For a complete example, see the script in [Create a serverless Python function a
-Currently, only a `storage-type` of `AzureFiles` is supported. You can only mount five shares to a given function app. Mounting a file share may increase the cold start time by at least 200-300 ms, or even more when the storage account is in a different region.
+Currently, only a `storage-type` of `AzureFiles` is supported. You can only mount five shares to a given function app. Mounting a file share can increase the cold start time by at least 200-300 ms, or even more when the storage account is in a different region.
The mounted share is available to your function code at the `mount-path` specified. For example, when `mount-path` is `/path/to/mount`, you can access the target directory by file system APIs, as in the following Python example:
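A minimal sketch of that kind of access (not the original example; the function name is illustrative, and the `mount-path` of `/path/to/mount` is the hypothetical value from the sentence above):

```python
import os

def list_mounted_files(mount_path="/path/to/mount"):
    """Enumerate files in the mounted Azure Files share using
    standard file system APIs."""
    return sorted(os.listdir(mount_path))
```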
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Previously updated : 08/31/2023
Last updated : 10/31/2023

# Azure Government authorized reseller list
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Catapult Systems, LLC](https://www.catapultsystems.com)|
|[CGI Federal Inc.](https://www.cgi.com/us/en-us/federal)|
|[Cloud Navigator, Inc - formerly ISC](https://cloudnav.com)|
+|[CloudFit Software LLC](https://cloudfitsoftware.com/)|
|[Conquest Cyber](https://conquestcyber.com/)|
|[Coretek](https://www.coretek.com/)|
|[CyberSheath](https://cybersheath.com)|
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
The `category` class feature defines category names. For example: "room.conferen
Learn more about Creator for indoor maps by reading:
+> [!div class="nextstepaction"]
+> [What is Azure Maps Creator?]
+
> [!div class="nextstepaction"]
> [Creator for indoor maps]
Learn more about Creator for indoor maps by reading:
<!-- learn.microsoft.com links -->
[Create a dataset using a GeoJson package]: how-to-dataset-geojson.md
[Creator for indoor maps]: creator-indoor-maps.md
+[What is Azure Maps Creator?]: about-creator.md
<!-- External Links -->
[Azure Maps services]: https://aka.ms/AzureMaps
[feature object]: https://www.rfc-editor.org/rfc/rfc7946#section-3.2
azure-maps Creator Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-geographic-scope.md
The following table describes the mapping between geography and supported Azure
| Europe| West Europe, North Europe | eu.atlas.microsoft.com |
|United States | West US 2, East US 2 | us.atlas.microsoft.com |
+## Next steps
+
+> [!div class="nextstepaction"]
+> [What is Azure Maps Creator?]
+ [Azure geographies]: https://azure.microsoft.com/global-infrastructure/geographies
+[What is Azure Maps Creator?]: about-creator.md
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
The _ConversionWarningsAndErrors.json_ contains a list of your drawing package e
Learn more by reading:
+> [!div class="nextstepaction"]
+> [What is Azure Maps Creator?]
+
> [!div class="nextstepaction"]
> [Creator for indoor maps]
Learn more by reading:
[How to create data registry]: how-to-create-data-registries.md
[Postman]: https://www.postman.com/
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[What is Azure Maps Creator?]: about-creator.md
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
You should now have all the DWG drawings prepared to meet Azure Maps Conversion
## Next steps
+> [!div class="nextstepaction"]
+> [What is Azure Maps Creator?]
+
+> [!div class="nextstepaction"]
+> [Creator for indoor maps]
+
> [!div class="nextstepaction"]
> [Tutorial: Creating a Creator indoor map]
When finished, select the **Create + Download** button to download a copy of the
## Next steps
+> [!div class="nextstepaction"]
+> [What is Azure Maps Creator?]
+
+> [!div class="nextstepaction"]
+> [Creator for indoor maps]
+
> [!div class="nextstepaction"]
> [Create indoor map with the onboarding tool]
When finished, select the **Create + Download** button to download a copy of the
[wayfinding]: creator-indoor-maps.md#wayfinding-preview
[facility level]: drawing-requirements.md#facility-level
[Create indoor map with the onboarding tool]: creator-onboarding-tool.md
+
+[What is Azure Maps Creator?]: about-creator.md
+[Creator for indoor maps]: creator-indoor-maps.md
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
For a guide on how to prepare your drawing package, see the drawing package guid
Learn more by reading:

> [!div class="nextstepaction"]
-> [Creator for indoor maps](creator-indoor-maps.md)
+> [What is Azure Maps Creator?]
+
+> [!div class="nextstepaction"]
+> [Creator for indoor maps]
+
+[What is Azure Maps Creator?]: about-creator.md
+[Creator for indoor maps]: creator-indoor-maps.md
<!-- Drawing Package v1 links -->
[Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
The web application that you previously opened in a browser should now reflect t
Learn more by reading:
+> [!div class="nextstepaction"]
+> [What is Azure Maps Creator?]
+
> [!div class="nextstepaction"]
> [Creator for indoor maps](creator-indoor-maps.md)
Learn more by reading:
[Create an indoor map]: tutorial-creator-indoor-maps.md
[Open Geospatial Consortium API Features]: https://docs.opengeospatial.org/DRAFTS/17-069r4.html
[WFS API]: /rest/api/maps/v2/wfs
+[Creator for indoor maps]: creator-indoor-maps.md
azure-maps Schema Stateset Stylesobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/schema-stateset-stylesobject.md
The following JSON illustrates a `BooleanTypeStyleRule` *state* named `occupied`
Learn more about Creator for indoor maps by reading:
+> [!div class="nextstepaction"]
+> [What is Azure Maps Creator?]
+
> [!div class="nextstepaction"]
> [Creator for indoor maps]
Learn more about Creator for indoor maps by reading:
[Feature State service]: /rest/api/maps/v2/feature-state
[Implement dynamic styling for Creator  indoor maps]: indoor-map-dynamic-styling.md
[RangeObject]: #rangeobject
+[What is Azure Maps Creator?]: about-creator.md
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
For more information, see [Map configuration] in the article about indoor map co
## Next steps
+> [!div class="nextstepaction"]
+> [What is Azure Maps Creator?]
+
+> [!div class="nextstepaction"]
+> [Creator for indoor maps]
+
> [!div class="nextstepaction"]
> [Use the Azure Maps Indoor Maps module with custom styles](how-to-use-indoor-module.md)
For more information, see [Map configuration] in the article about indoor map co
[Tileset service]: /rest/api/maps/2023-03-01-preview/tileset
[tileset get]: /rest/api/maps/2023-03-01-preview/tileset/get
[Map configuration]: creator-indoor-maps.md#map-configuration
+[What is Azure Maps Creator?]: about-creator.md
+[Creator for indoor maps]: creator-indoor-maps.md
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
Each of the following sections makes API requests by using the five different lo
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://{geography}.atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit
```
+ > [!NOTE]
+ > Replace {geography} with your geographic scope. For more information, see [Azure Maps service geographic scope] and the [Spatial Geofence Get API].
+ 6. Select **Send**. 7. The response should look like the following GeoJSON fragment:
In the preceding GeoJSON response, the negative distance from the main site geof
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udId={udId}&lat=47.63800&lon=-122.132531&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://{geography}.atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udId={udId}&lat=47.63800&lon=-122.132531&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
In the preceding GeoJSON response, the equipment has remained in the main site g
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.63810783315048&lon=-122.13336020708084&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://{geography}.atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.63810783315048&lon=-122.13336020708084&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
In the preceding GeoJSON response, the equipment has remained in the main site g
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.637988&userTime=2023-01-16&lon=-122.1338344&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://{geography}.atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.637988&userTime=2023-01-16&lon=-122.1338344&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
In the preceding GeoJSON response, the equipment has remained in the main site g
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.63799&lon=-122.134505&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://{geography}.atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.63799&lon=-122.134505&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
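The five requests above all target the same Spatial Geofence GET endpoint and differ only in the coordinates (and, in one case, `userTime`). As a rough illustration, the URL can be assembled programmatically; this is a minimal Python sketch in which the geography prefix, subscription key, and `udid` values are placeholders you'd supply yourself:

```python
from urllib.parse import urlencode

def build_geofence_url(geography, subscription_key, udid, lat, lon,
                       device_id="device_01", search_buffer=5,
                       mode="EnterAndExit"):
    """Assemble a Spatial Geofence GET request URL like the ones above."""
    params = {
        "subscription-key": subscription_key,
        "api-version": "2022-08-01",
        "deviceId": device_id,
        "udid": udid,
        "lat": lat,
        "lon": lon,
        "searchBuffer": search_buffer,
        "isAsync": True,
        "mode": mode,
    }
    # The {geography} host prefix and all query values are caller-supplied.
    return (f"https://{geography}.atlas.microsoft.com/spatial/geofence/json?"
            + urlencode(params))

url = build_geofence_url("us", "my-key", "my-udid", 47.638237, -122.1324831)
```

Sending the resulting URL (for example, with any HTTP client) is equivalent to the **Send** step in the walkthrough above.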
There are no resources that require cleanup.
[az maps account create]: /cli/azure/maps/account?view=azure-cli-latest&preserve-view=true#az-maps-account-create [Azure Event Grid]: ../event-grid/overview.md
+[Azure Maps service geographic scope]: geographic-scope.md
[Azure portal]: https://portal.azure.com [Create your Azure Maps account using an ARM template]: how-to-create-template.md [data registry]: /rest/api/maps/data-registry
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>| |:|::|::|::|
-| AlmaLinux 9 | ✓<sup>3</sup> | | |
+| AlmaLinux 9 | ✓<sup>3</sup> | ✓ | |
| AlmaLinux 8 | ✓<sup>3</sup> | ✓ | | | Amazon Linux 2017.09 | | ✓ | | | Amazon Linux 2 | ✓ | ✓ | |
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Red Hat Enterprise Linux Server 8.0-8.5 | ✓ | ✓ | ✓<sup>2</sup> | | Red Hat Enterprise Linux Server 7 | ✓ | ✓ | ✓ | | Red Hat Enterprise Linux Server 6.7+ | | | ✓ |
-| Rocky Linux 9 | ✓ | | |
+| Rocky Linux 9 | ✓ | ✓ | |
| Rocky Linux 8 | ✓ | ✓ | | | SUSE Linux Enterprise Server 15 SP4 | ✓<sup>3</sup> | | | | SUSE Linux Enterprise Server 15 SP3 | ✓ | | |
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| SUSE Linux Enterprise Server 15 SP1 | ✓ | ✓ | | | SUSE Linux Enterprise Server 15 | ✓ | ✓ | | | SUSE Linux Enterprise Server 12 | ✓ | ✓ | ✓ |
-| Ubuntu 22.04 LTS | ✓ | | |
+| Ubuntu 22.04 LTS | ✓ | ✓ | |
| Ubuntu 20.04 LTS | ✓<sup>3</sup> | ✓ | ✓ | | Ubuntu 18.04 LTS | ✓<sup>3</sup> | ✓ | ✓ | | Ubuntu 16.04 LTS | ✓ | ✓ | ✓ |
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Write-Host "My Azure AD Application (ObjectId): " + $myApp.ObjectId
Write-Host "My Azure AD Application's Roles" Write-Host $myApp.AppRoles ```
+### Migrate Runbook action from "Run as account" to "Run as Managed Identity"
+> [!NOTE]
+>
+> Azure Automation "Run as account" [retired](https://azure.microsoft.com/updates/azure-automation-runas-account-retiring-on-30-september-2023/) on 30 September 2023, which affects actions created with the "Automation Runbook" action type. Existing actions that link to "Run as account" runbooks aren't supported after retirement. However, those runbooks continue to execute until the "Run as" certificate of the Automation account expires.
+
+To ensure you can continue using the runbook actions, you need to:
+1. Edit the action group by adding a new action with the "Automation Runbook" action type, and choose the same runbook from the dropdown. (All five runbooks in the dropdown have been reconfigured on the back end to authenticate by using a managed identity instead of a "Run as account". The system-assigned managed identity in the Automation account is enabled automatically, and the VM Contributor role is assigned at the subscription level.)
+
+ :::image type="content" source="./media/action-groups/action-group-runbook-add.png" alt-text="Screenshot of adding a runbook action to an action group.":::
+
+ :::image type="content" source="./media/action-groups/action-group-runbook-configure.png" alt-text="Screenshot of configuring the runbook action.":::
+
+2. Delete the old action that links to a "Run as account" runbook.
+3. Save the action group.
+ ## Next steps - Get an [overview of alerts](./alerts-overview.md) and learn how to receive alerts.
azure-monitor Resource Manager Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-log.md
Title: Resource Manager template samples for log query alerts
description: Sample Azure Resource Manager templates to deploy Azure Monitor log query alerts. Previously updated : 05/11/2022 Last updated : 11/07/2023 # Resource Manager template samples for log alert rules in Azure Monitor
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
When using supported SDKs, you can enable SDK injection in configuration to auto
| Language | : | | [ASP.NET Core](./asp-net-core.md?tabs=netcorenew%2Cnetcore6#enable-client-side-telemetry-for-web-applications) |
- | [Node.js](./nodejs.md#automatic-web-instrumentationpreview) |
+ | [Node.js](./nodejs.md#browser-sdk-loader) |
| [Java](./java-standalone-config.md#browser-sdk-loader-preview) | For other methods to instrument your application with the Application Insights JavaScript SDK, see [Get started with the JavaScript SDK](./javascript-sdk.md).
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Starting from version 3.2.0, if you want to capture controller "InProc" dependen
## Browser SDK Loader (preview)
-This feature automatically injects the [Browser SDK Loader](https://github.com/microsoft/ApplicationInsights-JS#snippet-setup-ignore-if-using-npm-setup) into your application's HTML pages, including configuring the appropriate Connection String.
+This feature automatically injects the [Browser SDK Loader](javascript-sdk.md#add-the-javascript-code) into your application's HTML pages, including configuring the appropriate Connection String.
For example, when your java application returns a response like:
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
appInsights.defaultClient.context.tags[appInsights.defaultClient.context.keys.cl
appInsights.start(); ```
-### Automatic web Instrumentation[Preview]
+### Browser SDK Loader
+
+> [!NOTE]
+> Available as a public preview. [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
Web instrumentation can be enabled automatically for a Node.js server by configuring JavaScript (Web) SDK Loader Script injection.
+<!-- This feature enables web instrumentation for node server. It automatically injects the [Browser SDK Loader Script](javascript-sdk.md?tabs=javascriptwebsdkloaderscript#add-the-javascript-code) into your application's HTML pages, including configuring the appropriate Connection String. -->
+ ```javascript let appInsights = require("applicationinsights"); appInsights.setup("<connection_string>")
azure-monitor Release And Work Item Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-and-work-item-insights.md
The new work item integration offers the following features over [classic](#clas
:::image type="content" source="./media/release-and-work-item-insights/create-template-from-transaction-details.png" alt-text=" Screenshot of end-to-end transaction details tab with create a work item, start with a workbook template selected." lightbox="./media/release-and-work-item-insights/create-template-from-transaction-details.png":::
-2. After you select **create a new template**, you can choose your tracking systems, name your workbook, link to your selected tracking system, and choose a region to storage the template (the default is the region your Application Insights resource is located in). The URL parameters are the default URL for your repository, for example, `https://github.com/myusername/reponame` or `https://mydevops.visualstudio.com/myproject`.
+2. After you select **create a new template**, you can choose your tracking systems, name your workbook, link to your selected tracking system, and choose a region in which to store the template (the default is the region your Application Insights resource is located in). The URL parameters are the default URL for your repository, for example, `https://github.com/myusername/reponame` or `https://dev.azure.com/{org}/{project}`.
:::image type="content" source="./media/release-and-work-item-insights/create-workbook.png" alt-text=" Screenshot of create a new work item workbook template.":::
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
This article provides an overview of the requirements and options that are avail
Container insights supports the following environments: - [Azure Kubernetes Service (AKS)](../../aks/index.yml)-- [Azure Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md)
- - [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises
- - [Red Hat OpenShift](https://docs.openshift.com/container-platform/latest/welcome/index.html) version 4.x
+- Following [Azure Arc-enabled Kubernetes cluster distributions](../../azure-arc/kubernetes/validation-program.md):
+ - AKS on Azure Stack HCI
+ - AKS Edge Essentials
+ - Canonical
+ - Cluster API Provider on Azure
+ - K8s on Azure Stack Edge
+ - Red Hat OpenShift version 4.x
+ - SUSE Rancher (Rancher Kubernetes engine)
+ - SUSE Rancher K3s
+ - VMware (i.e., TKG)
+
+> [!NOTE]
+> Container insights supports ARM64 nodes on AKS. See [Cluster requirements](../../azure-arc/kubernetes/system-requirements.md#cluster-requirements) for the details of Azure Arc-enabled clusters that support ARM64 nodes.
The versions of Kubernetes and support policy are the same as those versions [supported in AKS](../../aks/supported-kubernetes-versions.md).
After you've enabled monitoring, you can begin analyzing the performance of your
To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md). +
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
Title: Query across resources with Azure Monitor | Microsoft Docs description: This article describes how you can query against resources from multiple workspaces and an Application Insights app in your subscription. --++ Last updated 05/30/2023
There are two methods to query data that's stored in multiple workspaces and app
## Cross-resource query limits * The number of Application Insights components and Log Analytics workspaces that you can include in a single query is limited to 100.
+* Querying across a large number of resources can substantially slow down the query.
* Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md). * References to a cross resource, such as another workspace, should be explicit and can't be parameterized. See [Gather identifiers for Log Analytics workspaces](?tabs=workspace-identifier#gather-identifiers-for-log-analytics-workspaces-and-application-insights-resources) for examples.
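Because a single cross-resource query is capped at 100 Application Insights components and Log Analytics workspaces, a client that fans out over more resources has to batch them. A hypothetical Python sketch of that batching (only the 100-resource cap comes from the limits above; the helper and names are illustrative):

```python
def batch_resources(resource_ids, max_per_query=100):
    """Split a list of workspace/component IDs into chunks that respect
    the 100-resource cap on a single cross-resource query."""
    return [resource_ids[i:i + max_per_query]
            for i in range(0, len(resource_ids), max_per_query)]

# 250 resources would need three separate cross-resource queries.
batches = batch_resources([f"workspace-{n}" for n in range(250)])
```

Each batch would then be the scope of one query; note that large fan-outs also run into the performance caveat above, so fewer, more targeted resources per query is usually preferable.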
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
When a Log Analytics workspace is linked to a dedicated cluster, the workspace b
When a dedicated cluster is configured with a customer-managed key (CMK), newly ingested data is encrypted with your key, while older data remains encrypted with a Microsoft-managed key (MMK). The key configuration is abstracted by Log Analytics and queries across old and new data encryptions are performed seamlessly.
-A cluster can be linked to up to 1,000 workspaces. Linked workspaces can be located in the same region as the cluster. A workspace can't be linked to a cluster more than twice a month, to prevent data fragmentation.
+A cluster can be linked to up to 1,000 workspaces. Linked workspaces must be located in the same region as the cluster. A workspace can't be linked to a cluster more than twice a month, to prevent data fragmentation.
You need 'write' permissions to both the workspace and the cluster resource for workspace link operation:
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
azure-monitor Grafana Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/grafana-plugin.md
Sign in to Grafana by using the endpoint URL of your Azure Managed Grafana works
Azure Managed Grafana includes an Azure Monitor data source plug-in. By default, the plug-in is preconfigured with a managed identity that can query and visualize monitoring data from all resources in the subscription in which the Grafana workspace was deployed. Skip ahead to the section "Build a Grafana dashboard."
-![Screenshot that shows the Azure Managed Grafana home page.](./media/grafana-plugin/azure-managed-grafana.png)
You can expand the resources that can be viewed by your Azure Managed Grafana workspace by [configuring additional permissions](../../managed-grafan) on other subscriptions or resources.
You can expand the resources that can be viewed by your Azure Managed Grafana wo
1. Select **Add data source**, filter by the name **Azure**, and select the **Azure Monitor** data source.
- ![Screenshot that shows Azure Monitor data source selection.](./media/grafana-plugin/azure-monitor-data-source-list.png)
+ :::image type="content" source="./media/grafana-plugin/azure-monitor-data-source-list.png" lightbox="./media/grafana-plugin/azure-monitor-data-source-list.png" alt-text="Screenshot that shows Azure Monitor data source selection.":::
1. Pick a name for the data source and choose between managed identity or app registration for authentication.
If you're hosting Grafana on your own Azure Virtual Machines or Azure App Servic
1. Select **Save & test** and Grafana will test the credentials. You should see a message similar to the following one.
- ![Screenshot that shows Azure Monitor data source with config-approved managed identity.](./media/grafana-plugin/managed-identity.png)
+ :::image type="content" source="./media/grafana-plugin/managed-identity.png" lightbox="./media/grafana-plugin/managed-identity.png" alt-text="Screenshot that shows Azure Monitor data source with config-approved managed identity.":::
### Use app registration
If you're hosting Grafana on your own Azure Virtual Machines or Azure App Servic
1. Select **Save & test** and Grafana will test the credentials. You should see a message similar to the following one.
- ![Screenshot that shows Azure Monitor data source configuration with the approved app registration.](./media/grafana-plugin/app-registration.png)
+ :::image type="content" source="./media/grafana-plugin/app-registration.png" lightbox="./media/grafana-plugin/app-registration.png" alt-text="Screenshot that shows Azure Monitor data source configuration with the approved app registration.":::
## Use out-of-the-box dashboards Azure Monitor contains out-of-the-box dashboards to use with Azure Managed Grafana and the Azure Monitor plugin. Azure Monitor also supports out-of-the-box dashboards for seamless integration with Azure Monitor managed service for Prometheus. These dashboards are automatically deployed to Azure Managed Grafana when linked to Azure Monitor managed service for Prometheus. ## Build a Grafana dashboard 1. Go to the Grafana home page and select **New Dashboard**.
Azure Monitor also supports out-of-the-box dashboards for seamless integration w
1. A blank graph shows up on your dashboard. Select the panel title and select **Edit** to enter the details of the data you want to plot in this graph chart.
- ![Screenshot that shows Grafana new panel dropdown list options.](./media/grafana-plugin/grafana-new-graph-dark.png)
+ :::image type="content" source="./media/grafana-plugin/grafana-new-graph-dark.png" lightbox="./media/grafana-plugin/grafana-new-graph-dark.png" alt-text="Screenshot that shows Grafana new panel dropdown list options.":::
1. Select the Azure Monitor data source you've configured. * Visualizing Azure Monitor metrics: Select **Azure Monitor** in the service dropdown list. A list of selectors shows up where you can select the resources and metric to monitor in this chart. To collect metrics from a VM, use the namespace `Microsoft.Compute/VirtualMachines`. After you've selected VMs and metrics, you can start viewing their data in the dashboard.
- ![Screenshot that shows Grafana panel config for Azure Monitor metrics.](./media/grafana-plugin/grafana-graph-config-for-azure-monitor-dark.png)
+ :::image type="content" source="./media/grafana-plugin/grafana-graph-config-for-azure-monitor-dark.png" lightbox="./media/grafana-plugin/grafana-graph-config-for-azure-monitor-dark.png" alt-text="Screenshot that shows Grafana panel config for Azure Monitor metrics.":::
* Visualizing Azure Monitor log data: Select **Azure Log Analytics** in the service dropdown list. Select the workspace you want to query and set the query text. You can copy here any log query you already have or create a new one. As you enter your query, IntelliSense suggests autocomplete options. Select the visualization type, **Time series** > **Table**, and run the query. > [!NOTE]
Azure Monitor also supports out-of-the-box dashboards for seamless integration w
> The default query provided with the plug-in uses two macros: `$__timeFilter()` and `$__interval`. > These macros allow Grafana to dynamically calculate the time range and time grain, when you zoom in on part of a chart. You can remove these macros and use a standard time filter, such as `TimeGenerated > ago(1h)`, but that means the graph wouldn't support the zoom-in feature.
- ![Screenshot of Grafana panel config for Azure Monitor logs.](./media/grafana-plugin/grafana-graph-config-for-azure-log-analytics-dark.png)
+ :::image type="content" source="./media/grafana-plugin/grafana-graph-config-for-azure-log-analytics-dark.png" lightbox="./media/grafana-plugin/grafana-graph-config-for-azure-log-analytics-dark.png" alt-text="Screenshot of Grafana panel config for Azure Monitor logs.":::
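The zoom-in behavior works because Grafana substitutes those macros with concrete KQL before sending the query. A rough Python illustration of that substitution (the expansion strings are simplified assumptions for illustration, not Grafana's exact output):

```python
def expand_macros(query, start_iso, end_iso, interval="5m"):
    """Naively substitute Grafana's $__timeFilter() and $__interval macros
    with concrete KQL, mimicking (in simplified form) what the plug-in does
    when you pick a time range or zoom in on a chart."""
    time_filter = (f"TimeGenerated >= datetime({start_iso}) "
                   f"and TimeGenerated <= datetime({end_iso})")
    return (query.replace("$__timeFilter()", time_filter)
                 .replace("$__interval", interval))

kql = expand_macros(
    "Perf | where $__timeFilter() "
    "| summarize avg(CounterValue) by bin(TimeGenerated, $__interval)",
    "2023-11-07T00:00:00Z", "2023-11-07T01:00:00Z")
```

Replacing the macros with a static filter such as `TimeGenerated > ago(1h)` works too, but then the chart no longer reacts to the dashboard's time picker.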
1. The following dashboard has two charts. The one on the left shows the CPU percentage of two VMs. The chart on the right shows the transactions in an Azure Storage account broken down by the Transaction API type.
- ![Screenshot of Grafana dashboards with two panels.](media/grafana-plugin/grafana6.png)
+ :::image type="content" source="media/grafana-plugin/grafana6.png" lightbox="media/grafana-plugin/grafana6.png" alt-text="Screenshot of Grafana dashboards with two panels.":::
## Pin charts from the Azure portal to Azure Managed Grafana In addition to building your panels in Grafana, you can also quickly pin Azure Monitor visualizations from the Azure portal to new or existing Grafana dashboards by adding panels to your Grafana dashboard directly from Azure Monitor. Go to **Metrics** for your resource. Create a chart and select **Save to dashboard**, followed by **Pin to Grafana**. Choose the workspace and dashboard and select **Pin** to complete the operation.
-[![Screenshot that shows the Pin to Grafana option in the Azure Monitor metrics explorer.](media/grafana-plugin/grafana-pin-to.png)](media/grafana-plugin/grafana-pin-to-expanded.png#lightbox)
## Advanced Grafana features
Usage
You can configure a variable that will list all available **Solution** values and then update your query to use it. To create a new variable, select the dashboard's **Settings** button in the top right area, select **Variables**, and then select **New**. On the variable page, define the data source and query to run to get the list of values.
-![Screenshot that shows a Grafana configure variable.](./media/grafana-plugin/grafana-configure-variable-dark.png)
After it's created, adjust the query to use the selected values, and your charts will respond accordingly:
Usage
| sort by TimeGenerated ```
-![Screenshot that shows Grafana use variables.](./media/grafana-plugin/grafana-use-variables-dark.png)
### Create dashboard playlists One of the many useful features of Grafana is the dashboard playlist. You can create multiple dashboards and add them to a playlist configuring an interval for each dashboard to show. Select **Play** to see the dashboards cycle through. You might want to display them on a large wall monitor to provide a status board for your group.
-![Screenshot that shows a Grafana playlist example.](./media/grafana-plugin/grafana7.png)
## Optional: Monitor other datasources in the same Grafana dashboards There are many data source plug-ins that you can use to bring these metrics together in a dashboard.
Here are good reference articles on how to use Telegraf, InfluxDB, Azure Monitor
Here's an image of a full Grafana dashboard that has metrics from Azure Monitor and Application Insights.
-![Screenshot that shows a Grafana dashboard with multiple panels.](media/grafana-plugin/grafana8.png)
## Clean up resources If you've set up a Grafana environment on Azure, you're charged when resources are running whether you're using them or not. To avoid incurring additional charges, clean up the resource group created in this article.
azure-monitor Tutorial Logs Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/tutorial-logs-dashboards.md
description: This tutorial helps you understand how Log Analytics dashboards can
Previously updated : 05/28/2020 Last updated : 11/07/2023
Sign in to the [Azure portal](https://portal.azure.com).
## Create a shared dashboard Select **Dashboard** to open your default [dashboard](../../azure-portal/azure-portal-dashboards.md). Your dashboard will look different from the following example.-
-![Screenshot that shows an Azure portal dashboard.](media/tutorial-logs-dashboards/log-analytics-portal-dashboard.png)
+<!-- convertborder later -->
Here you can bring together operational data that's most important to IT across all your Azure resources, including telemetry from Azure Log Analytics. Before we visualize a log query, let's first create a dashboard and share it. We can then focus on our example performance log query, which will render as a line chart, and add it to the dashboard.
Here you can bring together operational data that's most important to IT across
> - `timechart` To create a dashboard, select **New dashboard**.
+<!-- convertborder later -->
-![Screenshot that shows creating a new dashboard in the Azure portal.](media/tutorial-logs-dashboards/log-analytics-create-dashboard-01.png)
-
-This action creates a new, empty, private dashboard. It opens in a customization mode where you can name your dashboard and add or rearrange tiles. Edit the name of the dashboard and specify **Sample Dashboard** for this tutorial. Then select **Done customizing**.<br><br> ![Screenshot that shows saving a customized Azure dashboard.](media/tutorial-logs-dashboards/log-analytics-create-dashboard-02.png)
+This action creates a new, empty, private dashboard. It opens in a customization mode where you can name your dashboard and add or rearrange tiles. Edit the name of the dashboard and specify **Sample Dashboard** for this tutorial. Then select **Done customizing**.<br><br> <!-- convertborder later -->:::image type="content" source="media/tutorial-logs-dashboards/log-analytics-create-dashboard-02.png" lightbox="media/tutorial-logs-dashboards/log-analytics-create-dashboard-02.png" alt-text="Screenshot that shows saving a customized Azure dashboard." border="false":::
When you create a dashboard, it's private by default, so you're the only person who can see it. To make it visible to others, select **Share**.-
-![Screenshot that shows sharing a new dashboard in the Azure portal.](media/tutorial-logs-dashboards/log-analytics-share-dashboard.png)
+<!-- convertborder later -->
Choose a subscription and resource group for your dashboard to be published to. For convenience, you're guided toward a pattern where you place dashboards in a resource group called **dashboards**. Verify the subscription selected and then select **Publish**. Access to the information displayed in the dashboard is controlled with [Azure role-based access control](../../role-based-access-control/role-assignments-portal.md).
Choose a subscription and resource group for your dashboard to be published to.
In this tutorial, you'll use Log Analytics to create a performance view in graphical form and save it for a future query. Then you'll pin it to the shared dashboard you created earlier. Open Log Analytics by selecting **Logs** on the Azure Monitor menu. It starts with a new blank query.-
-![Screenshot that shows the home page.](media/tutorial-logs-dashboards/homepage.png)
+<!-- convertborder later -->
Enter the following query to return processor utilization records for both Windows and Linux computers. The records are grouped by `Computer` and `TimeGenerated` and displayed in a visual chart. Select **Run** to run the query and view the resulting chart.
Perf
Save the query by selecting **Save**. In the **Save Query** control panel, provide a name such as **Azure VMs - Processor Utilization** and a category such as **Dashboards**. Select **Save**. This way you can create a library of common queries that you can use and modify. Finally, pin this query to the shared dashboard you created earlier. Select the **Pin to dashboard** button in the upper-right corner of the page and then select the dashboard name. Now that we have a query pinned to the dashboard, you'll notice that it has a generic title and comment underneath it. Rename the query with a meaningful name that can be easily understood by anyone who views it. Select **Edit** to customize the title and subtitle for the tile, and then select **Update**. A banner appears that asks you to publish changes or discard. Select **Save a copy**. ## Next steps In this tutorial, you learned how to create a dashboard in the Azure portal and add a log query to it. Follow this link to see prebuilt Log Analytics script samples.
azure-monitor Workbooks Automate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-automate.md
Two types of workbook resources can be managed programmatically:
1. Switch the workbook to edit mode by selecting **Edit**. 1. Open the **Advanced Editor** by using the **</>** button on the toolbar. 1. Ensure you're on the **Gallery Template** tab.-
- ![Screenshot that shows the Gallery Template tab.](./media/workbooks-automate/gallery-template.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-automate/gallery-template.png" lightbox="./media/workbooks-automate/gallery-template.png" alt-text="Screenshot that shows the Gallery Template tab." border="false":::
1. Copy the JSON in the gallery template to the clipboard. 1. The following sample ARM template deploys a workbook template to the Azure Monitor workbook gallery. Paste the JSON you copied in place of `<PASTE-COPIED-WORKBOOK_TEMPLATE_HERE>`. For a reference ARM template that creates a workbook template, see [this GitHub repository](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Documentation/ARM-template-for-creating-workbook-template).
Two types of workbook resources can be managed programmatically:
1. Open the **Advanced Editor** by selecting **</>**. 1. In the editor, switch **Template Type** to **ARM template**. 1. The ARM template for creating shows up in the editor. Copy the content and use as-is or merge it with a larger template that also deploys the target resource.-
- ![Screenshot that shows how to get the ARM template from within the workbook UI.](./media/workbooks-automate/programmatic-template.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-automate/programmatic-template.png" lightbox="./media/workbooks-automate/programmatic-template.png" alt-text="Screenshot that shows how to get the ARM template from within the workbook UI." border="false":::
## Sample ARM template

This template shows how to deploy a workbook that displays `Hello World!`.
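The template body isn't included in this excerpt; a minimal sketch of a workbook resource that renders `Hello World!` might look like the following. The `apiVersion`, `name` expression, and `sourceId` are illustrative:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "microsoft.insights/workbooks",
      "apiVersion": "2021-08-08",
      "name": "[guid(resourceGroup().id, 'hello-world')]",
      "location": "[resourceGroup().location]",
      "kind": "shared",
      "properties": {
        "displayName": "Hello World workbook",
        "serializedData": "{\"version\":\"Notebook/1.0\",\"items\":[{\"type\":1,\"content\":{\"json\":\"Hello World!\"}}]}",
        "category": "workbook",
        "sourceId": "azure monitor"
      }
    }
  ]
}
```

Workbook resource names must be GUIDs, which is why the `name` uses a `guid()` expression rather than a friendly string; `displayName` carries the human-readable title.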
azure-monitor Workbooks Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-bring-your-own-storage.md
There are times when you might have a query or some business logic that you want
## Save a workbook with managed identities

1. Before you can save the workbook to your storage, you'll need to create a managed identity by selecting **All Services** > **Managed Identities**. Then give it **Storage Blob Data Contributor** access to your storage account. For more information, see [Azure documentation on managed identities](../../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).-
- [![Screenshot that shows adding a role assignment.](./media/workbooks-bring-your-own-storage/add-identity-role-assignment.png)](./media/workbooks-bring-your-own-storage/add-identity-role-assignment.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-bring-your-own-storage/add-identity-role-assignment.png" lightbox="./media/workbooks-bring-your-own-storage/add-identity-role-assignment.png" alt-text="Screenshot that shows adding a role assignment." border="false":::
1. Create a new workbook. 1. Select **Save** to save the workbook. 1. Select the **Save content to an Azure Storage Account** checkbox to save to an Azure Storage account.-
- ![Screenshot that shows the Save dialog.](./media/workbooks-bring-your-own-storage/saved-dialog-default.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-bring-your-own-storage/saved-dialog-default.png" lightbox="./media/workbooks-bring-your-own-storage/saved-dialog-default.png" alt-text="Screenshot that shows the Save dialog." border="false":::
1. Select the storage account and container you want. The **Storage account** list is from the subscription selected previously.-
- ![Screenshot that shows the Save dialog with a storage option.](./media/workbooks-bring-your-own-storage/save-dialog-with-storage.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-bring-your-own-storage/save-dialog-with-storage.png" lightbox="./media/workbooks-bring-your-own-storage/save-dialog-with-storage.png" alt-text="Screenshot that shows the Save dialog with a storage option." border="false":::
1. Select **(change)** to select a managed identity previously created.-
- [![Screenshot that shows the Change identity dialog.](./media/workbooks-bring-your-own-storage/change-managed-identity.png)](./media/workbooks-bring-your-own-storage/change-managed-identity.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-bring-your-own-storage/change-managed-identity.png" lightbox="./media/workbooks-bring-your-own-storage/change-managed-identity.png" alt-text="Screenshot that shows the Change identity dialog." border="false":::
1. After you've selected your storage options, select **Save** to save your workbook.
There are times when you might have a query or some business logic that you want
- After a workbook has been saved to custom storage, it will always be saved to custom storage, and this feature can't be turned off. To save elsewhere, you can use **Save As** and elect not to save the copy to custom storage.
- Workbooks in an Application Insights resource are "legacy" workbooks and don't support custom storage. The latest feature for workbooks in an Application Insights resource is the **More** selection. Legacy workbooks don't have **Subscription** options when you save them.
- ![Screenshot that shows a legacy workbook.](./media/workbooks-bring-your-own-storage/legacy-workbooks.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-bring-your-own-storage/legacy-workbooks.png" lightbox="./media/workbooks-bring-your-own-storage/legacy-workbooks.png" alt-text="Screenshot that shows a legacy workbook." border="false":::
## Next steps
azure-monitor Workbooks Chart Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
The following example shows the trend of requests to an app over the previous da
1. Use the query editor to enter the [KQL](/azure/kusto/query/) for your analysis. An example is trend of requests. 1. Set **Visualization** to **Area**, **Bar**, **Bar (categorical)**, **Line**, **Pie**, **Scatter**, or **Time**. 1. Set other parameters like time range, visualization, size, color palette, and legend, if needed.-
-[![Screenshot that shows a log chart in edit mode.](./media/workbooks-chart-visualizations/log-chart.png)](./media/workbooks-chart-visualizations/log-chart.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/log-chart.png" lightbox="./media/workbooks-chart-visualizations/log-chart.png" alt-text="Screenshot that shows a log chart in edit mode." border="false":::
### Log chart parameters
The following query returns a table with two columns: `timestamp` and `Requests`
requests | summarize Requests = count() by bin(timestamp, 1h) ```-
-[![Screenshot that shows a simple time-series log line chart.](./media/workbooks-chart-visualizations/log-chart-line-simple.png)](./media/workbooks-chart-visualizations/log-chart-line-simple.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/log-chart-line-simple.png" lightbox="./media/workbooks-chart-visualizations/log-chart-line-simple.png" alt-text="Screenshot that shows a simple time-series log line chart." border="false":::
#### Time series with multiple metrics
The following query returns a table with three columns: `timestamp`, `Requests`,
requests | summarize Requests = count(), Users = dcount(user_Id) by bin(timestamp, 1h) ```-
-[![Screenshot that shows a time series with multiple metrics log line chart.](./media/workbooks-chart-visualizations/log-chart-line-multi-metric.png)](./media/workbooks-chart-visualizations/log-chart-line-multi-metric.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/log-chart-line-multi-metric.png" lightbox="./media/workbooks-chart-visualizations/log-chart-line-multi-metric.png" alt-text="Screenshot that shows a time series with multiple metrics log line chart." border="false":::
#### Segmented time series
The following query returns a table with three columns: `timestamp`, `Requests`,
requests | summarize Request = count() by bin(timestamp, 1h), RequestName = name ```-
-[![Screenshot that shows a segmented time-series log line chart.](./media/workbooks-chart-visualizations/log-chart-line-segmented.png)](./media/workbooks-chart-visualizations/log-chart-line-segmented.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/log-chart-line-segmented.png" lightbox="./media/workbooks-chart-visualizations/log-chart-line-segmented.png" alt-text="Screenshot that shows a segmented time-series log line chart." border="false":::
### Summarize vs. make-series
The following query shows a similar chart with the `summarize` operator:
requests | summarize Request = count() by bin(timestamp, 1h), RequestName = name ```-
-[![Screenshot that shows a log line chart made from a make-series query.](./media/workbooks-chart-visualizations/log-chart-line-make-series.png)](./media/workbooks-chart-visualizations/log-chart-line-make-series.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/log-chart-line-make-series.png" lightbox="./media/workbooks-chart-visualizations/log-chart-line-make-series.png" alt-text="Screenshot that shows a log line chart made from a make-series query." border="false":::
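The `make-series` query being compared isn't included in this excerpt; a sketch of what such a query could look like (the time range, step, and default value are illustrative):

```kusto
requests
| make-series Requests = count() default = 0 on timestamp from ago(1d) to now() step 1h by RequestName = name
```

Unlike `summarize`, `make-series` produces fixed, evenly spaced time bins and fills empty bins with the `default` value, so the rendered line chart shows explicit zeros instead of gaps for hours with no requests.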
### Categorical bar chart or histogram
requests
``` The query returns two columns: `Requests` metric and `Result` category. Each value of the `Result` column is represented by a bar in the chart with height proportional to the `Requests metric`.-
-[![Screenshot that shows a categorical bar chart for requests by result code.](./media/workbooks-chart-visualizations/log-chart-categorical-bar.png)](./media/workbooks-chart-visualizations/log-chart-categorical-bar.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/log-chart-categorical-bar.png" lightbox="./media/workbooks-chart-visualizations/log-chart-categorical-bar.png" alt-text="Screenshot that shows a categorical bar chart for requests by result code." border="false":::
### Pie charts
requests
``` The query returns two columns: `Requests` metric and `Result` category. Each value of the `Result` column gets its own slice in the pie with size proportional to the `Requests` metric.-
-[![Screenshot that shows a pie chart with slices representing result code.](./media/workbooks-chart-visualizations/log-chart-pie-chart.png)](./media/workbooks-chart-visualizations/log-chart-pie-chart.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/log-chart-pie-chart.png" lightbox="./media/workbooks-chart-visualizations/log-chart-pie-chart.png" alt-text="Screenshot that shows a pie chart with slices representing result code." border="false":::
## Metric charts
The following example shows the number of transactions in a storage account over
1. Use the **Add metric** link to add a metric control to the workbook. 1. Select a resource type, for example, **Storage account**. Select the resources to target, the metric namespace and name, and the aggregation to use. 1. Set other parameters like time range, split by, visualization, size, and color palette, if needed.-
-[![Screenshot that shows a metric chart in edit mode.](./media/workbooks-chart-visualizations/metric-chart.png)](./media/workbooks-chart-visualizations/metric-chart.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/metric-chart.png" lightbox="./media/workbooks-chart-visualizations/metric-chart.png" alt-text="Screenshot that shows a metric chart in edit mode." border="false":::
### Metric chart parameters
The following example shows the number of transactions in a storage account over
### Examples Transactions split by API name as a line chart:-
-[![Screenshot that shows a metric line chart for storage transactions split by API name.](./media/workbooks-chart-visualizations/metric-chart-storage-split-line.png)](./media/workbooks-chart-visualizations/metric-chart-storage-split-line.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/metric-chart-storage-split-line.png" lightbox="./media/workbooks-chart-visualizations/metric-chart-storage-split-line.png" alt-text="Screenshot that shows a metric line chart for storage transactions split by API name." border="false":::
Transactions split by response type as a large bar chart:-
-[![Screenshot that shows a large metric bar chart for storage transactions split by response type.](./media/workbooks-chart-visualizations/metric-chart-storage-bar-large.png)](./media/workbooks-chart-visualizations/metric-chart-storage-bar-large.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/metric-chart-storage-bar-large.png" lightbox="./media/workbooks-chart-visualizations/metric-chart-storage-bar-large.png" alt-text="Screenshot that shows a large metric bar chart for storage transactions split by response type." border="false":::
Average latency as a scatter chart:-
-[![Screenshot that shows a metric scatter chart for storage latency.](./media/workbooks-chart-visualizations/metric-chart-storage-scatter.png)](./media/workbooks-chart-visualizations/metric-chart-storage-scatter.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/metric-chart-storage-scatter.png" lightbox="./media/workbooks-chart-visualizations/metric-chart-storage-scatter.png" alt-text="Screenshot that shows a metric scatter chart for storage latency." border="false":::
## Chart settings
The **Settings** tab controls:
- **X-axis Settings**, **Y-axis Settings**: Includes which fields. You can use custom formatting to set the number formatting to the axis values and custom ranges. - **Grouping Settings**: Includes which field. Sets the limits before an "Others" group is created. - **Legend Settings**: Shows metrics like series name, colors, and numbers at the bottom, and a legend like series names and colors.-
-![Screenshot that shows chart settings.](./media/workbooks-chart-visualizations/chart-settings.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/chart-settings.png" lightbox="./media/workbooks-chart-visualizations/chart-settings.png" alt-text="Screenshot that shows chart settings." border="false":::
#### Custom formatting
Number formatting options are shown in this table.
| Maximum fractional digits | Maximum number of fractional digits to use. | | Minimum significant digits | Minimum number of significant digits to use (default 1). | | Maximum significant digits | Maximum number of significant digits to use. |-
-![Screenshot that shows x-axis settings.](./media/workbooks-chart-visualizations/number-format-settings.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/number-format-settings.png" lightbox="./media/workbooks-chart-visualizations/number-format-settings.png" alt-text="Screenshot that shows x-axis settings." border="false":::
### Series Settings tab
You can adjust the labels and colors shown for series in the chart with the **Se
- **Series name**: This field is used to match a series in the data and, if matched, the display label and color are displayed. - **Comment**: This field is useful for template authors because this comment might be used by translators to localize the display labels.-
-![Screenshot that shows series settings.](./media/workbooks-chart-visualizations/series-settings.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-chart-visualizations/series-settings.png" lightbox="./media/workbooks-chart-visualizations/series-settings.png" alt-text="Screenshot that shows series settings." border="false":::
## Next steps
azure-monitor Workbooks Composite Bar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-composite-bar.md
Last updated 06/21/2023
With Azure Workbooks, data can be rendered by using the composite bar. This bar is made up of multiple bars. The following image shows the composite bar for database status. It shows how many servers are online, offline, and in a recovering state.-
-![Screenshot that shows the composite bar for database status.](./media/workbooks-composite-bar/database-status.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/database-status.png" lightbox="./media/workbooks-composite-bar/database-status.png" alt-text="Screenshot that shows the composite bar for database status." border="false":::
The composite bar renderer is supported for grid, tile, and graph visualizations.
The composite bar renderer is supported for grid, tile, and graph visualizations
1. Select **Apply**. The composite bar settings will look like the following screenshot:-
-![Screenshot that shows composite bar column settings with the preceding settings.](./media/workbooks-composite-bar/composite-bar-settings.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/composite-bar-settings.png" lightbox="./media/workbooks-composite-bar/composite-bar-settings.png" alt-text="Screenshot that shows composite bar column settings with the preceding settings." border="false":::
The composite bar with the preceding settings:-
-![Screenshot that shows the composite bar.](./media/workbooks-composite-bar/composite-bar.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/composite-bar.png" lightbox="./media/workbooks-composite-bar/composite-bar.png" alt-text="Screenshot that shows the composite bar." border="false":::
## Composite bar settings
To add Group By settings:
1. In column settings, go to the column you want to add settings to. 1. In **Tree/Group By Settings**, under **Tree type**, select **Group By**. 1. Select the field you want to group by.-
- ![Screenshot that shows Group By settings.](./media/workbooks-composite-bar/group-by-settings.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-composite-bar/group-by-settings.png" lightbox="./media/workbooks-composite-bar/group-by-settings.png" alt-text="Screenshot that shows Group By settings." border="false":::
#### None

The setting of **None** for aggregation means that no results are displayed for that column for the group rows.-
-![Screenshot that shows the composite bar with the None setting for aggregation.](./media/workbooks-composite-bar/none.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/none.png" lightbox="./media/workbooks-composite-bar/none.png" alt-text="Screenshot that shows the composite bar with the None setting for aggregation." border="false":::
#### Sum

If aggregation is set as **Sum**, the column in the group row shows the composite bar by using the sum of the columns used to render it. The label will also use the sum of the columns referred to in it. In the following example, **online**, **offline**, and **recovering** all have max aggregation set to them and the aggregation for the total column is **Sum**.-
-![Screenshot that shows the composite bar with the Sum setting for aggregation.](./media/workbooks-composite-bar/sum.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/sum.png" lightbox="./media/workbooks-composite-bar/sum.png" alt-text="Screenshot that shows the composite bar with the Sum setting for aggregation." border="false":::
#### Inherit

If aggregation is set as **Inherit**, the column in the group row shows the composite bar by using the aggregation set by users for the columns used to render it. The columns used in **Label** also use the aggregation set by the user. If the current column renderer is **Composite Bar** and is referred to in the label (like **total** in the preceding example), then **Sum** is used as the aggregation for that column. In the following example, **online**, **offline**, and **recovering** all have max aggregation set to them and the aggregation for the total column is **Inherit**.-
-![Screenshot that shows the composite bar with the inherit setting for aggregation.](./media/workbooks-composite-bar/inherit.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/inherit.png" lightbox="./media/workbooks-composite-bar/inherit.png" alt-text="Screenshot that shows the composite bar with the inherit setting for aggregation." border="false":::
## Sorting
To make a composite bar renderer for a tile visualization:
1. Select **Apply**. Composite bar settings for tiles:-
-![Screenshot that shows composite bar tile settings with the preceding settings.](./media/workbooks-composite-bar/tiles-settings.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/tiles-settings.png" lightbox="./media/workbooks-composite-bar/tiles-settings.png" alt-text="Screenshot that shows composite bar tile settings with the preceding settings." border="false":::
The composite bar view for tiles with the preceding settings will look like this example:-
-![Screenshot that shows composite bar tiles.](./media/workbooks-composite-bar/composite-bar-tiles.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/composite-bar-tiles.png" lightbox="./media/workbooks-composite-bar/composite-bar-tiles.png" alt-text="Screenshot that shows composite bar tiles." border="false":::
## Graph visualizations
To make a composite bar renderer for a graph visualization (type Hive Clusters):
1. Select **Apply**. Composite bar settings for graphs:-
-![Screenshot that shows composite bar graph settings with the preceding settings.](./media/workbooks-composite-bar/graphs-settings.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/graphs-settings.png" lightbox="./media/workbooks-composite-bar/graphs-settings.png" alt-text="Screenshot that shows composite bar graph settings with the preceding settings." border="false":::
The composite bar view for a graph with the preceding settings will look like this example:-
-![Screenshot that shows composite bar graphs with hive clusters.](./media/workbooks-composite-bar/composite-bar-graphs.png)
+<!-- convertborder later -->
+:::image type="content" source="./media/workbooks-composite-bar/composite-bar-graphs.png" lightbox="./media/workbooks-composite-bar/composite-bar-graphs.png" alt-text="Screenshot that shows composite bar graphs with hive clusters." border="false":::
## Next steps
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
To create a new Azure workbook:
## Add text

Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc. -
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
+
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example.png" lightbox="media/workbooks-create-workbook/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
Text is added through a markdown control into which an author can add their content. An author can use the full formatting capabilities of markdown. These include different heading and font styles, hyperlinks, tables, etc. Markdown allows authors to create rich Word- or Portal-like reports or analytic narratives. Text can contain parameter values in the markdown text, and those parameter references will be updated as the parameters change. **Edit mode**:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode.png" alt-text="Screenshot showing adding text to a workbook in edit mode.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode.png" lightbox="media/workbooks-create-workbook/workbooks-text-control-edit-mode.png" alt-text="Screenshot showing adding text to a workbook in edit mode." border="false":::
**Preview mode**:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" lightbox="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode." border="false":::
To add text to an Azure workbook:
You can also choose a text parameter as the source of the style. The parameter v
### Text style examples

**Info style example**:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" lightbox="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style." border="false":::
**Warning style example**:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
+
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" lightbox="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
## Add queries
To add a parameter to an Azure Workbook:
- Required: 1. Select **Done editing**.-
- :::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing the creation of a time range parameter.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" lightbox="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing the creation of a time range parameter." border="false":::
## Add metric charts

Most Azure resources emit metric data about state and health such as CPU utilization, storage availability, count of database transactions, failing app requests, etc. Using workbooks, you can create visualizations of the metric data as time-series charts. The example below shows the number of transactions in a storage account over the prior hour. This allows the storage owner to see the transaction trend and look for anomalies in behavior. -
- :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" alt-text="Screenshot showing a metric area chart for storage transactions in a workbook.":::
+
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" lightbox="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" alt-text="Screenshot showing a metric area chart for storage transactions in a workbook.":::
To add a metric chart to an Azure Workbook:
To add a metric chart to an Azure Workbook:
1. Select **Done Editing**. This is a metric chart in edit mode:-
+<!-- convertborder later -->
### Metric chart parameters
This is a metric chart in edit mode:
### Metric chart examples **Transactions split by API name as a line chart**-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-split-line.png" alt-text="Screenshot showing a metric line chart for Storage transactions split by API name.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-split-line.png" lightbox="media/workbooks-create-workbook/workbooks-metric-chart-storage-split-line.png" alt-text="Screenshot showing a metric line chart for Storage transactions split by API name." border="false":::
**Transactions split by response type as a large bar chart**-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-bar-large.png" alt-text="Screenshot showing a large metric bar chart for Storage transactions split by response type.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-bar-large.png" lightbox="media/workbooks-create-workbook/workbooks-metric-chart-storage-bar-large.png" alt-text="Screenshot showing a large metric bar chart for Storage transactions split by response type." border="false":::
**Average latency as a scatter chart**-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" alt-text="Screenshot showing a metric scatter chart for storage latency.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" lightbox="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" alt-text="Screenshot showing a metric scatter chart for storage latency." border="false":::
## Add links

You can use links to create links to other views, workbooks, other items inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs. -
- :::image type="content" source="media/workbooks-create-workbook/workbooks-empty-links.png" alt-text="Screenshot of adding a link to a workbook.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-empty-links.png" lightbox="media/workbooks-create-workbook/workbooks-empty-links.png" alt-text="Screenshot of adding a link to a workbook." border="false":::
Watch this video to learn how to use tabs, groups, and contextual links in Azure Workbooks: > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE59YTe]
Links can use all of the link actions available in [link actions](workbooks-link
### Tabs

Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create two tabs, where selecting either tab will set a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-creating-tabs.png" alt-text="Screenshot of creating tabs in workbooks.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-creating-tabs.png" lightbox="media/workbooks-create-workbook/workbooks-creating-tabs.png" alt-text="Screenshot of creating tabs in workbooks." border="false":::
You can then add other items in the workbook that are conditionally visible if the **selectedTab** parameter value is "1" by using the advanced settings:-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab.png" alt-text="Screenshot of conditionally visible tab in workbooks.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab.png" lightbox="media/workbooks-create-workbook/workbooks-selected-tab.png" alt-text="Screenshot of conditionally visible tab in workbooks." border="false":::
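In the workbook's exported JSON, the advanced settings above correspond to a `conditionalVisibility` block on the item itself. A minimal sketch, assuming the standard workbook JSON schema; the item content is illustrative:

```json
{
  "type": 1,
  "content": {
    "json": "## Content shown only when tab 1 is selected"
  },
  "conditionalVisibility": {
    "parameterName": "selectedTab",
    "comparison": "isEqualTo",
    "value": "1"
  }
}
```

The item is rendered only while the comparison holds, which is what makes the tab switching in this example feel like separate pages.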
The first tab is selected by default, initially setting **selectedTab** to 1, and making that step visible. Selecting the second tab will change the value of the parameter to "2", and different content will be displayed:-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab2.png" alt-text="Screenshot of workbooks with content displayed when selected tab is 2.":::
+
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab2.png" lightbox="media/workbooks-create-workbook/workbooks-selected-tab2.png" alt-text="Screenshot of workbooks with content displayed when selected tab is 2.":::
A sample workbook with the above tabs is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-links).
Use the Toolbar style to have your links appear styled as a toolbar. In toolbar
- Button text, the text to display on the toolbar. Parameters may be used in this field. - Icon, the icon to display in the toolbar. - Tooltip Text, text to be displayed on the toolbar button's tooltip text. Parameters may be used in this field.-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-links-create-toolbar.png" alt-text="Screenshot of creating links styled as a toolbar in workbooks.":::
+<!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-links-create-toolbar.png" lightbox="media/workbooks-create-workbook/workbooks-links-create-toolbar.png" alt-text="Screenshot of creating links styled as a toolbar in workbooks." border="false":::
If any required parameters are used in button text, tooltip text, or value fields, and the required parameter is unset, the toolbar button will be disabled. For example, this can be used to disable toolbar buttons when no value is selected in another parameter/control.
To add a group to your workbook:
1. Make sure you are in **Edit** mode by selecting the **Edit** in the toolbar. Add a parameter by doing either of these steps: - Select **Add**, and **Add group** below an existing element, or at the bottom of the workbook. - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add group**.-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-add-group.png" alt-text="Screenshot showing selecting adding a group to a workbook. ":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-add-group.png" lightbox="media/workbooks-create-workbook/workbooks-add-group.png" alt-text="Screenshot showing selecting adding a group to a workbook. " border="false":::
1. Select items for your group. 1. Select **Done editing.** This is a group in read mode with two items inside: a text item and a query item.
-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-view.png" alt-text="Screenshot showing a group in read mode in a workbook.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-view.png" lightbox="media/workbooks-create-workbook/workbooks-groups-view.png" alt-text="Screenshot showing a group in read mode in a workbook." border="false":::
In edit mode, you can see those two items are actually inside a group item. In the screenshot below, the group is in edit mode. The group contains two items inside the dashed area. Each item can be in edit or read mode, independent of each other. For example, the text step is in edit mode while the query step is in read mode.-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-edit.png" alt-text="Screenshot of a group in edit mode in a workbook.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-edit.png" lightbox="media/workbooks-create-workbook/workbooks-groups-edit.png" alt-text="Screenshot of a group in edit mode in a workbook." border="false":::
### Scoping a group
For groups created from a template, the content of the template isn't retrieved
In this mode, a button is displayed where the group would be, and no content is retrieved or created until the user explicitly clicks the button to load the content. This is useful in scenarios where the content might be expensive to compute or rarely used. The author can specify the text to appear on the button. This screenshot shows explicit load settings with a configured "Load more" button.-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded.png" alt-text="Screenshot of explicit load settings for a group in workbooks.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded.png" lightbox="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded.png" alt-text="Screenshot of explicit load settings for a group in workbooks." border="false":::
This is the group before being loaded in the workbook:-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-before.png" alt-text="Screenshot showing an explicit group before being loaded in the workbook.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-before.png" lightbox="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-before.png" alt-text="Screenshot showing an explicit group before being loaded in the workbook." border="false":::
The group after being loaded in the workbook:-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-after.png" alt-text="Screenshot showing an explicit group after being loaded in the workbook.":::
+
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-after.png" lightbox="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-after.png" alt-text="Screenshot showing an explicit group after being loaded in the workbook.":::
#### Always mode
When a template is loaded into a group, the workbook attempts to merge any param
#### Example 1: All parameters have identical names

Suppose you have a template that has two parameters at the top, a time range parameter and a text parameter named "**Filter**":-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-top-level-params.png" alt-text="Screenshot showing top level parameters in a workbook.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-top-level-params.png" lightbox="media/workbooks-create-workbook/workbooks-groups-top-level-params.png" alt-text="Screenshot showing top level parameters in a workbook." border="false":::
Then a group item loads a second template that has its own two parameters and a text step, where the parameters are named the same:-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-merged-away.png" alt-text="Screenshot of a workbook template with top level parameters.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-merged-away.png" lightbox="media/workbooks-create-workbook/workbooks-groups-merged-away.png" alt-text="Screenshot of a workbook template with top level parameters." border="false":::
When the second template is loaded into the group, the duplicate parameters are merged out. Since all of the parameters are merged away, the inner parameters step is also merged out, resulting in the group containing only the text step. #### Example 2: One parameter has an identical name Suppose you have a template that has two parameters at the top, a **time range** parameter and a text parameter named "**FilterB**":-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away.png" alt-text="Screenshot of a group item with the result of parameters merged away.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away.png" lightbox="media/workbooks-create-workbook/workbooks-groups-wont-merge-away.png" alt-text="Screenshot of a group item with the result of parameters merged away." border="false":::
When the group's item's template is loaded, the **TimeRange** parameter is merged out of the group. The workbook contains the initial parameters step with **TimeRange** and **Filter**, and the group's parameter only includes **FilterB**.-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away-result.png" alt-text="Screenshot of workbook group where parameters won't merge away.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away-result.png" lightbox="media/workbooks-create-workbook/workbooks-groups-wont-merge-away-result.png" alt-text="Screenshot of workbook group where parameters won't merge away." border="false":::
If the loaded template had contained **TimeRange** and **Filter** (instead of **FilterB**), then the resulting workbook would have a parameters step and a group with only the text step remaining.
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md
With workbooks, you can query logs from the following sources:
* Resource-centric data (activity logs) You can use Kusto query language (KQL) queries that transform the underlying resource data to select a result set that can be visualized as text, charts, or grids.-
-![Screenshot that shows a workbook logs report interface.](./media/workbooks-data-sources/logs.png)
+<!-- convertborder later -->
You can easily query across multiple resources to create a unified rich reporting experience.
Tutorial: [Making resource centric log queries in workbooks](workbooks-create-wo
## Metrics Azure resources emit [metrics](../essentials/data-platform-metrics.md) that can be accessed via workbooks. Metrics can be accessed in workbooks through a specialized control that allows you to specify the target resources, the desired metrics, and their aggregation. You can then plot this data in charts or grids.-
-![Screenshot that shows workbook metrics charts of CPU utilization.](./media/workbooks-data-sources/metrics-graph.png)
-
-![Screenshot that shows a workbook metrics interface.](./media/workbooks-data-sources/metrics.png)
+<!-- convertborder later -->
+<!-- convertborder later -->
## Azure Resource Graph Workbooks support querying for resources and their metadata by using Azure Resource Graph. This functionality is primarily used to build custom query scopes for reports. The resource scope is expressed via a KQL subset that Resource Graph supports, which is often sufficient for common use cases. To make a query control that uses this data source, use the **Query type** dropdown and select **Azure Resource Graph**. Then select the subscriptions to target. Use **Query control** to add the Resource Graph KQL subset that selects an interesting resource subset.-
-![Screenshot that shows an Azure Resource Graph KQL query.](./media/workbooks-data-sources/azure-resource-graph.png)
+<!-- convertborder later -->
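A sketch of the kind of Resource Graph KQL subset such a query control might use — the resource type and projected column names here are illustrative, not taken from the original article:

```kusto
// Hypothetical example: select virtual machines as a custom resource scope
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| project value = id, label = name, group = resourceGroup
```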
## Azure Resource Manager
To make a query control that uses this data source, use the **Data source** drop
Workbooks now have support for querying from [Azure Data Explorer](/azure/data-explorer/) clusters with the powerful [Kusto](/azure/kusto/query/index) query language. For the **Cluster Name** field, add the region name following the cluster name. An example is *mycluster.westeurope*.-
-![Screenshot that shows Kusto query window.](./media/workbooks-data-sources/data-explorer.png)
+<!-- convertborder later -->
See also: [Azure Data Explorer query best practices](/azure/data-explorer/kusto/query/best-practices)
With workbooks, you can query different data sources. Workbooks also provide sim
### Combine alerting data with Log Analytics VM performance data The following example combines alerting data with Log Analytics VM performance data to get a rich insights grid.-
-![Screenshot that shows a workbook with a merge control that combines alert and Log Analytics data.](./media/workbooks-data-sources/merge-control.png)
+<!-- convertborder later -->
### Use merge control to combine Resource Graph and Log Analytics data
This provider supports [JSONPath](workbooks-jsonpath.md).
Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and which components alert you when an unhealthy condition is detected. With workbooks, you can use this information to create rich interactive reports. To make a query control that uses this data source, use the **Query type** dropdown to select **Workload Health**. Then select subscription, resource group, or VM resources to target. Use the health filter dropdowns to select an interesting subset of health incidents for your analytic needs.-
-![Screenshot that shows an alerts query.](./media/workbooks-data-sources/workload-health.png)
+<!-- convertborder later -->
## Azure resource health Workbooks support getting Azure resource health and combining it with other data sources to create rich, interactive health reports. To make a query control that uses this data source, use the **Query type** dropdown and select **Azure health**. Then select the resources to target. Use the health filter dropdowns to select an interesting subset of resource issues for your analytic needs.-
-![Screenshot that shows an alerts query that shows the health filter lists.](./media/workbooks-data-sources/resource-health.png)
+<!-- convertborder later -->
## Azure RBAC
Simple JSON arrays or objects will automatically be converted into grid rows and
## Change Analysis (preview) To make a query control that uses [Application Change Analysis](../app/change-analysis.md) as the data source, use the **Data source** dropdown and select **Change Analysis (preview)**. Then select a single resource. Changes for up to the last 14 days can be shown. Use the **Level** dropdown to filter between **Important**, **Normal**, and **Noisy** changes. This dropdown supports workbook parameters of the type [drop down](workbooks-dropdowns.md).-
+<!-- convertborder later -->
> [!div class="mx-imgBorder"]
-> ![A screenshot that shows a workbook with Change Analysis.](./media/workbooks-data-sources/change-analysis-data-source.png)
+> :::image type="content" source="./media/workbooks-data-sources/change-analysis-data-source.png" lightbox="./media/workbooks-data-sources/change-analysis-data-source.png" alt-text="A screenshot that shows a workbook with Change Analysis." border="false":::
## Prometheus (preview) With [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), you can collect Prometheus metrics for your Kubernetes clusters. To query Prometheus metrics, select **Prometheus** from the data source dropdown, then select the [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) where the metrics are stored and the [Prometheus query type](https://prometheus.io/docs/prometheus/latest/querying/api/) for the PromQL query.
-![Screenshot that shows sample PromQL query.](./media/workbooks-data-sources/prometheus-query.png)
+
+<!-- convertborder later; border-bottom is missing, so applying the Learn formatting border -->
> [!NOTE] > Querying from an Azure Monitor workspace is a data plane action and requires an explicit role assignment of Monitoring Data Reader, which is not assigned by default.
azure-monitor Workbooks Dropdowns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-dropdowns.md
The easiest way to specify a dropdown parameter is by providing a static list in
1. Select **Update**. 1. Select **Save** to create the parameter. 1. The **Environment** parameter is a dropdown list with the three values.-
- ![Screenshot that shows the creation of a static dropdown parameter.](./media/workbooks-dropdowns/dropdown-create.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-dropdowns/dropdown-create.png" lightbox="./media/workbooks-dropdowns/dropdown-create.png" alt-text="Screenshot that shows the creation of a static dropdown parameter." border="false":::
## Create a static dropdown list with groups of items
If your query result/JSON contains a `group` field, the dropdown list displays g
{ "value":"prod2", "label":"Prod 2", "group":"Production" } ] ```-
-![Screenshot that shows an example of a grouped dropdown list.](./media/workbooks-dropdowns/grouped-dropDown.png)
+<!-- convertborder later -->
## Create a dynamic dropdown parameter
If your query result/JSON contains a `group` field, the dropdown list displays g
1. Select **Run Query**. 1. Select **Save** to create the parameter. 1. The **RequestName** parameter is a dropdown list with the names of all requests in the app.-
- ![Screenshot that shows the creation of a dynamic dropdown parameter.](./media/workbooks-dropdowns/dropdown-dynamic.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-dropdowns/dropdown-dynamic.png" lightbox="./media/workbooks-dropdowns/dropdown-dynamic.png" alt-text="Screenshot that shows the creation of a dynamic dropdown parameter." border="false":::
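A query along these lines could back such a dynamic dropdown — a sketch against the Application Insights `requests` table, returning one row per distinct request name:

```kusto
// Each distinct request name becomes one dropdown item
requests
| summarize by name
| order by name asc
```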
## Reference a dropdown parameter
You can reference dropdown parameters.
``` 1. Run the query to see the results. Optionally, render it as a chart.-
- ![Screenshot that shows a dropdown parameter referenced in KQL.](./media/workbooks-dropdowns/dropdown-reference.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/workbooks-dropdowns/dropdown-reference.png" lightbox="./media/workbooks-dropdowns/dropdown-reference.png" alt-text="Screenshot that shows a dropdown parameter referenced in KQL." border="false":::
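For illustration, a KQL query that references the parameter might look like the following sketch; `{RequestName}` uses the standard workbook parameter-binding syntax and is replaced with the selected value at run time:

```kusto
// '{RequestName}' is substituted with the dropdown selection before the query runs
requests
| where name == '{RequestName}'
| summarize Requests = count() by bin(timestamp, 1h)
```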
## Parameter value, label, selection, and group
dependencies
| serialize Rank = row_number() | project value = name, label = strcat('🌐 ', name), selected = iff(Rank == 1, true, false), group = operation_Name ```-
-![Screenshot that shows a dropdown parameter using value, label, selection, and group options.](./media/workbooks-dropdowns/dropdown-more-options.png)
+<!-- convertborder later -->
## Dropdown parameter options
dependencies
``` This example shows the multi-select dropdown parameter at work:-
-![Screenshot that shows a multi-select dropdown parameter.](./media/workbooks-dropdowns/dropdown-multiselect.png)
+<!-- convertborder later -->
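With multi-select enabled, the parameter expands to a comma-separated list of the selected values, so queries typically filter with `in` rather than `==`. A sketch, with the parameter name assumed for illustration:

```kusto
// {DependencyName} expands to a quoted, comma-separated list when multi-select is on
dependencies
| where name in ({DependencyName})
| summarize Count = count() by name
```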
## Dropdown special selections
azure-monitor Workbooks Graph Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-graph-visualizations.md
Last updated 06/21/2023
Azure Workbooks graph visualizations support visualizing arbitrary graphs based on data from logs to show the relationships between monitoring entities. The following graph shows data flowing in and out of a computer via various ports to and from external computers. It's colored by type, for example, computer vs. port vs. external IP. The edge sizes correspond to the amount of data flowing in between. The underlying data comes from KQL query targeting VM connections.-
-[![Screenshot that shows a tile summary view.](./media/workbooks-graph-visualizations/graph.png)](./media/workbooks-graph-visualizations/graph.png#lightbox)
+<!-- convertborder later -->
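A query in that spirit, sketched against the VM insights `VMConnection` table (the aggregation shown here is an assumption about the kind of query behind the graph, not the exact one used for the screenshot):

```kusto
// Aggregate traffic per connection so edge sizes can reflect bytes transferred
VMConnection
| summarize BytesTotal = sum(BytesSent + BytesReceived) by SourceIp, DestinationIp, DestinationPort
```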
Watch this video to learn how to create graphs and use links in Azure Workbooks. > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5ah5O]
Watch this video to learn how to create graphs and use links in Azure Workbooks.
* **Node Color Field**: `Kind` * **Color palette**: `Pastel` 1. Select **Save and Close** at the bottom of the pane.-
-[![Screenshot that shows a tile summary view with the preceding query and settings.](./media/workbooks-graph-visualizations/graph-settings.png)](./media/workbooks-graph-visualizations/graph-settings.png#lightbox)
+<!-- convertborder later -->
## Graph settings
You can specify what content goes to the different parts of a node: top, left, c
* **Coloring Type**: `Field Based` * **Node Color Field**: `Color` 1. Select **Save and Close** at the bottom of the pane.-
-[![Screenshot that shows the creation of a graph visualization with field-based node coloring.](./media/workbooks-graph-visualizations/graph-field-based.png)](./media/workbooks-graph-visualizations/graph-field-based.png#lightbox)
+<!-- convertborder later -->
## Next steps
azure-monitor Vminsights Enable Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-portal.md
Last updated 09/28/2023
# Enable VM insights in the Azure portal This article describes how to enable VM insights using the Azure portal for Azure virtual machines, Azure Virtual Machine Scale Sets, and hybrid virtual machines connected with [Azure Arc](../../azure-arc/overview.md).
+> [!NOTE]
+> Azure portal no longer supports enabling VM insights using the legacy Log Analytics agent.
+ ## Prerequisites - [Log Analytics workspace](./vminsights-configure-workspace.md).-- To enable VM insights for Log Analytics agent, [configure your Log Analytics workspace for VM insights](../vm/vminsights-configure-workspace.md). This prerequisite isn't relevant if you're using Azure Monitor Agent. - See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or Virtual Machine Scale Set you're enabling is supported. - See [Manage the Azure Monitor agent](../agents/azure-monitor-agent-manage.md#prerequisites) for prerequisites related to Azure Monitor agent.
To enable VM insights on an unmonitored virtual machine or Virtual Machine Scale
1. If you use a manual upgrade model for your Virtual Machine Scale Set, upgrade the instances to complete the setup. You can start the upgrades from the **Instances** page, in the **Settings** section. -
-## Enable VM insights for Log Analytics agent
-
-To enable VM insights on an unmonitored virtual machine or Virtual Machine Scale Set using Log Analytics agent:
-
-1. From the **Monitor** menu in the Azure portal, select **Virtual Machines** > **Overview** > **Not Monitored**.
-
-1. Select **Enable** next to any machine that you want to enable. If a machine is currently running, then you must start it to enable it.
-
- :::image type="content" source="media/vminsights-enable-portal/enable-unmonitored.png" lightbox="media/vminsights-enable-portal/enable-unmonitored.png" alt-text="Screenshot with unmonitored machines in V M insights.":::
-
-1. On the **Insights Onboarding** page, select **Enable**.
-
-1. On the **Monitoring configuration** page, select **Log Analytics agent**.
-
- If the virtual machine isn't already connected to a Log Analytics workspace, then you'll be prompted to select one. If you haven't previously [created a workspace](../logs/quick-create-workspace.md), then you can select a default for the location where the virtual machine or Virtual Machine Scale Set is deployed in the subscription. This workspace will be created and configured if it doesn't already exist. If you select an existing workspace, it will be configured for VM insights if it wasn't already.
-
- > [!NOTE]
- > If you select a workspace that wasn't previously configured for VM insights, the *VMInsights* management pack will be added to this workspace. This will be applied to any agent already connected to the workspace, whether or not it's enabled for VM insights. Performance data will be collected from these virtual machines and stored in the *InsightsMetrics* table.
-
-1. Select **Configure** to modify the configuration. The only option you can modify is the workspace. You'll receive status messages as the configuration is performed.
-
-1. If you use a manual upgrade model for your Virtual Machine Scale Set, upgrade the instances to complete the setup. You can start the upgrades from the **Instances** page, in the **Settings** section.
-- ## Enable Azure Monitor Agent on monitored machines To add Azure Monitor Agent to machines that are already enabled with the Log Analytics agent:
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
na Previously updated : 10/17/2023 Last updated : 11/07/2023
Two settings are available for network features:
* When you change the network features option of existing volumes from Basic to Standard network features, access to existing Basic networking volumes might be lost if your UDR or NSG implementations prevent the Basic networking volumes from connecting to DNS and domain controllers. You might also lose the ability to update information, such as the site name, in the Active Directory connector if the volumes can't communicate with DNS and domain controllers. For guidance about UDRs and NSGs, see [Configure network features for an Azure NetApp Files volume](azure-netapp-files-network-topologies.md#udrs-and-nsgs).
+>[!NOTE]
+> Changing the source volume from Basic to Standard network features doesn't affect the network features of the data protection (DP) volume.
+ ## <a name="set-the-network-features-option"></a>Set network features option during volume creation This section shows you how to set the network features option when you create a new volume.
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 09/07/2023 Last updated : 11/07/2023 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* <a name="administrators-privilege-users"></a>**Administrators privilege users** This option grants additional security privileges to AD DS domain users or groups that require elevated privileges to access the Azure NetApp Files volumes. The specified accounts will have elevated permissions at the file or folder level.
+
+ >[!NOTE]
+ >Domain admins are automatically added to the Administrators privilege users group.
![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Resource providers by Azure services
+ Title: Find resource providers by Azure services
description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 11/06/2023 Last updated : 11/07/2023 content_well_notification: - AI-contribution
-# Resource providers for Azure services
+# What are the resource providers for Azure services
-This article connects resource provider namespaces to Azure services. If you don't know the resource provider, see [Find resource provider](#find-resource-provider).
+A resource provider is a collection of REST operations that enables functionality for an Azure service. Each resource provider has a namespace in the format of `company-name.service-label`. This article shows the resource providers for Azure services. If you don't know the resource provider, see [Find resource provider](#find-resource-provider).
## AI and machine learning resource providers
+The resource providers for AI and machine learning services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.AutonomousSystems | [Autonomous Systems](https://www.microsoft.com/ai/autonomous-systems) |
This article connects resource provider namespaces to Azure services. If you don
## Analytics resource providers
+The resource providers for analytics services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.AnalysisServices | [Azure Analysis Services](../../analysis-services/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## Blockchain resource providers
+The resource providers for Blockchain services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.Blockchain | [Azure Blockchain Service](../../blockchain/workbench/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## Compute resource providers
+The resource providers for compute services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/overview.md) |
This article connects resource provider namespaces to Azure services. If you don
## Container resource providers
+The resource providers for container services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.App | [Azure Container Apps](../../container-apps/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## Core resource providers
+The resource providers for core services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.Addons | core |
This article connects resource provider namespaces to Azure services. If you don
## Database resource providers
+The resource providers for database services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.AzureData | SQL Server registry |
This article connects resource provider namespaces to Azure services. If you don
## Developer tools resource providers
+The resource providers for developer tools services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.AppConfiguration | [Azure App Configuration](../../azure-app-configuration/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## DevOps resource providers
+The resource providers for DevOps services are:
+ | Resource provider namespace | Azure service | | | - | | microsoft.visualstudio | [Azure DevOps](/azure/devops/) |
This article connects resource provider namespaces to Azure services. If you don
## Hybrid resource providers
+The resource providers for hybrid services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.AzureArcData | Azure Arc-enabled data services |
This article connects resource provider namespaces to Azure services. If you don
## Identity resource providers
+The resource providers for identity services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.AAD | [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## Integration resource providers
+The resource providers for integration services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.ApiManagement | [API Management](../../api-management/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## IoT resource providers
+The resource providers for IoT services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.Devices | [Azure IoT Hub](../../iot-hub/index.yml)<br />[Azure IoT Hub Device Provisioning Service](../../iot-dps/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## Management resource providers
+The resource providers for management services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.Advisor | [Azure Advisor](../../advisor/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## Media resource providers
+The resource providers for media services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.Media | [Media Services](/azure/media-services/) | ## Migration resource providers
+The resource providers for migration services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.ClassicInfrastructureMigrate | Classic deployment model migration |
This article connects resource provider namespaces to Azure services. If you don
## Monitoring resource providers
+The resource providers for monitoring services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.AlertsManagement | [Azure Monitor](../../azure-monitor/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## Network resource providers
+The resource providers for network services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.Cdn | [Content Delivery Network](../../cdn/index.yml) |
This article connects resource provider namespaces to Azure services. If you don
## Security resource providers
+The resource providers for security services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.Attestation | [Azure Attestation Service](../../attestation/overview.md) |
This article connects resource provider namespaces to Azure services. If you don
## Storage resource providers
+The resource providers for storage services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.ClassicStorage | Classic deployment model storage |
This article connects resource provider namespaces to Azure services. If you don
## Web resource providers
+The resource providers for web services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.BingMaps | [Bing Maps](/BingMaps/#pivot=main&panel=BingMapsAPI) |
This article connects resource provider namespaces to Azure services. If you don
## 5G & Space resource providers
+The resource providers for 5G & space services are:
+ | Resource provider namespace | Azure service | | | - | | Microsoft.HybridNetwork | [Network Function Manager](../../network-function-manager/index.yml) |
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
azure-vmware Configure Azure Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md
+
+ Title: Configure Azure Elastic SAN (Preview)
+description: Learn how to use Elastic SAN with Azure VMware Solution
+++ Last updated : 11/07/2023+++
+# Configure Azure Elastic SAN (Preview)
+
+In this article, learn how to configure Azure Elastic SAN or delete an Elastic SAN-based datastore.
+
+## What is Azure Elastic SAN
+
+[Azure Elastic storage area network](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-introduction) (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Azure Elastic SAN is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN. Azure Elastic SAN also offers built-in cloud capabilities, like high availability.
+
+[Azure VMware Solution](https://learn.microsoft.com/azure/azure-vmware/introduction) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters.
+
+## Prerequisites
+
+Complete the following prerequisites before you continue.
+
+- Register for the preview by filling out the [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR8FVh9RJVPdOk_mdTpp--pZUN0RKUklROEc4UE1RRFpRMkhNVFAySTM1TC4u).
+- Verify you have a Dev/Test SDDC set up in one of the following regions:
+ - East US
+ - East US 2
+ - South Central US
+ - West US 2
+ - West US 3
+ - North Europe
+ - West Europe
+ - UK South
+ - France Central
+ - Sweden Central
+ - Southeast Asia
+ - Australia East
+- Know the availability zone your SDDC is in.
+ - In the UI, select an Azure VMware Solution host.
+ > [!NOTE]
+ > The host exposes its Availability Zone. You should use that AZ when deploying other Azure resources for the same subscription.
+- You have permission to set up new resources in the subscription your SDDC is in.
+- Verify that you received an email confirmation that your subscription is now allowlisted.
+
+## Set up Elastic SAN
+
+In this section, you create a virtual network for your Elastic SAN. Then you create the Elastic SAN that includes creating at least one volume group and one volume that becomes your VMFS datastore. Next, you set up a Private Endpoint for your Elastic SAN that allows your SDDC to connect to the Elastic SAN volume. Then you're ready to add an Elastic SAN volume as a datastore in your SDDC.
+
+1. Use one of the following instruction options to set up a dedicated virtual network for your Elastic SAN:
+ - [Azure portal](https://learn.microsoft.com/azure/virtual-network/quick-create-portal)
+ - [PowerShell](https://learn.microsoft.com/azure/virtual-network/quick-create-powershell)
+ - [Azure CLI](https://learn.microsoft.com/azure/virtual-network/quick-create-cli)
+2. Use one of the following instruction options to set up an Elastic SAN, your dedicated volume group, and initial volume in that group:
+ > [!IMPORTANT]
+ > Make sure to create this Elastic SAN in the same region and availability zone as your SDDC for best performance.
+ - [Azure portal](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal)
+ - [PowerShell](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-powershell)
+ - [Azure CLI](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-cli)
+3. Use one of the following instructions to configure a Private Endpoint (PE) for your Elastic SAN:
+ - [PowerShell](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-networking?tabs=azure-powershell#configure-a-private-endpoint)
+ - [Azure CLI](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-networking?tabs=azure-cli#tabpanel_2_azure-cli)
+
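The linked quickstarts cover the portal, PowerShell, and Azure CLI in full. Purely as an illustrative sketch of step 2, the Elastic SAN, volume group, and initial volume might be created with the `elastic-san` Azure CLI extension along these lines (resource names, region, and sizes below are placeholders, and flag names can vary between extension versions, so treat the linked docs as authoritative):

```azurecli
# Sketch only: names, region, and sizes are placeholders.
az extension add --name elastic-san

az elastic-san create \
  --resource-group myResourceGroup \
  --name myElasticSan \
  --location swedencentral \
  --base-size-tib 1 \
  --extended-capacity-size-tib 0 \
  --sku "{name:Premium_LRS,tier:Premium}"

az elastic-san volume-group create \
  --resource-group myResourceGroup \
  --elastic-san-name myElasticSan \
  --name myVolumeGroup

# The volume created here later becomes the VMFS datastore.
az elastic-san volume create \
  --resource-group myResourceGroup \
  --elastic-san-name myElasticSan \
  --volume-group-name myVolumeGroup \
  --name myVolume \
  --size-gib 2048
```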
+## Add an Elastic SAN volume as a datastore
+
+After you receive confirmation that your subscription is allowlisted, you can use the Azure portal to add the Elastic SAN volume as a datastore in your SDDC. Use the following steps to add, connect, disconnect, and delete Elastic SAN.
+
+## Configure external storage address block
+
+Start by providing an IP block for deploying external storage. Navigate to the **Storage** tab in your Azure VMware Solution private cloud in the Azure portal. The address block should be a /24 network.
+- The address block must be unique and not overlap with the /22 used to create your Azure VMware Solution private cloud or any other connected Azure virtual networks or on-premises network.
+- The address block must fall within the following allowed network blocks: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. If you want to use a non-RFC 1918 address block, submit a support request.
+- The address block can't overlap any of the following restricted network blocks: 100.72.0.0/15
+- The address block provided is used to enable multipathing from the ESXi hosts to the target; it can't be edited or changed. If you need to change it, submit a support request.
+
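Taken together, these rules are easy to mis-check by hand. As an illustration only (this is not part of any Azure tooling, and the private cloud /22 used here is a placeholder), the constraints above can be sketched with Python's standard `ipaddress` module:

```python
import ipaddress

RFC1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
RESTRICTED = [ipaddress.ip_network("100.72.0.0/15")]

def is_valid_external_storage_block(cidr, private_cloud_block="10.1.0.0/22"):
    """Apply the address-block rules above to a candidate CIDR (illustrative)."""
    net = ipaddress.ip_network(cidr)
    if net.prefixlen != 24:                          # must be a /24
        return False
    if not any(net.subnet_of(r) for r in RFC1918):   # must fall in an RFC 1918 range
        return False
    if any(net.overlaps(r) for r in RESTRICTED):     # must avoid 100.72.0.0/15
        return False
    # must not overlap the private cloud /22 (placeholder value here)
    if net.overlaps(ipaddress.ip_network(private_cloud_block)):
        return False
    return True
```

For example, `192.168.5.0/24` passes, while `10.1.2.0/24` fails because it overlaps the placeholder private cloud block.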
+After you provide an External storage address block, you can connect to an Elastic SAN volume from the same page.
+
+## Connect Elastic SAN
+
+1. From the left navigation in your Azure VMware Solution private cloud, select **Storage**, then **+ Connect Elastic SAN**.
+2. Select your **Subscription**, **Resource**, **Volume Group**, **Volume(s)**, and **Client cluster**.
+3. In the "Rename datastore as per VMware requirements" section, under **Volume name** > **Data store name**, name the Elastic SAN volumes.
+ > [!NOTE]
+ > For best performance, verify that your Elastic SAN volume and SDDC are in the same Region and Availability Zone.
+
+## Disconnect and delete an Elastic SAN-based datastore
+
+To delete the Elastic SAN-based datastore, use the following steps from the Azure portal.
+
+1. From the left navigation in your Azure VMware Solution private cloud, select **Storage**, then **Storage list**.
+2. Under **Virtual network**, select **Disconnect** to disconnect the datastore from the Cluster(s).
+3. Optionally you can delete the volume you previously created in your Elastic SAN.
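If you script the cleanup, the optional volume deletion in step 3 might look like the following `elastic-san` extension call (illustrative placeholder names; flag names can differ by extension version):

```azurecli
# Sketch only: disconnect the datastore first, then delete the volume.
az elastic-san volume delete \
  --resource-group myResourceGroup \
  --elastic-san-name myElasticSan \
  --volume-group-name myVolumeGroup \
  --name myVolume
```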
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
For an end-to-end overview of this procedure, view the [Azure VMware Solution: C
The selections define the resources where VMs can consume VMware HCX services. > [!NOTE]
- > If you have a mixed mode SDDC with a fleet cluster, deployment of service mesh appliances for fleet cluster is not viable/supported.
+ > In a mixed-mode SDDC with an AV64 cluster, deploying service mesh appliances on the AV64 cluster is not viable or supported. Nevertheless, this doesn't impede you from conducting HCX migration or network extension directly onto AV64 clusters. The deployment container can be cluster-1, hosting the HCX appliances.
:::image type="content" source="media/tutorial-vmware-hcx/select-compute-profile-source.png" alt-text="Screenshot that shows selecting the source compute profile." lightbox="media/tutorial-vmware-hcx/select-compute-profile-source.png":::
azure-vmware Migrate Sql Server Always On Availability Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-always-on-availability-group.md
In this article, you learn how to migrate a SQL Server Always On Availability Gr
:::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-architecture.png" alt-text="Diagram showing the architecture of Always On SQL Server for Azure VMware Solution." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-architecture.png":::
-## Tested configurations
- Microsoft SQL Server (2019 and 2022) was tested with Windows Server (2019 and 2022) Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.

## Prerequisites
These are the prerequisites to migrating your SQL Server instance to Azure VMwar
- Ensure that all the network segments in use by SQL Server and workloads using it are extended into your Azure VMware Solution private cloud. To verify this step, see [Configure VMware HCX network extension](configure-hcx-network-extension.md). Either VMware HCX over VPN or ExpressRoute connectivity can be used as the networking configuration for the migration.
-VMWare HCX over VPN, due to its limited bandwidth, is typically suited for workloads that can sustain longer periods of downtime (such as non-production environments).
-For any of the following scenarios, ExpressRoute connectivity is recommended for a migration:
+VMware HCX over VPN, due to its limited bandwidth, is typically suited for workloads that can sustain longer periods of downtime (such as non-production environments).
+
+For any of the following, ExpressRoute connectivity is recommended for a migration:
- Production environments
- Workloads with large database sizes
-- Any case where there is a need to minimize downtime
+- Scenarios in which there is a need to minimize downtime
Further downtime considerations are discussed in the next section.
azure-vmware Migrate Sql Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-failover-cluster.md
However, you can overcome this limitation by performing the steps shown in this
> [!NOTE] > This procedure requires a full shutdown of the cluster. Since the SQL Server service will be unavailable during the migration, plan accordingly for the downtime period.
-## Tested configurations
- Microsoft SQL Server 2019 and 2022 were tested with Windows Server 2019 and 2022 Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware
- Ensure that all the network segments in use by SQL Server and workloads using it are extended into your Azure VMware Solution private cloud. To verify this step, see [Configure VMware HCX network extension](configure-hcx-network-extension.md). Either VMware HCX over VPN or ExpressRoute connectivity can be used as the networking configuration for the migration.
-VMWare HCX over VPN, due to its limited bandwidth, is typically suited for workloads that can sustain longer periods of downtime (such as non-production environments).
-For any of the following scenarios, ExpressRoute connectivity is recommended for a migration:
+VMware HCX over VPN, due to its limited bandwidth, is typically suited for workloads that can sustain longer periods of downtime (such as non-production environments).
+
+For any of the following, ExpressRoute connectivity is recommended for a migration:
- Production environments
- Workloads with large database sizes
-- Any case where there is a need to minimize downtime
-Further downtime considerations are discussed in the next section.
-
+- Scenarios in which there is a need to minimize downtime
## Downtime considerations
azure-vmware Migrate Sql Server Standalone Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-standalone-cluster.md
In both cases, consider the size and criticality of the database being migrated.
For this how-to procedure, we have validated VMware HCX vMotion. VMware HCX Cold Migration is also valid, but it requires a longer downtime period.
+This scenario was validated using the following editions and configurations:
+
+- Microsoft SQL Server (2019 and 2022)
+- Windows Server (2019 and 2022) Data Center edition
+- Windows Server and SQL Server were configured following best practices and recommendations from Microsoft and VMware.
+- The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
+ :::image type="content" source="media/sql-server-hybrid-benefit/migrated-sql-standalone-cluster.png" alt-text="Diagram showing the architecture of Standalone SQL Server for Azure VMware Solution." border="false" lightbox="media/sql-server-hybrid-benefit/migrated-sql-standalone-cluster.png":::

## Tested configurations
This scenario was validated using the following editions and configurations:
- Ensure that all the network segments in use by the SQL Server and workloads using it are extended into your Azure VMware Solution private cloud. To verify this step in the process, see [Configure VMware HCX network extension](configure-hcx-network-extension.md). Either VMware HCX over VPN or ExpressRoute connectivity can be used as the networking configuration for the migration.

+VMware HCX over VPN, due to its limited bandwidth, is typically suited for workloads that can sustain longer periods of downtime (such as non-production environments).
+For any of the following, ExpressRoute connectivity is recommended for a migration:
+
+- Production environments
+- Workloads with large database sizes
+- Scenarios in which there is a need to minimize downtime
azure-web-pubsub Howto Generate Client Access Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-generate-client-access-url.md
You can enable Microsoft Entra ID in your service and use the Microsoft Entra to
1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Microsoft Entra ID. 2. Follow [Get Microsoft Entra token](./howto-authorize-from-application.md#use-postman-to-get-the-microsoft-entra-token) to get the Microsoft Entra token with Postman. 3. Use the Microsoft Entra token to invoke `:generateToken` with Postman:
+
+ > [!NOTE]
+ > Use the latest version of Postman. Older versions of Postman have [an issue](https://github.com/postmanlabs/postman-app-support/issues/3994#issuecomment-893453089) with the colon `:` in the path.
1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01` 2. On the **Auth** tab, select **Bearer Token** and paste the Microsoft Entra token fetched in the previous step
You can enable Microsoft Entra ID in your service and use the Microsoft Entra to
} ```
-4. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>`
+5. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>`
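Outside of Postman, the same call can be made with any HTTP client. A minimal `curl` sketch (the endpoint, hub name, and token below are placeholders):

```bash
# Placeholders: replace the endpoint, hub name, and $ENTRA_TOKEN.
curl -X POST \
  -H "Authorization: Bearer $ENTRA_TOKEN" \
  "https://contoso.webpubsub.azure.com/api/hubs/chat/:generateToken?api-version=2022-11-01"
# The JSON response carries the token used to build the client access URI:
#   wss://contoso.webpubsub.azure.com/client/hubs/chat?access_token=<token>
```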
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
To enable Trusted Access between Backup vault and AKS cluster, use the following
az aks trustedaccess rolebinding create \
  -g $myResourceGroup \
  --cluster-name $myAKSCluster \
- –n <randomRoleBindingName> \ 
+ -n <randomRoleBindingName> \ 
  --source-resource-id <vaultID> \
  --roles Microsoft.DataProtection/backupVaults/backup-operator
```
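To confirm that the role binding was created, you could list the Trusted Access role bindings on the cluster (a sketch using the same placeholder variables as above):

```azurecli
# Sketch: verify the binding exists on the cluster.
az aks trustedaccess rolebinding list \
  --resource-group $myResourceGroup \
  --cluster-name $myAKSCluster
```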
backup Backup Azure Backup Server Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-server-vmware.md
Title: Back up VMware VMs with Azure Backup Server description: In this article, learn how to use Azure Backup Server to back up VMware VMs running on a VMware vCenter/ESXi server. Previously updated : 03/03/2023 Last updated : 11/07/2023 - # Back up VMware VMs with Azure Backup Server
To remove the disk from exclusion, run the following command:
C:\Program Files\Microsoft Azure Backup Server\DPM\DPM\bin> ./ExcludeDisk.ps1 -Datasource $vmDsInfo[2] -Remove "[datastore1] TestVM4/TestVM4\_1.vmdk" ```
+## ApplicationQuiesceFault
+
+### Fall back to crash consistent backups for VMware VMs
+
+Application consistent backups for VMware VMs running Windows can fail with the *ApplicationQuiesceFault* error if:
+
+- The VSS providers in the VM aren't in a stable state.
+- The VM is under heavy load.
+
+To resolve this quiescing error and retry the failed application consistent backup with a crash consistent backup, use the following registry key on the MABS server running V4 UR1 or above:
+
+```azurepowershell
+Name - FailbackToCrashConsistentBackup DWORD = 1
+Path - SOFTWARE\\MICROSOFT\\MICROSOFT DATA PROTECTION MANAGER\\VMWare
+
+```
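The key can be set by hand with `regedit`, or scripted. One hedged way to script it in PowerShell (run elevated on the MABS server; the path and value name follow the listing above, and the exact casing shown is an assumption since registry paths are case-insensitive):

```powershell
# Sketch: create the key if missing, then set the DWORD value.
$path = "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\VMWare"
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name "FailbackToCrashConsistentBackup" `
  -PropertyType DWord -Value 1 -Force | Out-Null
```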
## Next steps

For troubleshooting issues when setting up backups, review the [troubleshooting guide for Azure Backup Server](./backup-azure-mabs-troubleshoot.md).
backup Backup Azure Manage Mars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-manage-mars.md
Title: Manage and monitor MARS Agent backups
description: Learn how to manage and monitor Microsoft Azure Recovery Services (MARS) Agent backups by using the Azure Backup service. Previously updated : 12/28/2022 Last updated : 11/07/2023
When you modify backup policy, you can add new items, remove existing items from
- **Remove Items** use this option to remove items from being backed up. - Use **Exclusion Settings** for removing all items within a volume instead of **Remove Items**. - Clearing all selections in a volume causes old backups of the items, to be retained according to retention settings at the time of the last backup, without scope for modification.
- - Reselecting these items, leads to a first full-backup and new policy changes aren't applied to old backups.
+ - Reselecting these items leads to a first full backup, and new policy changes aren't applied to old backups.
- Unselecting entire volume retains past backup without any scope for modifying retention policy. - **Exclusion Settings** use this option to exclude specific items from being backed up.
You can add exclusion rules to skip files and folders that you don't want to be
## Stop protecting Files and Folder backup
-There are two ways to stop protecting Files and Folders backup:
+There are three ways to stop protecting Files and Folders backup:
- **Stop protection and retain backup data**. - This option will stop all future backup jobs from protection.
There are two ways to stop protecting Files and Folders backup:
- **Stop protection and delete backup data**. - This option will stop all future backup jobs from protecting your data. If the vault security features are not enabled, all recovery points are immediately deleted.<br>If the security features are enabled, the deletion is delayed by 14 days, and you'll receive an alert email with a message: *Your data for this Backup item has been deleted. This data will be temporarily available for 14 days, after which it will be permanently deleted* and a recommended action *Reprotect the Backup item within 14 days to recover your data.*<br>In this state, the retention policy continues to apply, and the backup data remains billable. [Learn more](backup-azure-security-feature.md#enable-security-features) on how to enable vault security features. - To resume protection, reprotect the server within 14 days from the delete operation. In this duration, you can also restore the data to an alternate server.
+- **Stop protection and retain data by policy**.
+ - This option stops future backup jobs from protection.
+ - Azure Backup service will prune recovery points as per the policy configured.
+ - You can restore the backed-up data from existing recovery points.
+ - To resume protection, use the **Re-enable backup schedule** option. After that, data will be retained based on the new retention policy.
+ - If all recovery points expire before reenabling backup, you need to do a full initial backup of the data source.
### Stop protection and retain backup data
There are two ways to stop protecting Files and Folders backup:
After you delete the on-premises backup items, follow the next steps from the portal.
+### Stop protection and retain backup data by policy
+
+Follow these steps:
+
+1. Open the *MARS management* console, go to the **Actions** pane, and then select **Schedule Backup**.
+2. On the **Select Policy Item** page, select **Modify a backup schedule for your files and folders** > **Next**.
+3. On the **Modify or Stop a Scheduled Backup** page, select **Stop using this backup schedule, and enable RP pruning as per policy** > **Next**.
+4. On **Pause Scheduled Backup**, review the information and select **Finish**.
+5. On **Modify backup progress**, check if your schedule backup pause is in *Success* status, and select **Close** to finish.
+
+>[!Note]
+>This feature is supported from MARS *2.0.9262.0* or later.
+++++ ## Re-enable protection If you stopped protection while retaining data and decided to resume protection, then you can re-enable the backup schedule using modify backup policy.
To monitor backup data usage and daily churn, follow these steps:
Learn more about [other report tabs](configure-reports.md) and receiving those [reports through email](backup-reports-email.md).
+## List recovery points for a data source
+Follow these steps:
+
+1. On the **MARS agent console**, go to **Status Pane**.
+1. Under **Available Recovery Points**, select **View Details** to list all available recovery points.
## Next steps

- For information about supported scenarios and limitations, refer to the [Support Matrix for the MARS Agent](./backup-support-matrix-mars-agent.md).
backup Backup Azure Security Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-security-feature.md
Title: Security features that protect hybrid backups
description: Learn how to use security features in Azure Backup to make backups more secure Previously updated : 03/01/2023 Last updated : 11/07/2023
Checks have been added to make sure only valid users can perform various operati
### Authentication to perform critical operations
-As part of adding an extra layer of authentication for critical operations, you're prompted to enter a security PIN when you perform **Stop Protection with Delete data** and **Change Passphrase** operations.
+As part of adding an extra layer of authentication for critical operations, you're prompted to enter a security PIN when you perform **Stop Protection with Delete data** and **Change Passphrase** operations for DPM, MABS, and MARS.
+
+Additionally, with MARS version *2.0.9262.0* and later, the operations to remove a volume from MARS file/folder backup, add a new exclusion setting for an existing volume, reduce retention duration, and move to a less-frequent backup schedule are also protected with a security PIN.
> [!NOTE]
> Currently, for the following DPM and MABS versions, security PIN is supported for **Stop Protection with Delete data** to online storage:
This feature is supported with MARS agent version *2.0.9250.0* and higher from D
The following table lists the disallowed operations on DPM connected to an immutable Recovery:
-| Operation on Immutable vault | Result with DPM 2022 UR1, MABS v4, and latest MARS agent | Result with older DPM/MABS and or MARS agent |
+| Operation on Immutable vault | Result with DPM 2022 UR1, MABS v4, and latest MARS agent. <br><br> With DPM 2022 UR2 or MABS v4 UR1, you can select the option to retain online recovery points by policy when stopping protection or removing a data source from a protection group from the console. | Result with older DPM/MABS and or MARS agent |
| | | | | **Remove Data Source from protection group configured for online backup** | 81001: The backup item(s) can't be deleted because it has active recovery points, and the selected vault is an immutable vault. | 130001: Microsoft Azure Backup encountered an internal error. |
-| **Stop protection with delete data** | 81001: The backup item(s) can't be deleted because it has active recovery points, and the selected vault is an immutable vault. | 130001: Microsoft Azure Backup encountered an internal error. |
+| **Stop protection with delete data** | 81001: The backup item(s) can't be deleted because it has active recovery points, and the selected vault is an immutable vault. <br><br> With DPM 2022 UR2 or MABS v4 UR1, you can select the option to retain online recovery points by policy when stopping protection or removing a data source from a protection group from the console. | 130001: Microsoft Azure Backup encountered an internal error. |
| **Reduce online retention period** | 810002: Reduction in retention during Policy/Protection modification isn't allowed because the selected vault is immutable. | 130001: Microsoft Azure Backup encountered an internal error. |
-| **Remove-DPMChildDatasource command** | 81001: The backup item(s) can't be deleted because it has active recovery points, and the selected vault is an immutable vault. <br><br> Use new option *-EnableOnlineRPsPruning* with *-KeepOnlineData* to retain data only up to policy duration. | 130001: Microsoft Azure Backup encountered an internal error. <br><br> Use the *-KeepOnlineData* flag to retain data. |
+| **Remove-DPMChildDatasource command** | 81001: The backup item(s) can't be deleted because it has active recovery points, and the selected vault is an immutable vault. <br><br> Use new option *-EnableOnlineRPsPruning* with *-KeepOnlineData* to retain data only up to policy duration. <br><br> With DPM 2022 UR2 or MABS v4 UR1, you can select the option to retain online recovery points by policy when stopping protection or removing a data source from a protection group from the console. | 130001: Microsoft Azure Backup encountered an internal error. <br><br> Use the *-KeepOnlineData* flag to retain data. |
### Immutability support for MARS
The following table lists the disallowed operations for MARS when immutability i
| Disallowed operation | Result with latest MARS agent | Result with old MARS agent | | | | | | **Stop protection with delete data for system state** | Error 810001 <br><br> User trying to delete backup item or stop protection with delete data where backup item has valid (unexpired) recovery point. | Error 130001 <br><br> Microsoft Azure Backup encountered an internal error. |
-| **Stop protection with delete data for file/folder** | Error 810001 <br><br> User trying to delete backup item or stop protection with delete data where backup item has valid (unexpired) recovery point. | Error 130001 <br><br> Microsoft Azure Backup encountered an internal error. |
+| **Stop protection with delete data** | Error 810001 <br><br> User trying to delete backup item or stop protection with delete data where backup item has valid (unexpired) recovery point. | Error 130001 <br><br> Microsoft Azure Backup encountered an internal error. <br><br> MARS *2.0.9262.0* and later provide the option of stopping protection and retaining recovery points according to the policy in the console. |
| **Reduce online retention period** | User trying to modify policy or protection with reduction of retention. | 130001 <br><br> Microsoft Azure Backup encountered an internal error. |
-| **Remove-OBPolicy with -DeleteBackup flag** | 810001 <br><br> User trying to delete backup item or stop protection with delete data where backup item has valid (unexpired) recovery point. <br><br> Use *-EnablePruning* flag to retain backups up to their retention period. | 130001 <br><br> Microsoft Azure Backup encountered an internal error. <br><br> Don't use the *-DeleteBackup* flag. |
+| **Remove-OBPolicy with -DeleteBackup flag** | 810001 <br><br> User trying to delete backup item or stop protection with delete data where backup item has valid (unexpired) recovery point. <br><br> Use *-EnablePruning* flag to retain backups up to their retention period. | 130001 <br><br> Microsoft Azure Backup encountered an internal error. <br><br> Don't use the *-DeleteBackup* flag. <br><br> MARS *2.0.9262.0* and later provide the option of stopping protection and retaining recovery points according to the policy in the console. |
## Next steps
backup Backup Mabs Release Notes V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-release-notes-v3.md
Title: Release notes for Microsoft Azure Backup Server v3 description: This article provides the information about the known issues and workarounds for Microsoft Azure Backup Server (MABS) v3. Previously updated : 04/20/2023 Last updated : 11/07/2023 ms.asset: 0c4127f2-d936-48ef-b430-a9198e425d81
This article provides the known issues and workarounds for Microsoft Azure Backup Server (MABS) V3.
+## MABS V4 UR1 known issues and workarounds
+
+No known issues.
## MABS V4 known issues and workarounds

If you're protecting Windows Server 2012 and 2012 R2, you need to install Visual C++ redistributable 2015 manually on the protected server. You can download [Visual C++ Redistributable for Visual Studio 2015 from Official Microsoft Download Center](https://www.microsoft.com/en-in/download/details.aspx?id=48145).
If you're protecting Windows Server 2012 and 2012 R2, you need to install Visual
9. Start MSDPM service.
-### After installing UR1, the MABS reports aren't updated with new RDL files
+### After you install UR1, the MABS reports aren't updated with new RDL files
**Description**: With UR1, the MABS report formatting issue is fixed with updated RDL files. The new RDL files aren't automatically replaced with existing files.
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md
Title: What's new in Microsoft Azure Backup Server description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more. Previously updated : 04/25/2023 Last updated : 11/07/2023 - # What's new in Microsoft Azure Backup Server (MABS)? Microsoft Azure Backup Server gives you enhanced backup capabilities to protect VMs, files and folders, workloads, and more.
+## What's new in MABS V4 Update Rollup 1 (UR1)?
+
+**Microsoft Azure Backup Server version 4 (MABS V4) Update Rollup 1** includes critical bug fixes and feature enhancements. To view the list of bugs fixed and the installation instructions for MABS V4 UR1, see [KB article 5032421](https://support.microsoft.com/help/5032421/).
+
+The following table lists the new features added in MABS V4 UR1:
+
+| Feature | Supportability |
+| | |
+| Item-level recovery for VMware VMs running Windows directly from online recovery points. | Note that you need *MARS version 2.0.9251.0 or above* to use this feature. |
+| Windows and Basic SMTP Authentication for MABS email reports and alerts. | This enables MABS to send reports and alerts using any vendor supporting SMTP Basic Authentication. [Learn more](/system-center/dpm/monitor-dpm?view=sc-dpm-2022&preserve-view=true#configure-email-for-dpm). <br><br> Note that if you are using Microsoft 365 SMTP with a MABS V4 private fix, reenter the credential using Basic Authentication. |
+| Fall back to crash consistent backups for VMware VMs. | Use a registry key for VMware VMs when backups fail with ApplicationQuiesceFault. [Learn more](backup-azure-backup-server-vmware.md#applicationquiescefault). |
+| **Experience improvements for MABS backups to Azure.** | |
+| List online recovery points for a data source along with the expiry time and soft-delete status. | To view the list of recovery points along with their expiration dates, right-click a data source and select **List recovery points**. |
+| Stop protection and retain data using the policy duration for immutable vaults directly from the console. | This helps you save backup costs when stopping protection for a data source backed up to an immutable vault. [Learn more](backup-azure-security-feature.md#immutability-support). |
## What's new in MABS V4 RTM

Microsoft Azure Backup Server version 4 (MABS V4) includes critical bug fixes and the support for Windows Server 2022, SQL 2022, Azure Stack HCI 22H2, and other features and enhancements. To view the list of bugs fixed and the installation instructions for MABS V4, see [KB article 5024199](https://support.microsoft.com/help/5024199/).
backup Install Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/install-mars-agent.md
Title: Install the Microsoft Azure Recovery Services (MARS) agent description: Learn how to install the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines. Previously updated : 08/18/2023 Last updated : 11/07/2023
To modify the storage replication type:
> You can't modify the storage replication type after the vault is set up and contains backup items. If you want to do this, you need to re-create the vault. >
-## Configure Recovery Services vault to save passphrase to Recovery Services vault (preview)
+## Configure Recovery Services vault to save passphrase to Recovery Services vault
Azure Backup using the Recovery Services agent (MARS) allows you to back up file or folder and system state data to Azure Recovery Services vault. This data is encrypted using a passphrase provided during the installation and registration of the MARS agent. This passphrase is required to retrieve and restore the backup data and needs to be saved in a secure external location, such as Azure Key Vault.
backup Restore Azure Backup Server Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-backup-server-vmware.md
Title: Restore VMware VMs with Azure Backup Server description: Use Azure Backup Server (MABS) to restore VMware VMs running on a VMware vCenter/ESXi server. Previously updated : 03/01/2023 Last updated : 11/07/2023
This article explains how to use Microsoft Azure Backup Server (MABS) to restore
You can restore individual files from a protected VM recovery point. This feature is only available for Windows Server VMs. Restoring individual files is similar to restoring the entire VM, except you browse into the VMDK and find the file(s) you want, before starting the recovery process. To recover an individual file or select files from a Windows Server VM: >[!NOTE]
->Restoring an individual file from a VM is available only for Windows VM and Disk Recovery Points.
+>Restoring an individual file from a VM is available only for Windows VMs, and is supported from both Disk and Online recovery points.
1. In the MABS Administrator Console, select **Recovery** view.
You can restore individual files from a protected VM recovery point. This featur
3. In the **Recovery Points for:** pane, use the calendar to select the date that contains the desired recovery point(s). Depending on how the backup policy has been configured, dates can have more than one recovery point. Once you've selected the day when the recovery point was taken, make sure you've chosen the correct **Recovery time**. If the selected date has multiple recovery points, choose your recovery point by selecting it in the Recovery time drop-down menu. Once you chose the recovery point, the list of recoverable items appears in the **Path:** pane.
-4. To find the files you want to recover, in the **Path** pane, double-click the item in the **Recoverable item** column to open it. Select the file, files, or folders you want to recover. To select multiple items, press the **Ctrl** key while selecting each item. Use the **Path** pane to search the list of files or folders appearing in the **Recoverable Item** column. **Search list below** doesn't search into subfolders. To search through subfolders, double-click the folder. Use the **Up** button to move from a child folder into the parent folder. You can select multiple items (files and folders), but they must be in the same parent folder. You can't recover items from multiple folders in the same recovery job.
+4. To find the files you want to recover, in the **Path** pane, double-click the item in the **Recoverable item** column to open it. Select the file, files, or folders you want to recover.
+
+   If you use an online recovery point, wait until the recovery point is mounted. After the mount is complete, select the *VM*, *disk*, and *volume* that you want to restore until the files and folders are listed.
+ To select multiple items, press the **Ctrl** key while selecting each item. Use the **Path** pane to search the list of files or folders appearing in the **Recoverable Item** column. **Search list below** doesn't search into subfolders. To search through subfolders, double-click the folder. Use the **Up** button to move from a child folder into the parent folder. You can select multiple items (files and folders), but they must be in the same parent folder. You can't recover items from multiple folders in the same recovery job.
![Review Recovery Selection](./media/restore-azure-backup-server-vmware/vmware-rp-disk-ilr-2.png)
You can restore individual files from a protected VM recovery point. This featur
9. On the **Specify Recovery Options** screen, choose which security setting to apply. You can opt to modify the network bandwidth usage throttling, but throttling is disabled by default. Also, **SAN Recovery** and **Notification** aren't enabled.
10. On the **Summary** screen, review your settings and select **Recover** to start the recovery process. The **Recovery status** screen shows the progression of the recovery operation.
+>[!Tip]
+>You can also do item-level restore of the online recovery points for VMware VMs running Windows from **Add external DPM Server** for a quick recovery of VM files and folders.
## VMware parallel restore in MABS v4 (and later)

MABS v4 supports restoring more than one VMware VM protected from the same vCenter in parallel. By default, eight parallel recoveries are supported. You can increase the number of parallel restore jobs by adding the following registry key.
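The default of eight parallel recoveries behaves like a bounded worker pool. Here's a minimal Python sketch of that idea only; `restore_vm` is a stand-in placeholder, not MABS code:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL_RESTORES = 8  # MABS v4 default; raised via a registry key


def restore_vm(name: str) -> str:
    """Stand-in for a per-VM restore job."""
    return f"restored {name}"


vms = [f"vm-{i}" for i in range(20)]
# At most eight restore jobs run at any one time; the rest queue up.
with ThreadPoolExecutor(max_workers=MAX_PARALLEL_RESTORES) as pool:
    results = list(pool.map(restore_vm, vms))
print(len(results))  # 20
```

Raising the registry value corresponds to raising `max_workers`: more jobs run concurrently, at the cost of more load on the vCenter/ESXi host.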
backup Save Backup Passphrase Securely In Azure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/save-backup-passphrase-securely-in-azure-key-vault.md
Title: Save and manage MARS agent passphrase securely in Azure Key Vault (preview)
+ Title: Save and manage MARS agent passphrase securely in Azure Key Vault
description: Learn how to save MARS agent passphrase securely in Azure Key Vault and retrieve them during restore. Previously updated : 08/18/2023 Last updated : 11/07/2023
-# Save and manage MARS agent passphrase securely in Azure Key Vault (preview)
+# Save and manage MARS agent passphrase securely in Azure Key Vault
Azure Backup using the Recovery Services agent (MARS) allows you to back up files/folders and system state data to an Azure Recovery Services vault. This data is encrypted using a passphrase you provide during the installation and registration of the MARS agent. This passphrase is required to retrieve and restore the backup data and needs to be saved in a secure external location.
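The article doesn't specify the cipher or key-derivation scheme MARS uses internally. As a conceptual illustration only (parameters here are arbitrary, not MARS's), stretching a passphrase into an encryption key typically looks like this, which is also why a lost passphrase means unrecoverable backups:

```python
import hashlib


def derive_key(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # Conceptual sketch: stretch the passphrase into a 256-bit key.
    # MARS's real key-derivation scheme is internal and may differ.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, iterations)


salt = b"\x00" * 16  # fixed only for demonstration; use os.urandom(16) in practice
key = derive_key("my-long-backup-passphrase", salt)
print(len(key))  # 32 bytes
```

Because the key is re-derived from the passphrase on restore, no copy of the passphrase means no key, and no key means the backup data stays unreadable; hence the guidance to store the passphrase externally.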
Now, you can save your encryption passphrase securely in Azure Key Vault as a Se
- You should use a single Azure Key Vault to store all your passphrases. [Create a Key Vault](../key-vault/general/quick-create-portal.md) in case you don't have one.
- [Azure Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/) is applicable when you create a new Azure Key Vault to store your passphrase.
- After you create the Key Vault, to protect against accidental or malicious deletion of the passphrase, [ensure that soft-delete and purge protection are turned on](../key-vault/general/soft-delete-overview.md).
-- This feature is supported only in Azure public regions with MARS agent version *2.0.9254.0* or above.
+- This feature is supported only in Azure public regions with MARS agent version *2.0.9262.0* or above.
## Configure the Recovery Services vault to store passphrase to Azure Key Vault
You can automate this process by using the new KeyVaultUri option in `Set-OBMach
## Save passphrase to Azure Key Vault for an existing MARS installation
-If you have an existing MARS agent installation and want to save your passphrase to Azure Key Vault, [update your agent](upgrade-mars-agent.md) to version *2.0.9254.0* or above and perform a change passphrase operation.
+If you have an existing MARS agent installation and want to save your passphrase to Azure Key Vault, [update your agent](upgrade-mars-agent.md) to version *2.0.9262.0* or above and perform a change passphrase operation.
After updating your MARS agent, ensure that you have [configured the Recovery Services vault to store passphrase to Azure Key Vault](#configure-the-recovery-services-vault-to-store-passphrase-to-azure-key-vault) and you have successfully:
This section lists commonly encountered errors when saving the passphrase to Azu
:::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png" alt-text="Screenshot shows how to copy the Key Vault URL." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png":::
+### UserErrorSecretExistsSoftDeleted (391282)
+
+**Cause**: A secret in the expected format already exists in the Key Vault, but it's in a soft-deleted state. Unless the secret is restored, MARS can't save the passphrase for that machine to the provided Key Vault.
+
+**Recommended action**: Check if a secret exists in the vault with the name `AzBackup-<machine name>-<vaultname>` and if it's in a soft-deleted state. Recover the soft deleted Secret to save the passphrase to it.
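Based on the naming pattern in the error guidance, a small helper (the function name is ours, not part of the product) shows how to compute the secret name to look for in the Key Vault:

```python
def mars_secret_name(machine_name: str, vault_name: str) -> str:
    # Pattern from the error guidance: AzBackup-<machine name>-<vaultname>
    return f"AzBackup-{machine_name}-{vault_name}"


print(mars_secret_name("FILESRV01", "ContosoRSV"))  # AzBackup-FILESRV01-ContosoRSV
```

If a secret with this name exists but is soft-deleted, recover it in the Key Vault before retrying, so MARS can save the passphrase into it.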
+
+### UserErrorKeyVaultSoftDeleted (391283)
+
+**Cause**: The Key Vault provided to MARS is in a soft-deleted state.
+
+**Recommended action**: Recover the Key Vault or provide a new Key Vault.
+### Registration is incomplete

**Cause**: You didn't complete the MARS registration by registering the passphrase. So, you won't be able to configure backups until you register.
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about the new features in the Azure Backup service. Previously updated : 11/02/2023 Last updated : 11/07/2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary

- November 2023
+ - [Save your MARS backup passphrase securely to Azure Key Vault is now generally available](#save-your-mars-backup-passphrase-securely-to-azure-key-vault-is-now-generally-available)
+ - [Update Rollup 1 for Microsoft Azure Backup Server v4 is now generally available](#update-rollup-1-for-microsoft-azure-backup-server-v4-is-now-generally-available)
  - [SAP HANA instance snapshot backup support is now generally available](#sap-hana-instance-snapshot-backup-support-is-now-generally-available)
- September 2023
  - [Multi-user authorization using Resource Guard for Backup vault is now generally available](#multi-user-authorization-using-resource-guard-for-backup-vault-is-now-generally-available)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021
  - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Save your MARS backup passphrase securely to Azure Key Vault is now generally available
+
+Azure Backup now allows you to save the MARS passphrase to Azure Key Vault automatically from the MARS console, either during registration or when you change the passphrase with the MARS agent.
+
+The MARS agent from Azure Backup requires a passphrase that you provide to encrypt the backups sent to and stored on Azure Recovery Services vault. This passphrase isn't shared with Microsoft and needs to be saved in a secure location to ensure that the backups can be retrieved if the server backed up with MARS goes down.
+
+For more information, see [Save and manage MARS agent passphrase securely in Azure Key Vault](save-backup-passphrase-securely-in-azure-key-vault.md).
+
+## Update Rollup 1 for Microsoft Azure Backup Server v4 is now generally available
+
+Azure Backup now provides Update Rollup 1 for Microsoft Azure Backup Server (MABS) V4.
+
+- It contains new features, such as item-level recovery from online recovery points for VMware VMs, support for Windows and Basic SMTP authentication for MABS email reports and alerts, and other enhancements.
+- It also contains stability improvements and bug fixes on MABS V4.
+
+For more information, see [What's new in MABS](backup-mabs-whats-new-mabs.md).
## SAP HANA instance snapshot backup support is now generally available

Azure Backup now supports SAP HANA instance snapshot backup and enhanced restore, which provides a cost-effective backup solution using managed disk incremental snapshots. Because instant backup uses snapshots, the effect on the database is minimal.
bastion Quickstart Host Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-arm-template.md
Title: 'Quickstart: Deploy Azure Bastion in a virtual network using an ARM template'
+ Title: 'Quickstart: Deploy Azure Bastion to a virtual network using an ARM template'
-description: Learn how to deploy Azure Bastion in a virtual network using an ARM template.
+description: Learn how to deploy Azure Bastion to a virtual network by using an Azure Resource Manager template.
Last updated 06/27/2022
-Customer intent: As someone with a networking background, I want to deploy Azure Bastion to a virtual machine using a Bastion ARM Template.
+#Customer intent: As someone with a networking background, I want to deploy Azure Bastion to a virtual machine by using an ARM template.
-# Quickstart: Deploy Azure Bastion in a virtual network using an ARM template
+# Quickstart: Deploy Azure Bastion to a virtual network by using an ARM template
-This quickstart describes how to use Azure Bastion template to deploy to a virtual network.
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to deploy Azure Bastion to a virtual network.
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.
+The following diagram shows the architecture of Bastion.
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the Deploy to Azure button. The template will open in the Azure portal.
+
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the following **Deploy to Azure** button. The template opens in the Azure portal.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.network%2fazure-bastion-nsg%2fazuredeploy.json)
If your environment meets the prerequisites and you're familiar with using ARM t
Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).

> [!NOTE]
-> The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
->
+> The use of Bastion with Azure Private DNS zones is not supported at this time. Before you begin, make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+ ## Review the template
-To view the entire template used for this quickstart, see [Azure Quickstart Templates: Azure Bastion as a Service](https://azure.microsoft.com/resources/templates/azure-bastion-nsg/).
+To view the entire template that this quickstart uses, see [Azure Bastion as a Service with NSG](https://azure.microsoft.com/resources/templates/azure-bastion-nsg/).
-This template by default, creates an Azure Bastion deployment with a resource group, a virtual network, network security group settings, an AzureBastionSubnet subnet, a bastion host, and a public IP address resource that's used for the bastion host.
+By default, this template creates a Bastion deployment with a resource group, a virtual network, network security group (NSG) settings, an AzureBastionSubnet subnet, a bastion host, and a public IP address resource that's used for the bastion host. Here's the purpose of each part of the template:
* [Microsoft.Network/bastionHosts](/azure/templates/microsoft.network/bastionhosts) creates the bastion host.
* [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks) creates a virtual network.
* [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets) creates the subnet.
-* [Microsoft Network/networkSecurityGroups](/azure/templates/microsoft.network/virtualnetworks/subnets) controls the network security group settings.
-* [Microsoft.Network/publicIpAddresses](/azure/templates/microsoft.network/publicIpAddresses) specifies the public IP address value used for the bastion host.
+* [Microsoft Network/networkSecurityGroups](/azure/templates/microsoft.network/virtualnetworks/subnets) controls the NSG settings.
+* [Microsoft.Network/publicIpAddresses](/azure/templates/microsoft.network/publicIpAddresses) specifies the public IP address value for the bastion host.
### Parameters
-| PARAMETER NAME | DESCRIPTION |
+| Parameter name | Description |
|--|--|
-| Region | Azure region for Bastion and virtual network. |
-| vnet-name | Name of new or existing virtual network to which Azure Bastion should be deployed. |
-| vnet-ip-prefix | IP prefix for available addresses in virtual network address space. |
-| vnet-new-or-existing | Specify whether to deploy new virtual network or deploy to an existing one. |
-| bastion-subnet-ip-prefix | Bastion subnet IP prefix MUST be within the virtual network IP prefix address space. |
-| bastion-host-name | Name of Azure Bastion resource. |
+| `Region` | Azure region for Bastion and the virtual network. |
+| `vnet-name` | Name of a new or existing virtual network to which Bastion should be deployed. |
+| `vnet-ip-prefix` | IP prefix for available addresses in a virtual network address space. |
+| `vnet-new-or-existing` | Choice of whether to deploy a new virtual network or deploy to an existing one. |
+| `bastion-subnet-ip-prefix` | Bastion subnet IP prefix, which must be within the virtual network IP prefix's address space. |
+| `bastion-host-name` | Name of the Bastion resource. |
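The constraint that `bastion-subnet-ip-prefix` must fall within the virtual network's IP prefix can be checked locally before you deploy. A minimal sketch using Python's standard `ipaddress` module with this quickstart's example values:

```python
import ipaddress

vnet = ipaddress.ip_network("10.1.0.0/16")     # vnet-ip-prefix
bastion = ipaddress.ip_network("10.1.1.0/24")  # bastion-subnet-ip-prefix

# The template deployment fails if the Bastion subnet isn't inside the VNet space.
print(bastion.subnet_of(vnet))  # True
```

A prefix outside the address space, such as `10.2.0.0/24` against `10.1.0.0/16`, would fail this check and the deployment.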
> [!NOTE]
-> To find more templates, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
->
+> To find more templates, see [Azure quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
## Deploy the template

> [!IMPORTANT]
> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
-In this section, you'll deploy Bastion using the **Deploy to Azure** button below or in the Azure portal. You don't connect and sign in to your virtual machine or deploy Bastion from your VM directly.
+In this section, you deploy Bastion by using the Azure portal. You don't connect and sign in to your virtual machine or deploy Bastion directly from your VM.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Deploy to Azure** button below.
+1. Select the following **Deploy to Azure** button:
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.network%2fazure-bastion-nsg%2fazuredeploy.json)
-1. In the **Azure Bastion as a Service: Azure Quickstart Template**, enter or select the following information.
+1. In the **Azure Bastion as a Service** template, enter or select information on the **Basics** tab. Keep these considerations in mind:
- * If you're using the template for a test environment, you can use the example values specified.
- * To view the template, click **Edit template**. On this page, you can adjust some of the values such as address space or the name of certain resources. **Save** to save your changes, or **Discard**.
- * If you decide to create your bastion host in an existing VNet, make sure to fill in the values for the template as they are in your deployed environment, or the template will fail.
+ * If you're using the template for a test environment, you can use the example values that this step provides.
+ * To view the template, select **Edit template**. On this page, you can adjust some of the values, such as the address space or the name of certain resources. Select **Save** to save your changes, or select **Discard** to discard them.
+ * If you decide to create your bastion host in an existing virtual network, be sure to fill in the values for the template as they exist in your deployed environment, or the template will fail.
- :::image type="content" source="./media/quickstart-host-arm-template/bastion-template-values.png" alt-text="Screenshot of Bastion ARM template example values." lightbox="./media/quickstart-host-arm-template/bastion-template-values.png":::
+ :::image type="content" source="./media/quickstart-host-arm-template/bastion-template-values.png" alt-text="Screenshot of example values for an Azure Bastion ARM template." lightbox="./media/quickstart-host-arm-template/bastion-template-values.png":::
| Setting | Example value |
|--|--|
- | Subscription | Select your Azure subscription |
- | Resource Group |Select **Create new** enter **TestRG1**, and select **OK** |
- | Region | Enter **East US** |
- | vnet-name | Enter **VNet1** |
- | vnet-ip-prefix | Enter **10.1.0.0/16** |
- | vnet-new-or-existing | Select **new** |
- | bastion-subnet-ip-prefix | Enter **10.1.1.0/24** |
- | bastion-host-name | Enter **TestBastionHost** |
-
-1. Select the **Review + create** tab or select the **Review + create** button. Select **Create**.
-1. The deployment will complete within 10 minutes. You can view the progress on the template **Overview** page. If you close the portal, deployment will continue.
+ | **Subscription** | Select your Azure subscription. |
+ | **Resource group** |Select **Create new**, enter **TestRG1**, and then select **OK**. |
+ | **Region** | Enter **East US**. |
+ | **Vnet-name** | Enter **VNet1**. |
+ | **Vnet-ip-prefix** | Enter **10.1.0.0/16**. |
+ | **Vnet-new-or-existing** | Select **new**. |
+ | **Bastion-subnet-ip-prefix** | Enter **10.1.1.0/24**. |
+ | **Bastion-host-name** | Enter **TestBastionHost**. |
+
+1. Select the **Review + create** tab, or select the **Review + create** button. Select **Create**.
+1. The deployment finishes within 10 minutes. You can view the progress on the template **Overview** pane. If you close the portal, deployment continues.
## Validate the deployment
-In this section, you'll validate the deployment of Azure Bastion.
+To validate the deployment of Bastion:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the **TestRG1** resource group that you created in the previous section.
-1. From the Overview page of the resource group, scroll down to **Resources** in the middle pane. Validate the Bastion resource.
- :::image type="content" source="./media/quickstart-host-arm-template/bastion-validate-deployment-full.png" alt-text="Screenshot shows the Azure Bastion resource." lightbox="./media/quickstart-host-arm-template/bastion-validate-deployment.png":::
+1. From the **Overview** pane of the resource group, scroll down to the **Resources** tab. Validate the Bastion resource.
+
+ :::image type="content" source="./media/quickstart-host-arm-template/bastion-validate-deployment-full.png" alt-text="Screenshot that shows the Azure Bastion resource in a resource group." lightbox="./media/quickstart-host-arm-template/bastion-validate-deployment.png":::
## Clean up resources
-When you're done using the virtual network and the virtual machines, delete the resource group and all of the resources it contains:
+When you finish using the virtual network and the virtual machines, delete the resource group and all of the resources that it contains:
-1. Enter the name of your resource group in the **Search** box at the top of the portal and select it from the search results.
+1. Enter the name of your resource group in the **Search** box at the top of the portal, and then select it from the search results.
1. Select **Delete resource group**.
-1. Enter your resource group for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
+1. Enter your resource group for **TYPE THE RESOURCE GROUP NAME**, and then select **Delete**.
## Next steps
-In this quickstart, you deployed Bastion using the Bastion ARM template, and then connected to a virtual machine securely via Bastion. Next, you can continue with the following steps if you want to copy and paste to your virtual machine.
+In this quickstart, you deployed Bastion by using an ARM template. You then connected to a virtual machine securely via Bastion. Continue with the following steps if you want to copy and paste to your virtual machine.
> [!div class="nextstepaction"]
> [Quickstart: Create a Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md)
+> [!div class="nextstepaction"]
> [Create an RDP connection to a Windows VM using Azure Bastion](../bastion/bastion-connect-vm-rdp-windows.md)
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
Title: 'Quickstart: Deploy Azure Bastion automatically - Basic SKU'
-description: Learn how to deploy Bastion with default settings from the Azure portal.
+description: Learn how to deploy Azure Bastion with default settings from the Azure portal.
-# Quickstart: Deploy Bastion automatically - Basic SKU
+# Quickstart: Deploy Azure Bastion automatically - Basic SKU
-In this quickstart, you'll learn how to deploy Azure Bastion automatically in the Azure portal using default settings and the Basic SKU. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration.
+In this quickstart, you learn how to deploy Azure Bastion automatically in the Azure portal by using default settings and the Basic SKU. After you deploy Bastion, you can use SSH or RDP to connect to virtual machines (VMs) in the virtual network via Bastion by using the private IP addresses of the VMs. The VMs that you connect to don't need a public IP address, client software, an agent, or a special configuration.
-The default SKU for this type of deployment is the Basic SKU. If you want to deploy using the Developer SKU instead, see [Deploy Bastion automatically - Developer SKU](quickstart-developer-sku.md). If you want to deploy using the Standard SKU, see the [Tutorial - Deploy Bastion using specified settings](tutorial-create-host-portal.md). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+The following diagram shows the architecture of Bastion.
+
+The default tier for this type of deployment is the Basic SKU. If you want to deploy by using the Developer SKU instead, see [Quickstart: Deploy Azure Bastion - Developer SKU](quickstart-developer-sku.md). If you want to deploy by using the Standard SKU, see [Tutorial: Deploy Azure Bastion by using specified settings](tutorial-create-host-portal.md). For more information about Bastion, see [What is Azure Bastion?](bastion-overview.md).
The steps in this article help you do the following:
-* Deploy Bastion with default settings from your VM resource using the Azure portal. When you deploy using default settings, the settings are based on the virtual network to which Bastion will be deployed.
-* After you deploy Bastion, you'll then connect to your VM via the portal using RDP/SSH connectivity and the VM's private IP address.
-* If your VM has a public IP address that you don't need for anything else, you can remove it.
+* Deploy Bastion with default settings from your VM resource by using the Azure portal. When you deploy by using default settings, the settings are based on the virtual network where Bastion will be deployed.
+* Connect to your VM via the portal by using SSH or RDP connectivity and the VM's private IP address.
+* Remove your VM's public IP address if you don't need it for anything else.
> [!IMPORTANT]
> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
## <a name="prereq"></a>Prerequisites
-* Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
-* **A VM in a VNet**.
+To complete this quickstart, you need these resources:
+
+* An Azure subscription. If you don't already have one, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
+* A VM in a virtual network.
- When you deploy Bastion using default values, the values are pulled from the virtual network in which your VM resides. This VM doesn't become a part of the Bastion deployment itself, but you do connect to it later in the exercise.
+ When you deploy Bastion by using default values, the values are pulled from the virtual network in which your VM resides. This VM doesn't become a part of the Bastion deployment itself, but you connect to it later in the exercise.
- * If you don't already have a VM in a virtual network, create one using [Quickstart: Create a Windows VM](../virtual-machines/windows/quick-create-portal.md), or [Quickstart: Create a Linux VM](../virtual-machines/linux/quick-create-portal.md).
- * If you need example values, see the [Example values](#values) section.
- * If you already have a virtual network, make sure it's selected on the Networking tab when you create your VM.
- * If you don't have a virtual network, you can create one at the same time you create your VM.
+ If you don't already have a VM in a virtual network, create a VM by using [Quickstart: Create a Windows VM](../virtual-machines/windows/quick-create-portal.md) or [Quickstart: Create a Linux VM](../virtual-machines/linux/quick-create-portal.md).
+
+ If you don't have a virtual network, you can create one at the same time that you create your VM. If you already have a virtual network, make sure that it's selected on the **Networking** tab when you create your VM.
-* **Required VM roles:**
+* Required VM roles:
- * Reader role on the virtual machine.
- * Reader role on the NIC with private IP of the virtual machine.
+ * Reader role on the virtual machine
+ * Reader role on the network adapter (NIC) with the private IP of the virtual machine
-* **Required VM ports inbound ports:**
+* Required VM inbound ports:
  * 3389 for Windows VMs
  * 22 for Linux VMs
The steps in this article help you do the following:
### <a name="values"></a>Example values
-You can use the following example values when creating this configuration, or you can substitute your own.
+You can use the following example values when you're creating this configuration, or you can substitute your own.
-**Basic VNet and VM values:**
+#### Basic virtual network and VM values
-|**Name** | **Value** |
+|Name | Value |
| | |
-| Virtual machine| TestVM |
-| Resource group | TestRG1 |
-| Region | East US |
-| Virtual network | VNet1 |
-| Address space | 10.1.0.0/16 |
-| Subnets | FrontEnd: 10.1.0.0/24 |
+| **Virtual machine**| **TestVM** |
+| **Resource group** | **TestRG1** |
+| **Region** | **East US** |
+| **Virtual network** | **VNet1** |
+| **Address space** | **10.1.0.0/16** |
+| **Subnets** | **FrontEnd: 10.1.0.0/24** |
-**Bastion values:**
+#### Bastion values
-When you deploy from VM settings, Bastion is automatically configured with default values from the virtual network.
+When you deploy from VM settings, Bastion is automatically configured with the following default values from the virtual network.
-|**Name** | **Default value** |
+|Name | Default value |
|||
-|AzureBastionSubnet | This subnet is created within the virtual network as a /26 |
-|SKU | Basic |
-| Name | Based on the virtual network name |
-| Public IP address name | Based on the virtual network name |
+|**AzureBastionSubnet** | Created within the virtual network as a /26 |
+|**SKU** | **Basic** |
+| **Name** | Based on the virtual network name |
+| **Public IP address name** | Based on the virtual network name |
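The AzureBastionSubnet default (created as a /26) can be pictured with Python's standard `ipaddress` module. This sketch only shows what a /26 carved from the example VNet1 address space looks like; it's an illustration, not how Bastion actually selects the range:

```python
import ipaddress

vnet = ipaddress.ip_network("10.1.0.0/16")
frontend = ipaddress.ip_network("10.1.0.0/24")  # existing subnet from the example values

# First /26 in the address space that doesn't overlap the FrontEnd subnet.
bastion_subnet = next(s for s in vnet.subnets(new_prefix=26) if not s.overlaps(frontend))
print(bastion_subnet, bastion_subnet.num_addresses)  # 10.1.1.0/26 64
```

A /26 provides 64 addresses, which is the minimum Bastion needs to scale its host instances.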
## <a name="createvmset"></a>Deploy Bastion
-When you create Azure Bastion in the portal using **Deploy Bastion**, Azure Bastion deploys automatically using default settings and the Basic SKU. You can't modify or specify additional values for a default deployment. After deployment completes, you can go to the bastion host **Configuration** page to select certain additional settings and features. You can also upgrade a SKU later to add more features, but you can't downgrade a SKU once Bastion is deployed. For more information, see [About configuration settings](configuration-settings.md).
+When you create an Azure Bastion instance in the portal by using **Deploy Bastion**, you deploy Bastion automatically by using default settings and the Basic SKU. You can't modify, or specify additional values for, a default deployment.
+
+After deployment finishes, you can go to the bastion host's **Configuration** page to select certain additional settings and features. You can also upgrade a SKU later to add more features, but you can't downgrade a SKU after Bastion is deployed. For more information, see [About Azure Bastion configuration settings](configuration-settings.md).
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
+1. In the portal, go to the VM that you want to connect to. The values from the virtual network where this VM resides will be used to create the Bastion deployment.
1. On the page for your VM, in the **Operations** section on the left menu, select **Bastion**.
-1. On the Bastion page, select the arrow next to **Dedicated Deployment Options** to expand the section.
+1. On the **Bastion** pane, select the arrow next to **Dedicated Deployment Options** to expand the section.
+1. In the **Create Bastion** section, select **Deploy Bastion**.
- :::image type="content" source="./media/quickstart-host-portal/deploy-bastion-automatically.png" alt-text="Screenshot showing how to expand Dedicated Deployment Options and Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy-bastion-automatically.png":::
-1. In the Create Bastion section, select **Deploy Bastion**.
-1. Bastion begins deploying. This can take around 10 minutes to complete.
+ :::image type="content" source="./media/quickstart-host-portal/deploy-bastion-automatically.png" alt-text="Screenshot that shows dedicated deployment options and the button for deploying an Azure Bastion instance." lightbox="./media/quickstart-host-portal/deploy-bastion-automatically.png":::
+1. Bastion begins deploying. The process can take around 10 minutes to finish.
> [!NOTE]
> [!INCLUDE [Bastion failed subnet](../../includes/bastion-failed-subnet.md)]
- >
## <a name="connect"></a>Connect to a VM
-When the Bastion deployment is complete, the screen changes to the **Connect** page.
+When the Bastion deployment is complete, the screen changes to the **Connect** pane.
+
+1. Enter your authentication credentials. Then, select **Connect**.
-1. Type your authentication credentials. Then, select **Connect**.
+ :::image type="content" source="./media/quickstart-host-portal/connect-vm.png" alt-text="Screenshot shows the pane for connecting by using Azure Bastion." lightbox="./media/quickstart-host-portal/connect-vm.png":::
- :::image type="content" source="./media/quickstart-host-portal/connect-vm.png" alt-text="Screenshot shows the Connect using Azure Bastion dialog." lightbox="./media/quickstart-host-portal/connect-vm.png":::
+1. The connection to this virtual machine via Bastion opens directly in the Azure portal (over HTML5) by using port 443 and the Bastion service. When the portal asks you for permissions to the clipboard, select **Allow**. This step lets you use the remote clipboard arrows on the left of the window.
-1. The connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service. Select **Allow** when asked for permissions to the clipboard. This lets you use the remote clipboard arrows on the left of the screen.
+ :::image type="content" source="./media/quickstart-host-portal/connected.png" alt-text="Screenshot that shows an RDP connection to a virtual machine." lightbox="./media/quickstart-host-portal/connected.png":::
- * When you connect, the desktop of the VM might look different than the example screenshot.
- * Using keyboard shortcut keys while connected to a VM might not result in the same behavior as shortcut keys on a local computer. For example, when connected to a Windows VM from a Windows client, CTRL+ALT+END is the keyboard shortcut for CTRL+ALT+Delete on a local computer. To do this from a Mac while connected to a Windows VM, the keyboard shortcut is Fn+CTRL+ALT+Backspace.
+ > [!NOTE]
+ > When you connect, the desktop of the VM might look different from the example screenshot.
- :::image type="content" source="./media/quickstart-host-portal/connected.png" alt-text="Screenshot shows an RDP connection to a virtual machine." lightbox="./media/quickstart-host-portal/connected.png":::
+Using keyboard shortcut keys while you're connected to a VM might not result in the same behavior as shortcut keys on a local computer. For example, when you're connected to a Windows VM from a Windows client, Ctrl+Alt+End is the keyboard shortcut for Ctrl+Alt+Delete on a local computer. To do this from a Mac while you're connected to a Windows VM, the keyboard shortcut is Fn+Ctrl+Alt+Backspace.
-### <a name="audio"></a>To enable audio output
+### <a name="audio"></a>Enable audio output
[!INCLUDE [Enable VM audio output](../../includes/bastion-vm-audio.md)]
-## <a name="remove"></a>Remove VM public IP address
+## <a name="remove"></a>Remove a VM's public IP address
[!INCLUDE [Remove a public IP address from a VM](../../includes/bastion-remove-ip.md)]

## Clean up resources
-When you're done using the virtual network and the virtual machines, delete the resource group and all of the resources it contains:
+When you finish using the virtual network and the virtual machines, delete the resource group and all of the resources that it contains:
-1. Enter the name of your resource group in the **Search** box at the top of the portal and select it from the search results.
+1. Enter the name of your resource group in the **Search** box at the top of the portal, and then select it from the search results.
1. Select **Delete resource group**.
-1. Enter your resource group for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
+1. Enter your resource group for **TYPE THE RESOURCE GROUP NAME**, and then select **Delete**.
## Next steps
-In this quickstart, you deployed Bastion to your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can configure more features and work with VM connections.
+In this quickstart, you deployed Bastion to your virtual network. You then connected to a virtual machine securely via Bastion. Next, you can configure more features and work with VM connections.
> [!div class="nextstepaction"]
-> [VM connections](vm-about.md)
+> [VM connections and features](vm-about.md)
> [!div class="nextstepaction"]
-> [Azure Bastion configuration settings and features](configuration-settings.md)
+> [Azure Bastion configuration settings](configuration-settings.md)
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
Title: 'Tutorial: Deploy Bastion using specified settings: Azure portal'
-description: Learn how to deploy Bastion using settings that you specify - Azure portal.
+ Title: 'Tutorial: Deploy Azure Bastion using specified settings: Azure portal'
+description: Learn how to deploy Azure Bastion by using settings that you specify in the Azure portal.
-# Tutorial: Deploy Bastion using specified settings
+# Tutorial: Deploy Azure Bastion by using specified settings
-This tutorial helps you deploy Azure Bastion from the Azure portal using your own specified manual settings. This article helps you deploy Bastion using a SKU that you specify. The SKU determines the features and connections that are available for your deployment. For more information about SKUs, see [Configuration settings - SKUs](configuration-settings.md#skus).
+This tutorial helps you deploy Azure Bastion from the Azure portal by using your own manual settings and a SKU (product tier) that you specify. The SKU determines the features and connections that are available for your deployment. For more information about SKUs, see [Configuration settings - SKUs](configuration-settings.md#skus).
-In the Azure portal, when you use the **Configure Manually** option to deploy Bastion, you can specify configuration values such as instance counts and SKUs at the time of deployment. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration.
+In the Azure portal, when you use the **Configure manually** option to deploy Bastion, you can specify configuration values such as instance counts and SKUs at the time of deployment. After Bastion is deployed, you can use SSH or RDP to connect to virtual machines (VMs) in the virtual network via Bastion using the private IP addresses of the VMs. When you connect to a VM, it doesn't need a public IP address, client software, an agent, or a special configuration.
+The following diagram shows the architecture of Bastion.
-In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host scaling (instance count), which the Standard SKU supports. You could optionally deploy using a lower SKU, but you won't be able to adjust host scaling. After the deployment is complete, you connect to your VM via private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it.
-In this tutorial, you'll learn how to:
+In this tutorial, you deploy Bastion by using the Standard SKU. You adjust host scaling (instance count), which the Standard SKU supports. If you use a lower SKU for the deployment, you can't adjust host scaling.
+
+After the deployment is complete, you connect to your VM via private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Deploy Bastion to your VNet.
+> * Deploy Bastion to your virtual network.
> * Connect to a virtual machine.
> * Remove the public IP address from a virtual machine.

## Prerequisites
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* A [virtual network](../virtual-network/quick-create-portal.md). This will be the virtual network to which you deploy Bastion.
-* A virtual machine in the virtual network. This VM isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this tutorial via Bastion. If you don't have a VM, create one using [Quickstart: Create a VM](../virtual-machines/windows/quick-create-portal.md).
-* **Required VM roles:**
+To complete this tutorial, you need these resources:
+
+* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* A [virtual network](../virtual-network/quick-create-portal.md) where you'll deploy Bastion.
+* A virtual machine in the virtual network. This VM isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this tutorial via Bastion. If you don't have a VM, create one by using [Quickstart: Create a Windows VM](../virtual-machines/windows/quick-create-portal.md) or [Quickstart: Create a Linux VM](../virtual-machines/linux/quick-create-portal.md).
+* Required VM roles:
- * Reader role on the virtual machine.
- * Reader role on the NIC with private IP of the virtual machine.
+ * Reader role on the virtual machine
+ * Reader role on the network adapter (NIC) with the private IP of the virtual machine
-* **Required inbound ports:**
+* Required inbound ports:
- * For Windows VMs - RDP (3389)
- * For Linux VMs - SSH (22)
+ * For Windows VMs: RDP (3389)
+ * For Linux VMs: SSH (22)
[!INCLUDE [DNS private zone](../../includes/bastion-private-dns-zones-non-support.md)]
In this tutorial, you'll learn how to:
You can use the following example values when creating this configuration, or you can substitute your own.
-**Basic VNet and VM values:**
+#### Basic virtual network and VM values
-|**Name** | **Value** |
+|Name | Value |
| | |
-| Virtual machine| TestVM |
-| Resource group | TestRG1 |
-| Region | East US |
-| Virtual network | VNet1 |
-| Address space | 10.1.0.0/16 |
-| Subnets | FrontEnd: 10.1.0.0/24 |
+| **Virtual machine**| **TestVM** |
+| **Resource group** | **TestRG1** |
+| **Region** | **East US** |
+| **Virtual network** | **VNet1** |
+| **Address space** | **10.1.0.0/16** |
+| **Subnets** | **FrontEnd: 10.1.0.0/24** |
-**Azure Bastion values:**
+#### Bastion values
-|**Name** | **Value** |
+|Name | Value |
| | |
-| Name | VNet1-bastion |
-| + Subnet Name | AzureBastionSubnet |
-| AzureBastionSubnet addresses | A subnet within your VNet address space with a subnet mask /26 or larger.<br> For example, 10.1.1.0/26. |
-| Tier/SKU | Standard |
-| Instance count (host scaling)| 3 or greater |
-| Public IP address | Create new |
-| Public IP address name | VNet1-ip |
-| Public IP address SKU | Standard |
-| Assignment | Static |
+| **Name** | **VNet1-bastion** |
+| **+ Subnet Name** | **AzureBastionSubnet** |
+| **AzureBastionSubnet addresses** | A subnet within your virtual network address space with a subnet mask of /26 or larger; for example, **10.1.1.0/26** |
+| **Tier/SKU** | **Standard** |
+| **Instance count (host scaling)**| **3** or greater |
+| **Public IP address** | **Create new** |
+| **Public IP address name** | **VNet1-ip** |
+| **Public IP address SKU** | **Standard** |
+| **Assignment** | **Static** |
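The example values in the table can be sanity-checked offline. This standard-library sketch (an illustration only, using the assumed values from the table) verifies that the Bastion subnet falls inside the virtual network's address space and meets the /26-or-larger requirement:

```python
import ipaddress

vnet = ipaddress.ip_network("10.1.0.0/16")     # Address space from the table
bastion = ipaddress.ip_network("10.1.1.0/26")  # Example AzureBastionSubnet

# The Bastion subnet must fall within the virtual network's address space.
assert bastion.subnet_of(vnet)

# "/26 or larger" means a prefix length of 26 or smaller (/26, /25, /24, ...).
assert bastion.prefixlen <= 26

print("AzureBastionSubnet example values are valid")
```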
## <a name="createhost"></a>Deploy Bastion
-This section helps you deploy Bastion to your virtual network. Once Bastion is deployed, you can connect securely to any VM in the virtual network using its private IP address.
+This section helps you deploy Bastion to your virtual network. After Bastion is deployed, you can connect securely to any VM in the virtual network using its private IP address.
> [!IMPORTANT]
> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Go to your virtual network.
-1. On the page for your virtual network, in the left pane, select **Bastion** to open the **Bastion** page.
+1. On the page for your virtual network, on the left pane, select **Bastion**.
-1. On the Bastion page, expand **Dedicated Deployment Options**.
-1. Select **Configure manually**. This lets you configure specific additional settings (such as the SKU) when deploying Bastion to your virtual network.
+1. On the **Bastion** pane, expand **Dedicated Deployment Options**.
+1. Select **Configure manually**. This option lets you configure specific additional settings (such as the SKU) when you're deploying Bastion to your virtual network.
- :::image type="content" source="./media/tutorial-create-host-portal/manual-configuration.png" alt-text="Screenshot of Bastion page showing configure bastion on my own." lightbox="./media/tutorial-create-host-portal/manual-configuration.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/manual-configuration.png" alt-text="Screenshot that shows dedicated deployment options for Azure Bastion and the button for manual configuration." lightbox="./media/tutorial-create-host-portal/manual-configuration.png":::
-1. On the **Create a Bastion** page, configure the settings for your bastion host. Project details are populated from your virtual network values. Configure the **Instance details** values.
+1. On the **Create a Bastion** pane, configure the settings for your bastion host. Project details are populated from your virtual network values. Under **Instance details**, configure these values:
- * **Name**: Type the name that you want to use for your bastion resource.
+ * **Name**: The name that you want to use for your Bastion resource.
- * **Region**: The Azure public region in which the resource will be created. Choose the region in which your virtual network resides.
+ * **Region**: The Azure public region in which the resource will be created. Choose the region where your virtual network resides.
- * **Tier:** The tier is also known as the **SKU**. For this tutorial, select **Standard**. For information about the features available for each SKU, see [Configuration settings - SKU](configuration-settings.md#skus).
+ * **Tier**: The SKU. For this tutorial, select **Standard**. For information about the features available for each SKU, see [Configuration settings - SKU](configuration-settings.md#skus).
- * **Instance count:** This is the setting for **host scaling** and is available for the Standard SKU. Host scaling is configured in scale unit increments. Use the slider or type a number to configure the instance count that you want. For this tutorial, you can select the instance count you'd prefer. For more information, see [Host scaling](configuration-settings.md#instance) and [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion).
+ * **Instance count**: The setting for host scaling, which is available for the Standard SKU. You configure host scaling in scale unit increments. Use the slider or enter a number to configure the instance count that you want. For more information, see [Instances and host scaling](configuration-settings.md#instance) and [Azure Bastion pricing](https://azure.microsoft.com/pricing/details/azure-bastion).
- :::image type="content" source="./media/tutorial-create-host-portal/instance-values.png" alt-text="Screenshot of Bastion page instance values." lightbox="./media/tutorial-create-host-portal/instance-values.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/instance-values.png" alt-text="Screenshot of Azure Bastion instance details." lightbox="./media/tutorial-create-host-portal/instance-values.png":::
-1. Configure the **virtual networks** settings. Select your virtual network from the dropdown. If you don't see your virtual network in the dropdown list, make sure you selected the correct Region in the previous settings on this page.
+1. Configure the **Virtual networks** settings. Select your virtual network from the dropdown list. If your virtual network isn't in the dropdown list, make sure that you selected the correct **Region** value in the previous step.
-1. To configure the AzureBastionSubnet, select **Manage subnet configuration**.
+1. To configure AzureBastionSubnet, select **Manage subnet configuration**.
- :::image type="content" source="./media/tutorial-create-host-portal/select-vnet.png" alt-text="Screenshot of configure virtual networks section." lightbox="./media/tutorial-create-host-portal/select-vnet.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/select-vnet.png" alt-text="Screenshot of the section for configuring virtual networks." lightbox="./media/tutorial-create-host-portal/select-vnet.png":::
-1. On the **Subnets** page, select **+Subnet** to open the **Add subnet** page.
+1. On the **Subnets** pane, select **+Subnet**.
-1. On the **Add subnet page**, create the 'AzureBastionSubnet' subnet using the following values. Leave the other values as default.
+1. On the **Add subnet** pane, create the AzureBastionSubnet subnet by using the following values. Leave the other values as default.
* The subnet name must be **AzureBastionSubnet**.
- * The subnet must be at least **/26 or larger** (/26, /25, /24 etc.) to accommodate features available with the Standard SKU.
+ * The subnet must be **/26** or larger (for example, **/26**, **/25**, or **/24**) to accommodate features available with the Standard SKU.
- Select **Save** at the bottom of the page to save your values.
+ Select **Save** at the bottom of the pane to save your values.
-1. At the top of the **Subnets** page, select **Create a Bastion** to return to the Bastion configuration page.
+1. At the top of the **Subnets** pane, select **Create a Bastion** to return to the Bastion configuration pane.
- :::image type="content" source="./media/tutorial-create-host-portal/create-page.png" alt-text="Screenshot of Create a Bastion."lightbox="./media/tutorial-create-host-portal/create-page.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/create-page.png" alt-text="Screenshot of the pane that lists Azure Bastion subnets."lightbox="./media/tutorial-create-host-portal/create-page.png":::
-1. The **Public IP address** section is where you configure the public IP address of the Bastion host resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating. Create a new IP address. You can leave the default naming suggestion.
+1. The **Public IP address** section is where you configure the public IP address of the bastion host resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource that you're creating.
-1. When you finish specifying the settings, select **Review + Create**. This validates the values.
+ Create a new IP address. You can leave the default naming suggestion.
-1. Once validation passes, you can deploy Bastion. Select **Create**. You'll see a message letting you know that your deployment is in process. Status displays on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
+1. When you finish specifying the settings, select **Review + Create**. This step validates the values.
+
+1. After the values pass validation, you can deploy Bastion. Select **Create**.
+
+ A message says that your deployment is in process. The status appears on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
## <a name="connect"></a>Connect to a VM

You can use any of the following detailed articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
-You can also use the basic [Connection steps](#steps) in the following section to connect to your VM.
-### <a name="steps"></a>Connection steps
+You can also use these basic connection steps to connect to your VM:
[!INCLUDE [Connect to a VM](../../includes/bastion-vm-connect.md)]
-### <a name="audio"></a>To enable audio output
+### <a name="audio"></a>Enable audio output
[!INCLUDE [Enable VM audio output](../../includes/bastion-vm-audio.md)]
-## <a name="ip"></a>Remove VM public IP address
+## <a name="ip"></a>Remove a VM's public IP address
[!INCLUDE [Remove a public IP address from a VM](../../includes/bastion-remove-ip.md)]

## Clean up resources
-If you're not going to continue to use this application, delete
-your resources using the following steps:
+When you finish using this application, delete your resources:
-1. Enter the name of your resource group in the **Search** box at the top of the portal. When you see your resource group in the search results, select it.
+1. Enter the name of your resource group in the **Search** box at the top of the portal. When your resource group appears in the search results, select it.
1. Select **Delete resource group**.
-1. Enter the name of your resource group for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+1. Enter the name of your resource group for **TYPE THE RESOURCE GROUP NAME**, and then select **Delete**.
## Next steps

In this tutorial, you deployed Bastion to a virtual network and connected to a VM. You then removed the public IP address from the VM. Next, learn about and configure additional Bastion features.
-> [Bastion features and configuration settings](configuration-settings.md)
+> [Azure Bastion configuration settings](configuration-settings.md)
> [!div class="nextstepaction"]
-> [Bastion - VM connections and features](vm-about.md)
+> [VM connections and features](vm-about.md)
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch
description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 10/23/2023
Last updated : 11/06/2023
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-edge-secured-core.md
zone_pivot_groups: app-service-platform-windows-linux-sphere-rtos
## Edge Secured-Core certification requirements ## ### Program purpose ###
-Edge Secured-core is an incremental certification in the Azure Certified Device program for IoT devices running a full operating system, such as Linux, Windows 10 IoT or Azure Sphere OS. This program enables device partners to differentiate their devices by meeting an additional set of security criteria. Devices meeting this criteria enable these promises:
+Edge Secured-core is a security certification for devices running a full operating system. Edge Secured-core currently supports Windows IoT and Azure Sphere OS. Linux support is coming in the future. This program enables device partners to differentiate their devices by meeting an additional set of security criteria. Devices that meet these criteria enable the following promises:
1. Hardware-based device identity
2. Capable of enforcing system integrity
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
</br>
-|Name|SecuredCore.Hardware.Identity|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Hardware.Identity|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate the device identity is rooted in hardware and can be the primary authentication method with Azure IoT Hub Device Provisioning Service (DPS).|
|Requirements dependency|TPM v2.0 device|
|Validation Type|Manual/Tools|
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
</br>
-|Name|SecuredCore.Hardware.MemoryProtection|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Hardware.MemoryProtection|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that DMA isn't enabled on externally accessible ports.|
|Requirements dependency|Only if DMA capable ports exist|
|Validation Type|Manual/Tools|
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
</br>
-|Name|SecuredCore.Firmware.Protection|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Firmware.Protection|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to ensure that the device has adequate mitigations from firmware security threats.|
|Requirements dependency|DRTM + UEFI|
|Validation Type|Manual/Tools|
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
</br>
-|Name|SecuredCore.Firmware.SecureBoot|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Firmware.SecureBoot|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate the boot integrity of the device.|
|Requirements dependency|UEFI|
|Validation Type|Manual/Tools|
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
</br>
-|Name|SecuredCore.Firmware.Attestation|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Firmware.Attestation|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to ensure the device can remotely attest to the Microsoft Azure Attestation service.|
|Requirements dependency|Azure Attestation Service|
|Validation Type|Manual/Tools|
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
</br>
-|Name|SecuredCore.Encryption.Storage|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Encryption.Storage|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that sensitive data can be encrypted on nonvolatile storage.|
|Validation Type|Manual/Tools|
|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure Secure-boot and BitLocker are enabled and bound to PCR7.|
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
</br>
-|Name|SecuredCore.Encryption.TLS|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Encryption.TLS|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate support for required TLS versions and cipher suites.|
-|Requirements dependency|Windows 10 IoT Enterprise Version 1903 or greater. Note: other requirements may require greater versions for other services. |
+|Requirements dependency|Windows 10 IoT Enterprise Version 1903 or greater. Note: other requirements might require greater versions for other services. |
|Validation Type|Manual/Tools| Validation|Device to be validated through toolset to ensure the device supports a minimum TLS version of 1.2 and supports the following required TLS cipher suites.<ul><li>TLS_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_RSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_DHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256</li></ul>| |Resources| [TLS support in IoT Hub](../iot-hub/iot-hub-tls-support.md) <br /> [TLS Cipher suites in Windows 10](/windows/win32/secauthn/tls-cipher-suites-in-windows-10-v1903) |
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Protection.CodeIntegrity|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Protection.CodeIntegrity|
+|:|:|
+|Status|Required|
|Description|The purpose of this requirement is to validate that code integrity is available on this device.|
|Requirements dependency|HVCI is enabled on the device.|
|Validation Type|Manual/Tools|
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Protection.NetworkServices|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2024|
+|Name|SecuredCore.Protection.NetworkServices|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that services listening for input from the network aren't running with elevated privileges.|
|Validation Type|Manual/Tools|
-|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure that third party services accepting network connections aren't running with elevated LocalSystem and LocalService privileges. <ol><li>Exceptions may apply</li></ol>|
+|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure that third party services accepting network connections aren't running with elevated LocalSystem and LocalService privileges. <ol><li>Exceptions might apply</li></ol>|
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Built-in.Security|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|Future|Future|
+|Name|SecuredCore.Built-in.Security|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to make sure devices can report security information and events by sending data to Azure Defender for IoT. <br>Note: Download and deploy security agent from GitHub|
|Target Availability|2022|
|Validation Type|Manual/Tools|
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Protection.Baselines|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|Future|Future|
+|Name|SecuredCore.Protection.Baselines|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that the system conforms to a baseline security configuration.|
|Target Availability|2022|
|Requirements dependency|Azure Defender for IoT|
Some requirements of this program are based on a business agreement between your
::: zone pivot="platform-linux"

## Linux OS Support
-OS Support is determined through underlying requirements of Azure services and our ability to validate scenarios.
-
-The Edge Secured-core program for Linux is enabled through the IoT Edge runtime, which is supported based on [Tier 1 and Tier 2 operating systems](../iot-edge/support.md).
-
-## IoT Edge
-Edge Secured-core validation on Linux based devices is executed through a container run on the IoT Edge runtime. For this reason, all devices that are certifying Edge Secured-core must have the IoT Edge runtime installed.
-
-## Linux Hardware/Firmware Requirements
>[!Note]
-> * Hardware must support TPM v2.0, SRTM, Secure-boot or UBoot.
-> * Firmware will be submitted to Microsoft for vulnerability and configuration evaluation.
+> Linux is not yet supported. The requirements below represent the expected requirements. If you're interested in certifying a Linux device, contact iotcert@microsoft.com with your device hardware and OS specs and whether the device meets each of the draft requirements below.
+## Linux Hardware/Firmware Requirements
-|Name|SecuredCore.Hardware.Identity|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
-|Description|The purpose of the requirement is to validate the device identify is rooted in hardware.|||
-|Requirements dependency||TPM v2.0 device|TPM v2.0 </br><sup>or *other supported method</sup>|
-|Validation Type|Manual/Tools|||
-|Validation|Device to be validated through toolset to ensure that the device has a HWRoT present and that it can be provisioned through DPS using TPM or SE.|||
-|Resources|[Setup auto provisioning with DPS](../iot-dps/quick-setup-auto-provision.md)|||
+|Name|SecuredCore.Hardware.Identity|
+|:|:|
+|Status|Required|
+|Description|The purpose of the requirement is to validate the device identity is rooted in hardware.|
+|Requirements dependency|TPM v2.0 </br><sup>or *other supported method</sup>|
+|Validation Type|Manual/Tools|
+|Validation|Device to be validated through toolset to ensure that the device has a HWRoT present and that it can be provisioned through DPS using TPM or SE.|
+|Resources|[Set up auto-provisioning with DPS](../iot-dps/quick-setup-auto-provision.md)|
</br>
-|Name|SecuredCore.Hardware.MemoryProtection|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Hardware.MemoryProtection|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to ensure that memory integrity helps protect the device from vulnerable peripherals.| |Validation Type|Manual/Tools| |Validation|Memory regions for peripherals must be gated with hardware/firmware such as memory region domain controllers or an SMMU (System Memory Management Unit).|
Edge Secured-core validation on Linux based devices is executed through a contai
</br>
-|Name|SecuredCore.Firmware.Protection|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Firmware.Protection|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to ensure that the device has adequate mitigations against firmware security threats.| |Validation Type|Manual/Tools| |Validation|Device to be validated through toolset to confirm it's protected from firmware security threats through one of the following approaches: <ul><li>Approved FW that does SRTM + runtime firmware hardening</li><li>Firmware scanning and evaluation by approved Microsoft third party</li></ul> |
Edge Secured-core validation on Linux based devices is executed through a contai
</br>
-|Name|SecuredCore.Firmware.SecureBoot|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Firmware.SecureBoot|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate the boot integrity of the device.| |Validation Type|Manual/Tools| |Validation|Device to be validated through toolset to ensure that firmware and kernel signatures are validated every time the device boots. <ul><li>UEFI: Secure boot is enabled</li><li>Uboot: Verified boot is enabled</li></ul>|
Edge Secured-core validation on Linux based devices is executed through a contai
</br>
-|Name|SecuredCore.Firmware.Attestation|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Firmware.Attestation|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to ensure the device can remotely attest to the Microsoft Azure Attestation service.|
-|Dependency||TPM 2.0|TPM 2.0 </br><sup>or *supported OP-TEE based application chained to a HWRoT (Secure Element or Secure Enclave)</sup>|
+|Dependency|TPM 2.0 </br><sup>or *supported OP-TEE based application chained to a HWRoT (Secure Element or Secure Enclave)</sup>|
|Validation Type|Manual/Tools| |Validation|Device to be validated through toolset to ensure that platform boot logs and applicable runtime measurements can be collected and remotely attested to the Microsoft Azure Attestation service.| |Resources| [Microsoft Azure Attestation](../attestation/index.yml) </br> Certification portal test includes an attestation client that when combined with the TPM 2.0 can validate the Microsoft Azure Attestation service.|
Edge Secured-core validation on Linux based devices is executed through a contai
</br>
-|Name|SecuredCore.Hardware.SecureEnclave|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|Future|Future|
+|Name|SecuredCore.Hardware.SecureEnclave|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate the existence of a secure enclave and that the enclave can be used for security functions.| |Validation Type|Manual/Tools| |Validation||
Edge Secured-core validation on Linux based devices is executed through a contai
## Linux Configuration Requirements
-|Name|SecuredCore.Encryption.Storage|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Encryption.Storage|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that sensitive data can be encrypted on nonvolatile storage.| |Validation Type|Manual/Tools| |Validation|Device to be validated through toolset to ensure storage encryption is enabled and the default algorithm is XTS-AES, with a key length of 128 bits or higher.|
Edge Secured-core validation on Linux based devices is executed through a contai
</br>
-|Name|SecuredCore.Encryption.TLS|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Encryption.TLS|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate support for required TLS versions and cipher suites.| |Validation Type|Manual/Tools| Validation|Device to be validated through toolset to ensure the device supports a minimum TLS version of 1.2 and supports the following required TLS cipher suites.<ul><li>TLS_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_RSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_DHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256</li></ul>|
Validation|Device to be validated through toolset to ensure the device supports
</br>
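The TLS floor in the requirement above can be illustrated with a short sketch. The snippet below uses Python's standard `ssl` module and is illustrative only (the certification toolset performs the actual device-side validation); it shows a client context configured to refuse protocol versions older than TLS 1.2:

```python
import ssl

# Build a client context that refuses protocol versions older than
# TLS 1.2, mirroring the minimum version this requirement validates.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Inspect the cipher suites the context would offer; the certification
# toolset checks the device side against the required suite list above.
offered = [c["name"] for c in context.get_ciphers()]
print(context.minimum_version.name)  # TLSv1_2
```

A device that only negotiates TLS 1.0/1.1, or that offers none of the required suites, would fail the handshake against such a client.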
-|Name|SecuredCore.Protection.CodeIntegrity|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Protection.CodeIntegrity|
+|:|:|
+|Status|Required|
|Description|The purpose of this requirement is to validate that authorized code runs with least privilege.| |Validation Type|Manual/Tools| |Validation|Device to be validated through toolset to ensure that code integrity is enabled by validating dm-verity and IMA|
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Protection.NetworkServices|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|<sup>*</sup>Required|2023|2023|
+|Name|SecuredCore.Protection.NetworkServices|
+|:|:|
+|Status|<sup>*</sup>Required|
|Description|The purpose of the requirement is to validate that applications accepting input from the network aren't running with elevated privileges.| |Validation Type|Manual/Tools| |Validation|Device to be validated through toolset to ensure that services accepting network connections aren't running with SYSTEM or root privileges.|
Validation|Device to be validated through toolset to ensure the device supports
## Linux Software/Service Requirements
-|Name|SecuredCore.Built-in.Security|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Built-in.Security|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to make sure devices can report security information and events by sending data to Microsoft Defender for IoT.| |Validation Type|Manual/Tools| |Validation |<ol><li>Device must generate security logs and alerts.</li><li>Device must send log and alert messages to Azure Security Center.</li><li>Device must have the Azure Defender for IoT micro agent running.</li><li>Configuration_Certification_Check must report TRUE in the module twin.</li><li>Validate alert messages from Azure Defender for IoT.</li></ol>|
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Manageability.Configuration|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Manageability.Configuration|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that the device supports auditing and setting of system configuration (and certain management actions, such as reboot) through Azure.| |Dependency|azure-osconfig| |Validation Type|Manual/Tools|
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Update|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Audit|2023|2023|
+|Name|SecuredCore.Update|
+|:|:|
+|Status|Audit|
|Description|The purpose of the requirement is to validate the device can receive and update its firmware and software.| |Validation Type|Manual/Tools| |Validation|Partner confirmation that they were able to send an update to the device through Azure Device update and other approved services.|
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Protection.Baselines|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Protection.Baselines|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate the extent to which the device implements the Azure Security Baseline| |Dependency|azure-osconfig| |Validation Type|Manual/Tools| |Validation|OSConfig is present on the device and reporting to what extent it implements the Azure Security Baseline.|
-|Resources| <ul><li>https://techcommunity.microsoft.com/t5/microsoft-security-baselines/bg-p/Microsoft-Security-Baselines </li><li> https://www.cisecurity.org/cis-benchmarks/ </li><li>https://learn.microsoft.com/en-us/azure/governance/policy/samples/guest-configuration-baseline-linux|</li></ul>
+|Resources|<ul><li>https://techcommunity.microsoft.com/t5/microsoft-security-baselines/bg-p/Microsoft-Security-Baselines</li><li>https://www.cisecurity.org/cis-benchmarks/</li><li>https://learn.microsoft.com/en-us/azure/governance/policy/samples/guest-configuration-baseline-linux</li></ul>|
</br>
-|Name|SecuredCore.Protection.SignedUpdates|x86/AMD64|Arm64|
-|:|:|:|:|
-|Status|Required|2023|2023|
+|Name|SecuredCore.Protection.SignedUpdates|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that updates must be signed.| |Validation Type|Manual/Tools| |Validation|Device to be validated through toolset to ensure that updates to the operating system, drivers, application software, libraries, packages, and firmware won't be applied unless properly signed and validated.|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
## Azure Sphere Hardware/Firmware Requirements
-|Name|SecuredCore.Hardware.Identity|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
-|Description|The purpose of the requirement is to validate the device identity is rooted in hardware.||
-|Validation Type|Prevalidated, no additional validation is required||
-|Validation|Provided by Microsoft||
+|Name|SecuredCore.Hardware.Identity|
+|:|:|
+|Status|Required|
+|Description|The purpose of the requirement is to validate the device identity is rooted in hardware.|
+|Validation Type|Prevalidated, no additional validation is required|
+|Validation|Provided by Microsoft|
</br>
-|Name|SecuredCore.Hardware.MemoryProtection|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Hardware.MemoryProtection|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to ensure that memory integrity helps protect the device from vulnerable peripherals.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Firmware.Protection|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Firmware.Protection|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to ensure that the device has adequate mitigations against firmware security threats.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Firmware.SecureBoot|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Firmware.SecureBoot|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate the boot integrity of the device.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Firmware.Attestation|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Firmware.Attestation|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to ensure the device can remotely attest to a Microsoft Azure Attestation service.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Hardware.SecureEnclave|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Hardware.SecureEnclave|
+|:|:|
+|Status|Required|
|Description|The purpose of this requirement is to validate hardware security that is accessible from a secure operating system.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
## Azure Sphere OS Configuration Requirements
-|Name|SecuredCore.Encryption.Storage|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Encryption.Storage|
+|:|:|
+|Status|Required|
|Description|The purpose of this requirement is to validate that sensitive data can be encrypted on nonvolatile storage.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Encryption.TLS|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Encryption.TLS|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate support for required TLS versions and cipher suites.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Protection.CodeIntegrity|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Protection.CodeIntegrity|
+|:|:|
+|Status|Required|
|Description|The purpose of this requirement is to validate that authorized code runs with least privilege.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Protection.NetworkServices|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Protection.NetworkServices|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that applications accepting input from the network aren't running with elevated privileges.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Protection.NetworkFirewall|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Protection.NetworkFirewall|
+|:|:|
+|Status|Required|
|Description|The purpose of this requirement is to validate that applications can't connect to endpoints that haven't been authorized.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
## Azure Sphere Software/Service Requirements
-|Name|SecuredCore.Built-in.Security|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Built-in.Security|
+|:|:|
+|Status|Required|
|Description|The purpose of this requirement is to make sure devices can report security information and events by sending data to a Microsoft telemetry service.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Manageability.Configuration|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Manageability.Configuration|
+|:|:|
+|Status|Required|
|Description|The purpose of this requirement is to validate the device supports remote administration via service-based configuration control.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Update|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Update|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate the device can receive and update its firmware and software.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Protection.Baselines|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Protection.Baselines|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that the system conforms to a baseline security configuration| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
-|Name|SecuredCore.Protection.SignedUpdates|Azure Sphere|
-|:|:|:|
-|Status|Required|2023|
+|Name|SecuredCore.Protection.SignedUpdates|
+|:|:|
+|Status|Required|
|Description|The purpose of the requirement is to validate that updates must be signed.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
The Mediatek MT3620AN must be included in your design. Additional guidance for b
</br>
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Currently, only virtual machine scale sets configured with the **Uniform** orche
] } ```+
+## Start load test (Azure Load Testing)
+
+| Property | Value |
+| - | - |
+| Capability name | Start-1.0 |
+| Target type | Microsoft-AzureLoadTest |
+| Description | Starts a load test (from Azure Load Testing) based on the provided load test ID. |
+| Prerequisites | A load test with a valid load test ID must be created in the Azure Load Testing service. |
+| Urn | urn:csci:microsoft:azureLoadTest:start/1.0 |
+| Fault type | Discrete. |
+| Parameters (key, value) | |
+| testID | The ID of a specific load test created in the Azure Load Testing service. |
+
+### Sample JSON
+
+```json
+{
+ "name": "branchOne",
+ "actions": [
+ {
+ "type": "discrete",
+ "name": "urn:csci:microsoft:azureLoadTest:start/1.0",
+ "parameters": [
+ {
+ "key": "testID",
+ "value": "0"
+ }
+ ],
+ "selectorid": "myResources"
+ }
+ ]
+}
+```
+
+## Stop load test (Azure Load Testing)
+
+| Property | Value |
+| - | - |
+| Capability name | Stop-1.0 |
+| Target type | Microsoft-AzureLoadTest |
+| Description | Stops a load test (from Azure Load Testing) based on the provided load test ID. |
+| Prerequisites | A load test with a valid load test ID must be created in the Azure Load Testing service. |
+| Urn | urn:csci:microsoft:azureLoadTest:stop/1.0 |
+| Fault type | Discrete. |
+| Parameters (key, value) | |
+| testID | The ID of a specific load test created in the Azure Load Testing service. |
+
+### Sample JSON
+
+```json
+{
+ "name": "branchOne",
+ "actions": [
+ {
+ "type": "discrete",
+ "name": "urn:csci:microsoft:azureLoadTest:stop/1.0",
+ "parameters": [
+ {
+ "key": "testID",
+ "value": "0"
+ }
+ ],
+ "selectorid": "myResources"
+ }
+ ]
+}
+```
+
chaos-studio Chaos Studio Tutorial Availability Zone Down Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-availability-zone-down-portal.md
+
+ Title: Use an Azure Chaos Studio experiment template to take down Virtual Machine Scale Set availability zones with autoscale disabled
+description: Use the Azure portal to create an experiment from the Availability Zone Down experiment template.
++++ Last updated : 09/27/2023+++
+# Use a chaos experiment template to take down Virtual Machine Scale Set availability zones with autoscale disabled
+
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you take down an availability zone (with autoscale disabled) of a Virtual Machine Scale Sets instance using a pre-populated experiment template and Azure Chaos Studio.
+
+## Prerequisites
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- A Virtual Machine Scale Sets instance.
+- An Autoscale Settings instance.
+
+## Enable Chaos Studio on your Virtual Machine Scale Sets and Autoscale Settings instances
+
+Azure Chaos Studio can't inject faults against a resource until that resource is added to Chaos Studio. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Virtual Machine Scale Sets has only one target type (`Microsoft-VirtualMachineScaleSet`) and one capability (`shutdown`). Autoscale Settings has only one target type (`Microsoft-AutoScaleSettings`) and one capability (`disableAutoscale`). Other resources might have up to two target types: one for service-direct faults and one for agent-based faults. Other resources might also have many more capabilities.
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. Search for **Chaos Studio** in the search bar.
+1. Select **Targets** and find your autoscale setting resource.
+1. Select the autoscale setting resource and select **Enable targets** > **Enable service-direct targets**.
+
+ [![Screenshot that shows the Targets screen in Chaos Studio, with the autoscale setting resource selected.](images/chaos-studio-tutorial-availability-zone-down-portal/target-enable-one.png) ](images/chaos-studio-tutorial-availability-zone-down-portal/target-enable-one.png#lightbox)
+1. Select **Review + Enable** > **Enable**.
+1. Find your virtual machine scale set resource.
+1. Select the virtual machine scale set resource and select **Enable targets** > **Enable service-direct targets**.
+
+ [![Screenshot that shows the Targets screen in Chaos Studio, with the virtual machine scale set resource selected.](images/chaos-studio-tutorial-availability-zone-down-portal/target-enable-two.png) ](images/chaos-studio-tutorial-availability-zone-down-portal/target-enable-two.png#lightbox)
+1. Select **Review + Enable** > **Enable**.
+
+You've now successfully added your autoscale setting and virtual machine scale set to Chaos Studio.
+
+## Create an experiment from a template
+
+Now you can create your experiment from a pre-filled experiment template. A chaos experiment defines the actions you want to take against target resources. The actions are organized and run in sequential steps. The chaos experiment also defines the actions you want to take against branches, which run in parallel.
+
+1. In Chaos Studio, go to **Experiments** > **Create** > **New from template**.
+
+ [![Screenshot that shows the Experiments screen, with the New from template button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/create-from.png)](images/chaos-studio-tutorial-availability-zone-down-portal/create-from.png#lightbox)
+1. Select **Availability Zone Down**.
+
+ [![Screenshot that shows the experiment templates screen, with the Availability Zone down template button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/template-selection.png)](images/chaos-studio-tutorial-availability-zone-down-portal/template-selection.png#lightbox)
+1. Add a name for your experiment that complies with resource naming guidelines. Select **Next: Permissions**.
+
+ [![Screenshot that shows the experiment basics screen, with the permissions tab button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/basics.png)](images/chaos-studio-tutorial-availability-zone-down-portal/basics.png#lightbox)
+1. For your chaos experiment to run successfully, it must have [sufficient permissions on target resources](chaos-studio-permissions-security.md). Select a system-assigned managed identity or a user-assigned managed identity for your experiment. You can choose to enable custom role assignment if you would like Chaos Studio to add the necessary permissions to run (in the form of a custom role) to your experiment's identity. Select **Next: Experiment designer**.
+
+ [![Screenshot that shows the experiment permissions screen, with the experiment designer tab button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/permissions-page.png)](images/chaos-studio-tutorial-availability-zone-down-portal/permissions-page.png#lightbox)
+1. Within the **Disable Autoscale** fault, select **Edit**.
+
+ [![Screenshot that shows the experiment designer screen, with the edit button within the disable autoscale fault highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/fault-one-edit.png)](images/chaos-studio-tutorial-availability-zone-down-portal/fault-one-edit.png#lightbox)
+1. Review fault parameters and select **Next: Target resources**.
+
+ [![Screenshot that shows the fault parameters pane for autoscale, with the target resources button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/fault-one-details.png)](images/chaos-studio-tutorial-availability-zone-down-portal/fault-one-details.png#lightbox)
+1. Select the autoscale setting resource that you want to use in the experiment. Select **Save**.
+
+ [![Screenshot that shows the fault targets pane for autoscale, with the save button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/fault-one-target.png)](images/chaos-studio-tutorial-availability-zone-down-portal/fault-one-target.png#lightbox)
+1. Within the **VMSS Shutdown (version 2.0)** fault, select **Edit**.
+
+ [![Screenshot that shows the experiment designer screen, with the edit button within the Virtual Machine Scale Set shutdown fault highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/fault-two-edit.png)](images/chaos-studio-tutorial-availability-zone-down-portal/fault-two-edit.png#lightbox)
+1. Review fault parameters and select **Next: Target resources**.
+
+ [![Screenshot that shows the fault parameters pane for the virtual machine scale set, with the target resources button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/fault-two-details.png)](images/chaos-studio-tutorial-availability-zone-down-portal/fault-two-details.png#lightbox)
+1. Select the virtual machine scale set resource that you want to use in the experiment. Select **Next: Scope**.
+
+ [![Screenshot that shows the fault targets pane for Virtual Machine Scale Set, with the save button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/fault-two-target.png)](images/chaos-studio-tutorial-availability-zone-down-portal/fault-two-target.png#lightbox)
+1. Select the zone(s) within your virtual machine scale set that you would like to take down. Select **Save**.
+
+ [![Screenshot that shows the scope pane, with the save button highlighted.](images/chaos-studio-tutorial-availability-zone-down-portal/scope.png)](images/chaos-studio-tutorial-availability-zone-down-portal/scope.png#lightbox)
+1. Select **Review + create** > **Create** to save the experiment.
+
+## Run your experiment
+You're now ready to run your experiment.
+
+1. In the **Experiments** view, select your experiment. Select **Start** > **OK**.
+1. When **Status** changes to *Running*, select **Details** for the latest run under **History** to see details for the running experiment.
+
+## Next steps
+Now that you've run an Availability Zone Down template experiment, you're ready to:
+- [Manage your experiment](chaos-studio-run-experiment.md)
+- [Create an experiment that induces an outage on an Azure Active Directory instance](chaos-studio-tutorial-aad-outage-portal.md)
+
chaos-studio Sample Policy Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-policy-targets.md
This article includes sample [Azure Policy](../governance/policy/overview.md) de
In these samples, we add service-direct targets and capabilities for each [supported resource type](chaos-studio-fault-providers.md) by using [targets and capabilities](chaos-studio-targets-capabilities.md).
+> [!NOTE]
+> Each of these policies differs slightly. In addition to the sample definitions below, consult the documentation for the resource type you're using (for example, Compute or Storage) to make sure everything is set correctly for your specific scenario.
++
+> [!NOTE]
+> Make sure the subscription you're using for the automated Azure Policy deployment has the required [RBAC permissions](../governance/policy/overview.md).
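The sample definitions that follow all share the same general shape: a `deployIfNotExists` policy that matches a supported resource type and deploys a `Microsoft.Chaos/targets` child resource for it. As a hedged, illustrative skeleton only (the resource type, role definition ID, and template body here are placeholders; use the full definitions below for real deployments):

```json
{
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "<resource provider type, for example Microsoft.Cache/Redis>"
    },
    "then": {
      "effect": "deployIfNotExists",
      "details": {
        "type": "Microsoft.Chaos/targets",
        "roleDefinitionIds": [ "<role definition ID permitted to deploy targets>" ],
        "deployment": {
          "properties": {
            "mode": "incremental",
            "template": { "<placeholder>": "ARM template that creates the Microsoft.Chaos/targets resource and its capabilities" }
          }
        }
      }
    }
  }
}
```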
+ ## Azure Cache for Redis policy definition ```json
In these samples, we add service-direct targets and capabilities for each [suppo
} ```
+## Troubleshoot issues related to Azure Policy and RBAC
+See [Troubleshoot errors with using Azure Policy](../governance/policy/troubleshoot/general.md) for help with resolving Azure Policy and RBAC issues.
++ ## Next steps * [Learn more about Chaos Studio](chaos-studio-overview.md)
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
Last updated 10/05/2023
-# What's new in Azure Communication Services, September 2023
+# What's new in Azure Communication Services, October 2023
-We've created this page to keep you updated on new features, blog posts, and other useful information related to Azure Communication Services. Be sure to check back monthly for all the newest and latest information!
+We created this page to keep you updated on new features, blog posts, and other useful information related to Azure Communication Services. Be sure to check back monthly for all the newest and latest information!
<br> <br> <br> + ## New features Get detailed information on the latest Azure Communication Services feature launches.
-### Number Lookup Public Preview
-The Number Lookup API offers number type details that help developers to determine whether a particular number can receive SMS messages.
+### Managed identities in public preview
+
+We're thrilled to announce support for Azure managed identities in Azure Communication Services, now in public preview. This Azure enterprise promise enhances security for customers and simplifies workflows for managing identities in their Azure Communication Services resources.
-[Read more in the customer documentation](./concepts/numbers/number-lookup-concept.md)
-[Check out the SDK overview](./concepts/numbers/number-lookup-sdk.md)
-[Try the quickstart](./quickstarts/telephony/number-lookup.md)
+[Read the documentation](./how-tos/managed-identity.md)
+[Try the quickstart to connect to Azure AI services](./concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md)
+
+[Try the quickstart to bring your own storage](./quickstarts/call-automation/call-recording/bring-your-own-storage.md)
<br> <br>
+
+### Advanced messaging enables WhatsApp
+
+Available in public preview, developers can integrate WhatsApp Business Platform into their applications with Azure Communication Services Advanced Messaging.
+
+The Advanced Messaging SDK from Azure Communication Services enables businesses to reach more customers at scale and deliver reliable communications to users worldwide.
-### Call Automation Extensibility into Microsoft Teams
-The public preview of Azure Communication Services call automation extensibility into Microsoft Teams, enabling businesses to optimize customer service operations by bringing Microsoft Teams users into their B2C calling workflows is now available. Azure Communication Services Call Automation provides developers the ability to build programmable customer interactions using real-time event triggers to perform actions based on custom business logic.
+**Effortlessly Connect with WhatsApp Users**
-[Read more in the customer documentation](./concepts/call-automation/call-automation-teams-interop.md)
+WhatsApp is one of the most popular messaging apps. Businesses can now communicate with WhatsApp users that request to hear from them directly from their Azure applications. This enables efficient and effective communication with their target audiences that prefer effortless, personalized, and secure communications with their favorite brands.
+
+**Incorporate WhatsApp into key communication scenarios**
-[Try the quickstart](./how-tos/call-automation/teams-interop-call-automation.md)
+With Advanced Messaging, you can build conversational scenarios such as contact center support and professional advising, as well as notification and follow-up scenarios such as sending appointment reminders, transaction receipts, shipping updates, or one-time passcodes. You can also integrate WhatsApp with other communication channels such as SMS, email, chat, voice, and video using the Azure Communication Services platform. Adding WhatsApp as a channel to your application allows you to reach customers in one of the largest user communities spanning the globe.
+[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/advanced-messaging-enables-whatsapp/ba-p/3952721)
<br> <br>
-
-### Advanced Actions with Azure Cognitive Services
-Azure Communication Services has released two new advanced actions with Azure Cognitive
+### Pre-Registered Alpha IDs GA!
-- Enhance play action with support for Text-to-Speech and SSML-- Recognize voice input using Azure Speech-to-Text+
+An Alphanumeric Sender ID is a one-way SMS messaging number type, formed from alphabetic and numeric characters, that allows our customers to use their company name as the sender of an SMS message, providing improved brand recognition. Alphanumeric Sender IDs also support higher message throughput than toll-free and geographic numbers.
+
+We're entering GA in five European regions that require preregistration to use Alphanumeric Sender IDs: Norway, Finland, Slovakia, Slovenia, and the Czech Republic.
-We've also updated the process to connect your Azure Communication Services to Azure Cognitive Services.
-[Read more in the customer documentation](./concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md)
-[Try the Text-To-Speech Quickstart](./how-tos/call-automation/play-action.md)
-[Try the voice input Quickstart](./how-tos/call-automation/recognize-action.md)
+### Calling Web UI Library updates
-<br>
-<br>
+The web version of our UI Library launched many new features this month, including:
+#### Blur and custom backgrounds
+We're excited to announce the general availability of blurred and custom backgrounds for desktop web within the Web UI SDK. These features make it easier for users to customize their video backgrounds and improve the overall video calling experience.
-### Call Automation Dual Tone Multi-Frequency (DTMF) Features
+[Learn more about Custom Backgrounds](https://azure.github.io/communication-ui-library/?path=/docs/videoeffects--page)
-The enhanced Dual Tone Multi-Frequency (DTMF) features, Continuous DTMF Recognition and Send DTMF are now available in public preview through Azure Communication Services Call Automation SDKs, with added support for NodeJS and Python.
+#### Closed Captions in Interoperability
+Closed captions are a textual representation of audio during a video conversation, displayed to users in real time. They're a useful tool for end users who prefer to read the audio text in order to engage more actively in conversations and meetings, and they help in scenarios where end users might be in noisy environments or having difficulties with their audio equipment. The Azure Communication Services collaboration with Teams offers developers the ability to integrate these closed captions into their applications.
-- Continuous DTMF Recognition: With Continuous DTMF Recognition, developers will be notified in real-time when a call participant presses keys on a dialpad/numpad.-- Send DTMF: The Send DTMF action can be used in scenarios where a contact center agent needs to invite an external consultant/specialist into the call to assist the customer.
+[Learn more about how you can use Closed captions for your application](./concepts/interop/enable-closed-captions.md)
-[Read more in the customer documentation](./how-tos/call-automation/control-mid-call-media-actions.md)
-<br>
-<br>
+[Try the quickstart](./how-tos/calling-sdk/closed-captions-teams-interop-how-to.md)
-### PSTN direct offers in new regions
-Customers can acquire telephone numbers from 15 new regions, including Australia, China, Finland, Hong Kong, Israel, South Korea, Malaysia, New Zealand, Philippines, Poland, Saudi Arabia, Singapore, Taiwan, Thailand, and the United Arab Emirates.
+#### Interoperability Roles and Capabilities
+Support for Microsoft Teams interoperability roles and capabilities is now generally available. This feature enables users to control which features other users can access within a call, and marks the enabling of the Capabilities API within the Azure Communication Services Web UI Library. With the Capabilities API, users within Microsoft Teams interoperability calls can be assigned different roles that have different capabilities and access to different features. For example, a presenter might have the ability to share their screen, while a participant might only have the ability to view the presenter's screen.
-[Read more about our new availability](./concepts/numbers/sub-eligibility-number-capability.md)
-## Blog posts and case studies
-Go deeper on common scenarios and learn more about how customers are using advanced Azure Communication
-Services features.
+[Learn more about Roles and Capabilities](https://azure.github.io/communication-ui-library/?path=/docs/capabilities--page)
-### Capgemini and Microsoft are transforming customer experiences with intelligent communications
-Customer experience strategy leader Capgemini partners with Azure Communication Services to provide intelligent communication capabilities for enterprises.
-[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/capgemini-and-microsoft-are-transforming-customer-experiences/ba-p/3907619)
+#### Pinned Layouts and Rendering Options
+Pinning and the additional rendering options for the Azure Communication Services UI Library, initially launched earlier in the year, are now generally available. These features make it easier for developers to create responsive and flexible user interfaces.
+[Learn more about Pinned Layouts](https://azure.github.io/communication-ui-library/?path=/docs/ui-components-videogallery--video-gallery#pinning-participants)
-<br>
-<br>
+#### Raise Hands
+The Raise Hand feature, introduced in April this year, is now generally available in both the Azure Communication Services calling SDK and the stable version of the UI Web SDK from version 1.18.0. The ability to raise hands is a game changer in large virtual meetings, where users can raise hands to keep order while asking questions, participate in Q&A sessions, request assistance, vote, bid farewell politely, or even signal their readiness to move forward, all without interrupting the flow of conversation.
+
+[Get started with Raise Hands](https://azure.github.io/communication-ui-library/?path=/docs/ui-components-controlbar-buttons-raisehand--raise-hand)
+### Calling Native UI Library picture in picture (PiP)
-## From the community
-See examples and get inspired by what's being done in the community of Azure Communication Services users.
+With the new Picture in Picture (PiP) functionality, now in public preview, users can shrink the ongoing call into a small, draggable window. This feature allows for uninterrupted multitasking: whether you're browsing, checking notes, or using other apps, your call remains on-screen, ensuring you never miss a beat.
+
-### Build GPT-automated customer support with Azure Communication Services
+Another challenge many users faced in the past was the risk of breaking the call experience when switching between apps. The UI Library tackles this problem head-on. Now, users can easily go back to the same app or even switch to a different one without ever losing focus on the call. This means that if you're discussing a document on a call, you can seamlessly navigate to that document and back to the call, ensuring a fluid, integrated user experience.
-Watch Bob Serr, Azure Communication Services VP, join Jeremy Chapman and Microsoft Mechanics to share how to build GPT-automated customer support with Azure Communication Services
+Start using this feature in [Android](https://github.com/Azure/communication-ui-library-android/releases/tag/calling-v1.5.0-beta.1) or [iOS](https://github.com/Azure/communication-ui-library-ios/releases/tag/AzureCommunicationUICalling_1.5.0-beta.1)
-[Watch the video](https://www.youtube.com/watch?v=N0Cay8md9s4)
+### Number Lookup
-[Read the blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/accelerate-customer-outcomes-with-azure-ai-services-and-azure/ba-p/3937262)
+The Azure Communication Services public preview of the Number Lookup API is now available. This service provides developers with the tools to integrate simple, highly accurate, and fast number lookup capabilities into their applications. The API is designed to provide the highest quality possible, with data aggregated from reliable suppliers and updated regularly. It's also easy to use, with simple integration and detailed documentation to guide developers through the process.
-[View the sample code](https://github.com/Azure-Samples/communication-services-AI-customer-service-sample#readme)
+[Read more in documentation](./concepts/numbers/number-lookup-concept.md)
+[Read the SDK overview](./concepts/numbers/number-lookup-sdk.md)
<br>
-<br>
+## Blog posts and case studies
+Go deeper on common scenarios and learn more about how customers are using advanced Azure Communication
+Services features.
+
+### HCLTech and Microsoft drive intelligent B2C communications for the enterprise
+
+We're excited to announce that our collaboration with HCLTech is now live, and we're able to bring the best of HCLTech's implementation to Microsoft clients globally to help them achieve more through intelligent B2C communications across every interaction with their customers, patients, and consumers.
+
+To achieve more through intelligent communications, HCLTech brings together the best of Microsoft technology, including Azure Communication Services, Azure AI Services, Azure OpenAI Service, plus Microsoft Teams, to create endpoint solutions between businesses and customers. These capabilities draw from an organization's data, such as a CRM system, and integrate with Azure's powerful data and analytics platform. The goal is to drive continuous customer satisfaction leading to brand loyalty, achieved by migrating all of the organization's customer communications to one intelligent B2C communications platform with Microsoft.
+[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/hcltech-and-microsoft-drive-intelligent-b2c-communications-for/ba-p/3968123)
++
+<br>
+<br>
-### View of new features from September 2023
-We haven't slowed down at all and continue to add new features. Check out the blog page for September to see the complete list
+### View of new features from October 2023
-[View the complete list from September](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-september-2023-feature-updates/ba-p/3925569) of all new features added to Azure Communication Services in September.
+View the complete list of all features launched in October
+[View the complete list](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-october-2023-feature-updates/ba-p/3952205) of all new features added to Azure Communication Services in October.
<br> <br>
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
Azure Container Apps manages automatic horizontal scaling through a set of decla
Adding or editing scaling rules creates a new revision of your container app. A revision is an immutable snapshot of your container app. See revision [change types](./revisions.md#change-types) to review which types of changes trigger a new revision.
+[Event-driven Container Apps jobs](jobs.md#event-driven-jobs) use scaling rules to trigger executions based on events.
+ ## Scale definition Scaling is defined by the combination of limits, rules, and behavior.
If you define more than one scale rule, the container app begins to scale once t
## HTTP
-With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales.
+With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. [Container Apps jobs](jobs.md) don't support HTTP scaling rules.
In the following example, the revision scales out up to five replicas and can scale in to zero. The scaling property is set to 100 concurrent requests per second.
az containerapp create \
## TCP
-With a TCP scaling rule, you have control over the threshold of concurrent TCP connections that determines how your app scales.
+With a TCP scaling rule, you have control over the threshold of concurrent TCP connections that determines how your app scales. [Container Apps jobs](jobs.md) don't support TCP scaling rules.
In the following example, the container app revision scales out up to five replicas and can scale in to zero. The scaling threshold is set to 100 concurrent connections per second.
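As a hedged sketch of how such a rule might appear in a container app's `scale` configuration (the rule name and values here are illustrative, not taken from this article):

```json
"scale": {
  "minReplicas": 0,
  "maxReplicas": 5,
  "rules": [
    {
      "name": "tcp-scale-rule",
      "tcp": {
        "metadata": {
          "concurrentConnections": "100"
        }
      }
    }
  ]
}
```

The structure mirrors the HTTP rule: the `metadata` threshold drives scale-out decisions between the `minReplicas` and `maxReplicas` bounds.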
You can create a custom Container Apps scaling rule based on any [ScaledObject](
| Polling interval | 30 | | Cool down period | 300 |
+For [event-driven Container Apps jobs](jobs.md#event-driven-jobs), you can create a custom scaling rule based on any [ScaledJob](https://keda.sh/docs/latest/concepts/scaling-jobs/)-based KEDA scalers.
+ The following example demonstrates how to create a custom scale rule. ### Example
A KEDA scaler may support using secrets in a [TriggerAuthentication](https://ked
::: zone pivot="azure-portal"
-1. Go to your container app in the Azure portal
+1. Go to your container app in the Azure portal.
1. Select **Scale**. 1. Select **Edit and deploy**.
-1. Select the **Scale** tab.
+1. Select the **Scale and replicas** tab.
1. Select the minimum and maximum replica range.
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
description: Lists Azure Policy Regulatory Compliance controls available for Azu
Previously updated : 10/23/2023 Last updated : 11/06/2023
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
You can use the Download Advanced Report to get reports that cover specific date
:::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/download-advanced-report.png" alt-text="Screenshot showing the Download Advanced Report page." lightbox="./media/direct-ea-azure-usage-charges-invoices/download-advanced-report.png" :::
+> [!NOTE]
+> - Inactive accounts for the selected time range aren't shown.
+> - You can redownload reports from the Report History at any time after they were first created. For new reports, the selected time range must be within the last 90 days.
+ ### Download your Azure invoices (.pdf) For EA enrollments, you can download your invoice in the Azure portal.
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
The following properties are under `storeSettings` settings in format-based copy
| OPTION 2: wildcard<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has wildcard or this escape char inside. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes | | OPTION 3: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, don't specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No | | ***Additional settings:*** | | |
-| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. |No |
-| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file, so when copy activity fails, you'll see some files have already been copied to the destination and deleted from source, while others are still remaining on source store. <br/>This property is only valid in binary files copy scenario. The default value: false. |No |
-| modifiedDatetimeStart | Files filter based on the attribute: Last Modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be NULL, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has datetime value but `modifiedDatetimeEnd` is NULL, it means the files whose last modified attribute is greater than or equal with the datetime value will be selected. When `modifiedDatetimeEnd` has datetime value but `modifiedDatetimeStart` is NULL, it means the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
+| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. When recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. |No |
+| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file, so when copy activity fails, you see some files have already been copied to the destination and deleted from source, while others are still remaining on source store. <br/>This property is only valid in binary files copy scenario. The default value: false. |No |
+| modifiedDatetimeStart | Files filter based on the attribute: Last Modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be NULL, which means no file attribute filter is applied to the dataset. When `modifiedDatetimeStart` has datetime value but `modifiedDatetimeEnd` is NULL, it means the files whose last modified attribute is greater than or equal with the datetime value will be selected. When `modifiedDatetimeEnd` has datetime value but `modifiedDatetimeStart` is NULL, it means the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
| modifiedDatetimeEnd | Same as above. | No |
-| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns.<br/>Allowed values are **false** (default) and **true**. | No |
-| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column will be generated. | No |
+| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as another source columns.<br/>Allowed values are **false** (default) and **true**. | No |
+| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the subpath before the first wildcard.<br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity generates two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column is generated. | No |
| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | **Example:**
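A hedged sketch of how these source `storeSettings` might be combined in a copy activity source (the source `type` and filter values are assumptions for illustration):

```json
"source": {
  "type": "DelimitedTextSource",
  "storeSettings": {
    "type": "LakehouseReadSettings",
    "recursive": true,
    "wildcardFileName": "*.csv",
    "modifiedDatetimeStart": "2018-12-01T05:00:00Z",
    "modifiedDatetimeEnd": "2018-12-02T05:00:00Z",
    "enablePartitionDiscovery": true,
    "partitionRootPath": "root/folder/year=2020"
  }
}
```

Here only files under the partition root whose last-modified time falls in the one-day window are copied, and `year`, `month`, and `day` are surfaced as additional source columns.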
The following properties are under `storeSettings` settings in format-based copy
| | | -- | | type | The type property under `storeSettings` must be set to **LakehouseWriteSettings**. | Yes | | copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
-| blockSizeInMB | Specify the block size in MB used to write data to Microsoft Fabric Lakehouse. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into Microsoft Fabric Lakehouse, the default block size is 100 MB so as to fit in at most approximately 4.75-TB data. It might be not optimal when your data isn't large, especially when you use Self-hosted Integration Runtime with poor network resulting in operation timeout or performance issue. You can explicitly specify a block size, while ensure blockSizeInMB*50000 is big enough to store the data, otherwise copy activity run will fail. | No |
+| blockSizeInMB | Specify the block size in MB used to write data to Microsoft Fabric Lakehouse. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For nonbinary copy into Microsoft Fabric Lakehouse, the default block size is 100 MB so as to fit in at most approximately 4.75-TB data. It might be not optimal when your data isn't large, especially when you use Self-hosted Integration Runtime with poor network resulting in operation timeout or performance issue. You can explicitly specify a block size, while ensure blockSizeInMB*50000 is large enough to store the data, otherwise copy activity run fails. | No |
| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | | metadata |Set custom metadata when copying to sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If the [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata will union/overwrite with the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable indicates to store the source files' last modified time. Apply to file-based source with binary format only.<br/>- <b>Expression</b><br/>- <b>Static value</b>| No |
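A corresponding hedged sink sketch (the sink `type` and values are illustrative; `LakehouseWriteSettings` is the type named in the table above):

```json
"sink": {
  "type": "DelimitedTextSink",
  "storeSettings": {
    "type": "LakehouseWriteSettings",
    "copyBehavior": "PreserveHierarchy",
    "blockSizeInMB": 100,
    "maxConcurrentConnections": 5
  }
}
```

With the default 100-MB block size, `blockSizeInMB` × 50,000 blocks works out to 5,000,000 MB, or roughly 4.77 TB, which is the capacity ceiling the table above describes.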
To use Microsoft Fabric Lakehouse Table dataset as a source or sink dataset in m
#### Microsoft Fabric Lakehouse Table as a source type There are no configurable properties under source options.
+> [!NOTE]
+> CDC support for Lakehouse table source is currently not available.
#### Microsoft Fabric Lakehouse Table as a sink type
The following properties are supported in the Mapping Data Flows **sink** sectio
| - | -- | -- | -- | - | | Update method | When you select "Allow insert" alone or when you write to a new delta table, the target receives all incoming rows regardless of the Row policies set. If your data contains rows of other Row policies, they need to be excluded using a preceding Filter transform. <br><br> When all Update methods are selected a Merge is performed, where rows are inserted/deleted/upserted/updated as per the Row Policies set using a preceding Alter Row transform. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable | | Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you might notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true |
-| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
-| Merge Schema | Merge schema option allows schema evolution, i.e. any columns that are present in the current incoming stream but not in the target Delta table is automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true |
+| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to reorganize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
+| Merge Schema | Merge schema option allows schema evolution, that is, any columns that are present in the current incoming stream but not in the target Delta table is automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true |
**Example: Microsoft Fabric Lakehouse Table sink**
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md
Previously updated : 05/09/2023 Last updated : 11/06/2023 # Source control in Azure Data Factory
Below is a list of some of the advantages git integration provides to the author
- Ability to track/audit changes. - Ability to revert changes that introduced bugs. - **Partial saves:** When authoring against the data factory service, you can't save changes as a draft and all publishes must pass data factory validation. Whether your pipelines are not finished or you simply don't want to lose changes if your computer crashes, git integration allows for incremental changes of data factory resources regardless of what state they are in. Configuring a git repository allows you to save changes, letting you only publish when you have tested your changes to your satisfaction.-- **Collaboration and control:** If you have multiple team members contributing to the same factory, you may want to let your teammates collaborate with each other via a code review process. You can also set up your factory such that not every contributor has equal permissions. Some team members may only be allowed to make changes via Git and only certain people in the team are allowed to publish the changes to the factory.
+- **Collaboration and control:** If you have multiple team members contributing to the same factory, you might want to let your teammates collaborate with each other via a code review process. You can also set up your factory such that not every contributor has equal permissions. Some team members might only be allowed to make changes via Git and only certain people in the team are allowed to publish the changes to the factory.
- **Better CI/CD:** If you are deploying to multiple environments with a [continuous delivery process](continuous-integration-delivery.md), git integration makes certain actions easier. Some of these actions include: - Configure your release pipeline to trigger automatically as soon as there are any changes made to your 'dev' factory. - Customize the properties in your factory that are available as parameters in the Resource Manager template. It can be useful to keep only the required set of properties as parameters, and have everything else hard coded.
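As a sketch of the parameter customization mentioned above, a trimmed `arm-template-parameters-definition.json` might parameterize only a pipeline's wait time (the exact property path is illustrative; `-::int` means "parameterize as an int with no default", per the custom-parameters file format):

```json
{
    "Microsoft.DataFactory/factories/pipelines": {
        "properties": {
            "activities": [{
                "typeProperties": {
                    "waitTimeInSeconds": "-::int"
                }
            }]
        }
    }
}
```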
The configuration pane shows the following Azure Repos code repository settings:
If any adjustments need to be made to the settings of your configured Azure Repos Git repository, you can choose to **Edit**. You can update your publish branch and decide whether or not to disable the publish button from the ADF studio. If you choose to disable the publish button from the studio, the publish button will be grayed out in the studio. This will help to avoid overwriting the last automated publish deployment.
If you connect to GitHub Enterprise Server, you need to use personal access toke
- GitHub integration with the Data Factory visual authoring tools only works in the generally available version of Data Factory.
+#### Connecting to Azure DevOps Server 2022
+
+If you connect to Azure DevOps Server 2022, you need to use a personal access token for authentication. [Learn how to create a personal access token here](https://learn.microsoft.com/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate).
+
+Connect to on-premises Azure DevOps by providing the ``Azure DevOps Server URL`` and ``Azure DevOps Project Collection``.
++
+Provide the token with access scope as read/write for code.
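As a sketch of how a personal access token is presented to the Azure DevOps REST API (the token value below is hypothetical): the PAT is sent as the password half of an HTTP Basic authorization header, with an empty username.

```python
import base64

def pat_auth_header(pat: str) -> str:
    # Azure DevOps accepts a PAT as the password in HTTP Basic auth,
    # with an empty username: base64(":" + pat).
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return f"Basic {token}"

# Hypothetical token value, for illustration only.
header = pat_auth_header("my-personal-access-token")
```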
++ ## Version control Version control systems (also known as _source control_) let developers collaborate on code and track changes that are made to the code base. Source control is an essential tool for multi-developer projects.
Choose either method appropriately as needed.
### All resources showing as new on publish
-While publishing, all resources may show as new even if they were previously published. This can happen if the *lastCommitId* property is reset on the factory's *repoConfiguration* property either by re-deploying a factory ARM template or updating the factory *repoConfiguration* property through PowerShell or the REST API. Continuing to publish the resources will resolve the issue, but to prevent it from occurring again, avoid updating the factory *repoConfiguration* property.
+While publishing, all resources might show as new even if they were previously published. This can happen if the *lastCommitId* property is reset on the factory's *repoConfiguration* property either by re-deploying a factory ARM template or updating the factory *repoConfiguration* property through PowerShell or the REST API. Continuing to publish the resources will resolve the issue, but to prevent it from occurring again, avoid updating the factory *repoConfiguration* property.
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
ddos-protection Ddos Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-disaster-recovery-guidance.md
Previously updated : 10/12/2022 Last updated : 11/06/2023
ddos-protection Ddos Protection Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-features.md
Previously updated : 10/12/2022 Last updated : 11/06/2023 # Azure DDoS Protection features
DDoS Protection applies three auto-tuned mitigation policies (TCP SYN, TCP, and
The policy thresholds are auto-configured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.
+For more information, see [View and configure DDoS Protection telemetry](telemetry.md).
+ ### Metric for an IP address under DDoS attack If the public IP address is under attack, the value for the metric **Under DDoS attack or not** changes to 1 as DDoS Protection performs mitigation on the attack traffic.
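As a sketch of where that metric lives, the following builds the Azure Monitor REST URL for a public IP address's "Under DDoS attack or not" metric (subscription, resource group, and IP names are placeholders, and the metric name `IfUnderDDoSAttack` is the assumed platform metric name):

```python
def under_attack_metric_url(subscription_id: str, resource_group: str,
                            public_ip_name: str) -> str:
    # Resource ID of the public IP address being monitored.
    resource = (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Network/publicIPAddresses/{public_ip_name}"
    )
    # Azure Monitor metrics endpoint for that resource.
    return (
        "https://management.azure.com" + resource +
        "/providers/microsoft.insights/metrics"
        "?api-version=2018-01-01&metricnames=IfUnderDDoSAttack"
    )

url = under_attack_metric_url("0000-sub", "myResourceGroup", "myPublicIP")
```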
If you have DDoS Protection, make sure that it's enabled on the virtual network
Monitor your applications independently. Understand the normal behavior of an application. Prepare to act if the application is not behaving as expected during a DDoS attack.
-Learn how your services will respond to an attack by [testing through simulations](test-through-simulations.md).
+Learn how your services will respond to an attack by [testing through DDoS simulations](test-through-simulations.md).
## Next steps
ddos-protection Ddos Protection Partner Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-partner-onboarding.md
Previously updated : 10/12/2022 Last updated : 11/06/2023 # Partnering with Azure DDoS Protection
ddos-protection Ddos Rapid Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-rapid-response.md
Previously updated : 10/12/2022 Last updated : 11/06/2023 # Azure DDoS Rapid Response
During an active attack, Azure DDoS Network Protection customers have access to
## Prerequisites -- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Protection plan](manage-ddos-protection.md).
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Before you can complete the steps in this guide, you must first create an [Azure DDoS Protection plan](manage-ddos-protection.md).
## When to engage DRR
You should only engage DRR if:
2. Choose **Service** as **DDOS Protection**. 3. Choose a resource in the resource drop-down menu. _You must select a DDoS Plan that's linked to the virtual network being protected by DDoS Protection to engage DRR._
- ![Choose Resource](./media/ddos-rapid-response/choose-resource.png)
+ :::image type="content" source="./media/ddos-rapid-response/choose-resource.png" alt-text="Screenshot of creating a DDoS Support Ticket in Azure.":::
4. On the next **Problem** page, select the **severity** as A - Critical Impact and **Problem Type** as 'Under attack.'
- ![PSeverity and Problem Type](./media/ddos-rapid-response/severity-and-problem-type.png)
+ :::image type="content" source="./media/ddos-rapid-response/severity-and-problem-type.png" alt-text="Screenshot of choosing Severity and Problem Type.":::
5. Complete additional details and submit the support request.
ddos-protection Inline Protection Glb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/inline-protection-glb.md
Previously updated : 10/12/2022 Last updated : 11/06/2023
ddos-protection Manage Ddos Protection Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-template.md
Previously updated : 10/12/2022 Last updated : 11/06/2023 # Quickstart: Create and configure Azure DDoS Network Protection using ARM template
ddos-protection Manage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-permissions.md
Previously updated : 10/12/2022 Last updated : 11/06/2023 # Manage DDoS Protection Plans: permissions and restrictions
-A DDoS protection plan works across regions and subscriptions. The same plan can be linked to virtual networks from other subscriptions in different regions, across your tenant. The subscription the plan is associated to incurs the monthly recurring bill for the plan, as well as overage charges, in case the number of protected public IP addresses exceed 100. For more information on DDoS pricing, see [pricing details](https://azure.microsoft.com/pricing/details/ddos-protection/).
+A DDoS protection plan works across regions and subscriptions. The same plan can be linked to virtual networks from other subscriptions in different regions, across your tenant. The associated subscription incurs the plan's monthly bill and overage charges if the protected public IP addresses exceed 100. For more information on DDoS pricing, see [pricing details](https://azure.microsoft.com/pricing/details/ddos-protection/).
## Prerequisites
To enable DDoS protection for a virtual network, your account must also be assig
## Azure Policy
-Creation of more than one plan is not required for most organizations. A plan cannot be moved between subscriptions. If you want to change the subscription a plan is in, you have to delete the existing plan and create a new one.
+Creation of more than one plan isn't required for most organizations. A plan can't be moved between subscriptions. If you want to change the subscription a plan is in, you have to delete the existing plan and create a new one.
-For customers who have various subscriptions, and who want to ensure a single plan is deployed across their tenant for cost control, you can use Azure Policy to restrict creation of Azure DDoS Protection plans. This policy will block the creation of any DDoS plans, unless the subscription has been previously marked as an exception. This policy will also show a list of all subscriptions that have a DDoS plan deployed but should not, marking them as out of compliance.
+For customers who have various subscriptions, and who want to ensure a single plan is deployed across their tenant for cost control, you can use Azure Policy to restrict creation of Azure DDoS Protection plans. This policy blocks the creation of any DDoS plans, unless the subscription has been previously marked as an exception. This policy also shows a list of all subscriptions that have a DDoS plan deployed but shouldn't, marking them as out of compliance.
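A minimal custom policy rule for that restriction might look like the following sketch (the built-in policy the docs reference may differ; this denies creation of any DDoS protection plan in subscriptions the policy is assigned to):

```json
{
    "if": {
        "field": "type",
        "equals": "Microsoft.Network/ddosProtectionPlans"
    },
    "then": {
        "effect": "deny"
    }
}
```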
## Next steps
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/telemetry.md
Previously updated : 10/12/2022 Last updated : 11/06/2023 # Tutorial: View and configure Azure DDoS protection telemetry
In this tutorial, you'll learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-* Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Protection plan](manage-ddos-protection.md). DDoS Network Protection must be enabled on a virtual network or DDoS IP Protection must be enabled on a public IP address.
-* DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* Before you can complete the steps in this tutorial, you must first create a DDoS simulation attack to generate the telemetry. Telemetry data is recorded during an attack. For more information, see [Test DDoS Protection through simulation](test-through-simulations.md).
## View Azure DDoS Protection telemetry
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
Previously updated : 11/06/2023 Last updated : 11/07/2023
You'll then configure diagnostic logs and alerts to monitor for attacks and traf
## Configure DDoS Protection metrics and alerts
-In this tutorial, we'll configure DDoS Protection metrics and alerts to monitor for attacks and traffic patterns.
+In this tutorial, we'll configure DDoS Protection metrics and alerts to monitor for attacks and traffic patterns.
### Configure diagnostic logs
BreakingPoint Cloud is a self-service traffic generator where you can generate t
BreakingPoint Cloud offers:
- A simplified user interface and an "out-of-the-box" experience.
-- pay-per-use model.
+- Pay-per-use model.
- Predefined DDoS test sizing and test duration profiles enable safer validations by eliminating the potential of configuration errors.
+- A free trial account.
> [!NOTE] > For BreakingPoint Cloud, you must first [create a BreakingPoint Cloud account](https://www.ixiacom.com/products/breakingpoint-cloud).
defender-for-cloud Defender For Sql Autoprovisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-autoprovisioning.md
Last updated 09/21/2023
-# Migrate to SQL server-targeted Azure Monitoring Agent's (AMA) autoprovisioning process (Preview)
+# Migrate to SQL server-targeted Azure Monitoring Agent's (AMA) autoprovisioning process
-Microsoft Monitoring Agent (MMA) is being deprecated in August 2024. As a result, a new SQL server-targeted Azure Monitoring Agent (AMA) autoprovisioning process is being released in preview. You can learn more about the [Defender for SQL Server on machines Log Analytics Agent's deprecation plan](upcoming-changes.md#defender-for-sql-server-on-machines).
+Microsoft Monitoring Agent (MMA) is being deprecated in August 2024. As a result, a new SQL server-targeted Azure Monitoring Agent (AMA) autoprovisioning process was released. You can learn more about the [Defender for SQL Server on machines Log Analytics Agent's deprecation plan](upcoming-changes.md#defender-for-sql-server-on-machines).
-During the preview, customers who are using the current autoprovisioning process with Azure Monitor Agent (Preview) option, should migrate to the new Azure Monitoring Agent for SQL server on machines (Preview) autoprovisioning process. The migration process is seamless and provides continuous protection for all machines.
+Customers who are using the current **Log Analytics agent/Azure Monitor agent** autoprovisioning process should migrate to the new **Azure Monitoring Agent for SQL server on machines** autoprovisioning process. The migration process is seamless and provides continuous protection for all machines.
## Migrate to the SQL server-targeted AMA autoprovisioning process
During the preview, customers who are using the current autoprovisioning process
1. Select the relevant subscription. 1. Under the Databases plan, select **Action required**.- :::image type="content" source="media/defender-sql-autoprovisioning/action-required.png" alt-text="Screenshot that shows where the option to select action required is on the Defender plans page." lightbox="media/defender-sql-autoprovisioning/action-required.png"::: > [!NOTE]
- > If you do not see the action required button, under the Databases plan select **Settings** and then toggle the Azure Monitoring Agent for SQL server on machines (Preview) option to **On**. Then select **Continue** > **Save**.
-
+ > If you do not see the action required button, under the Databases plan select **Settings** and then toggle the **Azure Monitoring Agent for SQL server on machines** option to **On**. Then select **Continue** > **Save**.
1. In the pop-up window, select **Enable**. :::image type="content" source="media/defender-sql-autoprovisioning/update-sql.png" alt-text="Screenshot that shows you where to select the Azure Monitor Agent on the screen." lightbox="media/defender-sql-autoprovisioning/update-sql.png"::: 1. Select **Save**.
-Once the SQL server-targeted AMA autoprovisioning process has been enabled, you should disable the Log Analytics agent/Azure Monitor agent autoprovisioning process.
+Once the SQL server-targeted AMA autoprovisioning process has been enabled, you should disable the **Log Analytics agent/Azure Monitor agent** autoprovisioning process.
> [!NOTE] > If you have the Defender for Server plan enabled, you will need to [review the Defender for Servers Log Analytics deprecation plan](upcoming-changes.md#defender-for-servers) for Log Analytics agent/Azure Monitor agent dependency before disabling the process.
For related information, see these resources:
- [Set up email notifications for security alerts](configure-email-notifications.md) - [Learn more about Microsoft Sentinel](../sentinel/index.yml) - Check out [common questions](faq-defender-for-databases.yml) about Defender for Databases.+
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Defender for SQL servers on machines protects your SQL servers hosted in Azure,
|-|-| |Release state:|General availability (GA)| |Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
-|Protected SQL versions:|SQL Server version: 2012, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>- On-premises SQL servers on Windows machines without Azure Arc<br>|
+|Protected SQL versions:|SQL Server version: 2012, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br><br>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Microsoft Azure operated by 21Vianet **(Advanced Threat Protection Only)**| ## Set up Microsoft Defender for SQL servers on machines
-The Defender for SQL server on machines plan requires either the Microsoft Monitoring Agent (MMA) or Azure Monitoring Agent (AMA) to prevent attacks and detect misconfigurations. The plan's autoprovisioning process is automatically enabled with the plan and is responsible for the configuration of all of the agent components required for the plan to function. This includes installation and configuration of MMA/AMA, workspace configuration, and the installation of the plan's VM extension/solution.
+The Defender for SQL server on machines plan requires Microsoft Monitoring Agent (MMA) or Azure Monitoring Agent (AMA) to prevent attacks and detect misconfigurations. The plan's autoprovisioning process is automatically enabled with the plan and is responsible for the configuration of all of the agent components required for the plan to function. This includes installation and configuration of MMA/AMA, workspace configuration, and the installation of the plan's VM extension/solution.
-Microsoft Monitoring Agent (MMA) is set to be retired in August 2024. Defender for Cloud [updated its strategy](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) accordingly by releasing a SQL Server-targeted Azure Monitoring Agent (AMA) autoprovisioning process to replace the Microsoft Monitoring Agent (MMA) process which is set to be deprecated. Learn more about the [AMA for SQL server on machines (Preview) autoprovisioning process](defender-for-sql-autoprovisioning.md) and how to migrate to it.
+Microsoft Monitoring Agent (MMA) is set to be retired in August 2024. Defender for Cloud [updated its strategy](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) and released a SQL Server-targeted Azure Monitoring Agent (AMA) autoprovisioning process to replace the Microsoft Monitoring Agent (MMA) process, which is set to be deprecated. Learn more about the [AMA for SQL server on machines autoprovisioning process](defender-for-sql-autoprovisioning.md) and how to migrate to it.
> [!NOTE]
-> During the **Azure Monitoring Agent for SQL Server on machines (Preview)**, customers who are currently using the **Log Analytics agent/Azure Monitor agent** processes will be asked to [migrate to the AMA for SQL server on machines (Preview) autoprovisioning process](defender-for-sql-autoprovisioning.md).
+> Customers who are currently using the **Log Analytics agent/Azure Monitor agent** processes will be asked to [migrate to the AMA for SQL server on machines autoprovisioning process](defender-for-sql-autoprovisioning.md).
-**To enable the plan**:
+**To enable the plan on a subscription**:
1. Sign in to the [Azure portal](https://portal.azure.com).
Microsoft Monitoring Agent (MMA) is set to be retired in August 2024. Defender f
1. Select **Save**. 1. **(Optional)** Configure advanced autoprovisioning settings:- 1. Navigate to the **Environment settings** page.
- 1. Select **Settings & monitoring**.
+ 1. Select **Settings & monitoring**.
+ - For customers using the new autoprovisioning process, select **Edit configuration** for the **Azure Monitoring Agent for SQL server on machines** component.
+ - For customers using the previous autoprovisioning process, select **Edit configuration** for the **Log Analytics agent/Azure Monitor agent** component.
+
+**To enable the plan on a SQL VM/Arc-enabled SQL Server**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your SQL VM/Arc-enabled SQL Server.
- - For customer using the current generally available autoprovisioning process, select **Edit configuration** for the **Log Analytics agent/Azure Monitor agent** component.
+1. In the SQL VM/Arc-enabled SQL Server menu, under Security, select **Microsoft Defender for Cloud**.
- - For customer using the preview of the autoprovisioning process, select **Edit configuration** for the **Azure Monitoring Agent for SQL server on machines (Preview)** component.
+1. In the **Microsoft Defender for SQL server on machines** section, select **Enable**.
## Explore and investigate security alerts
For related information, see these resources:
- [Set up email notifications for security alerts](configure-email-notifications.md) - [Learn more about Microsoft Sentinel](../sentinel/index.yml) - Check out [common questions](faq-defender-for-databases.yml) about Defender for Databases.+
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## November 2023
+
+|Date |Update |
+|-|-|
+| November 6 | [New version of the recommendation to find missing system updates is now GA](#new-version-of-the-recommendation-to-find-missing-system-updates-is-now-ga) |
+
+### New version of the recommendation to find missing system updates is now GA
+
+An additional agent is no longer needed on your Azure VMs and Azure Arc machines to ensure the machines have all of the latest security or critical system updates.
+
+The new system updates recommendation, `System updates should be installed on your machines (powered by Azure Update Manager)` in the `Apply system updates` control, is based on the [Update Manager](/azure/update-center/overview) and is now fully GA. The recommendation relies on a native agent embedded in every Azure VM and Azure Arc machine instead of an installed agent. The quick fix in the new recommendation navigates you to a one-time installation of the missing updates in the Update Manager portal.
+
+The old and the new versions of the recommendations to find missing system updates will both be available until August 2024, which is when the older version will be deprecated. Both recommendations, `System updates should be installed on your machines (powered by Azure Update Manager)` and `System updates should be installed on your machines`, are available under the same control, `Apply system updates`, and have the same results. Thus, there's no duplication in the effect on the secure score.
+
+We recommend migrating to the new recommendation and removing the old one by disabling it from Defender for Cloud's built-in initiative in Azure Policy.
+
+The recommendation `[Machines should be configured to periodically check for missing system updates](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/90386950-71ca-4357-a12e-486d1679427c)` is also GA and is a prerequisite; until remediated, it has a negative effect on your secure score. You can remediate the negative effect with the available Fix.
+
+To apply the new recommendation, you need to:
+
+1. Connect your non-Azure machines to Arc.
+1. Turn on the [periodic assessment property](/azure/update-center/assessment-options). You can use the Quick Fix in the new recommendation, `[Machines should be configured to periodically check for missing system updates](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/90386950-71ca-4357-a12e-486d1679427c)` to fix the recommendation.
+
+> [!NOTE]
+> Enabling periodic assessments for Arc-enabled machines where Defender for Servers Plan 2 isn't enabled on their related subscription or connector is subject to [Azure Update Manager pricing](https://azure.microsoft.com/pricing/details/azure-update-management-center/). Arc-enabled machines where [Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) is enabled on their related subscription or connectors, and any Azure VM, are eligible for this capability with no additional cost.
+ ## October 2023 |Date |Update | |-|-| | October 30 | [Changing adaptive application controlΓÇÖs security alert's severity](#changing-adaptive-application-controls-security-alerts-severity) | | October 25 | [Offline Azure API Management revisions removed from Defender for APIs](#offline-azure-api-management-revisions-removed-from-defender-for-apis) |
-| October 19 |[DevOps security posture management recommendations available in public preview](#devops-security-posture-management-recommendations-available-in-public-preview)
+| October 19 | [DevOps security posture management recommendations available in public preview](#devops-security-posture-management-recommendations-available-in-public-preview) |
| October 18 | [Releasing CIS Azure Foundations Benchmark v2.0.0 in Regulatory Compliance dashboard](#releasing-cis-azure-foundations-benchmark-v200-in-regulatory-compliance-dashboard) | ### Changing adaptive application controls security alert's severity
New DevOps posture management recommendations are now available in public previe
October 18, 2023 Microsoft Defender for Cloud now supports the latest [CIS Azure Security Foundations Benchmark - version 2.0.0](https://www.cisecurity.org/benchmark/azure) in the Regulatory Compliance [dashboard](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/22), and a built-in policy initiative in Azure Policy. The release of version 2.0.0 in Microsoft Defender for Cloud is a joint collaborative effort between Microsoft, the Center for Internet Security (CIS), and the user communities. Version 2.0.0 significantly expands assessment scope, which now includes 90+ built-in Azure policies, and succeeds the prior versions 1.4.0, 1.3.0, and 1.0 in Microsoft Defender for Cloud and Azure Policy. For more information, you can check out this [blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-cloud-now-supports-cis-azure-security/ba-p/3944860).
-Microsoft Defender Cloud now supports the latest [CIS Azure Security Foundations Benchmark - version 2.0.0](https://www.cisecurity.org/benchmark/azure) in the Regulatory Compliance [dashboard](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/22), and a built-in policy initiative in Azure Policy. The release of version 2.0.0 in Microsoft Defender for Cloud is a joint collaborative effort between Microsoft, the Center for Internet Security (CIS), and the user communities. The version 2.0.0 significantly expands assessment scope, which now includes 90+ built-in Azure policies and succeed the prior versions 1.4.0 and 1.3.0 and 1.0 in Microsoft Defender for Cloud and Azure Policy. For more information, you can check out this [blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-cloud-now-supports-cis-azure-security/ba-p/3944860).
## September 2023 |Date |Update | |-|-| | September 27 | [Data security dashboard available in public preview](#data-security-dashboard-available-in-public-preview)
-| September 21 | [Preview release: New autoprovisioning process for SQL Server on machines plan](#preview-release-new-autoprovisioning-process-for-sql-server-on-machines-plan) |
+| September 21 | [Preview release: New autoprovisioning process for SQL Server on machines plan](#preview-release-new-autoprovisioning-process-for-sql-server-on-machines-plan) |
| September 20 | [GitHub Advanced Security for Azure DevOps alerts in Defender for Cloud](#github-advanced-security-for-azure-devops-alerts-in-defender-for-cloud) | | September 11 | [Exempt functionality now available for Defender for APIs recommendations](#exempt-functionality-now-available-for-defender-for-apis-recommendations) | | September 11 | [Create sample alerts for Defender for APIs detections](#create-sample-alerts-for-defender-for-apis-detections) |
September 27, 2023
The data security dashboard is now available in public preview as part of the Defender CSPM plan. The data security dashboard is an interactive, data-centric dashboard that illuminates significant risks to sensitive data, prioritizing alerts and potential attack paths for data across hybrid cloud workloads. Learn more about the [data security dashboard](data-aware-security-dashboard-overview.md).
### Preview release: New autoprovisioning process for SQL Server on machines plan
September 21, 2023

Microsoft Monitoring Agent (MMA) is being deprecated in August 2024. Defender for Cloud [updated its strategy](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) by replacing MMA with the release of a SQL Server-targeted Azure Monitoring Agent autoprovisioning process.
During the preview, customers who use the MMA autoprovisioning process with the Azure Monitor Agent (Preview) option are requested to [migrate to the new Azure Monitoring Agent for SQL server on machines (Preview) autoprovisioning process](defender-for-sql-autoprovisioning.md#migrate-to-the-sql-server-targeted-ama-autoprovisioning-process). The migration process is seamless and provides continuous protection for all machines.
For more information, see [Migrate to SQL server-targeted Azure Monitoring Agent autoprovisioning process](defender-for-sql-autoprovisioning.md).
September 20, 2023
You can now view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies in Defender for Cloud. Results are displayed in the DevOps blade and in Recommendations. To see these results, onboard your GHAzDO-enabled repositories to Defender for Cloud.
Learn more about [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security).
### Exempt functionality now available for Defender for APIs recommendations

September 11, 2023
You can now exempt the following Defender for APIs security recommendations.
| Recommendation | Description & related policy | Severity |
|--|--|--|
September 11, 2023
You can now generate sample alerts for the security detections that were released as part of the Defender for APIs public preview. Learn more about [generating sample alerts in Defender for Cloud](/azure/defender-for-cloud/alert-validation#generate-sample-security-alerts).

### Preview release: containers vulnerability assessment powered by Microsoft Defender Vulnerability Management now supports scan on pull

September 6, 2023
You can learn more about data aware security posture in the following articles:
September 1, 2023
Malware scanning is now generally available (GA) as an add-on to Defender for Storage. Malware scanning in Defender for Storage helps protect your storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements for handling untrusted content. The malware scanning capability is an agentless SaaS solution that allows setup at scale, and supports automating response at scale.
Learn more about [malware scanning in Defender for Storage](defender-for-storage-malware-scan.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Once the recommendations are released for GA, they will be included in the secur
> [!NOTE]
> Images scanned by both our container VA offering powered by Qualys and our container VA offering powered by MDVM will only be billed once.
+The following Qualys recommendations for Containers Vulnerability Assessment will be renamed and will continue to be available for customers that enabled Defender for Containers on any of their subscriptions. New customers onboarding to Defender for Containers after November 15th will only see the new Container vulnerability assessment recommendations powered by Microsoft Defender Vulnerability Management.
+
+|Current recommendation name|New recommendation name|Description|Assessment key|
+|--|--|--|--|
+|Container registry images should have vulnerability findings resolved (powered by Qualys)|Azure registry container images should have vulnerabilities resolved (powered by Qualys)|Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |dbd0cb49-b563-45e7-9724-889e799fa648|
+|Running container images should have vulnerability findings resolved (powered by Qualys)|Azure running container images should have vulnerabilities resolved - (powered by Qualys)|Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks.|41503391-efa5-47ee-9282-4eff6131462|
+ ## Changes to how Microsoft Defender for Cloud's costs are presented in Microsoft Cost Management
+
+ **Announcement date: October 26, 2023**
defender-for-iot Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md
Title: Microsoft Defender for IoT billing
Description: Learn how you're billed for the Microsoft Defender for IoT service.
Previously updated: 09/13/2023
Last updated: 11/07/2023
#CustomerIntent: As a Defender for IoT customer, I want to understand how I'm billed for Defender for IoT services so that I can best plan my deployment.
For more information, see:
- [Manage Defender for IoT plans for OT monitoring](how-to-manage-subscriptions.md)
- [Manage Defender for IoT plans for Enterprise IoT monitoring](manage-subscriptions-enterprise.md)
- [Operational Technology (OT) networks frequently asked questions](faqs-ot.md)
+- [Microsoft Defender for IoT Plans and Pricing](https://www.microsoft.com/en-us/security/business/endpoint-security/microsoft-defender-iot-pricing)
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
For more information, see:
- [Securing IoT devices in the enterprise](concept-enterprise.md)
- [Enable Enterprise IoT security with Defender for Endpoint](eiot-defender-for-endpoint.md)
- [Defender for IoT subscription billing](billing.md)
+- [Microsoft Defender for IoT Plans and Pricing](https://www.microsoft.com/en-us/security/business/endpoint-security/microsoft-defender-iot-pricing#x2a571382a40c482a993d13d7239102eb)
+- Blog: [Enterprise IoT security with Defender for IoT now included in Microsoft 365 E5 and E5 Security plans](https://techcommunity.microsoft.com/t5/microsoft-365-defender-blog/enterprise-iot-security-with-defender-for-iot-now-included-in/ba-p/3967533)
### Updated security stack integration guidance
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
The following table compares the functionality of the versions of the Database M
|SQL Server on Azure SQL VM - Online migration | No | Yes | Yes | Migrate to SQL Server on Azure VMs online with minimal downtime. |
|SQL Server on Azure SQL VM - Offline migration | Yes | Yes | Yes | Migrate to SQL Server on Azure VMs offline. |
|Migrate logins| Yes | Yes | No | Migrate logins from your source to your target. |
-|Migrate schemas| Yes | No | No | Migrate schemas from your source to your target. |
+|Migrate schemas| Yes | No | Yes | Migrate schemas from your source to your target. |
|Azure portal support | Yes | Partial | Yes | Create and monitor your migration by using the Azure portal. |
|Integration with Azure Data Studio | No | Yes | No | Migration support integrated with Azure Data Studio. |
|Regional availability| Yes | Yes | Yes | More regions are available with the extension. |
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
SQL Server to SQL Server on an Azure virtual machine|[Online](./tutorial-sql-ser
SQL Server to Azure SQL Database | [Offline](./tutorial-sql-server-azure-sql-database-offline.md)

> [!IMPORTANT]
-> If your target is Azure SQL Database, make sure you deploy the database schema before you begin the migration. You can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
+> If your target is Azure SQL Database, you can migrate both the database schema and data by using Database Migration Service via the Azure portal. You can also use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio to deploy the database schema before you begin the data migration.
The following video explains recent updates and features added to the Azure SQL Migration extension for Azure Data Studio:
event-grid Event Schema Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-resources.md
+
+ Title: Azure Resource Notifications - Resources events in Azure Event Grid
+description: This article provides information on Azure Event Grid events supported by Azure Resource Notifications resources. It provides the schema and links to how-to articles.
+ Last updated: 10/06/2023
+# Azure Resource Notifications - Resources events in Azure Event Grid
+The Azure Resource Management system topic provides insights into the life cycle of various Azure resources.
+
+The Event Grid system topics for Azure subscriptions and Azure resource groups provide resource life cycle events using a broader range of event types including action, write, and delete events for scenarios involving success, failure, and cancellation. However, it's worth noting that they don't include the resource payload. For details about these events, see [Event Grid system topic for Azure subscriptions](event-schema-subscriptions.md) and [Event Grid system topic for Azure resource groups](event-schema-resource-groups.md).
+
+In contrast, the Azure Resource Notifications (ARN) powered Azure Resource Management system topic offers a more targeted selection of event types, specifically `CreatedOrUpdated` (corresponding to `ResourceWriteSuccess` in the Event Grid Azure subscription system topic), and `Deleted` (corresponding to `ResourceDeleteSuccess` in the Event Grid Azure subscription system topic). These events come with comprehensive payload information, making it easier for customers to apply filtering and refine their notification stream.
+
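The correspondence between the ARN event types and their classic Azure subscription system topic counterparts described above can be summarized in a small lookup table (a sketch that simply restates the prose; the dictionary name is arbitrary):

```python
# Mapping from ARN Resources event types to the closest event types in
# the classic Event Grid Azure subscription system topic, restating the
# correspondence described in the text above.
ARN_TO_CLASSIC = {
    "Microsoft.ResourceNotifications.Resources.CreatedOrUpdated": "ResourceWriteSuccess",
    "Microsoft.ResourceNotifications.Resources.Deleted": "ResourceDeleteSuccess",
}

def classic_equivalent(arn_event_type: str) -> str:
    """Return the classic subscription-topic event type for an ARN event type."""
    return ARN_TO_CLASSIC[arn_event_type]
```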
+For the list of resource types exposed, see [Azure Resource Graph resources](/azure/governance/resource-graph/reference/supported-tables-resources#resources) or use the following Azure Resource Graph query.
+
+```kusto
+resources
+| distinct ['type']
+```
+
+> [!NOTE]
+> The Azure Resource Management system topic doesn't yet support all the resource types from the resources table of Azure Resource Graph. We're working on improving this experience.
+
+## Event types
+The ARN Resources system topic offers two event types for consumption:
+
+| Event type | Description |
+| - | -- |
+| `Microsoft.ResourceNotifications.Resources.CreatedOrUpdated` | Raised when a resource is successfully created or updated. |
+| `Microsoft.ResourceNotifications.Resources.Deleted` | Raised when a resource is deleted. |
+
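As a sketch of how a consumer might route these two event types to separate handlers (the handler names and sample events are hypothetical; only the two event type strings come from the table above):

```python
# Minimal sketch: route ARN Resources events by event type.
# Handler names are hypothetical illustrations.

def on_created_or_updated(event):
    return "upsert:" + event["data"]["resourceInfo"]["id"]

def on_deleted(event):
    return "delete:" + event["data"]["resourceInfo"]["id"]

HANDLERS = {
    "Microsoft.ResourceNotifications.Resources.CreatedOrUpdated": on_created_or_updated,
    "Microsoft.ResourceNotifications.Resources.Deleted": on_deleted,
}

def dispatch(event):
    # The Event Grid event schema carries the type in "eventType";
    # the cloud event schema uses "type" instead.
    event_type = event.get("eventType") or event.get("type")
    handler = HANDLERS.get(event_type)
    if handler is None:
        raise ValueError(f"Unexpected event type: {event_type}")
    return handler(event)
```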
+## Role-based access control
+Currently, these events are emitted exclusively at the Azure subscription scope. This implies that the entity that creates the event subscription for this topic type receives notifications throughout the Azure subscription. For security reasons, it's imperative to restrict the ability to create event subscriptions on this topic to principals with read access over the entire Azure subscription. To access data via this system topic, in addition to the generic permissions required by Event Grid, the following Azure Resource Notifications-specific permission is necessary: `Microsoft.ResourceNotifications/systemTopics/subscribeToResources/action`.
+
+## Event schemas
+This section provides schemas for the `CreatedOrUpdated` and `Deleted` events.
+
+### Event schema for CreatedOrUpdated event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+Here's the schema:
+
+```json
+{
+ "id": "string",
+ "topic": "string",
+ "subject": "string",
+ "data": {
+ "resourceInfo": {
+ "id": "string",
+ "name": "string",
+ "type": "string",
+ "location": "string",
+ "tags": "string",
+ "properties": {
+ "_comment": "<< object-unique-to-each-publisher >>"
+ }
+ },
+ "apiVersion": "string",
+ "operationalInfo": {
+ "resourceEventTime": "datetime"
+ }
+ },
+ "eventType": "string",
+ "dataVersion": "string",
+ "metadataVersion": "string",
+ "eventTime": "string"
+}
+```
+++
+# [Cloud event schema](#tab/cloud-event-schema)
+
+Here's the schema:
+
+```json
+{
+ "id": "string",
+ "source": "string",
+ "subject": "string",
+ "data": {
+ "resourceInfo": {
+ "id": "string",
+ "name": "string",
+ "type": "string",
+ "location": "string",
+ "tags": "string",
+ "properties": {
+ "_comment": "object-unique-to-each-publisher"
+ }
+ },
+ "apiVersion": "string",
+ "operationalInfo": {
+ "resourceEventTime": "datetime"
+ }
+ },
+ "type": "string",
+ "specversion": "string",
+ "time": "string"
+}
+```
+++
+### Event schema for Deleted event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+Here's the schema:
+
+```json
+{
+ "id": "string",
+ "topic": "string",
+ "subject": "string",
+ "data": {
+ "resourceInfo": {
+ "id": "string",
+ "name": "string",
+ "type": "string"
+ },
+ "operationalInfo": {
+ "resourceEventTime": "datetime"
+ }
+ },
+ "eventType": "string",
+ "dataVersion": "string",
+ "metadataVersion": "string",
+ "eventTime": "string"
+}
+```
+++
+# [Cloud event schema](#tab/cloud-event-schema)
+
+Here's the schema:
+
+```json
+{
+ "id": "string",
+ "source": "string",
+ "subject": "string",
+ "data": {
+ "resourceInfo": {
+ "id": "string",
+ "name": "string",
+ "type": "string"
+ },
+ "operationalInfo": {
+ "resourceEventTime": "datetime"
+ }
+ },
+ "type": "string",
+ "specversion": "string",
+ "time": "string"
+}
+```
+++
+An event in the Event Grid event schema format has the following top-level properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `id` | String | Unique identifier of the event |
+| `topic` | String | The Azure subscription for which this system topic is being created |
+| `subject` | String | Publisher defined path to the base resource on which this event is emitted. |
+| `data` | Object | Contains event data specific to the resource provider. For more information, see the next table. |
+| `eventType` | String | Registered event type of this system topic type |
+| `dataVersion` | String | The schema version of the data object |
+| `metadataVersion` | String | The schema version of the event metadata |
+| `eventTime` | String <br/> Format: `2022-11-07T18:43:09.2894075Z` | The time the event is generated based on the provider's UTC time |
+
+An event in the cloud event schema format has the following top-level properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `id` | String | Unique identifier of the event |
+| `source` | String | The Azure subscription for which this system topic is being created. |
+| `subject` | String | Publisher defined path to the base resource on which this event is emitted. |
+| `type` | String | Registered event type of this system topic type |
+| `time` | String <br/> Format: `2022-11-07T18:43:09.2894075Z` | The time the event is generated based on the provider's UTC time |
+| `data` | Object | Contains event data specific to the resource provider. For more information, see the next table. |
+| `specversion` | String | CloudEvents schema specification version. |
++
+The `data` object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `resourceInfo` | Object | Data specific to the resource. For more information, see the next table. |
+| `apiVersion` | String | API version of the resource properties. |
+| `operationalInfo` | Object | Details of operational information pertaining to the resource. |
+
+The `resourceInfo` object has the following common properties across `CreatedOrUpdated` and `Deleted` events:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `id` | String | Publisher defined path to the event subject |
+| `name` | String | Name of the resource. It takes the value of the last segment of the `id` field. |
+| `type` | String | The type of the resource on which this event is emitted, for example, `Microsoft.Storage/storageAccounts`. |
+
+The `resourceInfo` object for the `CreatedOrUpdated` event has the following extra properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `location` | String | Location or region where the resource is located. |
+| `tags` | String | Tags for the resource. |
+| `properties` | Object | Payload of the resource. |
+
+Only the `CreatedOrUpdated` event includes the `properties` object. The schema of this `properties` object is unique to each publisher. To discover the schema, see the [REST API documentation for the specific Azure resource](/rest/api/azure/). You can find an example in the **Example events** section of this article.
+
+```json
+ "properties": {
+ "_comment": "<< object-unique-to-each-publisher >>"
+ }
+```
+
+The `operationalInfo` object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `resourceEventTime` | DateTime | Date and time when the resource was created or updated (for `CreatedOrUpdated` event), or deleted (for `Deleted` event). |
+
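Putting the tables above together, a consumer might extract the documented fields like this (a hedged sketch; the payload below is a trimmed, hypothetical `CreatedOrUpdated` event in the Event Grid event schema, and real events carry the full publisher-specific `properties` object):

```python
import json

# Sketch: pull the documented fields out of a CreatedOrUpdated event.
# The JSON below is abbreviated and hypothetical.
raw = """
{
  "id": "4eef929a-a65c-47dd-93e2-46b8c17c6c17",
  "subject": "/subscriptions/sub/resourceGroups/rg/providers/Microsoft.Storage/storageAccounts/mystorageacct",
  "data": {
    "resourceInfo": {
      "id": "/subscriptions/sub/resourceGroups/rg/providers/Microsoft.Storage/storageAccounts/mystorageacct",
      "name": "mystorageacct",
      "type": "Microsoft.Storage/storageAccounts",
      "location": "eastus",
      "tags": {},
      "properties": { "provisioningState": "Succeeded" }
    },
    "apiVersion": "2019-06-01",
    "operationalInfo": { "resourceEventTime": "2023-07-28T20:13:10.8418063Z" }
  },
  "eventType": "Microsoft.ResourceNotifications.Resources.CreatedOrUpdated"
}
"""

event = json.loads(raw)
info = event["data"]["resourceInfo"]

summary = {
    "resource_type": info["type"],   # e.g. Microsoft.Storage/storageAccounts
    "resource_name": info["name"],   # last segment of the resource ID
    "api_version": event["data"]["apiVersion"],
    "event_time": event["data"]["operationalInfo"]["resourceEventTime"],
}
```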
+## Example events
+
+### CreatedOrUpdated event
+This section shows the `CreatedOrUpdated` event generated when an Azure Storage account is created in the Azure subscription on which the system topic is created.
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "4eef929a-a65c-47dd-93e2-46b8c17c6c17",
+ "topic": "/subscriptions/{subscription-id}",
+ "subject": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}",
+ "data": {
+ "resourceInfo": {
+ "tags": {},
+      "id": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}",
+ "name": "StorageAccount-name",
+ "type": "Microsoft.Storage/storageAccounts",
+ "location": "eastus",
+ "properties": {
+ "privateEndpointConnections": [],
+ "minimumTlsVersion": "TLS1_2",
+ "allowBlobPublicAccess": 1,
+ "allowSharedKeyAccess": 1,
+ "networkAcls": {
+ "bypass": "AzureServices",
+ "virtualNetworkRules": [],
+ "ipRules": [],
+ "defaultAction": "Allow"
+ },
+ "supportsHttpsTrafficOnly": 1,
+ "encryption": {
+ "requireInfrastructureEncryption": 0,
+ "services": {
+ "file": {
+ "keyType": "Account",
+ "enabled": 1,
+ "lastEnabledTime": "2023-07-28T20:12:50.6380308Z"
+ },
+ "blob": {
+ "keyType": "Account",
+ "enabled": 1,
+ "lastEnabledTime": "2023-07-28T20:12:50.6380308Z"
+ }
+ },
+ "keySource": "Microsoft.Storage"
+ },
+ "accessTier": "Hot",
+ "provisioningState": "Succeeded",
+ "creationTime": "2023-07-28T20:12:50.4661564Z",
+ "primaryEndpoints": {
+ "dfs": "https://{storageAccount-name}.dfs.core.windows.net/",
+ "web": "https://{storageAccount-name}.z13.web.core.windows.net/",
+ "blob": "https://{storageAccount-name}.blob.core.windows.net/",
+ "queue": "https://{storageAccount-name}.queue.core.windows.net/",
+ "table": "https://{storageAccount-name}.table.core.windows.net/",
+ "file": "https://{storageAccount-name}.file.core.windows.net/"
+ },
+ "primaryLocation": "eastus",
+ "statusOfPrimary": "available",
+ "secondaryLocation": "westus",
+ "statusOfSecondary": "available",
+ "secondaryEndpoints": {
+        "dfs": "https://{storageAccount-name}-secondary.dfs.core.windows.net/",
+ "web": "https://{storageAccount-name}-secondary.z13.web.core.windows.net/",
+ "blob": "https://{storageAccount-name}-secondary.blob.core.windows.net/",
+ "queue": "https://{storageAccount-name}-secondary.queue.core.windows.net/",
+ "table": "https://{storageAccount-name}-secondary.table.core.windows.net/"
+ }
+ }
+ },
+ "apiVersion": "2019-06-01",
+ "operationalInfo": {
+ "resourceEventTime": "2023-07-28T20:13:10.8418063Z"
+ }
+ },
+ "eventType": "Microsoft.ResourceNotifications.Resources.CreatedOrUpdated",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2023-07-28T20:13:10.8418063Z"
+}
+```
+
+# [Cloud event schema](#tab/cloud-event-schema)
++
+```json
+{
+ "id": "4eef929a-a65c-47dd-93e2-46b8c17c6c17",
+ "source": "/subscriptions/{subscription-id}",
+ "subject": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}",
+ "data": {
+ "resourceInfo": {
+ "tags": {},
+      "id": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}",
+ "name": "StorageAccount-name",
+ "type": "Microsoft.Storage/storageAccounts",
+ "location": "eastus",
+ "properties": {
+ "privateEndpointConnections": [],
+ "minimumTlsVersion": "TLS1_2",
+ "allowBlobPublicAccess": 1,
+ "allowSharedKeyAccess": 1,
+ "networkAcls": {
+ "bypass": "AzureServices",
+ "virtualNetworkRules": [],
+ "ipRules": [],
+ "defaultAction": "Allow"
+ },
+ "supportsHttpsTrafficOnly": 1,
+ "encryption": {
+ "requireInfrastructureEncryption": 0,
+ "services": {
+ "file": {
+ "keyType": "Account",
+ "enabled": 1,
+ "lastEnabledTime": "2023-07-28T20:12:50.6380308Z"
+ },
+ "blob": {
+ "keyType": "Account",
+ "enabled": 1,
+ "lastEnabledTime": "2023-07-28T20:12:50.6380308Z"
+ }
+ },
+ "keySource": "Microsoft.Storage"
+ },
+ "accessTier": "Hot",
+ "provisioningState": "Succeeded",
+ "creationTime": "2023-07-28T20:12:50.4661564Z",
+ "primaryEndpoints": {
+ "dfs": "https://{storageAccount-name}.dfs.core.windows.net/",
+ "web": "https://{storageAccount-name}.z13.web.core.windows.net/",
+ "blob": "https://{storageAccount-name}.blob.core.windows.net/",
+ "queue": "https://{storageAccount-name}.queue.core.windows.net/",
+ "table": "https://{storageAccount-name}.table.core.windows.net/",
+ "file": "https://{storageAccount-name}.file.core.windows.net/"
+ },
+ "primaryLocation": "eastus",
+ "statusOfPrimary": "available",
+ "secondaryLocation": "westus",
+ "statusOfSecondary": "available",
+ "secondaryEndpoints": {
+        "dfs": "https://{storageAccount-name}-secondary.dfs.core.windows.net/",
+ "web": "https://{storageAccount-name}-secondary.z13.web.core.windows.net/",
+ "blob": "https://{storageAccount-name}-secondary.blob.core.windows.net/",
+ "queue": "https://{storageAccount-name}-secondary.queue.core.windows.net/",
+ "table": "https://{storageAccount-name}-secondary.table.core.windows.net/"
+ }
+ }
+ },
+ "apiVersion": "2019-06-01",
+ "operationalInfo": {
+ "resourceEventTime": "2023-07-28T20:13:10.8418063Z"
+ }
+ },
+ "type": "Microsoft.ResourceNotifications.Resources.CreatedOrUpdated",
+ "specversion": "1.0",
+ "time": "2023-07-28T20:13:10.8418063Z"
+}
+```
+++
+### Deleted event
+This section shows the `Deleted` event generated when an Azure Storage account is deleted in the Azure subscription on which the system topic is created.
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "d4611260-d179-4f86-b196-3a9d4128be2d",
+ "topic": "/subscriptions/{subscription-id}",
+ "subject": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}",
+ "data": {
+ "resourceInfo": {
+ "id": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}",
+ "name": "storageAccount-name",
+ "type": "Microsoft.Storage/storageAccounts"
+ },
+ "operationalInfo": {
+ "resourceEventTime": "2023-07-28T20:11:36.6347858Z"
+ }
+ },
+ "eventType": "Microsoft.ResourceNotifications.Resources.Deleted",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2023-07-28T20:11:36.6347858Z"
+}
+```
+
+# [Cloud event schema](#tab/cloud-event-schema)
++
+```json
+{
+ "id": "d4611260-d179-4f86-b196-3a9d4128be2d",
+ "source": "/subscriptions/{subscription-id}",
+ "subject": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}",
+ "data": {
+ "resourceInfo": {
+ "id": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storageAccount-name}",
+ "name": "storageAccount-name",
+ "type": "Microsoft.Storage/storageAccounts"
+ },
+ "operationalInfo": {
+ "resourceEventTime": "2023-07-28T20:11:36.6347858Z"
+ }
+ },
+ "type": "Microsoft.ResourceNotifications.Resources.Deleted",
+ "specversion": "1.0",
+ "time": "2023-07-28T20:11:36.6347858Z"
+}
+```
+++
+## Contact us
+If you have any questions or feedback on this feature, don't hesitate to reach us at [arnsupport@microsoft.com](mailto:arnsupport@microsoft.com).
+
+## Next steps
+See [Subscribe to Azure Resource Notifications - Resources events](subscribe-to-resource-notifications-resources-events.md).
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid
Description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated: 10/23/2023
Last updated: 11/06/2023
event-grid Subscribe To Resource Notifications Resources Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-resource-notifications-resources-events.md
+
+ Title: Subscribe to Azure Resource Notifications - Resources events
+description: This article explains how to subscribe to events published by Azure Resource Notifications - Resources.
+ Last updated: 10/08/2023
+# Subscribe to events raised by Azure Resource Notifications - Resources system topic
+This article explains the steps needed to subscribe to events published by Azure Resource Notifications - Resources. For detailed information about these events, see [Azure Resource Notifications - Resources events](event-schema-resources.md).
+
+## Create Resources system topic
+This section shows you how to create a system topic of type `microsoft.resourcenotifications.resources`.
+
+# [Azure CLI](#tab/azure-cli)
+
+1. Set the account to the Azure subscription where you wish to create the system topic.
+
+ ```azurecli-interactive
+    az account set -s AZURESUBSCRIPTIONID
+ ```
+2. Create a system topic of type `microsoft.resourcenotifications.resources` using the [`az eventgrid system-topic create`](/cli/azure/eventgrid/system-topic#az-eventgrid-system-topic-create) command.
+
+ ```azurecli-interactive
+ az eventgrid system-topic create \
+ --name SYSTEMTOPICNAME \
+ --resource-group RESOURCEGROUPNAME \
+ --source /subscriptions/AZURESUBSCRIPTIONID \
+ --topic-type microsoft.resourcenotifications.resources \
+ --location Global
+ ```
+# [Azure PowerShell](#tab/azure-powershell)
+
+1. Set the account to the Azure subscription where you wish to create the system topic.
+
+ ```azurepowershell-interactive
+ Set-AzContext -Subscription AZURESUBSCRIPTIONID
+ ```
+2. Create a system topic of type `microsoft.resourcenotifications.resources` using the [New-AzEventGridSystemTopic](/powershell/module/az.eventgrid/new-azeventgridsystemtopic) command.
+
+ ```azurepowershell-interactive
+ New-AzEventGridSystemTopic -name SYSTEMTOPICNAME `
+ -resourcegroup RESOURCEGROUPNAME `
+ -source /subscriptions/AZURESUBSCRIPTIONID `
+ -topictype microsoft.resourcenotifications.resources `
+ -location global
+ ```
+
+# [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search bar, type **Event Grid System Topics**, and select it from the drop-down list.
+1. On the **Event Grid system topics** page, select **Create** on the toolbar.
+1. On the **Create Event Grid System Topic** page, select **Azure Resource Management - Preview** for **Topic type**.
+
+ :::image type="content" source="./media/subscribe-to-resources-events/create-system-topic.png" alt-text="Screenshot that shows the Create System Topic page." lightbox="./media/subscribe-to-resources-events/create-system-topic.png":::
+1. Select the **resource group** in which you want to create the system topic.
+1. Enter a **name** for the system topic.
+1. Select **Review + create**.
+1. On the **Review + create** page, select **Create**.
+1. On the successful deployment page, select **Go to resource** to navigate to the page for your system topic. You see the details about your system topic on this page.
+
++
+## Subscribe to events
+
+# [Azure CLI](#tab/azure-cli)
+Create an event subscription for the above topic using the [`az eventgrid system-topic event-subscription create`](/cli/azure/eventgrid/system-topic/event-subscription#az-eventgrid-system-topic-event-subscription-create) command.
+
+The following sample command creates an event subscription for both **CreatedOrUpdated** and **Deleted** events. If you don't specify `included-event-types`, all the event types are included by default.
+
+```azurecli-interactive
+az eventgrid system-topic event-subscription create \
+ --name EVENTSUBSCRIPTIONNAME \
+ --resource-group RESOURCEGROUPNAME \
+ --system-topic-name SYSTEMTOPICNAME \
+  --included-event-types Microsoft.ResourceNotifications.Resources.CreatedOrUpdated Microsoft.ResourceNotifications.Resources.Deleted \
+ --endpoint /subscriptions/AZURESUBSCRIPTIONID/resourceGroups/RESOURCEGROUPNAME/providers/Microsoft.EventHub/namespaces/MYEVENTHUBSNAMESPACE/eventhubs/MYEVENTHUB \
+ --endpoint-type eventhub
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Create an event subscription for the above topic using the [New-AzEventGridSystemTopicEventSubscription](/powershell/module/az.eventgrid/new-azeventgridsystemtopiceventsubscription) command.
+
+The following sample command creates an event subscription for both **CreatedOrUpdated** and **Deleted** events. If you don't specify `IncludedEventType`, all the event types are included by default.
+
+```azurepowershell-interactive
+New-AzEventGridSystemTopicEventSubscription -EventSubscriptionName EVENTSUBSCRIPTIONNAME `
+ -ResourceGroupName RESOURCEGROUPNAME `
+ -SystemTopicName SYSTEMTOPICNAME `
+ -IncludedEventType Microsoft.ResourceNotifications.Resources.CreatedOrUpdated, Microsoft.ResourceNotifications.Resources.Deleted `
+ -Endpoint /subscriptions/AZURESUBSCRIPTIONID/resourceGroups/RESOURCEGROUPNAME/providers/Microsoft.EventHub/namespaces/EVENTHUBSNAMESPACE/eventhubs/EVENTHUB `
+ -EndpointType eventhub
+```
+
+# [Azure portal](#tab/azure-portal)
+
+1. On the **Event Grid System Topic** page, select **Event Subscription** on the toolbar.
+1. Confirm that the **Topic Type**, **Source Resource**, and **Topic Name** are automatically populated.
+1. Enter a name for the event subscription.
+1. For **Filter to event types**, select the event, for example, **CreatedOrUpdated** or **Deleted**.
+
+ :::image type="content" source="./media/subscribe-to-resources-events/create-event-subscription-select-event.png" alt-text="Screenshot that shows the Create Event Subscription page." lightbox="./media/subscribe-to-resources-events/create-event-subscription-select-event.png":::
+1. Select an **Endpoint type**.
+1. Configure the event handler based on the endpoint type you selected. In the following example, an Azure event hub is selected.
+
+ :::image type="content" source="./media/subscribe-to-resources-events/select-endpoint.png" alt-text="Screenshot that shows the Create Event Subscription page with an event handler." lightbox="./media/subscribe-to-resources-events/select-endpoint.png":::
+1. Select the **Filters** tab to provide subject filtering and advanced filtering. For example, to filter for events from resources in a specific resource group, follow these steps:
+ 1. Select **Enable subject filtering**.
+ 1. In the **Subject Filters** section, for **Subject begins with**, provide the resource ID of the resource group in this format: `/subscriptions/{subscription-id}/resourceGroups/{resourceGroup-id}`.
+
+ :::image type="content" source="./media/subscribe-to-resources-events/filter.png" alt-text="Screenshot that shows the Filters tab of the Create Event Subscription page." lightbox="./media/subscribe-to-resources-events/filter.png":::
+1. Then, select **Create** to create the event subscription.
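The **Subject begins with** filter configured in the steps above is a plain prefix match on the event subject. The following sketch illustrates that behavior only; the subscription ID and resource names are hypothetical placeholders, not values from your tenant:

```shell
# Hypothetical event subject and "Subject begins with" prefix (illustration only).
subject="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampletestrg/providers/Microsoft.Compute/virtualMachines/vm1"
prefix="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampletestrg"

# An event is delivered only when its subject starts with the configured prefix.
case "$subject" in
  "$prefix"*) echo "delivered" ;;
  *)          echo "filtered out" ;;
esac
```

Because the subject above starts with the configured prefix, the sketch prints `delivered`.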
+++
+## Delete event subscription and system topic
+
+# [Azure CLI](#tab/azure-cli)
+
+To delete the event subscription, use the [`az eventgrid system-topic event-subscription delete`](/cli/azure/eventgrid/system-topic/event-subscription#az-eventgrid-system-topic-event-subscription-delete) command. Here's an example:
+
+```azurecli-interactive
+az eventgrid system-topic event-subscription delete --name firstEventSubscription --resource-group sampletestrg --system-topic-name arnSystemTopicResources
+```
+
+To delete the system topic, use the [`az eventgrid system-topic delete`](/cli/azure/eventgrid/system-topic#az-eventgrid-system-topic-delete) command. Here's an example:
+
+```azurecli-interactive
+az eventgrid system-topic delete --name arnSystemTopicResources --resource-group sampletestrg
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+To delete an event subscription, use the [`Remove-AzEventGridSystemTopicEventSubscription`](/powershell/module/az.eventgrid/remove-azeventgridsystemtopiceventsubscription) command. Here's an example:
+
+```azurepowershell-interactive
+Remove-AzEventGridSystemTopicEventSubscription -EventSubscriptionName firstEventSubscription -ResourceGroupName sampletestrg -SystemTopicName arnSystemTopicResources
+```
+
+To delete the system topic, use the [`Remove-AzEventGridSystemTopic`](/powershell/module/az.eventgrid/remove-azeventgridsystemtopic) command. Here's an example:
+
+```azurepowershell-interactive
+Remove-AzEventGridSystemTopic -ResourceGroupName sampletestrg -Name arnSystemTopicResources
+```
++
+# [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search bar, type **Event Grid System Topics**, and press ENTER.
+1. Select the system topic.
+1. On the **Event Grid System Topic** page, select **Delete** on the toolbar.
+++
+## Filtering examples
+
+### Subscribe to create, update, delete notifications for virtual machines in an Azure subscription
+This section shows an example of filtering for create, update, and delete notifications for virtual machines in an Azure subscription.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az eventgrid system-topic event-subscription create \
+ --name firstEventSubscription \
+ --resource-group sampletestrg \
+ --system-topic-name arnSystemTopicResources \
+ --included-event-types Microsoft.ResourceNotifications.Resources.CreatedOrUpdated Microsoft.ResourceNotifications.Resources.Deleted \
+ --endpoint /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampletestrg/providers/Microsoft.EventHub/namespaces/testEventHub/eventhubs/ehforsystemtopicresources \
+ --endpoint-type eventhub \
+ --advanced-filter data.resourceInfo.type StringEndsWith virtualMachines
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzEventGridSystemTopicEventSubscription -EventSubscriptionName firstEventSubscription `
+ -ResourceGroupName sampletestrg `
+ -SystemTopicName arnSystemTopicResources `
+ -IncludedEventType Microsoft.ResourceNotifications.Resources.CreatedOrUpdated, Microsoft.ResourceNotifications.Resources.Deleted `
+ -Endpoint /subscriptions/000000000-0000-0000-0000-000000000000/resourceGroups/sampletestrg/providers/Microsoft.EventHub/namespaces/testEventHub/eventhubs/ehforsystemtopicresources `
+ -EndpointType eventhub `
+ -AdvancedFilter @(@{operator = "StringEndsWith"; key = "data.resourceInfo.type" ; value ="virtualMachines"})
+```
+
+# [Azure portal](#tab/azure-portal)
+
+1. Choose **CreatedOrUpdated** and **Deleted** event types.
+1. In the **Filters** tab of the event subscription, choose the following advanced filter.
+
+ ```
+ Key = "data.resourceInfo.type"
+ Operator = "StringEndsWith"
+ Value = "virtualMachines"
+ ```
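The `StringEndsWith` operator used above is a suffix comparison on the filtered key's value. As a rough illustration (the resource type below is a hypothetical sample payload value, not service code):

```shell
# Hypothetical value of data.resourceInfo.type from an event payload (illustration only).
resource_type="Microsoft.Compute/virtualMachines"
suffix="virtualMachines"

# StringEndsWith matches when the key's value ends with the configured suffix.
case "$resource_type" in
  *"$suffix") echo "match" ;;
  *)          echo "no match" ;;
esac
```

Since `Microsoft.Compute/virtualMachines` ends with `virtualMachines`, the sketch prints `match`.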
+++
+### Subscribe to VM create, update, and delete notifications by a particular resource group
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az eventgrid system-topic event-subscription create \
+ --name firstEventSubscription \
+ --resource-group sampletestrg \
+ --system-topic-name arnSystemTopicResources \
+ --included-event-types Microsoft.ResourceNotifications.Resources.CreatedOrUpdated Microsoft.ResourceNotifications.Resources.Deleted \
+ --endpoint /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampletestrg/providers/Microsoft.EventHub/namespaces/testEventHub/eventhubs/ehforsystemtopicresources \
+ --endpoint-type eventhub \
+ --subject-begins-with /subscriptions/{Azure subscription ID}/resourceGroups/{Resource group name}/ \
+ --advanced-filter data.resourceInfo.type StringEndsWith virtualMachines
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzEventGridSystemTopicEventSubscription -EventSubscriptionName firstEventSubscription `
+ -ResourceGroupName sampletestrg `
+ -SystemTopicName arnSystemTopicResources `
+ -IncludedEventType Microsoft.ResourceNotifications.Resources.CreatedOrUpdated, Microsoft.ResourceNotifications.Resources.Deleted `
+ -Endpoint /subscriptions/000000000-0000-0000-0000-000000000000/resourceGroups/sampletestrg/providers/Microsoft.EventHub/namespaces/testEventHub/eventhubs/ehforsystemtopicresources `
+ -EndpointType eventhub -AdvancedFilter @(@{operator = "StringEndsWith"; key = "data.resourceInfo.type" ; value ="virtualMachines"})
+```
+
+# [Azure portal](#tab/azure-portal)
+
+In the **Filters** tab of the event subscription, enable subject filtering, and use the following subject filter:
+
+```
+Subject begins with = /subscriptions/{subscription-id}/resourceGroups/{resourceGroup-id}
+```
+
+Then, choose the following advanced filter.
+
+```
+Key = "data.resourceInfo.type"
+Operator = "String ends with"
+Value = "virtualMachines"
+```
+++
+### Subscribe to VM create and update notifications by a particular location within a subscription
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az eventgrid system-topic event-subscription create \
+ --name firstEventSubscription \
+ --resource-group sampletestrg \
+ --system-topic-name arnSystemTopicResources \
+ --included-event-types Microsoft.ResourceNotifications.Resources.CreatedOrUpdated \
+ --endpoint /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampletestrg/providers/Microsoft.EventHub/namespaces/testEventHub/eventhubs/ehforsystemtopicresources \
+ --endpoint-type eventhub \
+ --subject-begins-with /subscriptions/{Azure subscription ID}/resourceGroups/{Resource group name}/ \
+ --advanced-filter data.resourceInfo.location StringIn eastus \
+ --advanced-filter data.resourceInfo.type StringEndsWith virtualMachines
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzEventGridSystemTopicEventSubscription -EventSubscriptionName firstEventSubscription `
+ -ResourceGroupName sampletestrg `
+ -SystemTopicName arnSystemTopicResources `
+ -IncludedEventType Microsoft.ResourceNotifications.Resources.CreatedOrUpdated `
+ -Endpoint /subscriptions/000000000-0000-0000-0000-000000000000/resourceGroups/sampletestrg/providers/Microsoft.EventHub/namespaces/testEventHub/eventhubs/ehforsystemtopicresources `
+ -EndpointType eventhub `
+ -AdvancedFilter @(@{operator = "StringIn"; key = "data.resourceInfo.location"; value ="eastus"}, @{operator = "StringEndsWith"; key = "data.resourceInfo.type" ; value ="virtualMachines"})
+```
+
+# [Azure portal](#tab/azure-portal)
+
+In the **Filters** tab of the event subscription, enable subject filtering, and use the following subject filter:
+
+```
+Subject begins with = /subscriptions/{subscription-id}/resourceGroups/{resourceGroup-id}
+```
+
+Then, choose the following advanced filters.
+
+```
+Key = "data.resourceInfo.location"
+Operator = "String is in"
+Value = "eastus"
+```
+
+AND
+
+```
+Key = "data.resourceInfo.type"
+Operator = "String ends with"
+Value = "virtualMachines"
+```
+++
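When an event subscription specifies multiple advanced filter clauses on different keys, as in the example above, the clauses combine with AND semantics: an event must satisfy every clause to be delivered. A minimal sketch of that evaluation, using hypothetical payload values for illustration only:

```shell
# Hypothetical payload values for the two filtered keys (illustration only).
location="eastus"                                   # data.resourceInfo.location
resource_type="Microsoft.Compute/virtualMachines"   # data.resourceInfo.type

match=true
# Clause 1: StringIn on data.resourceInfo.location
[ "$location" = "eastus" ] || match=false
# Clause 2: StringEndsWith on data.resourceInfo.type
case "$resource_type" in *virtualMachines) ;; *) match=false ;; esac

# Every clause must hold for the event to be delivered (AND semantics).
if [ "$match" = true ]; then echo "delivered"; else echo "filtered out"; fi
```

With both clauses satisfied, the sketch prints `delivered`; if either value changed, the event would be filtered out.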
+## Contact us
+If you have any questions or feedback on this feature, don't hesitate to reach us at [arnsupport@microsoft.com](mailto:arnsupport@microsoft.com).
+
+To better assist you with specific feedback about a certain event, provide the following information:
+
+### For missing events
+
+- System topic type name
+- Approximate timestamp in UTC when the operation was executed
+- Base resource ID for which the notification was generated. To find it, navigate to your resource in the Azure portal and select **JSON View** in the upper-right corner; the resource ID is the first field on the JSON View page.
+- Expected event type
+- Operation executed (for example, VM started or stopped, storage account created, and so on)
+- Description of the issue encountered (for example, a VM started but no `Microsoft.ResourceNotifications.HealthResources.AvailabilityStatusChanged` event was generated)
+- If possible, provide the correlation ID of operation executed
+
+### For an event that was delayed or has unexpected content
+
+- System topic type name
+- Entire contents of the notification, excluding `data.resourceInfo.properties`
+- Description of issue encountered and impacted field values
+
+Ensure that you don't include any end-user identifiable information when you share this data.
+
+## Next steps
+For detailed information about these events, see [Azure Resource Notifications - Resources events](event-schema-resources.md).
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 08/28/2023 Last updated : 11/06/2023
The following table shows connectivity locations and the service providers for each location.
| Location | Address | Zone | Local Azure regions | ER Direct | Service providers | |--|--|--|--|--|--| | **Abu Dhabi** | Etisalat KDC | 3 | UAE Central | Supported | |
-| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>Colt<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>InterCloud<br/>Interxion<br/>KPN<br/>IX Reach<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>Tata Communications<br/>Telefonica<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Zayo |
-| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>Interxion<br/>Megaport<br/>NL-IX<br/>NOS<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Vodafone |
+| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>Colt<br/>Deutsche Telekom AG<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>InterCloud<br/>Interxion<br/>KPN<br/>IX Reach<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>Tata Communications<br/>Telefonica<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Zayo |
+| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cinia<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>Interxion<br/>Megaport<br/>NL-IX<br/>NOS<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Vodafone |
| **Atlanta** | [Equinix AT1](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/atlanta-data-centers/at1) | 1 | n/a | Supported | Equinix<br/>Megaport | | **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | Supported | Devoli<br/>Kordia<br/>Megaport<br/>REANNZ<br/>Spark NZ<br/>Vocus Group NZ | | **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | Supported | AIS<br/>National Telecom UIH | | **Berlin** | [NTT GDC](https://services.global.ntt/en-us/newsroom/ntt-ltd-announces-access-to-microsoft-azure-expressroute-at-ntts-berlin-1-data-center) | 1 | Germany North | Supported | Colt<br/>Equinix<br/>NTT Global DataCenters EMEA |
-| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | Supported | CenturyLink Cloud Connect<br/>Equinix |
| **Busan** | [LG CNS](https://www.lgcns.com/business/cloud/datacenter/) | 2 | Korea South | n/a | LG CNS | | **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | Supported | Ascenty | | **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | Supported | CDC |
| **Chennai2** | Airtel | 2 | South India | Supported | Airtel | | **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Equinix<br/>InterCloud<br/>Internet2<br/>Level 3 Communications<br/>Megaport<br/>Momentum Telecom<br/>PacketFabric<br/>PCCW Global Limited<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo | | **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | Supported | CoreSite<br/>DE-CIX |
-| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | GlobalConnect<br/>Interxion |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>Cologix<br/>Cox Business Cloud Port<br/>Equinix<br/>Intercloud<br/>Internet2<br/>Level 3 Communications<br/>Megaport<br/>Neutrona Networks<br/>Orange<br/>PacketFabric<br/>Telmex Uninet<br/>Telia Carrier<br/>Transtelco<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | DE-CIX<br/>GlobalConnect<br/>Interxion |
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/)<br/>[Equinix DA6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/dallas-data-centers/da6) | 1 | n/a | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>Cologix<br/>Cox Business Cloud Port<br/>Equinix<br/>GTT<br/>Intercloud<br/>Internet2<br/>Level 3 Communications<br/>Megaport<br/>Neutrona Networks<br/>Orange<br/>PacketFabric<br/>Telmex Uninet<br/>Telia Carrier<br/>Telefonica<br/>Transtelco<br/>Verizon<br/>Vodafone<br/>Zayo |
| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite<br/>Megaport<br/>PacketFabric<br/>Zayo | | **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect<br/>Vodafone | | **Doha2** | [Ooredoo](https://www.ooredoo.qa/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect | | **Dubai** | [PCCS](http://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | Supported | Etisalat UAE |
-| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX<br/>du datamena<br/>Equinix<br/>GBI<br/>Megaport<br/>Orange<br/>Orixcom |
+| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX<br/>du datamena<br/>Equinix<br/>GBI<br/>Lightstorm<br/>Megaport<br/>Orange<br/>Orixcom |
| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect<br/>Colt<br/>eir<br/>Equinix<br/>GEANT<br/>euNetworks<br/>Interxion<br/>Megaport<br/>Zayo |
-| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | Interxion<br/>KPN<br/>Orange |
+| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | InterCloud<br/>Interxion<br/>KPN<br/>Orange |
| **Frankfurt** | [Interxion FRA11](https://www.digitalrealty.com/data-centers/emea/frankfurt) | 1 | Germany West Central | Supported | AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GBI<br/>GEANT<br/>InterCloud<br/>Interxion<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Telia Carrier<br/>T-Systems<br/>Verizon<br/>Zayo |
-| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX<br/>Deutsche Telekom AG<br/>Equinix<br/>InterCloud |
+| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX<br/>Deutsche Telekom AG<br/>Equinix<br/>InterCloud<br/>Telefonica |
| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | Supported | Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>Swisscom |
-| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Chief Telecom<br/>China Telecom Global<br/>China Unicom<br/>Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Zayo |
+| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Chief Telecom<br/>China Telecom Global<br/>China Unicom Global<br/>Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo |
| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International<br/>China Telecom Global<br/>Deutsche Telekom AG<br/>Equinix<br/>iAdvantage<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Vodafone | | **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | Supported | NTT Communications<br/>Telin<br/>XL Axiata | | **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | Supported | BCX<br/>British Telecom<br/>Internet Solutions - Cloud Connect<br/>Liquid Telecom<br/>MTN Global Connect<br/>Orange<br/>Teraco<br/>Vodacom | | **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | DE-CIX<br/>TIME dotCom | | **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | Supported | CenturyLink Cloud Connect<br/>Megaport<br/>PacketFabric | | **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | Supported | AT&T NetBond<br/>Bezeq International<br/>British Telecom<br/>CenturyLink<br/>Colt<br/>Equinix<br/>euNetworks<br/>Intelsat<br/>InterCloud<br/>Internet Solutions - Cloud Connect<br/>Interxion<br/>Jisc<br/>Level 3 Communications<br/>Megaport<br/>MTN<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telehouse - KDDI<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
-| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>GTT<br/>Interxion<br/>IX Reach<br/>JISC<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Ooredoo Cloud Connect<br/>Orange<br/>SES<br/>Sohonet<br/>Telehouse - KDDI<br/>Zayo<br/>Vodafone |
+| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Epsilon Global Communications<br/>GTT<br/>Interxion<br/>IX Reach<br/>JISC<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Ooredoo Cloud Connect<br/>Orange<br/>SES<br/>Sohonet<br/>Telehouse - KDDI<br/>Zayo<br/>Vodafone |
| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>CoreSite<br/>Cloudflare<br/>Equinix*<br/>Megaport<br/>Neutrona Networks<br/>NTT<br/>Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
-| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix<br/>PacketFabric |
-| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX<br/>Interxion<br/>Megaport<br/>Telefonica |
+| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix<br/>GTT<br/>PacketFabric |
+| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX<br/>InterCloud<br/>Interxion<br/>Megaport<br/>Telefonica |
| **Marseille** | [Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt<br/>DE-CIX<br/>GEANT<br/>Interxion<br/>Jaguar Network<br/>Ooredoo Cloud Connect | | **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet<br/>Devoli<br/>Equinix<br/>Megaport<br/>NETSG<br/>NEXTDC<br/>Optus<br/>Orange<br/>Telstra Corporation<br/>TPG Telecom |
-| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>Claro<br/>C3ntro<br/>Equinix<br/>Megaport<br/>Neutrona Networks |
-| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | Supported | Colt<br/>Equinix<br/>Fastweb<br/>IRIDEOS<br/>Retelit<br/>Vodafone |
+| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>Claro<br/>C3ntro<br/>Equinix<br/>Megaport<br/>Neutrona Networks<br/>PitChile |
+| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | Italy North | Supported | Colt<br/>Equinix<br/>Fastweb<br/>IRIDEOS<br/>Retelit<br/>Vodafone |
+| **Milan2** | [DATA4](https://www.data4group.com/it/data-center-a-milano-italia/) | 1 | Italy North | Supported | |
| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) and [Cologix MIN3](https://www.cologix.com/data-centers/minneapolis/min3/) | 1 | n/a | Supported | Cologix<br/>Megaport |
-| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | Supported | Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Fibrenoire<br/>Megaport<br/>Telus<br/>Zayo |
-| **Mumbai** | Tata Communications | 2 | West India | Supported | BSNL<br/>British Telecom<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>Reliance Jio<br/>Sify<br/>Tata Communications<br/>Verizon |
+| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | Supported | Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Fibrenoire<br/>Megaport<br/>RISQ<br/>Telus<br/>Zayo |
+| **Mumbai** | Tata Communications | 2 | West India | Supported | BSNL<br/>British Telecom<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>InterCloud<br/>Reliance Jio<br/>Sify<br/>Tata Communications<br/>Verizon |
| **Mumbai2** | Airtel | 2 | West India | Supported | Airtel<br/>Sify<br/>Orange<br/>Vodafone Idea | | **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | Supported | Colt<br/>DE-CIX<br/>Megaport | | **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | Supported | CenturyLink Cloud Connect<br/>Coresite<br/>Crown Castle<br/>DE-CIX<br/>Equinix<br/>InterCloud<br/>Lightpath<br/>Megaport<br/>Momentum Telecom<br/>NTT Communications<br/>Packet<br/>Zayo | | **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | Supported | British Telecom<br/>Colt<br/>Jisc<br/>Level 3 Communications<br/>Next Generation Data |
-| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO<br/>BBIX<br/>Colt<br/>Equinix<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT SmartConnect<br/>Softbank<br/>Tokai Communications |
+| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO<br/>BBIX<br/>Colt<br/>DE-CIX<br/>Equinix<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT SmartConnect<br/>Softbank<br/>Tokai Communications |
| **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | Supported | GlobalConnect<br/>Megaport<br/>Telenor<br/>Telia Carrier |
-| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Intercloud<br/>Interxion<br/>Jaguar Network<br/>Megaport<br/>Orange<br/>Telia Carrier<br/>Zayo<br/>Verizon |
-| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix |
+| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>euNetworks<br/>Intercloud<br/>Interxion<br/>Jaguar Network<br/>Megaport<br/>Orange<br/>Telia Carrier<br/>Zayo<br/>Verizon |
+| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix<br/>InterCloud<br/>Orange |
| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Equinix<br/>Megaport<br/>NextDC | | **Phoenix** | [EdgeConneX PHX01](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | 1 | West US 3 | Supported | Cox Business Cloud Port<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Megaport<br/>Zayo | | **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | Supported | |
-| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | Supported | Lightstorm<br/>Tata Communications |
-| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada<br/>Equinix<br/>Megaport<br/>Telus |
+| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | Supported | Airtel<br/>Lightstorm<br/>Tata Communications |
+| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada<br/>Equinix<br/>Megaport<br/>RISQ<br/>Telus |
| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Cirion Technologies<br/>Megaport<br/>Transtelco | | **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | Supported | | | **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Cirion Technologies<br/>Equinix |
| **Santiago** | [EdgeConnex SCL](https://www.edgeconnex.com/locations/south-america/santiago/) | 3 | n/a | Supported | PitChile | | **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks<br/>Ascenty Data Centers<br/>British Telecom<br/>Equinix<br/>InterCloud<br/>Level 3 Communications<br/>Neutrona Networks<br/>Orange<br/>RedCLARA<br/>Tata Communications<br/>Telefonica<br/>UOLDIVEO | | **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers<br/>Tivit |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks<br/>CenturyLink Cloud Connect<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>PacketFabric<br/>Telus<br/>Zayo |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>PacketFabric<br/>Telus<br/>Zayo |
| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | Supported | KINX<br/>KT<br/>LG CNS<br/>LGUplus<br/>Equinix<br/>Sejong Telecom<br/>SK Telecom | | **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT | | **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Equinix<br/>InterCloud<br/>Internet2<br/>IX Reach<br/>Packet<br/>PacketFabric<br/>Level 3 Communications<br/>Megaport<br/>Orange<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo | | **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | Supported | Colt<br/>Coresite |
-| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>China Mobile International<br/>Epsilon Global Communications<br/>Equinix<br/>InterCloud<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>SingTel<br/>Tata Communications<br/>Telstra Corporation<br/>Verizon<br/>Vodafone |
-| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | Supported | CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/> DE-CIX<br/>Epsilon Global Communications<br/>Equinix<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Telehouse - KDDI |
+| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>China Mobile International<br/>Epsilon Global Communications<br/>Equinix<br/>GTT<br/>InterCloud<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>SingTel<br/>Tata Communications<br/>Telstra Corporation<br/>Telefonica<br/>Verizon<br/>Vodafone |
+| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | Supported | CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Epsilon Global Communications<br/>Equinix<br/>Lightstorm<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Telehouse - KDDI |
| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | Supported | GlobalConnect<br/>Megaport<br/>Telenor | | **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | Sweden Central | Supported | Equinix<br/>GlobalConnect<br/>Interxion<br/>Megaport<br/>Telia Carrier |
-| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet<br/>AT&T NetBond<br/>British Telecom<br/>Devoli<br/>Equinix<br/>Kordia<br/>Megaport<br/>NEXTDC<br/>NTT Communications<br/>Optus<br/>Orange<br/>Spark NZ<br/>Telstra Corporation<br/>TPG Telecom<br/>Verizon<br/>Vocus Group NZ |
+| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet<br/>AT&T NetBond<br/>British Telecom<br/>Devoli<br/>Equinix<br/>GTT<br/>Kordia<br/>Megaport<br/>NEXTDC<br/>NTT Communications<br/>Optus<br/>Orange<br/>Spark NZ<br/>Telstra Corporation<br/>TPG Telecom<br/>Verizon<br/>Vocus Group NZ |
| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport<br/>NETSG<br/>NextDC | | **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom<br/>Chunghwa Telecom<br/>FarEasTone |
-| **Tel Aviv** | Bezeq International | 2 | n/a | Supported | |
+| **Tel Aviv** | Bezeq International | 2 | Israel Central | Supported | Bezeq International |
+| **Tel Aviv2** | SDS | 2 | Israel Central | Supported | |
| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks<br/>AT&T NetBond<br/>BBIX<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Intercloud<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT EAST<br/>Orange<br/>Softbank<br/>Telehouse - KDDI<br/>Verizon </br></br> |
-| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO<br/>China Unicom Global<br/>Colt<br/>Equinix<br/>IX Reach<br/>Megaport<br/>PCCW Global Limited<br/>Tokai Communications |
+| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Equinix<br/>IX Reach<br/>Megaport<br/>PCCW Global Limited<br/>Tokai Communications |
| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | NEC<br/>SCSK | | **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond<br/>Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Equinix<br/>IX Reach Megaport<br/>Orange<br/>Telus<br/>Verizon<br/>Zayo |
-| **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | Fibrenoire |
+| **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | Fibrenoire<br/>Zayo |
| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada<br/>Cologix<br/>Megaport<br/>Telus<br/>Zayo | | **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix, Orange Poland, T-mobile Poland |
-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Equinix<br/>Internet2<br/>InterCloud<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Zayo |
+| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Equinix<br/>Internet2<br/>InterCloud<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo |
| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US<br/>East US 2 | n/a | CenturyLink Cloud Connect<br/>Coresite<br/>Intelsat<br/>Megaport<br/>Momentum Telecom<br/>Viasat<br/>Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt<br/>Equinix<br/>Intercloud<br/>Interxion<br/>Megaport<br/>Swisscom<br/>Zayo |
+| **Zurich2** | [Equinix ZH5](https://www.equinix.com/data-centers/europe-colocation/switzerland-colocation/zurich-data-centers/zh5) | 1 | Switzerland North | Supported | Equinix |
### National cloud environments
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 09/15/2023 Last updated : 11/06/2023
The following table shows locations by service provider. If you want to view ava
|Service provider | Microsoft Azure | Microsoft 365 | Locations | | | | | | | **[AARNet](https://www.aarnet.edu.au/network-and-services/connectivity-services/azure-expressroute)** |Supported |Supported | Melbourne<br/>Sydney |
-| **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2<br/>Mumbai2 |
+| **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2<br/>Mumbai2<br/>Pune |
| **[AIS](https://business.ais.co.th/solution/en/azure-expressroute.html)** | Supported | Supported | Bangkok | | **[Aryaka Networks](https://www.aryaka.com/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Hong Kong<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC | | **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** | Supported | Supported | Campinas<br/>Sao Paulo<br/>Sao Paulo2 |
The following table shows locations by service provider. If you want to view ava
| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka<br/>Tokyo<br/>Tokyo2 | | **[BCX](https://www.bcx.co.za/solutions/connectivity/)** | Supported | Supported | Cape Town<br/>Johannesburg| | **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** | Supported | Supported | Montreal<br/>Toronto<br/>Quebec City<br/>Vancouver |
-| **[Bezeq International](https://selfservice.bezeqint.net/english)** | Supported | Supported | London |
+| **[Bezeq International](https://selfservice.bezeqint.net/english)** | Supported | Supported | London<br/>Tel Aviv |
| **[BICS](https://www.bics.com/cloud-connect/)** | Supported | Supported | Amsterdam2<br/>London2 | | **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai<br/>Newport(Wales)<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC | | **BSNL** | Supported | Supported | Chennai<br/>Mumbai |
The following table shows locations by service provider. If you want to view ava
| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported | Hong Kong<br/>Taipei | | **China Mobile International** |Supported |Supported | Hong Kong<br/>Hong Kong2<br/>Singapore | | **China Telecom Global** |Supported |Supported | Hong Kong<br/>Hong Kong2 |
+| **China Unicom Global** |Supported |Supported | Frankfurt<br/>Hong Kong<br/>Singapore2<br/>Tokyo2 |
| **Chunghwa Telecom** |Supported |Supported | Taipei |
+| **[Cinia](https://www.cinia.fi/)** |Supported |Supported | Amsterdam2 |
+| **[Cirion Technologies](https://lp.ciriontechnologies.com/cloud-connect-lp-latam?c_campaign=HOTSITE&c_tactic=&c_subtactic=&utm_source=SOLUCIONES-CTA&utm_medium=Organic&utm_content=&utm_term=&utm_campaign=HOTSITE-ESP)** | Supported | Supported | Queretaro<br/>Rio De Janeiro |
| **Claro** |Supported |Supported | Miami | | **Cloudflare** |Supported |Supported | Los Angeles | | **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** |Supported |Supported | Chicago<br/>Dallas<br/>Minneapolis<br/>Montreal<br/>Toronto<br/>Vancouver<br/>Washington DC |
-| **[Cirion Technologies](https://lp.ciriontechnologies.com/cloud-connect-lp-latam?c_campaign=HOTSITE&c_tactic=&c_subtactic=&utm_source=SOLUCIONES-CTA&utm_medium=Organic&utm_content=&utm_term=&utm_campaign=HOTSITE-ESP)** | Supported | Supported | Bogota<br/>Queretaro<br/>Rio De Janeiro |
| **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami | | **Cloudflare** |Supported |Supported | Los Angeles | | **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** | Supported | Supported | Chicago<br/>Dallas<br/>Minneapolis<br/>Montreal<br/>Toronto<br/>Vancouver<br/>Washington DC |
The following table shows locations by service provider. If you want to view ava
| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** | Supported | Supported | Chicago<br/>Chicago2<br/>Denver<br/>Los Angeles<br/>New York<br/>Silicon Valley<br/>Silicon Valley2<br/>Washington DC<br/>Washington DC2 | | **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** | Supported | Supported | Dallas<br/>Phoenix<br/>Silicon Valley<br/>Washington DC | | **Crown Castle** | Supported | Supported | New York<br/>Washington DC |
-| **[DE-CIX](https://www.de-cix.net/en/services/directcloud/microsoft-azure)** | Supported |Supported | Amsterdam2<br/>Chennai<br/>Chicago2<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Frankfurt2<br/>Kuala Lumpur<br/>Madrid<br/>Marseille<br/>Mumbai<br/>Munich<br/>New York<br/>Phoenix<br/>Singapore2 |
+| **[DE-CIX](https://www.de-cix.net/en/services/directcloud/microsoft-azure)** | Supported |Supported | Amsterdam2<br/>Chennai<br/>Chicago2<br/>Copenhagen<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Frankfurt2<br/>Kuala Lumpur<br/>Madrid<br/>Marseille<br/>Mumbai<br/>Munich<br/>New York<br/>Osaka<br/>Phoenix<br/>Seattle<br/>Singapore2<br/>Tokyo2 |
| **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland<br/>Melbourne<br/>Sydney | | **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt |
-| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-services/managed-platform-services/azure-managed-services/cloudconnect-for-azure)** | Supported |Supported | Frankfurt2<br/>Hong Kong2 |
+| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-services/managed-platform-services/azure-managed-services/cloudconnect-for-azure)** | Supported |Supported | Amsterdam<br/>Frankfurt2<br/>Hong Kong2 |
| **du datamena** |Supported |Supported | Dubai2 | | **[eir evo](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin |
-| **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** | Supported | Supported | Hong Kong2<br/>Singapore<br/>Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Bogota<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Melbourne<br/>Miami<br/>Milan<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
+| **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** | Supported | Supported | Hong Kong2<br/>London2<br/>Singapore<br/>Singapore2 |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Melbourne<br/>Miami<br/>Milan<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich<br/>Zurich2<br/><br/> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported | Dubai |
-| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>London |
+| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>London<br/>Paris |
| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | Supported | Supported | Taipei | | **[Fastweb](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | Supported |Supported | Milan | | **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** | Supported | Supported | Montreal<br/>Quebec City<br/>Toronto2 |
The following table shows locations by service provider. If you want to view ava
| **[GÉANT](https://www.geant.org/Networks)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>Marseille | | **[GlobalConnect](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported | Supported | Copenhagen<br/>Oslo<br/>Stavanger<br/>Stockholm | | **[GlobalConnect DK](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported | Supported | Amsterdam |
-| **GTT** |Supported |Supported | Amsterdam<br/>London2<br/>Washington DC |
+| **GTT** |Supported |Supported | Amsterdam<br/>Dallas<br/>Los Angeles2<br/>London2<br/>Singapore<br/>Sydney<br/>Washington DC |
| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai<br/>Mumbai | | **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 | | **Intelsat** | Supported | Supported | London2<br/>Washington DC2 |
-| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>London<br/>New York<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC<br/>Zurich |
+| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Dublin2<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>London<br/>Madrid<br/>Mumbai<br/>New York<br/>Paris<br/>Paris2<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC<br/>Zurich |
| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** | Supported | Supported | Chicago<br/>Dallas<br/>Silicon Valley<br/>Washington DC | | **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** | Supported | Supported | Osaka<br/>Tokyo<br/>Tokyo2 | | **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** | Supported | Supported | Cape Town<br/>Johannesburg<br/>London |
The following table shows locations by service provider. If you want to view ava
| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>London<br/>Newport (Wales)<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Washington DC | | **LG CNS** | Supported | Supported | Busan<br/>Seoul | | **Lightpath** | Supported | Supported | New York<br/>Washington DC |
-| **[Lightstorm](https://polarin.lightstorm.net/)** | Supported | Supported | Pune<br/>Chennai |
+| **[Lightstorm](https://polarin.lightstorm.net/)** | Supported | Supported | Chennai<br/>Dubai2<br/>Pune<br/>Singapore2 |
| **[Liquid Intelligent Technologies](https://liquidcloud.africa/connect/)** | Supported | Supported | Cape Town<br/>Johannesburg | | **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** | Supported | Supported | Amsterdam<br/>Atlanta<br/>Auckland<br/>Chicago<br/>Dallas<br/>Denver<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>Las Vegas<br/>London<br/>London2<br/>Los Angeles<br/>Madrid<br/>Melbourne<br/>Miami<br/>Minneapolis<br/>Montreal<br/>Munich<br/>New York<br/>Osaka<br/>Oslo<br/>Paris<br/>Perth<br/>Phoenix<br/>Quebec City<br/>Queretaro (Mexico)<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stavanger<br/>Stockholm<br/>Sydney<br/>Sydney2<br/>Tokyo<br/>Tokyo2 Toronto<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Auckland<br/>Chicago<br/>Dallas<br/>Denver<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>Las Vegas<br/>London<br/>London2<br/>Los Angeles<br/>Madrid<br/>Melbourne<br/>Miami<br/>Minneapolis<br/>Montreal<br/>Munich<br/>New York<br/>Osaka<br/>Oslo<br/>Paris<br/>Perth<br/>Phoenix<br/>Quebec City<br/>Queretaro (Mexico)<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stavanger<br/>Stockholm<br/>Sydney<br/>Sydney2<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich |
| **[Momentum Telecom](https://gomomentum.com/)** | Supported | Supported | Chicago<br/>New York<br/>Washington DC2 | | **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Supported | Supported | London | | **MTN Global Connect** | Supported | Supported | Cape Town<br/>Johannesburg|
The following table shows locations by service provider. If you want to view ava
| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported | Osaka | | **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Doha<br/>Doha2<br/>London2<br/>Marseille | | **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** |Supported |Supported | Melbourne<br/>Sydney |
-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC |
-| **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 |
-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Silicon Valley<br/>Toronto<br/>Washington DC |
-| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 |
-| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** | Supported | Supported | Tokyo |
-| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** | Supported | Supported | Amsterdam2<br/>Berlin<br/>Frankfurt<br/>London2 |
-| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** | Supported | Supported | Osaka |
-| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** | Supported | Supported | Doha<br/>Doha2<br/>London2<br/>Marseille |
-| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Supported | Supported | Melbourne<br/>Sydney |
-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2 Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon
+| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Paris2<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
| **[Orange Poland](https://www.orange.pl/duze-firmy)** | Supported | Supported | Warsaw | | **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 | | **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC | | **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 |
-| **PitChile** | Supported | Supported | Santiago |
+| **PitChile** | Supported | Supported | Santiago<br/>Miami |
| **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland | | **RedCLARA** | Supported | Supported | Sao Paulo | | **[Reliance Jio](https://www.jio.com/business/jio-cloud-connect)** | Supported | Supported | Mumbai | | **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan |
+| **RISQ** |Supported | Supported | Quebec City<br/>Montreal |
| **SCSK** |Supported | Supported | Tokyo3 | | **[Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms)** | Supported | Supported | Seoul | | **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported | Supported | London2<br/>Washington DC |
The following table shows locations by service provider. If you want to view ava
| **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** | Supported | Supported | Auckland<br/>Sydney | | **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva<br/>Zurich | | **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** | Supported | Supported | Amsterdam<br/>Chennai<br/>Chicago<br/>Hong Kong<br/>London<br/>Mumbai<br/>Pune<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Washington DC |
-| **[Telefonica](https://www.telefonica.com/es/home)** | Supported | Supported | Amsterdam<br/>Sao Paulo<br/>Madrid |
+| **[Telefonica](https://www.telefonica.com/es/home)** | Supported | Supported | Amsterdam<br/>Dallas<br/>Frankfurt2<br/>Hong Kong<br/>Madrid<br/>Sao Paulo<br/>Singapore<br/>Washington DC |
| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** | Supported | Supported | London<br/>London2<br/>Singapore2 | | **Telenor** |Supported |Supported | Amsterdam<br/>London<br/>Oslo<br/>Stavanger | | **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong<br/>London<br/>Oslo<br/>Paris<br/>Seattle<br/>Silicon Valley<br/>Stockholm<br/>Washington DC |
The following table shows locations by service provider. If you want to view ava
| **[Vi (Vodafone Idea)](https://www.myvi.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Chennai<br/>Mumbai2 | | **Vodafone Qatar** | Supported | Supported | Doha | | **XL Axiata** | Supported | Supported | Jakarta |
-| **[Zayo](https://www.zayo.com/services/packet/cloudlink/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>London<br/>London2<br/>Los Angeles<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich|
+| **[Zayo](https://www.zayo.com/services/packet/cloudlink/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>London<br/>London2<br/>Los Angeles<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Toronto2<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich|
### National cloud environment
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
az k8s-extension delete --cluster-type connectedClusters --cluster-name <CLUSTER
The Azure Policy language structure for managing Kubernetes follows that of existing policy definitions. There are sample definition files available to assign in [Azure Policy's built-in policy library](../samples/built-in-policies.md) that can be used to govern your cluster components.
-Azure Policy for Kubernetes also support custom definition creation at the component-level for both Azure Kubernetes Service clusters and Azure Arc-enabled Kubernetes clusters. Constraint template and mutation template samples are available in the [Gatekeeper community library](https://github.com/open-policy-agent/gatekeeper-library/tree/master). [Azure Policy's VS Code Extension](../how-to/extension-for-vscode.md#create-policy-definition-from-constraint-template) can be used to help translate an existing constraint template or mutation template to a custom Azure Policy policy definition.
+Azure Policy for Kubernetes also supports custom definition creation at the component level for both Azure Kubernetes Service clusters and Azure Arc-enabled Kubernetes clusters. Constraint template and mutation template samples are available in the [Gatekeeper community library](https://github.com/open-policy-agent/gatekeeper-library/tree/master). [Azure Policy's VS Code Extension](../how-to/extension-for-vscode.md#create-policy-definition-from-a-constraint-template-or-mutation-template) can be used to help translate an existing constraint template or mutation template to a custom Azure Policy policy definition.
With a [Resource Provider mode](./definition-structure.md#resource-provider-modes) of `Microsoft.Kubernetes.Data`, the effects [audit](./effects.md#audit), [deny](./effects.md#deny), [disabled](./effects.md#disabled), and [mutate](./effects.md#mutate-preview) are used to manage your Kubernetes clusters.
governance Extension For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/extension-for-vscode.md
example:
> > The evaluation feature does not work on macOS and Linux installations of the extension.
-### Create policy definition from constraint template
+### Create policy definition from a constraint template or mutation template
The VS Code extension can create a policy definition from an existing [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) GateKeeper v3
-[constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates). The YAML Ain't Markup Language (YAML)
+[constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates) or an existing [mutation template](https://open-policy-agent.github.io/gatekeeper/website/docs/mutation/). The YAML Ain't Markup Language (YAML)
file must be open in VS Code for the Command Palette to be an option.

1. Open a valid OPA GateKeeper v3 constraint template YAML file.
1. From the menu bar, go to **View** > **Command Palette**, and enter **Azure Policy for Kubernetes:
- Create Policy Definition from Constraint Template**.
+ Create Policy Definition from Constraint Template Or Mutation**.
1. Select the appropriate _sourceType_ value.
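For reference, a minimal OPA Gatekeeper v3 constraint template of the kind the extension can translate looks like the well-known required-labels sample from the Gatekeeper documentation (shown here as a sketch; the template name and Rego rule are the community example, not part of this article):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
```

Opening a file like this in VS Code is what makes the **Create Policy Definition** command available in the Command Palette.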
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
initiative definition.
|[An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |Audit provisioning of an Azure Active Directory administrator for your PostgreSQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_AuditServerADAdmins_Audit.json) |
|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
|[Azure MySQL flexible server should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40e85574-ef33-47e8-a854-7a65c7500560) |Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure MySQL flexible server can exclusively be accessed by Azure Active Directory identities. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_ADOnlyEnabled_Audit.json) |
-|[Azure SQL Database should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabda6d70-9778-44e7-84a8-06713e6db027) |Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Databases can exclusively be accessed by Azure Active Directory identities. Learn more at: aka.ms/adonlycreate. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_ADOnlyEnabled_Deny.json) |
-|[Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F78215662-041e-49ed-a9dd-5385911b3a1f) |Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities. Learn more at: aka.ms/adonlycreate. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_ADOnlyEnabled_Deny.json) |
+|[Azure SQL Database should have Microsoft Entra-only authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabda6d70-9778-44e7-84a8-06713e6db027) |Disabling local authentication methods and allowing only Microsoft Entra authentication improves security by ensuring that Azure SQL Databases can exclusively be accessed by Microsoft Entra identities. Learn more at: aka.ms/adonlycreate. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_ADOnlyEnabled_Deny.json) |
+|[Azure SQL Managed Instance should have Microsoft Entra-only authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F78215662-041e-49ed-a9dd-5385911b3a1f) |Disabling local authentication methods and allowing only Microsoft Entra authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Microsoft Entra identities. Learn more at: aka.ms/adonlycreate. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_ADOnlyEnabled_Deny.json) |
|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
-|[Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |Azure Active Directory (AAD) only authentication methods improves security by ensuring that Synapse Workspaces exclusively require AAD identities for authentication. Learn more at: [https://aka.ms/Synapse](https://aka.ms/Synapse). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynaspeWorkspaceAadOnlyAuthentication_Audit.json) |
+|[Synapse Workspaces should use only Microsoft Entra identities for authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |Microsoft Entra-only authentication improves security by ensuring that Synapse Workspaces exclusively require Microsoft Entra identities for authentication. Learn more at: [https://aka.ms/Synapse](https://aka.ms/Synapse). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynaspeWorkspaceAadOnlyAuthentication_Audit.json) |
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
|[Virtual machines and virtual machine scale sets should have encryption at host enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc4d8e41-e223-45ea-9bf5-eada37891d87) |Use encryption at host to get end-to-end encryption for your virtual machine and virtual machine scale set data. Encryption at host enables encryption at rest for your temporary disk and OS/data disk caches. Temporary and ephemeral OS disks are encrypted with platform-managed keys when encryption at host is enabled. OS/data disk caches are encrypted at rest with either customer-managed or platform-managed key, depending on the encryption type selected on the disk. Learn more at [https://aka.ms/vm-hbe](https://aka.ms/vm-hbe). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/HostBasedEncryptionRequired_Deny.json) |
|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Cis Azure 1 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Cis Azure 2 0 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-2-0-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 2.0.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 2.0.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 (Azure Government) description: Details of the NIST SP 800-171 R2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 (Azure Government) description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
initiative definition.
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |
-### Information Flow Enforcement (AC-4)
+### Account Management (AC-2)
-**ID**: IRS 1075 9.3.1.4
+**ID**: IRS 1075 9.3.1.2
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Nl Bio Cloud Theme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nl-bio-cloud-theme.md
Title: Regulatory Compliance details for NL BIO Cloud Theme description: Details of the NL BIO Cloud Theme Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Nz Ism Restricted 3 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nz-ism-restricted-3-5.md
Title: Regulatory Compliance details for NZ ISM Restricted v3.5 description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Pci Dss 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md
Title: Regulatory Compliance details for PCI DSS v4.0 description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Rbi Itf Banks 2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md
Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Rbi Itf Nbfc 2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md
Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Swift Csp Cscf 2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2021.md
Title: Regulatory Compliance details for SWIFT CSP-CSCF v2021 description: Details of the SWIFT CSP-CSCF v2021 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/23/2023 Last updated : 11/06/2023
governance Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/troubleshoot/alerts.md
Title: Troubleshoot Azure Resource Graph alerts description: Learn how to troubleshoot issues with Azure Resource Graph alerts integration with Log Analytics. Previously updated : 10/31/2023 Last updated : 11/07/2023
The following descriptions help you troubleshoot queries for Azure Resource Graph alerts that integrate with Log Analytics.
-## Azure Resource Graph operators
+## Operators and functions
-Only the operators supported in Azure Resource Graph Explorer are supported as part of this integration with Log Analytics for alerts. For more information, go to [supported operators](../concepts/query-language.md#supported-kql-language-elements).
+Many [supported operators](../concepts/query-language.md#supported-kql-language-elements) in Azure Resource Graph Explorer work with the Log Analytics integration for alerts.
+
+But because the Azure Resource Graph alerts integration is in preview, some operators and functions that work in Azure Resource Graph are unsupported with the Log Analytics integration.
+
+The following are known unsupported operators and functions:
+
+| Operator/function | Type |
+| - | - |
+| `join` | operator <br/>The integration works when you join an Azure Resource Graph table with a Log Analytics table. The integration doesn't work if you join two or more Azure Resource Graph tables. |
+| `mv-apply` | operator |
+| `arg_min()` | scalar function |
+| `avg()`, `avgif()` | aggregation function |
+| `percentile()`, `percentiles()`, `percentilew()`, `percentilesw()` | aggregation function |
+| `rand()` | scalar function |
+| `stdev()`, `stdevif()`, `stdevp()` | aggregation function |
+| `variance()`, `varianceif()`, `variancep()` | aggregation function |
+| Using keys with bag functions | scalar function |
+
+For more information about operators and functions, go to [tabular operators](/azure/data-explorer/kusto/query/queries), [scalar functions](/azure/data-explorer/kusto/query/scalarfunctions), and [aggregation functions](/azure/data-explorer/kusto/query/aggregation-functions).
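As a sketch of the `join` limitation in the table above: joining an Azure Resource Graph table with a Log Analytics table works, while joining two Azure Resource Graph tables doesn't. The query below is illustrative only (it assumes the `arg("")` cross-service pattern for querying Resource Graph from Log Analytics, and the built-in `Heartbeat` table):

```kusto
// Supported: Azure Resource Graph table joined with a Log Analytics table
arg("").Resources
| where type == "microsoft.compute/virtualmachines"
| project vmName = name, resourceId = tolower(id)
| join kind=inner (
    Heartbeat
    | summarize lastHeartbeat = max(TimeGenerated) by _ResourceId
  ) on $left.resourceId == $right._ResourceId

// Unsupported in this integration: two Azure Resource Graph tables joined together
// arg("").Resources
// | join (arg("").ResourceContainers) on subscriptionId
```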
## Pagination
Azure Resource Graph has pagination in its dedicated APIs. But with the way Log
The managed identity for your alert must have the role [Log Analytics Contributor](../../../role-based-access-control/built-in-roles.md#log-analytics-contributor) or [Log Analytics Reader](../../../role-based-access-control/built-in-roles.md#log-analytics-reader). The role provides the permissions to get monitoring information.
-When you set up an alert, the results can be different than the result after the alert is fired. The reason is that a fired alert is run based on managed identity, but when you manually test an alert it's based on the user's identity.
+When you set up an alert, the results can be different than the result after the alert is fired. The reason is that a fired alert is run using a managed identity, but when you manually test an alert it uses the user's identity.
## Table names
hdinsight-aks Monitor With Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/monitor-with-prometheus-grafana.md
Title: Monitoring with Azure Managed Prometheus and Grafana
description: Learn how to monitor with Azure Managed Prometheus and Grafana Previously updated : 10/27/2023 Last updated : 11/07/2023 # Monitoring with Azure Managed Prometheus and Grafana
User permission: For viewing Azure Managed Grafana, "Grafana Viewer" role is
> For viewing other roles for Grafana users, see [here](../managed-grafan). ## View metrics
-You can use the Grafana dashboard to view the service and system. Trino cluster as an example, assuming few jobs are executed in the cluster.
-1. Open the Grafana link in the cluster overview page.
+We use an Apache Spark&trade; cluster as an example in this case, assuming a few jobs are executed in the cluster so that metrics are available.
- :::image type="content" source="./media/monitor-with-prometheus-grafana/view-metrics.png" alt-text="Screenshot showing how to view-metrics." border="true" lightbox="./media/monitor-with-prometheus-grafana/view-metrics.png":::
+Review the following steps to use the Grafana sample templates:
-1. The default value on the Explore tab is **Grafana**.
-1. Select on the dropdown and click on the `Managed Prometheus.…. <workspace name>` option and select the parameters of the time frame required.
+1. Download the sample template from [here](https://github.com/Azure-Samples/hdinsight-aks/tree/main/sample-grafana-template) for the respective workloads (download the Apache Spark template in this case).
- :::image type="content" source="./media/monitor-with-prometheus-grafana/set-time-frame.png" alt-text="Screenshot showing how to set time frame." border="true" lightbox="./media/monitor-with-prometheus-grafana/set-time-frame.png":::
+1. Log in to the Grafana dashboard from your cluster.
-1. Next Select the metric you want to see.
+ :::image type="content" source="./media/monitor-with-prometheus-grafana/login-to-grafana-dashboard.png" alt-text="Screenshot showing how to log in to the Grafana dashboard." border="true" lightbox="./media/monitor-with-prometheus-grafana/login-to-grafana-dashboard.png":::
- :::image type="content" source="./media/monitor-with-prometheus-grafana/metric-type.png" alt-text="Screenshot showing how to metric type." border="true" lightbox="./media/monitor-with-prometheus-grafana/metric-type.png":::
+1. Once the Grafana dashboard page opens, select **New** > **Import**.
-1. Click on **Run Query** and select the timeframe on how often the query should be run.
+ :::image type="content" source="./media/monitor-with-prometheus-grafana/grafana-dashboard.png" alt-text="Screenshot showing the Grafana dashboard page." border="true" lightbox="./media/monitor-with-prometheus-grafana/grafana-dashboard.png":::
- :::image type="content" source="./media/monitor-with-prometheus-grafana/run-query.png" alt-text="Screenshot showing how to run query." border="true" lightbox="./media/monitor-with-prometheus-grafana/run-query.png":::
+1. Select **Upload dashboard JSON file**, upload the Apache Spark Grafana template that you downloaded, and then select **Import**.
-1. View the metric as per selection.
+ :::image type="content" source="./media/monitor-with-prometheus-grafana/upload-dashboard-json-file.png" alt-text="Screenshot showing how to upload the dashboard JSON file." border="true" lightbox="./media/monitor-with-prometheus-grafana/upload-dashboard-json-file.png":::
- :::image type="content" source="./media/monitor-with-prometheus-grafana/view-output.png" alt-text="Screenshot showing how to view the output." border="true" lightbox="./media/monitor-with-prometheus-grafana/view-output.png":::
+1. After the upload is complete, you can select the dashboard to view the metrics.
+ :::image type="content" source="./media/monitor-with-prometheus-grafana/matrix-view.png" alt-text="Screenshot showing the dashboard metrics view." border="true" lightbox="./media/monitor-with-prometheus-grafana/matrix-view.png":::
+
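The import step above can also be scripted against Grafana's HTTP API, whose dashboard import endpoint is `POST /api/dashboards/db`. The sketch below only builds the request body from a downloaded template; the minimal stand-in template and the posting mechanics are illustrative assumptions, not part of the HDInsight documentation:

```python
import json

def build_import_payload(template_text: str, overwrite: bool = True) -> dict:
    """Wrap a downloaded dashboard template in the body that Grafana's
    POST /api/dashboards/db endpoint expects."""
    dashboard = json.loads(template_text)
    # Drop any exported id so Grafana imports the dashboard as new instead of
    # trying to update a dashboard id that doesn't exist in this instance.
    dashboard.pop("id", None)
    return {"dashboard": dashboard, "overwrite": overwrite}

# Minimal stand-in for a real template from the sample repository.
template = '{"id": 17, "title": "Apache Spark metrics", "panels": []}'
payload = build_import_payload(template)
print(json.dumps(payload, indent=2))
```

You would then POST the payload (with an `Authorization: Bearer <token>` header) to `https://<your-grafana-endpoint>/api/dashboards/db`.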
## Reference * Apache, Apache Spark, Spark, and associated open source project names are [trademarks](./trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
# Get started with the MedTech service
-This article and diagram outlines the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These steps may help you to assess the [MedTech service deployment methods](deploy-choose-method.md) and determine which deployment method is best for you.
+This article and diagram outlines the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These steps might help you to assess the [MedTech service deployment methods](deploy-choose-method.md) and determine which deployment method is best for you.
-As a prerequisite, you need an Azure subscription and have been granted the proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, or REST API scripts.
+As a prerequisite, you need an Azure subscription and the proper permissions granted to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, or REST API scripts.
> [!TIP] > See the MedTech service article, [Choose a deployment method for the MedTech service](deploy-choose-method.md), for a description of the different deployment methods that can help to simplify and automate the deployment of the MedTech service.
After you obtain the required subscription prerequisites, the first step is to d
* Azure resource group. * Azure Event Hubs namespace and event hub. * Azure Health Data Services workspace.
-* Azure Health Data Services FHIR service.
+* Azure Health Data Services FHIR&reg; service.
Once the prerequisite resources are available, deploy:
Deploy a [FHIR service](../fhir/fhir-portal-quickstart.md) into your resource gr
### Deploy a MedTech service
-If you have successfully deployed the prerequisite resources, you're now ready to deploy the [MedTech service](deploy-manual-portal.md) using your workspace.
+If you successfully deployed the prerequisite resources, you're now ready to deploy the [MedTech service](deploy-manual-portal.md) using your workspace.
## Next steps
healthcare-apis Git Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/git-projects.md
Check out our open-source software (OSS) projects on GitHub, which provide sourc
### FHIR integration
-* [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): Open-source version of the Azure Health Data Services MedTech service managed service. Can be used with any FHIR service that supports [HL7 FHIR](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491).
+* [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): Open-source version of the Azure Health Data Services MedTech service managed service. Can be used with any FHIR&reg; service that supports [HL7 FHIR](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491).
### Wearables integration
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
Metric category|Metric name|Metric description|
|Errors|Total Error Count|The total number of errors.| |Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](overview-of-device-data-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.| |Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](overview-of-device-data-processing-stages.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.|
+|Traffic|Number of Fhir resources saved|The total number of FHIR&reg; resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.|
|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](overview-of-device-data-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.| |Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-data-processing-stages.md#transform) of the MedTech service.| |Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.|
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
If you choose to include your Log Analytics workspace as a destination option fo
:::image type="content" source="media/how-to-enable-diagnostic-settings/query-result-with-errors.png" alt-text="Screenshot of query with health issues." lightbox="media/how-to-enable-diagnostic-settings/query-result-with-errors.png":::
-5. Select the down arrow in one of the error logs to display the full error log message, which can be used to help troubleshoot issues with your MedTech service. In this example, the error log message shows that the MedTech service wasn't able to authenticate with the FHIR service.
+5. Select the down arrow in one of the error logs to display the full error log message, which can be used to help troubleshoot issues with your MedTech service. In this example, the error log message shows that the MedTech service wasn't able to authenticate with the FHIR&reg; service.
:::image type="content" source="media/how-to-enable-diagnostic-settings/display-log-error-message.png" alt-text="Screenshot of log error message." lightbox="media/how-to-enable-diagnostic-settings/display-log-error-message.png":::
healthcare-apis How To Use Calculatedcontent Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-templates.md
The CalculatedContent templates allow matching on and extracting values from a d
|timestampExpression|The expression to extract the timestamp value for the measurement's `OccurrenceTimeUtc` value.|`$.matchedToken.endDate`|`@.matchedToken.endDate`| |patientIdExpression|The expression to extract the patient identifier. *Required* when the MedTech services's **Resolution type** is set to **Create**, and *optional* when the MedTech service's **Resolution type** is set to **Lookup**.|`$.matchedToken.patientId`|`@.matchedToken.patientId`| |encounterIdExpression|*Optional*: The expression to extract the encounter identifier.|`$.matchedToken.encounterId`|`@.matchedToken.encounterId`
-|correlationIdExpression|*Optional*: The expression to extract the correlation identifier. You can use this output to group values into a single observation in the FHIR destination mapping.|`$.matchedToken.correlationId`|`@.matchedToken.correlationId`|
+|correlationIdExpression|*Optional*: The expression to extract the correlation identifier. You can use this output to group values into a single observation in the FHIR&reg; destination mapping.|`$.matchedToken.correlationId`|`@.matchedToken.correlationId`|
|values[].valueExpression|The expression to extract the wanted value.|`$.matchedToken.heartRate`|`@.matchedToken.heartRate`| > [!NOTE]
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
# How to use custom functions with the MedTech service device mapping
-Many functions are available when using **JMESPath** as the expression language. Besides the built-in functions available as part of the [JMESPath specification](https://jmespath.org/specification.html#built-in-functions), many more custom functions may also be used. This article describes how to use the MedTech service-specific custom functions with the MedTech service [device mapping](overview-of-device-mapping.md).
+Many functions are available when using **JMESPath** as the expression language. Besides the built-in functions available as part of the [JMESPath specification](https://jmespath.org/specification.html#built-in-functions), many more custom functions can also be used. This article describes how to use the MedTech service-specific custom functions with the MedTech service [device mapping](overview-of-device-mapping.md).
> [!TIP]
-> You can use the MedTech service [Mapping debugger](how-to-use-mapping-debugger.md) for assistance creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
+> You can use the MedTech service [Mapping debugger](how-to-use-mapping-debugger.md) for assistance creating, updating, and troubleshooting the MedTech service device and FHIR&reg; destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
## Function signature
The signature indicates the valid types for the arguments. If an invalid type is
> [!IMPORTANT] > When math-related functions are done, the end result must be able to fit within a [C# long](/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#characteristics-of-the-integral-types) value. If the end result is unable to fit within a C# long value, then a mathematical error will occur.
-As stated previously, these functions may only be used when specifying **JmesPath** as the expression language. By default, the expression language is **JsonPath**. The expression language can be changed when defining the expression.
+As stated previously, these functions can only be used when specifying **JmesPath** as the expression language. By default, the expression language is **JsonPath**. The expression language can be changed when defining the expression.
For example:
This example uses the [insertString](#insertstring) expression to generate the p
## Literal values
-Constant values may be supplied to functions.
+Constant values can be supplied to functions.
- Numeric values should be enclosed within backticks: \` - Example: add(\`10\`, \`10\`)
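For example, a value expression that switches the language to JmesPath and combines a custom function with a backticked numeric literal might look like the following sketch (the property and token names are illustrative, not taken from a real mapping):

```json
{
  "valueExpression": {
    "value": "add(to_number(matchedToken.heartRate), `10`)",
    "language": "JmesPath"
  }
}
```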
For more information, see the [JMESPath specification](https://jmespath.org/spec
## Exception handling
-Exceptions may occur at various points within the device data processing lifecycle. Here are the various points where exceptions can occur:
+Exceptions can occur at various points within the device data processing lifecycle. Here are the various points where exceptions can occur:
-|Action|When|Exceptions that may occur during parsing of the device mapping|Outcome|
-||-|--|-|
+|Action|When|Exceptions that can occur during parsing of the device mapping|Outcome|
+||-|-|-|
|**Device mapping parsing**|Each time a new batch of device messages are received, the device mapping is loaded and parsed.|Failure to parse the device mapping.|System attempts to reload and parse the latest device mapping until parsing succeeds. No new device messages are processed until parsing is successful.| |**Device mapping parsing**|Each time a new batch of device messages are received, the device mapping is loaded and parsed.|Failure to parse any expressions.|System attempts to reload and parse the latest device mapping until parsing succeeds. No new device messages are processed until parsing is successful.| |**Function execution**|Each time a function is executed against device data within a device message.|Input device data doesn't match that of the function signature.|System stops processing that device message. The device message isn't retried.|
healthcare-apis How To Use Iotjsonpathcontent Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontent-templates.md
The IotJsonPathContent templates allow matching on and extracting values from a
|typeMatchExpression|The expression that the MedTech service evaluates against the device message payload. If the service finds a matching token value, it considers the template a match.|`$..[?(@heartRate)]`| |patientIdExpression|The expression to extract the patient identifier. *Required* when the MedTech services's **Resolution type** is set to **Create**, and *optional* when the MedTech service's **Resolution type** is set to **Lookup**.|`$.SystemProperties.iothub-connection-device-id`| |encounterIdExpression|*Optional*: The expression to extract the encounter identifier.|`$.Body.encounterId`|
-|correlationIdExpression|*Optional*: The expression to extract the correlation identifier. You can use this output to group values into a single observation in the FHIR destination mapping.|`$.Body.correlationId`|
+|correlationIdExpression|*Optional*: The expression to extract the correlation identifier. You can use this output to group values into a single observation in the FHIR&reg; destination mapping.|`$.Body.correlationId`|
|values[].valueExpression|The expression to extract the wanted value.|`$.Body.heartRate`| > [!IMPORTANT]
healthcare-apis How To Use Mapping Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md
> [!IMPORTANT] > This feature is currently in Public Preview. See [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In this article, learn how to use the MedTech service Mapping debugger. The Mapping debugger is a self-service tool that is used for creating, updating, and troubleshooting the MedTech service [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
+In this article, learn how to use the MedTech service Mapping debugger. The Mapping debugger is a self-service tool that is used for creating, updating, and troubleshooting the MedTech service [device](overview-of-device-mapping.md) and [FHIR&reg; destination](overview-of-fhir-destination-mapping.md) mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
> [!TIP] > To learn about how the MedTech service transforms and persists device message data into the FHIR service see, [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
For this troubleshooting example, we're using a test device message that is [mes
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-select-test-device-message-manual.png" alt-text="Screenshot of the Mapping debugger and Select a file box." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-select-test-device-message-manual.png":::
-3. Copy/paste or type the test device message into the **Upload test device message** box. The **Validation** box may still be *red* if the either of the mappings has an error/warning. As long as **No errors** is *green*, the test device message is valid with the provided device and FHIR destination mappings.
+3. Copy/paste or type the test device message into the **Upload test device message** box. The **Validation** box can still be *red* if either of the mappings has an error or warning. As long as **No errors** is *green*, the test device message is valid with the provided device and FHIR destination mappings.
> [!NOTE] >The Mapping debugger also displays [enrichments](../../iot-hub/iot-hub-message-enrichments-overview.md) performed on the test device message if it has been [messaged routed](../../iot-hub/iot-hub-devguide-messages-d2c.md) from an [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md) (for example: the addition of the **Body**, **Properties**, and **SystemProperties** elements).
For this troubleshooting example, we're using a test device message that is [mes
Select the **X** in the right corner to close the **Upload test device message** box.
-4. Once a valid test device message is uploaded, the **View normalized message** and **View FHIR observation** buttons become available so that you may view the sample outputs of the normalization and FHIR transformation stages. These sample outputs can be used to validate your device and FHIR destination mappings are properly configured for processing device messages according to your requirements.
+4. Once a valid test device message is uploaded, the **View normalized message** and **View FHIR observation** buttons become available so that you can view the sample outputs of the normalization and FHIR transformation stages. These sample outputs can be used to validate your device and FHIR destination mappings are properly configured for processing device messages according to your requirements.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-normalized-and-FHIR-selections-available.png" alt-text="Screenshot View normalized message and View FHIR observation available." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-normalized-and-FHIR-selections-available.png":::
healthcare-apis How To Use Monitoring And Health Checks Tabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-and-health-checks-tabs.md
Metric category|Metric name|Metric description|
|Errors|**Total Error Count**|The total number of errors.| |Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](overview-of-device-data-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.| |Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](overview-of-device-data-processing-stages.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.|
+|Traffic|Number of Fhir resources saved|The total number of FHIR&reg; resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.|
|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](overview-of-device-data-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.| |Traffic|**Number of Measurements**|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-data-processing-stages.md#transform) of the MedTech service.| |Traffic|**Number of Message Groups**|The number of groups that have messages aggregated in the designated time window.|
Metric category|Metric name|Metric description|
:::image type="content" source="media\how-to-use-monitoring-and-health-checks-tabs\health-checks-without-errors.png" alt-text="Screenshot of the MedTech service health checks tab without errors." lightbox="media\how-to-use-monitoring-and-health-checks-tabs\health-checks-without-errors.png":::
-2. In this example, we can see that the MedTech service is indicating that the **Health check** for **Event hub connection** is showing a **Status** of **Disconnected**. To find out how to troubleshoot this failed health check, you may select the **Accessing the MedTech service from the event hub** link under the **Learn more** row to be directed to the MedTech service troubleshooting guide section for addressing this failed health check.
+2. In this example, we can see that the MedTech service is indicating that the **Health check** for **Event hub connection** is showing a **Status** of **Disconnected**. To find out how to troubleshoot this failed health check, you can select the **Accessing the MedTech service from the event hub** link under the **Learn more** row to be directed to the MedTech service troubleshooting guide section for addressing this failed health check.
:::image type="content" source="media\how-to-use-monitoring-and-health-checks-tabs\health-checks-with-error.png" alt-text="Screenshot of the MedTech service health checks tab with errors." lightbox="media\how-to-use-monitoring-and-health-checks-tabs\health-checks-with-error.png":::
healthcare-apis Overview Of Device Data Processing Stages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-data-processing-stages.md
# Overview of the MedTech service device data processing stages
-This article provides an overview of the device data processing stages within the [MedTech service](overview.md). The MedTech service transforms device data into [FHIR Observations](https://www.hl7.org/fhir/observation.html) for persistence in the [FHIR service](../fhir/overview.md).
+This article provides an overview of the device data processing stages within the [MedTech service](overview.md). The MedTech service transforms device data into [FHIR&reg; Observations](https://www.hl7.org/fhir/observation.html) for persistence in the [FHIR service](../fhir/overview.md).
The MedTech service device data processing follows these stages and in this order:
healthcare-apis Overview Of Device Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-mapping.md
# Overview of the MedTech service device mapping
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- This article provides an overview of the MedTech service device mapping.
-The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager (ARM) API. The device mapping is the first type and controls mapping values in the device data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR destination mapping](overview-of-fhir-destination-mapping.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html).
+The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager (ARM) API. The device mapping is the first type and controls mapping values in the device data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR&reg; destination mapping](overview-of-fhir-destination-mapping.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html).
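For orientation, here's a hedged sketch of a device mapping containing a single CalculatedContent template; the token and value names are illustrative placeholders, not a prescribed schema for your devices:

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "CalculatedContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.matchedToken.deviceId",
        "timestampExpression": "$.matchedToken.endDate",
        "values": [
          {
            "required": true,
            "valueExpression": "$.matchedToken.heartRate",
            "valueName": "hr"
          }
        ]
      }
    }
  ]
}
```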
> [!NOTE] > The device and FHIR destination mappings are re-evaluated each time a device message is processed. Any updates to either mapping will take effect immediately.
healthcare-apis Overview Of Fhir Destination Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-fhir-destination-mapping.md
# Overview of the MedTech service FHIR destination mapping
-This article provides an overview of the MedTech service FHIR destination mapping.
+This article provides an overview of the MedTech service FHIR&reg; destination mapping.
The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The [device mapping](overview-of-device-mapping.md) is the first type and controls mapping values in the device data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The FHIR destination mapping is the second type and controls how the normalized data is mapped to [FHIR Observations](https://www.hl7.org/fhir/observation.html).
healthcare-apis Overview Of Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-samples.md
# Overview of the MedTech service scenario-based mappings samples
-The [MedTech service](overview.md) scenario-based [samples](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings) provide conforming and valid [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings and test device messages. Theses samples can be used to help with the authoring and troubleshooting of your own MedTech service mappings.
+The [MedTech service](overview.md) scenario-based [samples](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings) provide conforming and valid [device](overview-of-device-mapping.md) and [FHIR&reg; destination](overview-of-fhir-destination-mapping.md) mappings and test device messages. These samples can be used to help with the authoring and troubleshooting of your own MedTech service mappings.
## Sample resources
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
# What is the MedTech service?
-The MedTech service is a Platform as a Service (PaaS) within the Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment. 
+The MedTech service is a Platform as a Service (PaaS) within the Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR&reg; format, and store it in an enterprise-scale, secure, and compliant cloud environment. 
The MedTech service is built to help customers that are dealing with the challenge of gaining relevant insights from device data coming in from multiple and diverse sources. No matter the device or structure, the MedTech service normalizes that device data into a common format, allowing the end user to then easily capture trends, run analytics, and build Artificial Intelligence (AI) models. In the enterprise healthcare setting, the MedTech service is used in the context of remote patient monitoring, virtual health, and clinical trials.
healthcare-apis Troubleshoot Errors Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md
Here's a list of errors that can be found in the Azure Resource Manager (ARM) AP
**Displayed**: ARM API and Azure portal
-**Description**: MedTech service's FHIR destination resource quota is reached (default is one per MedTech service).
+**Description**: MedTech service's FHIR&reg; destination resource quota is reached (default is one per MedTech service).
**Fix**: Delete the existing instance of the MedTech service's FHIR destination resource. Only one FHIR destination resource is permitted per MedTech service.
healthcare-apis Troubleshoot Errors Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-logs.md
This property represents the severity of the occurred error. Here's a list of po
|Severity|Description| |--|--|
-|Non-blocking|An issue exists in the data flow process, but processing of device messages doesn't stop.|
+|Nonblocking|An issue exists in the data flow process, but processing of device messages doesn't stop.|
|Blocking|An issue exists in the data flow process, and no device messages are expected to process.| ## Operation being performed by the MedTech service
The health checks' names are listed in the following table, and the fixes for an
### FhirService:IsAuthenticated
-**Description**: Checks that the FHIR destination is valid and that the MedTech service has write access to it.
+**Description**: Checks that the FHIR&reg; destination is valid and that the MedTech service has write access to it.
**Severity**: Blocking
The errors' names are listed in the following table, and the fixes for them are
### FhirResourceNotFoundException
-**Description**: This error occurs when a FHIR resource with the identifier given in the device message can't be found in the FHIR destination. If the FHIR resource's type is Patient, then the error may be that the Device FHIR resource with the device identifier given in the device message doesn't reference a Patient FHIR resource. The FHIR resource's type (for example, Device, Patient, Encounter, or Observation) is specified in the error message. **Note**: This error can only occur when the MedTech service's resolution type is set to **Lookup**.
+**Description**: This error occurs when a FHIR resource with the identifier given in the device message can't be found in the FHIR destination. If the FHIR resource's type is Patient, then the error might be that the Device FHIR resource with the device identifier given in the device message doesn't reference a Patient FHIR resource. The FHIR resource's type (for example, Device, Patient, Encounter, or Observation) is specified in the error message. **Note**: This error can only occur when the MedTech service's resolution type is set to **Lookup**.
-**Severity**: Non-blocking
+**Severity**: Nonblocking
**Fix**: Ensure that your device messages contain the identifier for the FHIR resource that has the type specified in the error message. Also, on the Azure portal, go to the **Device mapping** blade of your MedTech service, and ensure that the `{FHIR resource's type specified in the error message}IdExpression` (for example, `deviceIdExpression`) value in the device mapping exists and correctly references the identifier's key in your device messages. ### IncompatibleDataException
-**Description**: There's an incompatibility between the device message and the device mapping (for example, a required property may be missing or blank in the device message and/or in the device mapping). The device mapping property with the error is specified in the error message.
+**Description**: There's an incompatibility between the device message and the device mapping (for example, a required property might be missing or blank in the device message and/or in the device mapping). The device mapping property with the error is specified in the error message.
-**Severity**: Non-blocking
+**Severity**: Nonblocking
**Fix**: Ensure that your device messages contain: * The key that is referenced by the device mapping property specified in the error message.
-* A non-blank value for the key.
+* A nonblank value for the key.
Also, on the Azure portal, go to the **Device mapping** blade of your MedTech service, and ensure that the device mapping property specified in the error message has a value that correctly references the corresponding key in your device messages.
Also, on the Azure portal, go to the **Device mapping** blade of your MedTech se
**Description**: A device message isn't in a format that can be parsed into a JSON object.
-**Severity**: Non-blocking
+**Severity**: Nonblocking
**Fix**: Ensure that your device messages are in JSON format. One way to confirm JSON format is to use an online JSON validator.
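Rather than an online validator, a quick local check can confirm that a message parses as JSON. This is a generic sketch, not a MedTech-specific tool:

```python
import json

def is_valid_json(payload: str) -> bool:
    """Return True if the payload parses as JSON, False otherwise."""
    try:
        json.loads(payload)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json('{"deviceId": "abc", "heartRate": 72}'))  # True
print(is_valid_json('deviceId=abc; heartRate=72'))            # False
```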
Also, on the Azure portal, go to the **Device mapping** blade of your MedTech se
### InvalidQuantityFhirValueException
-**Description**: The value with a Quantity resource data type is invalid (for example, it may be in a format that isn't supported). The value with the error is specified in the error message.
+**Description**: The value with a Quantity resource data type is invalid (for example, it might be in a format that isn't supported). The value with the error is specified in the error message.
-**Severity**: Non-blocking
+**Severity**: Nonblocking
**Fix**: Ensure that the values in your device messages are in supported datatypes according to the [FHIR Quantity.value specifications](https://build.fhir.org/datatypes-definitions.html#Quantity.value).
The template's type and line with the error are specified in the error message
### ManagedIdentityCredentialNotFound
-**Description**: When the MedTech service is connecting to the event hub, the MedTech service's system-assigned managed identity is disabled or doesn't exist, or a user-assigned managed identity isn't configured for the MedTech service. **Note**: This error may occur if the MedTech service was deployed using a misconfigured Azure Resource Manager (ARM) template.
+**Description**: When the MedTech service is connecting to the event hub, the MedTech service's system-assigned managed identity is disabled or doesn't exist, or a user-assigned managed identity isn't configured for the MedTech service. **Note**: This error might occur if the MedTech service was deployed using a misconfigured Azure Resource Manager (ARM) template.
**Severity**: Blocking
If you'd like to use a user-assigned managed identity:
**Description**: Multiple FHIR resources with the same identifier, which is taken from the device message, are found in the FHIR destination, but only one FHIR resource should have been found. The FHIR resource's type (for example, Device, Patient, Encounter, or Observation) is specified in the error message.
-**Severity**: Non-blocking
+**Severity**: Nonblocking
**Fix**: Ensure that an identifier isn't assigned to more than one FHIR resource that has the type specified in the error message.
If you'd like to use a user-assigned managed identity:
**Description**: A Device resource in the FHIR destination references a Patient FHIR resource with an identifier that doesn't match the patient identifier given in the device message (meaning, the device is linked to another patient).
-**Severity**: Non-blocking
+**Severity**: Nonblocking
**Fix**: Ensure that a patient identifier isn't assigned to more than one device.
If you'd like to use a user-assigned managed identity:
**Description**: This error occurs when the FHIR resource's identifier isn't present in a device message, or when the expression to parse the FHIR resource's identifier from the device message isn't configured in the device mapping. The FHIR resource's type (for example, Device, Patient, Encounter, or Observation) is specified in the error message. **Note**: This error can only occur when the MedTech service's resolution type is set to **Create**.
-**Severity**: Non-blocking
+**Severity**: Nonblocking
**Fix**: Ensure that your device messages contain the identifier for the FHIR resource that has the type specified in the error message. Also, on the Azure portal, go to the **Device mapping** blade of your MedTech service, and ensure that the `{FHIR resource's type specified in the error message}IdExpression` (for example, `deviceIdExpression`) value in the device mapping exists and correctly references the identifier's key in your device messages.
The expression and line with the error are specified in the error message.
**Description**: A template in the device mapping doesn't have a matching template with the same type within the FHIR destination mapping. The template's type is specified in the error message.
-**Severity**: Non-blocking
+**Severity**: Nonblocking
**Fix**: On the Azure portal, go to the **Device mapping** blade and the **Destination** blade of your MedTech service, and ensure that, for each template in the device mapping, there's a template with the same `typeName` value within the FHIR destination mapping.
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
As an example, here are some use cases of using certificates to secure communica
* **IoT and networking devices**: Protect and secure your devices by using certificates for authentication and communication. * **Cloud/multicloud**: Secure cloud-based applications on-premises, cross-cloud, or in your cloud provider's tenant.
-### Code signing
-
-A certificate can help secure the code/script of software, to ensure that the author can share the software over the internet without interference by malicious entities. After the author signs the code by using a certificate and taking advantage of code-signing technology, the software is marked with a stamp of authentication that displays the author and their website. The certificate used in code signing helps validate the software's authenticity, promoting end-to-end security.
- ## Next steps - [Certificate creation methods](create-certificate.md) - [About Key Vault](../general/overview.md)
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
The protocol used by the health probe can be configured to one of the following
The interval value determines how frequently the health probe checks for a response from your backend pool instances. If the health probe fails, your backend pool instances are immediately marked as unhealthy. If the health probe subsequently succeeds, Azure Load Balancer marks your backend pool instances as healthy again. The health probe attempts to check the configured health probe port every 5 seconds by default but can be explicitly set to another value. To ensure a timely response is received, HTTP/S health probes have built-in timeouts. The following are the timeout durations for TCP and HTTP/S probes:
-* TCP probe timeout duration: N/A (probes will fail once the configured probe interval duration has passed and the next probe has beeen sent)
+* TCP probe timeout duration: N/A (probes will fail once the configured probe interval duration has passed and the next probe has been sent)
* HTTP/S probe timeout duration: 30 seconds

For HTTP/S probes, if the configured interval is longer than the above timeout period, the health probe times out and fails if no response is received during the timeout period. For example, if an HTTP health probe is configured with a probe interval of 120 seconds (every 2 minutes), and no probe response is received within the first 30 seconds, the probe will have reached its timeout period and fail.
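The interplay between the probe interval and the 30-second HTTP/S timeout can be modeled as a small sketch. This is illustrative only, not Azure Load Balancer's implementation:

```python
from typing import Optional

HTTP_PROBE_TIMEOUT_S = 30.0  # built-in HTTP/S probe timeout from the text above

def http_probe_fails(response_time_s: Optional[float]) -> bool:
    """An HTTP/S probe fails if no response arrives within the timeout window."""
    return response_time_s is None or response_time_s > HTTP_PROBE_TIMEOUT_S

# With a 120-second interval, a probe that gets no response in the first
# 30 seconds has already hit its timeout and fails, as in the article's example.
print(http_probe_fails(None))   # True  (no response at all)
print(http_probe_fails(12.0))   # False (answered within 30 seconds)
print(http_probe_fails(45.0))   # True  (answered too late)
```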
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
The product group is actively working on resolutions for the following known iss
|Issue |Description |Mitigation | | - ||| | IP based LB outbound IP | IP based LB uses Azure's Default Outbound Access IP for outbound | In order to prevent outbound access from this IP, use NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
-| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, isn't respected. Load Balancer health probes will probe up/down immediately after one probe regardless of the property's configured value | To reflect the current behavior, set the value of numberOfProbes ("Unhealthy threshold" in Portal) as 1 |
+| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, isn't respected. Load Balancer health probes will probe up/down immediately after one probe regardless of the property's configured value | To control the number of successful or failed consecutive probes necessary to mark backend instances as healthy or unhealthy, please leverage the property ["probeThreshold"](https://learn.microsoft.com/azure/templates/microsoft.network/loadbalancers?pivots=deployment-language-arm-template#probepropertiesformat-1) instead |
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Previously updated : 07/16/2023 Last updated : 11/07/2023 #Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model architecture, hyperparameters, and training and deployment environments.
machine-learning Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/transparency-note.md
Title: Transparency Note for Auto-Generate Prompt Variants in Prompt Flow
+ Title: Transparency Note for auto-generate prompt variants in prompt flow
-description: Transparency Note for Auto-Generate Prompt Variants in Prompt Flow
+description: Transparency Note for auto-generate prompt variants in prompt flow
Last updated 10/20/2023
-# Transparency Note for Auto-Generate Prompt Variants in Prompt Flow
+# Transparency Note for auto-generate prompt variants in prompt flow
## What is a Transparency Note?
An AI system includes not only the technology, but also the people who use it, t
Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft's AI principles](https://www.microsoft.com/ai/responsible-ai).
-## The basics of Auto-Generate Prompt Variants in Prompt Flow
+## The basics of auto-generate prompt variants in prompt flow
### Introduction
-Prompt engineering is at the center of building applications using Large Language Models. Microsoft's Prompt Flow offers rich capabilities to interactively edit, bulk test, and evaluate prompts with built-in flows to pick the best prompt. With the Auto-Generate Prompt Variants feature in Prompt Flow, we provide the ability to automatically generate variations of a user's base prompt with help of large language models and allow users to test them in Prompt Flow to reach the optimal solution for the user's model and use case needs.
+Prompt engineering is at the center of building applications using Large Language Models. Microsoft's prompt flow offers rich capabilities to interactively edit, bulk test, and evaluate prompts with built-in flows to pick the best prompt. With the auto-generate prompt variants feature in prompt flow, we provide the ability to automatically generate variations of a user's base prompt with the help of large language models and allow users to test them in prompt flow to reach the optimal solution for the user's model and use case needs.
### Key terms | **Term** | **Definition** | | | |
-| Prompt flow | Prompt Flow offers rich capabilities to interactively edit prompts and bulk test them with built-in evaluation flows to pick the best prompt. More information available at [What is prompt flow](./overview-what-is-prompt-flow.md) |
+| Prompt flow | Prompt flow offers rich capabilities to interactively edit prompts and bulk test them with built-in evaluation flows to pick the best prompt. More information available at [What is prompt flow](./overview-what-is-prompt-flow.md) |
| Prompt engineering | The practice of crafting and refining input prompts to elicit more desirable responses from a large language model, particularly in large language models. | | Prompt variants | Different versions or modifications of a given input prompt designed to test or achieve varied responses from a large language model. | | Base prompt | The initial or primary prompt that serves as a starting point for eliciting response from large language models. In this case it is provided by the user and is modified to create prompt variants. |
Prompt engineering is at the center of building applications using Large Languag
### System behavior
-The Auto-Generate Prompt Variants feature, as part of the Prompt Flow experience, provides the ability to automatically generate and easily assess prompt variations to quickly find the best prompt for your use case. This feature further empowers Prompt Flow's rich set of capabilities to interactively edit and evaluate prompts, with the goal of simplifying prompt engineering.
+The auto-generate prompt variants feature, as part of the prompt flow experience, provides the ability to automatically generate and easily assess prompt variations to quickly find the best prompt for your use case. This feature further empowers prompt flow's rich set of capabilities to interactively edit and evaluate prompts, with the goal of simplifying prompt engineering.
-When provided with the user's base prompt the Auto-Generate Prompt Variants feature generates several variations using the generative power of Azure OpenAI models and an internal system prompt. While Azure OpenAI provides content management filters, we recommend verifying any prompts generated before using them in production scenarios.
+When provided with the user's base prompt, the auto-generate prompt variants feature generates several variations using the generative power of Azure OpenAI models and an internal system prompt. While Azure OpenAI provides content management filters, we recommend verifying any prompts generated before using them in production scenarios.
### Use cases #### Intended uses
-Auto-Generate Prompt Variants can be used in the following scenarios. The system's intended use is:
+Auto-generate prompt variants can be used in the following scenarios. The system's intended use is:
**Generate new prompts from a provided base prompt**: "Generate Variants" feature will allow the users of prompt flow to automatically generate variants of their provided base prompt with help of LLMs (Large Language Models). #### Considerations when choosing a use case
-**Do not use Auto-Generate Prompt Variants for decisions that might have serious adverse impacts.**
+**Do not use auto-generate prompt variants for decisions that might have serious adverse impacts.**
-Auto-Generate Prompt Variants was not designed or tested to recommend items that require additional considerations related to accuracy, governance, policy, legal, or expert knowledge as these often exist outside the scope of the usage patterns carried out by regular (non-expert) users. Examples of such use cases include medical diagnostics, banking, or financial recommendations, hiring or job placement recommendations, or recommendations related to housing.
+Auto-generate prompt variants was not designed or tested to recommend items that require additional considerations related to accuracy, governance, policy, legal, or expert knowledge as these often exist outside the scope of the usage patterns carried out by regular (non-expert) users. Examples of such use cases include medical diagnostics, banking, or financial recommendations, hiring or job placement recommendations, or recommendations related to housing.
## Limitations Explicitly in the generation of prompt variants, it is important to understand that while AI systems are incredibly valuable tools, they are **non-deterministic**. This means that perfect **accuracy** (the measure of how well the system-generated events correspond to real events that happened in a space) of predictions is not possible. A good model will have high accuracy, but it will occasionally output incorrect predictions. Failure to understand this limitation can lead to over-reliance on the system and unmerited decisions that can impact stakeholders.
-Furthermore, the prompt variants that are generated using LLMs, are returned to the user as is. It is encouraged to evaluate and compare these variants to determine the best prompt for a given scenario. There are **additional concerns** here because many of the evaluations offered in the Prompt Flow ecosystems also depend on LLMs, potentially further decreasing the utility of any given prompt. Manual review is strongly recommended.
+Furthermore, the prompt variants that are generated using LLMs are returned to the user as is. You're encouraged to evaluate and compare these variants to determine the best prompt for a given scenario. There are **additional concerns** here because many of the evaluations offered in the prompt flow ecosystem also depend on LLMs, potentially further decreasing the utility of any given prompt. Manual review is strongly recommended.
### Technical limitations, operational factors, and ranges
-As mentioned previously, the Auto-Generate Prompt Variants feature does not provide a measurement or evaluation of the provided prompt variants. It is strongly recommended that the user of this feature evaluates the suggested prompts in the way which best aligns with their specific use case and requirements.
+As mentioned previously, the auto-generate prompt variants feature does not provide a measurement or evaluation of the provided prompt variants. It is strongly recommended that the user of this feature evaluates the suggested prompts in the way which best aligns with their specific use case and requirements.
-The Auto-Generate Prompt Variants feature is limited to generating a maximum of five variations from a given base prompt. If more are required, additional prompt variants can be generated after modifying the original base prompt.
+The auto-generate prompt variants feature is limited to generating a maximum of five variations from a given base prompt. If more are required, additional prompt variants can be generated after modifying the original base prompt.
-Auto-Generate Prompt Variants only supports Azure OpenAI models at this time. In addition to limiting users to only the models which are supported by Azure OpenAI, it also limits content to what is acceptable in terms of the Azure OpenAI's content management policy. Uses outside of this policy are not supported by this feature.
+Auto-generate prompt variants only supports Azure OpenAI models at this time. In addition to limiting users to only the models that are supported by Azure OpenAI, it also limits content to what is acceptable in terms of Azure OpenAI's content management policy. Uses outside of this policy are not supported by this feature.
## System performance
-Performance for the Auto-Generate Prompt Variants feature is determined by the user's use case in each individual scenario; in this way the feature does not evaluate each prompt or generate metrics.
+Performance for the auto-generate prompt variants feature is determined by the user's use case in each individual scenario; in this way the feature does not evaluate each prompt or generate metrics.
-Operating in the Prompt Flow ecosystem, which focuses on Prompt Engineering, provides a strong story for error handling. Often retrying the operation will resolve an error. One error which might arise specific to this feature is response filtering from the Azure OpenAI resource for content or harm detection, this would happen in the case that content in the base prompt is determined to be against Azure OpenAI's content management policy. To resolve these errors please update the base prompt in accordance with the guidance at [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter).
+Operating in the prompt flow ecosystem, which focuses on prompt engineering, provides a strong story for error handling. Often, retrying the operation will resolve an error. One error that might arise specific to this feature is response filtering from the Azure OpenAI resource for content or harm detection; this happens when content in the base prompt is determined to be against Azure OpenAI's content management policy. To resolve these errors, update the base prompt in accordance with the guidance at [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter).
### Best practices for improving system performance
To improve performance there are several parameters which can be modified, depen
- **Number of Variants**: This parameter specifies how many variants to generate. A larger number of variants will produce more prompts and therefore the likelihood of finding the best prompt for the use case. - **Base Prompt**: Since this tool generates variants of the provided base prompt, a strong base prompt can set up the tool to provide the maximum value for your case. Please review the guidelines at Prompt engineering techniques with [Azure OpenAI](/azure/ai-services/openai/concepts/advanced-prompt-engineering).
-## Evaluation of Auto-Generate Prompt Variants
+## Evaluation of auto-generate prompt variants
### Evaluation methods
-The Auto-Generate Prompt Variants feature been testing by the internal development team, targeting fit for purpose and harm mitigation.
+The auto-generate prompt variants feature has been tested by the internal development team, targeting fit for purpose and harm mitigation.
### Evaluation results
Evaluation of harm management showed staunch support for the combination of syst
Fit for purpose testing supported the quality of generated prompts for creative purposes (poetry) and chat-bot agents. The reader is cautioned against drawing sweeping conclusions given the breadth of possible base prompts and potential use cases. As previously mentioned, please use evaluations appropriate to the required use cases and ensure a human reviewer is part of the process.
-## Evaluating and integrating Auto-Generate Prompt Variants for your use
+## Evaluating and integrating auto-generate prompt variants for your use
-The performance of the Auto-Generate Prompt Variants feature will vary depending on the base prompt and use case in it is used. True usage of the generated prompts will depend on a combination of the many elements of the system in which the prompt is used.
+The performance of the auto-generate prompt variants feature will vary depending on the base prompt and the use case in which it's used. True usage of the generated prompts will depend on a combination of the many elements of the system in which the prompt is used.
-To ensure optimal performance in their scenarios, customers should conduct their own evaluations of the solutions they implement using Auto-Generate Prompt Variants. Customers should, generally, follow an evaluation process that:
+To ensure optimal performance in their scenarios, customers should conduct their own evaluations of the solutions they implement using auto-generate prompt variants. Customers should, generally, follow an evaluation process that:
- Uses internal stakeholders to evaluate any generated prompt. - Uses internal stakeholders to evaluate results of any system which uses a generated prompt.
To ensure optimal performance in their scenarios, customers should conduct their
- [Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) - [Microsoft Azure Learning courses on responsible AI](/training/paths/responsible-ai-business-principles/)
-## Learn more about Auto-Generate Prompt Variants
+## Learn more about auto-generate prompt variants
- [What is prompt flow](./overview-what-is-prompt-flow.md)
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
Previously updated : 08/26/2023 Last updated : 11/07/2023
managed-ccf Confidential Consortium Framework Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/confidential-consortium-framework-overview.md
The Confidential Consortium Framework (CCF) is an open-source framework for buil
The following diagram shows a basic CCF network made of three nodes. All nodes run the same application code inside an enclave. The effects of user (business) and member (governance) transactions are eventually committed to a replicated, encrypted ledger. A consortium of members is in charge of governing the network. ## Core Concepts ### Network and Nodes
To learn more about CCF applications and start building it, refer to the [Get St
All changes to the Key-Value Store are encrypted and recorded by each node of the network to disk to a decentralized auditable ledger. The integrity of the ledger is guaranteed by a Merkle Tree whose root is periodically signed by the current primary or leader node.
-Find out how to audit the CCF ledger in the [Audit]https://microsoft.github.io/CCF/main/audit/https://docsupdatetracker.net/index.html) section in the CCF documentation.
+Find out how to audit the CCF ledger in the [Audit](https://microsoft.github.io/CCF/main/audit/https://docsupdatetracker.net/index.html) section in the CCF documentation.
### Governance
managed-grafana Concept Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/concept-whats-new.md
Previously updated : 10/30/2023 Last updated : 10/01/2023
Last updated 10/30/2023
* [Microsoft Entra groups](how-to-sync-teams-with-azure-ad-groups.md) is available in preview in Azure Managed Grafana.
-* Plugin management is available in preview. This feature lets you manage installed Grafana plugins directly within an Azure Managed Grafana workspace.
+* [Plugin management](how-to-manage-plugins.md) is available in preview. This feature lets you manage installed Grafana plugins directly within an Azure Managed Grafana workspace.
* Azure Monitor workspaces integration is available in preview. This feature allows you to link your Grafana dashboard to Azure Monitor workspaces. This integration simplifies the process of connecting AKS clusters to an Azure Managed Grafana workspace and collecting metrics.
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
Title: How to configure data sources for Azure Managed Grafana
-description: In this how-to guide, discover how you can configure data sources for Azure Managed Grafana using Managed Identity.
+ Title: How to manage data sources for Azure Managed Grafana
+description: In this how-to guide, discover how you can configure data sources for Azure Managed Grafana
Previously updated : 10/13/2023 Last updated : 10/24/2023
-# How to configure data sources for Azure Managed Grafana
+# How to manage data sources in Azure Managed Grafana
+
+In this guide, you learn about the data sources supported in each Azure Managed Grafana plan and how to add, manage, and remove these data sources.
## Prerequisites
-[An Azure Managed Grafana instance](./how-to-permissions.md)
+[An Azure Managed Grafana instance](./how-to-permissions.md).
## Supported Grafana data sources

By design, Grafana can be configured with multiple *data sources*. A data source is an externalized storage backend that holds telemetry information.
-Azure Managed Grafana supports many popular data sources. The table below lists the data sources that can be added to Azure Managed Grafana for each service tier.
+### Grafana core data sources
+
+Azure Managed Grafana supports many popular data sources. The table below lists the Grafana core data sources that can be added to Azure Managed Grafana for each service tier.
| Data sources | Essential (preview) | Standard |
|-|--|-|
Azure Managed Grafana supports many popular data sources. The table below lists
| [PostgreSQL](https://grafana.com/docs/grafana/latest/datasources/postgres/) | - | ✔ |
| [Prometheus](https://grafana.com/docs/grafana/latest/datasources/prometheus/) | ✔ | ✔ |
| [Tempo](https://grafana.com/docs/grafana/latest/datasources/tempo/) | - | ✔ |
-| [TestData](https://grafana.com/docs/grafana/latest/datasources/testdata/) | ✔ | ✔ |
+| [TestData](https://grafana.com/docs/grafana/latest/datasources/testdata/) | ✔ | ✔ |
| [Zipkin](https://grafana.com/docs/grafana/latest/datasources/zipkin/) | - | ✔ |
+### Data sources for Grafana Enterprise customers
+
Within the Standard service tier, users who have subscribed to the Grafana Enterprise option can also access the following data sources.

* [AppDynamics](https://grafana.com/grafana/plugins/dlopes7-appdynamics-datasource)
Within the Standard service tier, users who have subscribed to the Grafana Enter
* [Splunk Infrastructure monitoring (SignalFx)](https://grafana.com/grafana/plugins/grafana-splunk-monitoring-datasource)
* [Wavefront](https://grafana.com/grafana/plugins/grafana-wavefront-datasource)
+### Additional data sources
+
+More data sources can be added from the [Plugin management (preview) feature](how-to-manage-plugins.md).
+
For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.

## Add a data source
-A number of data sources, such as Azure Monitor, are added to your Grafana instance by default. To add more data sources, follow the steps below using the Azure portal or the Azure CLI.
+To add a data source to Azure Managed Grafana, follow the steps below using the Azure portal or the Azure CLI.
### [Portal](#tab/azure-portal)
+### Grafana core data sources
+
+To add a [Grafana core data source](https://grafana.com/docs/grafana/latest/datasources/#built-in-core-data-sources) with the Azure portal:
+
1. Open your Azure Managed Grafana instance in the Azure portal.
1. Select **Overview** from the left menu, then open the **Endpoint** URL.
-1. In the Grafana portal, deploy the menu on the left and select **Connections**.
-1. Under Connect data, select a data source from the list, and add the data source to your instance.
-1. Fill out the form with the data source settings and select **Save and test** to validate the connection to your data source.
+1. In the Grafana portal, expand the menu on the left and select **Connections** > **Connect data**.
+1. Select a data source from the list, and add it to your instance by selecting **Create** or **Add** in the top right-hand corner.
+1. Fill out the form and select **Save and test** to test and update the data source configuration.
:::image type="content" source="media/data-sources/add-data-source.png" alt-text="Screenshot of the Add data source page.":::
+### Other data sources
+
+1. To add a data source that isn't part of the Grafana built-in core data sources, start by [installing the corresponding data source plugin](how-to-manage-plugins.md#add-a-plugin).
+
+1. Then add the data source from the Grafana portal.
+
+ 1. In the Grafana portal, go to **Connections** > **Connect data**.
+ 1. Select a data source from the list, and add it to your instance by selecting **Create** in the top right-hand corner.
+ 1. Fill out the form and select **Save and test** to test and update the data source configuration.
+ ### [Azure CLI](#tab/azure-cli)
-Run the [az grafana data-source create](/cli/azure/grafana/data-source#az-grafana-data-source-create) command to add and manage Azure Managed Grafana data sources with the Azure CLI.
+Run the [az grafana data-source create](/cli/azure/grafana/data-source#az-grafana-data-source-create) command to add a [Grafana core data source](https://grafana.com/docs/grafana/latest/datasources/#built-in-core-data-sources) with the Azure CLI.
For example, to add an Azure SQL data source, run:
-```azurecli-interactive
+```azurecli
az grafana data-source create --name <instance-name> --definition '{
  "access": "proxy",
az grafana data-source create --name <instance-name> --definition '{
}'
```
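For longer definitions, it can be easier to keep the JSON in a file and validate it locally before calling the CLI. The sketch below is illustrative only: the field values are placeholders, and `mssql` is assumed to be the plugin type for an Azure SQL data source (check `az grafana data-source create --help` for the exact options your CLI version accepts).

```shell
# Write a hypothetical data source definition to a file (placeholder values).
cat > datasource.json <<'EOF'
{
  "access": "proxy",
  "type": "mssql",
  "name": "my-azure-sql",
  "url": "my-server.database.windows.net",
  "database": "my-database"
}
EOF

# Check that the file is well-formed JSON before sending it to Azure.
python3 -m json.tool datasource.json > /dev/null && echo "definition OK"

# Many Azure CLI JSON parameters also accept a file reference:
# az grafana data-source create --name <instance-name> --definition @datasource.json
```

Validating the file locally catches quoting and syntax mistakes before they surface as a less readable CLI error.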
+Other data sources can be added [from the Azure portal](#other-data-sources).
+ > [!TIP] > If you can't connect to a data source, you may need to [modify access permissions](how-to-permissions.md) to allow access from your Azure Managed Grafana instance.
-## Update a data source
+## Configure a data source
+
+The content below shows how to configure some of the most popular data sources in Azure Managed Grafana: Azure Monitor and Azure Data Explorer. A similar process can be used to configure other types of data sources. For more information about a specific data source, refer to [Grafana's documentation](https://grafana.com/docs/grafana/latest/datasources/#built-in-core-data-sources).
### Azure Monitor configuration
Azure Managed Grafana can also access data sources using a service principal set
-## Next steps
+## Remove a data source
-> [!div class="nextstepaction"]
-> [Connect to a data source privately](./how-to-connect-to-data-source-privately.md)
+This section describes the steps for removing a data source.
+
+> [!CAUTION]
+> Removing a data source that is used in a dashboard will make the dashboard unable to collect the corresponding data and will trigger an error or result in no data being shown in the panel.
+
+### [Portal](#tab/azure-portal)
+
+Remove a data source in the Azure portal:
+
+1. Open your Azure Managed Grafana instance in the Azure portal.
+1. Select **Overview** from the left menu, then open the **Endpoint** URL.
+1. In the Grafana portal, go to **Connections** > **Your connections**.
+1. Select the data source you want to remove, and then select **Delete**.
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana data-source delete](/cli/azure/grafana/data-source#az-grafana-data-source-delete) command to remove an Azure Managed Grafana data source using the Azure CLI. In the sample below, replace the placeholders `<instance-name>` and `<id>` with the name of the Azure Managed Grafana workspace and the name, ID, or UID of the data source.
+
+```azurecli
+az grafana data-source delete --name <instance-name> --data-source <id>
+```
+++
+## Next steps
> [!div class="nextstepaction"]
-> [Share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
+> [Create a dashboard](how-to-create-dashboard.md)
managed-grafana How To Manage Plugins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-manage-plugins.md
+
+ Title: How to manage plugins in Azure Managed Grafana
+description: In this how-to guide, discover how you can add a Grafana plugin or remove a Grafana plugin you no longer need.
++++ Last updated : 10/26/2023++
+# How to manage Grafana plugins (Preview)
+
+Grafana supports data source, panel, and app plugins. When you create a new Grafana instance, some plugins, such as Azure Monitor, are installed by default. In the following guide, learn how you can add or remove optional plugins.
+
+> [!NOTE]
+> Installing and removing plugins isn't available from the Grafana UI or the Azure CLI at this stage. Plugin management is done from the Azure Managed Grafana workspace in the Azure portal.
+
+> [!IMPORTANT]
+> Plugin management is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+[An Azure Managed Grafana instance](./how-to-permissions.md)
+
+## Add a plugin
+
+To install Grafana plugins, follow the process below.
+
+1. Open your Azure Managed Grafana instance in the Azure portal.
+1. Select **Plugin management (Preview)**. This page shows a table with three columns containing checkboxes, plugin names, and plugin IDs. Review the checkboxes. A checked box indicates that the corresponding plugin is already installed and can be removed; an unchecked box indicates that the corresponding plugin isn't installed and can be added.
+
+ > [!NOTE]
+ > This page only shows optional plugins. Core Grafana plugins that are included in your pricing plan by default aren't listed here.
+
+1. Select a plugin to add to your Grafana instance by checking its checkbox. A refresh icon appears in the table next to the plugin you selected, indicating that a change is pending.
+
+ :::image type="content" source="media/plugin-management/add-plugin.png" alt-text="Screenshot of the Plugin management feature data source page.":::
+
+1. Select **Save**. Azure displays a message stating which plugins will be added or removed. Select **Yes** to confirm.
+1. An **Updating** status bar indicates that the update is in progress. The update might take a while.
+1. A notification appears, indicating if the update operation has been successful.
+1. Select **Refresh** above the table to get an updated list of installed plugins.
+
+## Remove a plugin
+
+To remove a plugin that isn't part of the Grafana built-in core plugins, follow the steps below:
+
+1. Open your Azure Managed Grafana instance in the Azure portal.
+1. Select **Plugin management (Preview)**. This page displays a table with data source plugins. It contains three columns including checkboxes, plugin names, and plugin IDs. Review the checkboxes. A checked box indicates that the corresponding plugin is already installed and can be removed; an unchecked box indicates that the corresponding plugin can be added.
+1. Select a plugin to remove from your Grafana instance by unchecking its checkbox. A refresh icon appears in the table next to the plugin you selected, indicating that a change is pending.
+
+ :::image type="content" source="media/plugin-management/remove-plugin.png" alt-text="Screenshot of the Plugin management feature data source page. Remove plugin.":::
+
+1. Select **Save**. Azure displays a message stating which plugins will be added or removed. Select **Yes** to confirm.
+1. An **Updating** status bar indicates that the update is in progress. The update might take a while.
+1. A notification appears, indicating if the update operation has been successful.
+1. Select **Refresh** above the table to get an updated list of installed plugins.
+
+> [!IMPORTANT]
+> Plugin management is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+> [!CAUTION]
+> Removing a data source that is used in a dashboard will make the dashboard unable to collect the corresponding data and will trigger an error or result in no data being shown in the panel.
+
+## Next steps
+
+Now that you know how to add and remove plugins, learn how to manage data sources.
+
+> [!div class="nextstepaction"]
+> [Configure a data source](./how-to-data-source-plugins-managed-identity.md)
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
All on-premises VMs replicated to Azure must meet the Azure VM requirements summ
Operating system disk size | Up to 2,048 GB. | Check fails if unsupported.
Operating system disk count | 1 | Check fails if unsupported.
Data disk count | 16 or less. | Check fails if unsupported.
-Data disk size | Up to 4,095 GB | Check fails if unsupported.
+Data disk size | Up to 32 TB | Check fails if unsupported.
Network adapters | Multiple adapters are supported. |
Shared VHD | Not supported. | Check fails if unsupported.
FC disk | Not supported. | Check fails if unsupported.
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
Previously updated : 10/23/2023 Last updated : 11/06/2023 # Azure Policy Regulatory Compliance controls for Azure Database for MySQL
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
notification-hubs Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/private-link.md
+
+ Title: Azure Notification Hubs Private Link
+description: Learn how to use the Private Link feature in Azure Notification Hubs.
++++ Last updated : 11/06/2023+++
+# Use Private Link
+
+This article describes how to use *Private Link* to restrict access to managing resources in your subscriptions. Private links enable you to access Azure services over a private endpoint in your virtual network. This prevents exposure of the service to the public internet.
+
+This article describes the Private Link setup process using the [Azure portal](https://portal.azure.com).
+
+> [!IMPORTANT]
+> You can enable this feature for an additional fee.
+
+## Create a private endpoint along with a new notification hub in the portal
+
+The following procedure creates a private endpoint along with a new notification hub using the Azure portal:
+
+1. Create a new notification hub, and select the **Networking** tab.
+1. Select **Private access**, then select **Create**.
+
+ :::image type="content" source="media/private-link/create-hub.png" alt-text="Screenshot of notification hub creation page on portal showing private link option." lightbox="media/private-link/create-hub.png":::
+
+1. Fill in the subscription, resource group, location, and a name for the new private endpoint. Choose a virtual network and a subnet. In **Integrate with Private DNS Zone**, select **Yes** and type **privatelink.notificationhubs.windows.net** in the **Private DNS Zone** box.
+
+ :::image type="content" source="media/private-link/create-private-endpoint.png" alt-text="Screenshot of notification hub private endpoint creation page." lightbox="media/private-link/create-private-endpoint.png":::
+
+1. Select **OK** to see confirmation of namespace and hub creation with a private endpoint.
+1. Select **Create** to create the notification hub with a private endpoint connection.
+
+ :::image type="content" source="media/private-link/private-endpoint-confirm.png" alt-text="Screenshot of notification hub private endpoint confirmation page." lightbox="media/private-link/private-endpoint-confirm.png":::
+
+### Create a private endpoint for an existing notification hub in the portal
+
+1. In the portal, on the left-hand side under the **Security + networking** section, select **Notification Hubs**, then select **Networking**.
+1. Select the **Private access** tab.
+
+ :::image type="content" source="media/private-link/networking-private-access.png" alt-text="Screenshot of private access tab." lightbox="media/private-link/networking-private-access.png":::
+
+1. Fill in the subscription, resource group, location, and a name for the new private endpoint. Choose a virtual network and subnet. Select **Create**.
+
+ :::image type="content" source="media/private-link/create-properties.png" alt-text="Screenshot of private link creation properties." lightbox="media/private-link/create-properties.png":::
+
+## Create a private endpoint using PowerShell
+
+The following example shows how to use PowerShell to create a private endpoint connection to a Notification Hubs namespace. Your private endpoint uses a private IP address in your virtual network.
+
+1. Sign in to Azure via PowerShell and set a subscription:
+
+ ```powershell
+ Login-AzAccount
+ Set-AzContext -SubscriptionId <azure_subscription_id>
+ ```
+
+1. Create a new resource group:
+
+ ```powershell
+ New-AzResourceGroup -Name <resource_group_name> -Location <azure_region>
+ ```
+
+1. Register **Microsoft.NotificationHubs** as a resource provider:
+
+ ```powershell
+ Register-AzResourceProvider -ProviderNamespace Microsoft.NotificationHubs
+ ```
+
+1. Create a new Azure Notification Hubs namespace:
+
+ ```powershell
+ New-AzNotificationHubsNamespace -ResourceGroup <resource_group_name> -Location <azure_region> -Namespace <namespace_name> -SkuTier "Standard"
+ ```
+
+1. Create a new notification hub. First, create a JSON file with the notification hub details. This file is used as an input to the create notification hub PowerShell command. Paste the following content into the JSON file:
+
+ ```json
+ {
+ "ResourceGroup": "resource_group_name",
+ "NamespaceName": "namespace_name",
+ "Location": "azure_region",
+ "Name": "notification_hub_name"
+ }
+ ```
+
+1. Run the following PowerShell command:
+
+ ```powershell
+ New-AzNotificationHub -ResourceGroup <resource_group_name> -Namespace <namespace_name> -InputFile <path_to_json_file>
+ ```
+
+1. Create a virtual network with a subnet:
+
+ ```powershell
+ New-AzVirtualNetwork -ResourceGroup <resource_group_name> -Location <azure_region> -Name <your_VNet_name> -AddressPrefix <address_prefix>
+ Add-AzVirtualNetworkSubnetConfig -VirtualNetwork (Get-AzVirtualNetwork -Name <your_VNet_name> -ResourceGroup <resource_group_name>) -Name <subnet_name> -AddressPrefix <address_prefix>
+ ```
+
+1. Disable virtual network policies:
+
+ ```powershell
+ $net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'RG'
+ }
+ $vnet = Get-AzVirtualNetwork @net
+
+ $sub = @{
+ Name = <subnet_name>
+ VirtualNetwork = $vnet
+ PrivateEndpointNetworkPoliciesFlag = 'Disabled'
+ }
+ Set-AzVirtualNetworkSubnetConfig @sub
+ ```
+
+1. Add private DNS zones and link them to the virtual network:
+
+ ```powershell
+ New-AzPrivateDnsZone -ResourceGroup <resource_group_name> -Name privatelink.servicebus.windows.net
+ New-AzPrivateDnsZone -ResourceGroup <resource_group_name> -Name privatelink.notificationhub.windows.net
+
+ New-AzPrivateDnsVirtualNetworkLink -ResourceGroup <resource_group_name> -Name <dns_Zone_Link_Name> -ZoneName "privatelink.servicebus.windows.net" -VirtualNetworkId "/subscriptions/<azure_subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Network/virtualNetworks/<vNet_name>"
+
+ New-AzPrivateDnsVirtualNetworkLink -ResourceGroup <resource_group_name> -Name <dns_Zone_Link_Name> -ZoneName "privatelink.notificationhub.windows.net" -VirtualNetworkId "/subscriptions/<azure_subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Network/virtualNetworks/<vNet_name>"
+ ```
+
+1. Create a private endpoint:
+
+ ```powershell
+    $plsConnection = New-AzPrivateLinkServiceConnection -Name <private_link_connection_name> -PrivateLinkServiceId '/subscriptions/<azure_subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.NotificationHubs/namespaces/<namespace_name>'
+
+ New-AzPrivateEndpoint -ResourceGroup <resource_group_name> -Location <azure_region> -Name <private_endpoint_name> -Subnet (Get-AzVirtualNetworkSubnetConfig -Name <subnet_name> -VirtualNetwork (Get-AzVirtualNetwork -Name <vNet_name> -ResourceGroup <resource_group_name>)) -PrivateLinkServiceConnection $plsConnection
+ ```
+
+1. Show the connection status:
+
+ ```powershell
+ Get-AzPrivateEndpointConnection -ResourceGroup <resource_group_name> -Name <private_endpoint_name>
+ ```
+
+## Create a private endpoint using CLI
+
+1. Sign in to Azure CLI and set a subscription:
+
+ ```azurecli
+ az login
+ az account set --subscription <azure_subscription_id>
+ ```
+
+1. Create a new resource group:
+
+ ```azurecli
+ az group create -n <resource_group_name> -l <azure_region>
+ ```
+
+1. Register **Microsoft.NotificationHubs** as a provider:
+
+ ```azurecli
+ az provider register -n Microsoft.NotificationHubs
+ ```
+
+1. Create a new Notification Hubs namespace and hub:
+
+ ```azurecli
+ az notification-hub namespace create
+ --name <namespace_name>
+ --resource-group <resource_group_name>
+ --location <azure_region>
+ --sku "Standard"
+
+ az notification-hub create
+ --name <notification_hub_name>
+ --namespace-name <namespace_name>
+ --resource-group <resource_group_name>
+ --location <azure_region>
+ ```
+
+1. Create a virtual network with a subnet:
+
+ ```azurecli
+ az network vnet create
+ --resource-group <resource_group_name>
+    --name <vNet_name>
+ --location <azure_region>
+
+ az network vnet subnet create
+ --resource-group <resource_group_name>
+ --vnet-name <vNet_name>
+ --name <subnet_name>
+ --address-prefixes <address_prefix>
+ ```
+
+1. Disable virtual network policies:
+
+ ```azurecli
+ az network vnet subnet update
+ --name <subnet_name>
+ --resource-group <resource_group_name>
+ --vnet-name <vNet_name>
+ --disable-private-endpoint-network-policies true
+ ```
+
+1. Add private DNS zones and link them to a virtual network:
+
+ ```azurecli
+ az network private-dns zone create
+ --resource-group <resource_group_name>
+ --name privatelink.servicebus.windows.net
+
+ az network private-dns zone create
+ --resource-group <resource_group_name>
+    --name privatelink.notificationhub.windows.net
+
+ az network private-dns link vnet create
+ --resource-group <resource_group_name>
+ --virtual-network <vNet_name>
+ --zone-name privatelink.servicebus.windows.net
+ --name <dns_zone_link_name>
+ --registration-enabled true
+
+ az network private-dns link vnet create
+ --resource-group <resource_group_name>
+ --virtual-network <vNet_name>
+ --zone-name privatelink.notificationhub.windows.net
+ --name <dns_zone_link_name>
+ --registration-enabled true
+ ```
+
+1. Create a private endpoint (automatically approved):
+
+ ```azurecli
+ az network private-endpoint create
+ --resource-group <resource_group_name>
+ --vnet-name <vNet_name>
+ --subnet <subnet_name>
+ --name <private_endpoint_name>
+ --private-connection-resource-id "/subscriptions/<azure_subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.NotificationHubs/namespaces/<namespace_name>"
+ --group-ids namespace
+ --connection-name <private_link_connection_name>
+ --location <azure-region>
+ ```
+
+1. Create a private endpoint (with manual request approval):
+
+ ```azurecli
+ az network private-endpoint create
+ --resource-group <resource_group_name>
+ --vnet-name <vnet_name>
+ --subnet <subnet_name>
+ --name <private_endpoint_name>
+ --private-connection-resource-id "/subscriptions/<azure_subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.NotificationHubs/namespaces/<namespace_name>"
+ --group-ids namespace
+ --connection-name <private_link_connection_name>
+ --location <azure-region>
+ --manual-request
+ ```
+
+1. Show the connection status:
+
+ ```azurecli
+ az network private-endpoint show --resource-group <resource_group_name> --name <private_endpoint_name>
+ ```
+
+## Manage private endpoints using the portal
+
+When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory, you can approve the connection request, provided you have sufficient permissions. If you're connecting to an Azure resource in another directory, you must wait for the owner of that resource to approve your connection request.
+
+There are four provisioning states:
+
+| Service action | Service consumer private endpoint state | Description |
+|--|--||
+| None | Pending | Connection is created manually and is pending approval from the private link resource owner. |
+| Approve | Approved | Connection was automatically or manually approved and is ready to be used. |
+| Reject | Rejected | Connection was rejected by the private link resource owner. |
+| Remove | Disconnected | Connection was removed by the private link resource owner. The private endpoint becomes informative and should be deleted for cleanup. |
+
+### Approve, reject, or remove a private endpoint connection
+
+1. Sign in to the Azure portal.
+1. In the search bar, type **Notification Hubs**.
+1. Select the namespace that you want to manage.
+1. Select the **Networking** tab.
+1. Go to the appropriate section based on the operation you want to approve, reject, or remove.
+
+### Approve a private endpoint connection
+
+1. If there are any connections that are pending, a connection is displayed with **Pending** in the provisioning state.
+1. Select the private endpoint you want to approve.
+1. Select **Approve**.
+
+ :::image type="content" source="media/private-link/networking-approve.png" alt-text="Screenshot showing Networking tab ready for approval." lightbox="media/private-link/networking-approve.png":::
+
+1. On the **Approve connection** page, enter an optional comment, then select **Yes**. If you select **No**, nothing happens.
+
+ :::image type="content" source="media/private-link/approve-connection.png" alt-text="Screenshot showing approve connection page." lightbox="media/private-link/approve-connection.png":::
+
+1. You should see the status of the connection in the list change to **Approved**.
+
+### Reject a private endpoint connection
+
+1. If there are any private endpoint connections you want to reject, whether a pending request or an existing connection that was approved earlier, select the private endpoint connection and then select **Reject**.
+
+ :::image type="content" source="media/private-link/reject-connection.png" alt-text="Screenshot showing reject connection option." lightbox="media/private-link/reject-connection.png":::
+
+1. On the **Reject connection** page, enter an optional comment, then select **Yes**. If you select **No**, nothing happens.
+1. You should see the status of the connection in the list change to **Rejected**.
+
+### Remove a private endpoint connection
+
+1. To remove a private endpoint connection, select it in the list, and select **Remove** on the toolbar:
+
+ :::image type="content" source="media/private-link/remove-connection.png" alt-text="Screenshot showing remove connection page." lightbox="media/private-link/remove-connection.png":::
+
+1. On the **Delete connection** page, select **Yes** to confirm the deletion of the private endpoint. If you select **No**, nothing happens.
+1. You should see the status of the connection in the list change to **Disconnected**. The endpoint then disappears from the list.
+
+### Validate that the private link connection works
+
+You should validate that resources within the virtual network of the private endpoint are connecting to your Notification Hubs namespace over a private IP address, and that they have the correct private DNS zone integration.
+
+First, create a virtual machine by following the steps in [Create a Windows virtual machine in the Azure portal](/azure/virtual-machines/windows/quick-create-portal).
+
+In the **Networking** tab:
+
+1. Specify the **Virtual network** and **Subnet**. You must select the Virtual Network on which you deployed the private endpoint.
+1. Specify a **public IP** resource.
+1. For **NIC network security group**, select **None**.
+1. For **Load balancing**, select **No**.
+
+Connect to the VM, open a command line, and run the following command:
+
+```powershell
+Resolve-DnsName <namespace_name>.privatelink.servicebus.windows.net
+```
+
+When the command is executed from the VM, it returns the IP address of the private endpoint connection. When it's executed from an external network, it returns the public IP address of one of the Notification Hubs clusters.
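If you script this validation step, a small helper can classify the resolved address. This is an illustrative sketch (the `is_private_ip` helper is hypothetical and only covers the RFC 1918 ranges), not part of the product:

```shell
# Return success (0) if the address falls in an RFC 1918 private range.
is_private_ip() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
    *) return 1 ;;
  esac
}

# Classify sample addresses; replace these with the output of your DNS
# lookup against <namespace_name>.privatelink.servicebus.windows.net.
for ip in 10.1.0.5 52.168.1.1; do
  if is_private_ip "$ip"; then
    echo "$ip resolved via the private endpoint path"
  else
    echo "$ip resolved via the public network path"
  fi
done
```

A private address confirms that traffic from the VM is taking the private endpoint path rather than going out over the public internet.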
+
+## Limitations and design considerations
+
+- **Region availability**: This feature is available in all Azure public regions.
+- **Maximum number of private endpoints per Notification Hubs namespace**: 200
+
+For more information, see [Azure Private Link service: Limitations](/azure/private-link/private-link-service-overview#limitations).
+
+## Next steps
+
+- [Azure Notification Hubs overview](notification-hubs-push-notification-overview.md)
openshift Howto Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-tag-resources.md
You can remediate previously assigned tags and add new tags using an Azure Polic
1. Trigger the remediation task:
   ```
- az policy assignment update -n $POLICY_ASSIGNMENT
- --params param-values.json
+ az policy remediation create --resource-group $MANAGED_RESOURCE_GROUP --name myRemediation --policy-assignment $POLICY_ASSIGNMENT
+   ```
1. Allow the remediation task time to run and observe the tags being updated on the managed resource group and its resources.
operator-nexus How To Customize Kubernetes Cluster Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/how-to-customize-kubernetes-cluster-dns.md
+
+ Title: Customize DNS for an Azure Operator Nexus Kubernetes cluster
+description: Learn how to customize DNS.
+Last updated: 10/9/2023
+# Customize DNS on a Nexus Kubernetes cluster
+
+Nexus Kubernetes clusters use a combination of CoreDNS and [node-local-dns](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/) for cluster DNS management and resolution, with node-local-dns taking precedence for name resolution outside the cluster.
+
+Azure Operator Nexus is a managed service, so you can't modify the main configuration for CoreDNS or node-local-dns. Instead, you use a Kubernetes *ConfigMap* to override the default settings. To see the default CoreDNS and node-local-dns ConfigMaps, use the `kubectl get configmaps --namespace=kube-system coredns -o yaml` or `kubectl get configmaps --namespace=kube-system node-local-dns -o yaml` command.
+
+This article shows you how to use ConfigMaps for basic DNS customization options in your Nexus Kubernetes cluster.
+
+## Prerequisites
+
+Before proceeding with this how-to guide, it's recommended that you:
+
+ * Refer to the Nexus Kubernetes cluster [QuickStart guide][qs] for a
+ comprehensive overview and steps involved.
+ * Ensure that you meet the outlined prerequisites to ensure smooth
+ implementation of the guide.
+
+[qs]: ./quickstarts-kubernetes-cluster-deployment-bicep.md
+## The ConfigMap data format
+
+Both CoreDNS and node-local-dns use a Kubernetes `ConfigMap` to store configuration options. To see the default CoreDNS and node-local-dns `ConfigMap`s, use `kubectl`:
+
+```console
+kubectl get configmaps --namespace=kube-system coredns -o yaml
+kubectl get configmaps --namespace=kube-system node-local-dns -o yaml
+```
+
+When you create configurations like the examples below, the names in the `data` field *must* end in `.server` or `.override`.
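For instance, a `data` key named `example.server` satisfies this rule, while a key named `example.conf` wouldn't be valid. A minimal sketch of writing such a manifest and checking the key suffix locally (the domain `example.internal` and forwarder address `10.0.0.53` are hypothetical placeholders):

```shell
# Sketch: write a minimal coredns-custom ConfigMap manifest whose data key
# ends in .server, as required. Domain and forwarder are hypothetical.
cat > coredns-custom.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  example.server: |
    example.internal:53 {
      forward . 10.0.0.53
      bind 169.254.20.10
    }
EOF
# Confirm the data key uses an accepted suffix (.server or .override)
grep -E '^  [A-Za-z0-9.-]+\.(server|override): \|' coredns-custom.yaml
```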
+
+<!-- ## Plugin support
+
+All built-in CoreDNS plugins are supported. No add-on/third party plugins are supported. -->
+
+<!-- ## Rewrite DNS
+
+You can customize CoreDNS with AKS to perform on-the-fly DNS name rewrites.
+
+1. Create a file named `corednsms.yaml` and paste the following example configuration. Make sure to replace `<domain to be rewritten>` with your own fully qualified domain name.
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: coredns-custom
+ namespace: kube-system
+ data:
+ test.server: |
+ <domain to be rewritten>.com:53 {
+ log
+ errors
+ rewrite stop {
+ name regex (.*)\.<domain to be rewritten>.com {1}.default.svc.cluster.local
+ answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com
+ }
+ forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name
+ }
+ ```
+
+ > [!IMPORTANT]
+ > If you redirect to a DNS server, such as the CoreDNS service IP, that DNS server must be able to resolve the rewritten domain name.
+
+2. Create the ConfigMap using the [`kubectl apply configmap`][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl apply -f corednsms.yaml
+ ```
+
+3. Verify the customizations have been applied using the [`kubectl get configmaps`][kubectl-get] and specify your *coredns-custom* ConfigMap.
+
+ ```console
+ kubectl get configmaps --namespace=kube-system coredns-custom -o yaml
+ ```
+
+4. To reload the ConfigMap and enable Kubernetes Scheduler to restart CoreDNS without downtime, perform a rolling restart using [`kubectl rollout restart`][kubectl-rollout].
+
+ ```console
+ kubectl -n kube-system rollout restart deployment coredns
+ ``` -->
+
+## Custom forward server
+
+If you need to specify a forward server for your network traffic, you can create a `ConfigMap` to customize DNS.
+
+1. Create a file named `customdns.yaml` and paste the following example configuration. Make sure to replace the `forward` name and the address with the values for your own environment. The `bind 169.254.20.10` line is required and should not be changed.
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: coredns-custom
+ namespace: kube-system
+ data:
+ test.server: | # you can select any name here, but it must end with the .server file extension
+ <domain to be rewritten>.com:53 {
+ forward foo.com 1.1.1.1
+ bind 169.254.20.10
+ }
+ ```
+
+2. Create the `ConfigMap`.
+
+ ```console
+ kubectl apply -f customdns.yaml
+ ```
+
+3. Restart CoreDNS without downtime by performing a `Daemonset` rollout.
+
+ ```console
+ kubectl rollout restart -n kube-system daemonset/node-local-dns
+ ```
+
+## Use custom domains
+
+You might want to configure custom domains that can only be resolved internally. For example, you might want to resolve the custom domain `puglife.local`, which isn't a valid top-level domain. Without a custom domain `ConfigMap`, the Nexus Kubernetes cluster can't resolve the address.
+
+1. Create a new file named `customdns.yaml` and paste the following example configuration. Make sure to update the custom domain and IP address with the values for your own environment. The `bind 169.254.20.10` line is required and should not be modified.
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: coredns-custom
+ namespace: kube-system
+ data:
+ puglife.server: | # you can select any name here, but it must end with the .server file extension
+ puglife.local:53 {
+ errors
+ cache 30
+ forward . 192.11.0.1 # this is my test/dev DNS server
+ bind 169.254.20.10
+ }
+ ```
+
+2. Create the `ConfigMap`.
+
+ ```console
+ kubectl apply -f customdns.yaml
+ ```
+
+3. Restart CoreDNS without downtime by performing a `Daemonset` rollout.
+
+ ```console
+ kubectl rollout restart -n kube-system daemonset/node-local-dns
+ ```
+
+## Stub domains
+
+CoreDNS can also be used to configure stub domains.
+
+1. Create a file named `customdns.yaml` and paste the following example configuration. Make sure to update the custom domains and IP addresses with the values for your own environment. The `bind 169.254.20.10` line is required and should not be modified.
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: coredns-custom
+ namespace: kube-system
+ data:
+ test.server: | # you can select any name here, but it must end with the .server file extension
+ abc.com:53 {
+ errors
+ cache 30
+ forward . 1.2.3.4
+ bind 169.254.20.10
+ }
+ my.cluster.local:53 {
+ errors
+ cache 30
+ forward . 2.3.4.5
+ bind 169.254.20.10
+ }
+
+ ```
+
+2. Create the `ConfigMap`.
+
+ ```console
+ kubectl apply -f customdns.yaml
+ ```
+
+3. Restart CoreDNS without downtime by performing a `Daemonset` rollout.
+
+ ```console
+ kubectl rollout restart -n kube-system daemonset/node-local-dns
+ ```
+
+## Hosts plugin
+
+The hosts plugin is available to customize as well.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: coredns-custom # this is the name of the configmap you can overwrite with your changes
+ namespace: kube-system
+data:
+ test.override: | # you can select any name here, but it must end with the .override file extension
+ hosts {
+ 10.0.0.1 example1.org
+ 10.0.0.2 example2.org
+ 10.0.0.3 example3.org
+ fallthrough
+ }
+```
+
+## Troubleshooting
+<!--
+For general CoreDNS troubleshooting steps, such as checking the endpoints or resolution, see [Debugging DNS resolution][coredns-troubleshooting]. -->
++
+### Enable DNS query logging
+
+1. Add the following configuration to your coredns-custom `ConfigMap`:
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: coredns-custom
+ namespace: kube-system
+ data:
+ log.override: | # you can select any name here, but it must end with the .override file extension
+ log
+ ```
+
+2. Apply the configuration changes, and restart CoreDNS without downtime by performing a `Daemonset` rollout:
+
+ ```console
+ # Apply configuration changes
+ kubectl apply -f customdns.yaml
+
+ # Force CoreDNS to reload the ConfigMap
+ kubectl rollout restart -n kube-system daemonset/node-local-dns
+ ```
+
+3. View the CoreDNS debug logging using the `kubectl logs` command.
+
+ ```console
+ kubectl logs --namespace kube-system -l k8s-app=node-local-dns
+ ```
+
+<!-- ## Next steps -->
++
+<!-- LINKS - external -->
+
+<!-- LINKS - internal -->
operator-nexus Howto Kubernetes Cluster Action Restart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-action-restart.md
+
+ Title: Restart Azure Operator Nexus Kubernetes cluster node
+description: Learn how to restart Azure Operator Nexus Kubernetes cluster node
+Last updated: 10/22/2023
+# Restart Azure Operator Nexus Kubernetes cluster node
+
+Occasionally, a Nexus Kubernetes node might become unreachable. This article explains how to restart the node via the `az networkcloud kubernetescluster restart-node` CLI command.
+
+Restarting a Nexus Kubernetes node can take up to 5 minutes to complete. However, if the Virtual Machine is in a bad state, the restart action eventually times out. Open an Azure support ticket for such instances.
+
+## Before you begin
+> [!NOTE]
+> The approach outlined in this article represents an aggressive method for recovering an unreachable cluster VM. Workloads that are running on the VM will be terminated; therefore, this restart action should be considered a last resort.
+> Before performing a restart on a VM, consider first cordoning and draining the node, then gracefully shutting the VM down and bringing it back up.
+
+Make sure you have the latest version of [necessary Azure CLI extensions](./howto-install-cli-extensions.md).
+
+## Restart cluster node
+
+### Get node name
+To restart the cluster VM, you need the node name, which you can obtain in either of two ways:
+- The az CLI command `az networkcloud kubernetescluster show --name "kubernetesClusterName" --resource-group "resourceGroupName" --subscription "subscriptionName"` lists the details of the node.
+- Alternatively, `kubectl get node` lists the nodes.
+
+### Run the CLI command to restart the Nexus Kubernetes cluster node
+
+To restart a cluster node, run the command as follows:
+
+``` azurecli
+az networkcloud kubernetescluster restart-node --node-name "nodeName" --kubernetes-cluster-name "kubernetesClusterName" --resource-group "resourceGroupName" --subscription "subscriptionName"
+```
+To use this command, you need to understand the various options for specifying the node, Nexus Kubernetes cluster, and resource group. Here are the available options:
+
+- `--node-name` - is a required argument that specifies the name of the node that you want to restart within the Nexus Kubernetes cluster. You must provide the exact name of the node that you want to restart.
+- `--kubernetes-cluster-name` - is a required argument that specifies the name of the Nexus Kubernetes cluster that the node is a part of. You must provide the exact name of the cluster.
+- `--resource-group` - is a required argument that specifies the name of the resource group that the Nexus Kubernetes cluster is located in. You must provide the exact name of the resource group.
+- `--subscription` - is an optional argument that specifies the subscription that the resource group is located in. If you have multiple subscriptions, you have to specify which one to use.
++
+Sample output is as follows:
+
+```json
+{
+ "endTime": "2023-10-20T19:28:31.972299Z",
+ "id": "/subscriptions/000000000-0000-0000-0000-000000000000/providers/Microsoft.NetworkCloud/locations/<location>/operationStatuses/000000000-0000-0000-0000-000000000000",
+ "name":"7f835f51-9b85-4607-9be1-41f09c11bc24*B684BCD26460AF4CD9525D5F4FFABA73B623C6A465E9C1E26D7B12EDB3D3EA78",
+ "resourceId": "/subscriptions/000000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.NetworkCloud/kubernetesClusters/myNexusK8sCluster",
+ "startTime": "2023-10-20T19:27:52.561479Z",
+ "status": "succeeded"
+}
+```
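The `status` field indicates whether the restart operation completed. A sketch of extracting it from a saved copy of the response (the file name `restart-output.json` is an assumption):

```shell
# Sketch: check the restart operation outcome from a saved response.
# Assumes the JSON output above was saved to restart-output.json.
cat > restart-output.json <<'EOF'
{
  "startTime": "2023-10-20T19:27:52.561479Z",
  "endTime": "2023-10-20T19:28:31.972299Z",
  "status": "succeeded"
}
EOF
grep -o '"status": "[^"]*"' restart-output.json
```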
+
++
operator-nexus Howto Platform Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-platform-prerequisites.md
Terminal Server has been deployed and configured as follows:
- Syslog Primary: 172.27.255.210
- Syslog Secondary: 172.27.255.211
- SMTP Gateway IP address or FQDN: not set by operator during setup
- - Email Sender Domain Name: not set by operator during setup
- - Email Address(es) to be alerted: not set by operator during setup
+ - Email Sender Domain Name: domain name of the email sender (for example, example.com)
+ - Email Address(es) to be alerted: list of email addresses that receive alerts (for example, someone@example.com)
- Proxy Server and Port: not set by operator during setup
- Management: Virtual Interface
  - IP Address: 172.27.255.200
operator-nexus Howto Use Mde Runtime Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-mde-runtime-protection.md
export MANAGED_RESOURCE_GROUP="contoso-cluster-managed-rg"
export CLUSTER_NAME="contoso-cluster"
```
-## Configuring enforcement level
-The `az networkcloud cluster update` allows you to update of the settings for Cluster runtime protection *enforcement level* by using the argument `--runtime-protection-configuration enforcement-level="<enforcement level>"`.
-
-The following command configures the `enforcement level` for your Cluster.
+## Enabling & disabling MDE service on all nodes
+To use the MDE runtime protection service on the Cluster, you first need to make the Cluster aware of it; the Cluster isn't aware of this functionality by default.
+To do so, execute the following command with `enforcement-level="Disabled"`.
```bash
az networkcloud cluster update \
--subscription ${SUBSCRIPTION_ID} \
--resource-group ${RESOURCE_GROUP} \
--cluster-name ${CLUSTER_NAME} \
+--runtime-protection-configuration enforcement-level="Disabled"
```
-Allowed values for `<enforcement level>`: `Audit`, `Disabled`, `OnDemand`, `Passive`, `RealTime`.
+Upon execution, inspect the output for the following:
-## Enabling & Disabling MDE Service on All Nodes
-By default the MDE service isn't active. You need to enable it before you can trigger an MDE scan.
-To enable the MDE service, execute the following command.
+```json
+ "runtimeProtectionConfiguration": {
+ "enforcementLevel": "Disabled"
+ }
+```
+
+Running this command makes the Cluster aware of the MDE runtime protection service. To use the service and benefit from its features, set the `enforcement-level` to a value other than `Disabled`, as described in the next section.
+
+> [!NOTE]
+>The argument `--runtime-protection-configuration enforcement-level="<enforcement level>"` serves two purposes: enabling/disabling the MDE service and updating the enforcement level.
+
+If you want to disable the MDE service across your Cluster, use an `<enforcement level>` of `Disabled`.
+
+## Configuring enforcement level
+The `az networkcloud cluster update` command allows you to update the settings for the Cluster runtime protection *enforcement level* by using the argument `--runtime-protection-configuration enforcement-level="<enforcement level>"`.
+
+The following command configures the `enforcement level` for your Cluster.
```bash
az networkcloud cluster update \
--subscription ${SUBSCRIPTION_ID} \
--resource-group ${RESOURCE_GROUP} \
--cluster-name ${CLUSTER_NAME} \
--runtime-protection-configuration enforcement-level="<enforcement level>"
```
-where `<enforcement level>` value must be a value other than `Disabled`.
+Allowed values for `<enforcement level>`: `Audit`, `Disabled`, `OnDemand`, `Passive`, `RealTime`.
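Because values outside this set are rejected, it can help to sanity-check the chosen level locally before calling the CLI; a sketch:

```shell
# Sketch: validate an enforcement level against the allowed set before
# calling the CLI. RealTime is used here as an example value.
level="RealTime"
case "$level" in
  Audit|Disabled|OnDemand|Passive|RealTime) echo "valid enforcement level: $level" ;;
  *) echo "invalid enforcement level: $level" ;;
esac
```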
-> [!NOTE]
->As you have noted, the argument `--runtime-protection-configuration enforcement-level="<enforcement level>"` serves two purposes: enabling/disabling MDE service and updating the enforcement level.
+Upon execution, inspect the output for the following:
-If you want to disable the MDE service across your Cluster, use an `<enforcement level>` of `Disabled`.
+```json
+ "runtimeProtectionConfiguration": {
+ "enforcementLevel": "<enforcement level>"
+ }
+```
## Triggering MDE scan on all nodes

Once you have set an enforcement level for your Cluster, you can trigger an MDE scan with the following command:
operator-nexus Reference Nexus Kubernetes Cluster Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| 1.26 | Sep 2023 | Mar 2024 | Until 1.32 GA |
| 1.27* | Sep 2023 | Jul 2024, LTS until Jul 2025 | Until 1.33 GA |
| 1.28 | Nov 2023 | Oct 2024 | Until 1.34 GA |
+| 1.29 | Feb 2024 | | Until 1.35 GA |
*\* Indicates the version is designated for Long Term Support*
Note the following important changes to make before you upgrade to any of the av
| Kubernetes Version | Version Bundle | Components | OS components | Breaking Changes | Notes |
|--|--|--|--|--|--|
-| 1.25.4 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-05-04) | No breaking changes | |
-| 1.25.4 | 2 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-06-18) | No breaking changes | |
-| 1.25.4 | 3 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-06-18) | No breaking changes | |
-| 1.25.4 | 4 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-09-21) | No breaking changes | |
-| 1.25.6 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-09-21) | No breaking changes | |
-| 1.26.3 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-09-21) | No breaking changes | |
-| 1.27.1 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-09-21) | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
+| 1.25.4 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.4 | 2 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.4 | 3 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.4 | 4 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.4 | 5 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.6 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.6 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5<br>sriov-dp v3.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.3 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.3 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5<br>sriov-dp v3.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.27.1 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
+| 1.27.1 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.6.0 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
## Upgrading Kubernetes versions
partner-solutions Dynatrace Free Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-free-trial.md
Title: 'Dynatrace: start a free trial'
description: This article describes how to use the Azure portal to try Dynatrace for free.
Previously updated: 10/30/2023
Last updated: 11/07/2023

# QuickStart: Start a free trial
-A 30-day free trial of Azure Native Dynatrace Service is available on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview). You can sign up using the trial plan published by Dynatrace. During the trial period, you can create a Dynatrace resource on Azure and use integrated services such as log forwarding, agent based monitoring and unified Azure billing. Before the free trial expires, you can seamlessly upgrade to a paid public plan or a private offer customized for your organization.
+A 30-day free trial of Azure Native Dynatrace Service is available on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview). You can sign up using the trial plan published by Dynatrace. During the trial period, you can create a Dynatrace resource on Azure and use integrated services such as log forwarding, metrics integration, and agent based monitoring. Before the free trial expires, you can seamlessly upgrade to a paid public plan or a private offer customized for your organization.
## Subscribe to a free trial

You can access the trial plan by finding Azure Native Dynatrace Service on Azure portal or in the Azure Marketplace. Refer to the guide to [create a new resource](dynatrace-create.md#find-offer) and choose the free trial public plan while subscribing.

## Free trial upgrade and expiry

Azure Native Dynatrace Service gives an option to upgrade to a paid plan through the portal experience. Select **Upgrade to paid** to choose one of the paid plans published by Dynatrace or contact [sales@dynatrace.com](mailto:sales@dynatrace.com) for a custom offer for your organization.
playwright-testing How To Try Playwright Testing Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-try-playwright-testing-free.md
The following table lists the limits for the Microsoft Playwright Testing free t
|-|-|
| Duration of trial | 30 days |
| Total test minutes¹ | 100 minutes |
-| Number of workspaces¹²³ | 1 |
+| Number of workspaces²³ | 1 |
¹ If you run a test that exceeds the free trial test minute limit, only the overage test minutes count toward the pay-as-you-go billing model.
The following table lists the limits for the Microsoft Playwright Testing free t
³ If you delete the free trial workspace, you can't create a new free trial workspace anymore.
-If you exceed any of these limits, the workspace is automatically converted to the pay-as-you-go billing model. Learn more about the [Microsoft Playwright Testing pricing](https://aka.ms/mpt/pricing).
+> [!CAUTION]
+> If you exceed any of these limits, the workspace is automatically converted to the pay-as-you-go billing model. Learn more about the [Microsoft Playwright Testing pricing](https://aka.ms/mpt/pricing).
## Create a workspace
To create a workspace in the Playwright portal:
|||
|**Workspace name** | Enter a unique name to identify your workspace.<BR>The name can only consist of alphanumerical characters, and have a length between 3 and 64 characters. |
|**Azure subscription** | Select the Azure subscription that you want to use for this Microsoft Playwright Testing workspace. |
- |**Region** | Select a geographic location to host your workspace. <BR>This is the location where the test run data is stored for the workspace. |
+ |**Region** | Select a geographic location to host your workspace. <BR>This location is where the test run data is stored for the workspace. |
1. Select **Create workspace**.
playwright-testing Quickstart Automate End To End Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-automate-end-to-end-testing.md
Update the CI workflow definition to run your Playwright tests with the Playwrig
When the CI workflow is triggered, your Playwright tests will run in your Microsoft Playwright Testing workspace on cloud-hosted browsers, across 20 parallel workers.
+> [!CAUTION]
+> With Microsoft Playwright Testing, you get charged based on the number of total test minutes. If you're a first-time user or [getting started with a free trial](./how-to-try-playwright-testing-free.md), you might start with running a single test at scale instead of your full test suite to avoid exhausting your free test minutes.
+>
+> After you validate that the test runs successfully, you can gradually increase the test load by running more tests with the service.
+>
+> You can run a single test with the service by using the following command-line:
+>
+> ```npx playwright test {name-of-file.spec.ts} --config=playwright.service.config.ts```
## Related content

You've successfully set up a continuous end-to-end testing workflow to run your Playwright tests at scale on cloud-hosted browsers.
playwright-testing Quickstart Run End To End Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-run-end-to-end-tests.md
We recommend that you use the `dotenv` module to manage your environment. With `
> [!CAUTION]
> Make sure that you don't add the `.env` file to your source code repository to avoid leaking your access token value.
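One way to enforce this, assuming your test project lives in a git repository, is to add `.env` to `.gitignore`; an idempotent sketch:

```shell
# Sketch: make sure .env is ignored by git so the access token is never
# committed. Assumes you run this from the root of your project repository.
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
cat .gitignore
```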
-## Add Microsoft Playwright Testing configuration
+## Add a service configuration file
To run your Playwright tests in your Microsoft Playwright Testing workspace, you need to add a service configuration file alongside your Playwright configuration file. The service configuration file references the environment variables to get the workspace endpoint and your access token.
To add the service configuration to your project:
// },
workers: 20,
- // Enable screenshot testing and configure directory with expectations.
+ // Enable screenshot testing and configure directory with expectations.
// https://learn.microsoft.com/azure/playwright-testing/how-to-configure-visual-comparisons
ignoreSnapshots: false,
snapshotPathTemplate: `{testDir}/__screenshots__/{testFilePath}/${os}/{arg}{ext}`,
To add the service configuration to your project:
You've now prepared the configuration for running your Playwright tests in the cloud with Microsoft Playwright Testing. You can either use the Playwright CLI to run your tests, or use the [Playwright Test Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-playwright.playwright).
-Perform the following steps to run your Playwright tests with Microsoft Playwright Testing.
+### Run a single test at scale
+
+With Microsoft Playwright Testing, you get charged based on the number of total test minutes. If you're a first-time user or [getting started with a free trial](./how-to-try-playwright-testing-free.md), you might start with running a single test at scale instead of your full test suite to avoid exhausting your free test minutes.
+
+After you validate that the test runs successfully, you can gradually increase the test load by running more tests with the service.
+
+Perform the following steps to run a single Playwright test with Microsoft Playwright Testing:
# [Playwright CLI](#tab/playwrightcli)
-When you use the Playwright CLI to run your tests, specify the service configuration file in the command-line to connect to use remote browsers.
+To use the Playwright CLI to run your tests with Microsoft Playwright Testing, pass the service configuration file as a command-line parameter.
-Open a terminal window and enter the following command to run your Playwright tests on remote browsers in your workspace:
+1. Open a terminal window.
-```bash
-npx playwright test --config=playwright.service.config.ts --workers=20
-```
-
-Depending on the size of your test suite, this command runs your tests on up to 20 parallel workers.
+1. Enter the following command to run your Playwright test on remote browsers in your workspace:
-You should see a similar output when the tests complete:
+ Replace the `{name-of-file.spec.ts}` text placeholder with the name of your test specification file.
-```output
-Running 6 tests using 6 workers
- 6 passed (18.2s)
+ ```bash
+ npx playwright test {name-of-file.spec.ts} --config=playwright.service.config.ts
+ ```
-To open last HTML report run:
+ After the test completes, you can view the test status in the terminal.
+ ```output
+ Running 1 test using 1 worker
+ 1 passed (2.2s)
+
+ To open last HTML report run:
+
npx playwright show-report
-```
+ ```
# [Visual Studio Code](#tab/vscode)
-To run your Playwrights tests in Visual Studio Code with Microsoft Playwright Testing:
+To run a single Playwright test in Visual Studio Code with Microsoft Playwright Testing, select the service configuration file in the **Test Explorer** view. Then select and run the test from the list of tests.
1. Install the [Playwright Test Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-playwright.playwright).
1. Open the **Test Explorer** view in the activity bar.
- The test explorer automatically detects your Playwright tests and the service configuration.
+ The test explorer automatically detects your Playwright tests and the service configuration in your project.
:::image type="content" source="./media/quickstart-run-end-to-end-tests/visual-studio-code-test-explorer.png" alt-text="Screenshot that shows the Test Explorer view in Visual Studio Code, which lists the Playwright tests." lightbox="./media/quickstart-run-end-to-end-tests/visual-studio-code-test-explorer.png":::
-1. Select a service profile to run your tests with Microsoft Playwright Testing.
+1. Select **Select Default Profile**, and then select your default projects from the service configuration file.
Notice that the service run profiles are coming from the `playwright.service.config.ts` file you added previously.
- Optionally, select **Select Default Profile**, and then select your default projects. By setting a default profile, you can automatically run your services with the service, or run multiple Playwright projects simultaneously.
+ By setting a default profile, you can automatically run your tests with the service, or run multiple Playwright projects simultaneously.
:::image type="content" source="./media/quickstart-run-end-to-end-tests/visual-studio-code-choose-run-profile.png" alt-text="Screenshot that shows the menu to choose a run profile for your tests, highlighting the projects from the service configuration file." lightbox="./media/quickstart-run-end-to-end-tests/visual-studio-code-choose-run-profile.png":::
+1. From the list of tests, select the **Run test** button next to a test to run it.
+
+ The test runs on the projects you selected in the default profile. If you selected one or more projects from the service configuration, the test runs on remote browsers in your workspace.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/visual-studio-code-run-test.png" alt-text="Screenshot that shows how to run a single test in Visual Studio Code." lightbox="./media/quickstart-run-end-to-end-tests/visual-studio-code-run-test.png":::
+ > [!TIP]
- > You can still debug your test code when you run your tests on remote browsers.
+ > You can still debug your test code when you run your tests on remote browsers by using the **Debug test** button.
1. You can view the test results directly in Visual Studio Code.
To run your Playwright tests in Visual Studio Code with Microsoft Playwright Testing:
+You can now run multiple tests with the service, or run your entire test suite on remote browsers.
+
+> [!CAUTION]
+> Depending on the size of your test suite, you might incur additional charges for the test minutes beyond your allotted free test minutes.
+
+### Run a full test suite at scale
+
+Now that you've validated that you can run a single test with Microsoft Playwright Testing, you can run a full Playwright test suite at scale.
+
+Perform the following steps to run a full Playwright test suite with Microsoft Playwright Testing:
+
+# [Playwright CLI](#tab/playwrightcli)
+
+When you run multiple Playwright tests or a full test suite with Microsoft Playwright Testing, you can optionally specify the number of parallel workers as a command-line parameter.
+
+1. Open a terminal window.
+
+1. Enter the following command to run your Playwright test suite on remote browsers in your workspace:
+
+ ```bash
+ npx playwright test --config=playwright.service.config.ts --workers=20
+ ```
+
+ Depending on the size of your test suite, this command runs your tests on up to 20 parallel workers.
+
+ After the test completes, you can view the test status in the terminal.
+
+ ```output
+ Running 6 tests using 6 workers
+ 6 passed (18.2s)
+
+ To open last HTML report run:
+
+ npx playwright show-report
+ ```
+
+# [Visual Studio Code](#tab/vscode)
+
+To run your Playwright test suite in Visual Studio Code with Microsoft Playwright Testing:
+
+1. Open the **Test Explorer** view in the activity bar.
+
+1. Select the **Run tests** button to run all tests with Microsoft Playwright Testing.
+
+ When you run all tests, the default profile is used. In the previous step, you configured the default profile to use projects from the service configuration.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/visual-studio-code-run-all-tests.png" alt-text="Screenshot that shows how to run all tests in Visual Studio Code." lightbox="./media/quickstart-run-end-to-end-tests/visual-studio-code-run-all-tests.png":::
+
+ > [!TIP]
+ > You can still debug your test code when you run your tests on remote browsers by using the **Debug tests** button.
+
+1. Alternately, you can select a specific service configuration from the list to only run the tests for a specific browser configuration.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/visual-studio-code-run-all-tests-select-project.png" alt-text="Screenshot that shows how to run all tests for a specific browser configuration, by selecting the project in Visual Studio Code." lightbox="./media/quickstart-run-end-to-end-tests/visual-studio-code-run-all-tests-select-project.png":::
+
+1. You can view all test results in the **Test results** tab.
+++
+## View test runs in the Playwright portal
+ Go to the [Playwright portal](https://aka.ms/mpt/portal) to view the test run metadata and activity log for your workspace. :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-activity-log.png" alt-text="Screenshot that shows the activity log for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-activity-log.png":::
The activity log lists for each test run the following details: the total test c
## Optimize parallel worker configuration
-Once your tests are running smoothly with the service, experiment with varying the number of parallel workers to determine the optimal configuration that minimizes test completion time. With Microsoft Playwright Testing, you can run with up to 50 parallel workers. Several factors influence the best configuration for your project, such as the CPU, memory, and network resources of your client machine, the target application's load-handling capacity, and the type of actions carried out in your tests.
+Once your tests are running smoothly with the service, experiment with varying the number of parallel workers to determine the optimal configuration that minimizes test completion time.
+
+With Microsoft Playwright Testing, you can run with up to 50 parallel workers. Several factors influence the best configuration for your project, such as the CPU, memory, and network resources of your client machine, the target application's load-handling capacity, and the type of actions carried out in your tests.
+
+You can specify the number of parallel workers on the Playwright CLI command line, or configure the `workers` property in the Playwright service configuration file.
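As a sketch, the same worker count could be set in `playwright.service.config.ts` instead of on the command line. The `workers` property is standard Playwright configuration; the service-specific options the file also carries (endpoint, credentials) are omitted here, and the local type is only a stand-in for Playwright's full config schema:

```typescript
// Minimal sketch of the worker setting in playwright.service.config.ts.
// Only the `workers` property is shown; a local type stands in for
// Playwright's full configuration schema.
type PartialPlaywrightConfig = { workers: number };

// Equivalent of passing --workers=20 on the CLI.
const serviceConfig: PartialPlaywrightConfig = { workers: 20 };
```

With this in place, running `npx playwright test --config=playwright.service.config.ts` without a `--workers` flag would use the configured value.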
Learn more about how to [determine the optimal configuration for optimizing test suite completion](./concept-determine-optimal-configuration.md).
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| East US 2 | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| France South | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Germany West Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Japan East | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Japan West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Jio India West | :heavy_check_mark: (v3 only) | :x: | :heavy_check_mark: | :x: |
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
Previously updated : 10/23/2023 Last updated : 11/06/2023 # Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
private-5g-core Azure Private 5G Core Release Notes 2310 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2310.md
+
+ Title: Azure Private 5G Core 2310 release notes
+description: Discover what's new in the Azure Private 5G Core 2310 release
++++ Last updated : 11/07/2023++
+# Azure Private 5G Core 2310 release notes
+
+The following release notes identify the new features, critical open issues, and resolved issues for the 2310 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, review the information contained in these release notes.
+
+This article applies to the AP5GC 2310 release (2310.0-X). This release is compatible with the Azure Stack Edge Pro 1 GPU and Azure Stack Edge Pro 2 running the ASE 2309 release and supports the 2023-09-01, 2023-06-01 and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
+
+For more information about compatibility, see [Packet core and Azure Stack Edge compatibility](azure-stack-edge-packet-core-compatibility.md).
++
+## Support lifetime
+
+Packet core versions are supported until two subsequent versions are released (unless otherwise noted). You should plan to upgrade your packet core in this time frame to avoid losing support.
+
+### Currently supported packet core versions
+The following table shows the support status for different Packet Core releases and when they are expected to no longer be supported.
+
+| Release | Support Status |
+||-|
+| AP5GC 2310 | Supported until AP5GC 2403 is released |
+| AP5GC 2308 | Supported until AP5GC 2401 is released |
+| AP5GC 2307 and earlier | Out of Support |
+
+## What's new
+
+### Optional N2/N3/S1/N6 gateway
+This feature makes the N2, N3 and N6 gateways optional during the network configuration of an ASE if the RAN and Packet Core are on the same subnet. This feature provides flexibility to use AP5GC without gateways if there's direct connectivity available with the RAN and/or DN.
+
+### Improved software download time
+This feature improves overall AP5GC software download time by reducing the size of underlying software packages. The overall size of the software image is reduced by around 40%.
+
+### Per-UE information in Azure portal and API
+This feature allows you to view UE-level information in the Azure portal, including a list of SIMs with high-level information and a detailed view for each SIM. This information is the current snapshot of the UE in the system and can be fetched on demand with a throttling period of 5 minutes. See [Manage existing SIMs for Azure Private 5G Core - Azure portal](manage-existing-sims.md).
+
+### Per gNB metrics in Azure portal
+This feature categorizes a few metrics based on the RAN identifier, for example UL/DL bandwidth. These metrics are exposed via Azure Monitor under the Packet Core Control Plane and Packet Core Data Plane resources. You can use these metrics to correlate RAN and packet core metrics and to troubleshoot issues.
+
+### Combined 4G/5G on a single packet core
+This feature allows a single packet core to support both 4G and 5G networks in a single Mobile Network site. You can deploy a RAN network with both 4G and 5G radios and connect it to a single packet core.
++
+## Issues fixed in the AP5GC 2310 release
+
+The following table provides a summary of issues fixed in this release.
+
+ |No. |Feature | Issue |
+ |--|--|--|
+ | 1 | Packet Forwarding | In scenarios of sustained high load (for example, continuous setup of 100s of TCP flows per second) in 4G setups, AP5GC might encounter an internal error, leading to a short period of service disruption resulting in some call failures. |
++
+## Known issues in the AP5GC 2310 release
+<!-- **TO BE UPDATED**
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
+ | 1 | | | |
+-->
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
 | 1 | Packet Forwarding | A slight (0.01%) increase in packet drops is observed in the latest AP5GC release installed on ASE Platform Pro 2 with ASE-2309 for throughput higher than 3.0 Gbps. | None |
 | 2 | Local distributed tracing | In Multi PDN session establishment/Release call flows with different DNs, the distributed tracing web GUI fails to display some of the 4G NAS messages (Activate/deactivate Default EPS Bearer Context Request) and some S1AP messages (ERAB request, ERAB Release). | None |
+ | 3 | Local distributed tracing | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory doesn't transmit via the web proxy. If there's a firewall blocking traffic that doesn't go via the web proxy then enabling Azure Active Directory causes the packet core install to fail. | Disable Azure Active Directory and use password based authentication to authenticate access to AP5GC Local Dashboards instead. |
+
+
+
+## Next steps
+
+- [Upgrade the packet core instance in a site - Azure portal](upgrade-packet-core-azure-portal.md)
+- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md)
private-5g-core Support Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/support-lifetime.md
Previously updated : 09/21/2023 Last updated : 11/07/2023 # Support lifetime
-Only the two most recent packet core versions are supported at any time (unless otherwise noted). Each packet core version is typically supported for two months from the date of its release. You should plan to upgrade your packet core in this time frame to avoid losing support.
+Only the two most recent packet core versions are supported at any time (unless otherwise noted). You should plan to upgrade your packet core in this time frame to avoid losing support.
### Currently Supported Packet Core Versions
-The following table shows the support status for different Packet Core releases.
+The following table shows the support status for different Packet Core releases and when they are expected to no longer be supported.
| Release | Support Status |
||-|
-| AP5GC 2308 | Supported until AP5GC 2311 released |
-| AP5GC 2307 | Supported until AP5GC 2310 released |
-| AP5GC 2306 and earlier | Out of Support |
+| AP5GC 2310 | Supported until AP5GC 2403 is released |
+| AP5GC 2308 | Supported until AP5GC 2401 is released |
+| AP5GC 2307 and earlier | Out of Support |
private-5g-core Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md
Previously updated : 09/21/2023 Last updated : 11/07/2023 # What's new in Azure Private 5G Core?
To help you stay up to date with the latest developments, this article covers:
This page is updated regularly with the latest developments in Azure Private 5G Core.
+## October 2023
+### Packet core 2310
+
+**Type:** New release
+
+**Date available:** October 7, 2023
+
+The 2310 release for the Azure Private 5G Core packet core is now available. For more information, see [Azure Private 5G Core 2310 release notes](azure-private-5g-core-release-notes-2310.md).
+
+### Optional N2/N3/S1/N6 gateway
+This feature makes the N2, N3 and N6 gateways optional during the network configuration of an ASE if the RAN and Packet Core are on the same subnet. This feature provides flexibility to use AP5GC without gateways if there's direct connectivity available with the RAN and/or DN.
+
+### Improved software download time
+This feature improves overall AP5GC software download time by reducing the size of underlying software packages. The overall size of the software image is reduced by around 40%.
+
+### Per-UE information in Azure portal and API
+This feature allows you to view UE-level information in the Azure portal, including a list of SIMs with high-level information and a detailed view for each SIM. This information is the current snapshot of the UE in the system and can be fetched on demand with a throttling period of 5 minutes. See [Manage existing SIMs for Azure Private 5G Core - Azure portal](manage-existing-sims.md).
+
+### Per gNB metrics in Azure portal
+This feature categorizes a few metrics based on the RAN identifier, for example UL/DL bandwidth. These metrics are exposed via Azure Monitor under the Packet Core Control Plane and Packet Core Data Plane resources. You can use these metrics to correlate RAN and packet core metrics and to troubleshoot issues.
+
+### Combined 4G/5G on a single packet core
+This feature allows a single packet core to support both 4G and 5G networks in a single Mobile Network site. You can deploy a RAN network with both 4G and 5G radios and connect it to a single packet core.
++
## September 2023
### Packet core 2308
In this release, the default MTU values are changed as follows:
Customers upgrading to 2308 see a change in the MTU values on their packet core.
-When the UE MTU is set to any valid value (see API Spec) then the other MTUs will be set to:
+If the UE MTU is set to any valid value (see API Spec) then the other MTUs will be set to:
- Access MTU: UE MTU + 60
- Data MTU: UE MTU
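The relationship above can be restated as a small helper. This is hypothetical illustration code, not part of any AP5GC tooling; it only encodes the two formulas quoted in the release note:

```typescript
// Hypothetical helper restating the MTU rules above.
function derivedMtus(ueMtu: number): { accessMtu: number; dataMtu: number } {
  return {
    accessMtu: ueMtu + 60, // Access MTU = UE MTU + 60
    dataMtu: ueMtu,        // Data MTU = UE MTU
  };
}
```

For example, a UE MTU of 1440 yields an Access MTU of 1500 and a Data MTU of 1440.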
-Rollbacks to Packet Core versions earlier than 2308 are not possible if the UE MTU field is changed following an upgrade.
+Rollbacks to Packet Core versions earlier than 2308 aren't possible if the UE MTU field is changed following an upgrade.
### MTU Interop setting
Rollbacks to Packet Core versions earlier than 2308 are not possible if the UE M
**Date available:** September 07, 2023
-In this release, the MTU Interop setting is deprecated and cannot be set for Packet Core versions 2308 and above.
+In this release, the MTU Interop setting is deprecated and can't be set for Packet Core versions 2308 and above.
## July 2023
### Packet core 2307
If you use the Azure portal to manage your deployment and all your resources wer
ARM API users with existing resources can continue to use the 2022-04-01-preview API or 2022-11-01 without updating their templates. ARM API users can migrate to the 2023-06-01 API with their current resources with no ARM template changes (other than specifying the newer API version).
-Note: ARM API users who have done a PUT using the 2023-06-01 API and have enabled configuration only accessible in the up-level API cannot go back to using the 2022-11-01 API for PUTs. If they do, then the up-level config will be deleted.
+Note: ARM API users who have done a PUT using the 2023-06-01 API and have enabled configuration only accessible in the up-level API can't go back to using the 2022-11-01 API for PUTs. If they do, then the up-level config will be deleted.
### New cloud monitoring option - Azure Monitor Workbooks
The 2306 release for the Azure Private 5G Core packet core is now available. For
**Date available:** July 10, 2023
-It is now possible to:
+It's now possible to:
- attach a new or existing data network - modify an attached data network's configuration
For details, see [Create more Packet Core instances for a site using the Azure p
**Date available:** May 1, 2023
-It is now possible to add multiple packet cores in the same site using the Azure portal.
+It's now possible to add multiple packet cores in the same site using the Azure portal.
For details, see [Create a Site and dependant resources](deploy-private-mobile-network-with-site-powershell.md#create-a-site-and-dependant-resources).
It's now possible to secure access to a site's local monitoring tools with a custom certificate.
This feature has the following limitations:
- Certificate deletion requires a pod restart to be reflected at the edge.
-- User-assigned managed identities are not currently supported for certificate provisioning.
+- User-assigned managed identities aren't currently supported for certificate provisioning.
- Actions on key vaults and certificates not involving a modification on the **Packet Core Control Plane** object can take up to an hour to be reflected at the edge. You can add a custom certificate to secure access to your local monitoring tools during [site creation](collect-required-information-for-a-site.md#collect-local-monitoring-values). For existing sites, you can add a custom HTTPS certificate by following [Modify the local access configuration in a site](modify-local-access-configuration.md).
If you use the Azure portal to manage your deployment and all your resources wer
If you use ARM templates and want to keep using your existing templates, follow [Upgrade your ARM templates to the 2022-11-01 API](#upgrade-your-arm-templates-to-the-2022-11-01-api) to upgrade your 2022-04-01-preview API templates to the 2022-11-01 API.
-If you used an API version older than 2022-04-01-preview to create any of your resources, you need to take action to prevent them from becoming unmanageable. As soon as possible, delete these resources and redeploy them using the new 2022-11-01 API. You can redeploy the resources using the Azure portal or by upgrading your ARM templates as described in [Upgrade your ARM templates to the 2022-11-01 API](#upgrade-your-arm-templates-to-the-2022-11-01-api). These instructions may not be comprehensive for older templates.
+If you used an API version older than 2022-04-01-preview to create any of your resources, you need to take action to prevent them from becoming unmanageable. As soon as possible, delete these resources and redeploy them using the new 2022-11-01 API. You can redeploy the resources using the Azure portal or by upgrading your ARM templates as described in [Upgrade your ARM templates to the 2022-11-01 API](#upgrade-your-arm-templates-to-the-2022-11-01-api). These instructions might not be comprehensive for older templates.
#### Upgrade your ARM templates to the 2022-11-01 API
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
Zone-redundant Premium plans are available in the following regions:
Availability zone support is a property of the Premium plan. The following are the current requirements/limitations for enabling availability zones:
- You can only enable availability zones when creating a Premium plan for your function app. You can't convert an existing Premium plan to use availability zones.
-- You must use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) for your function app's [storage account](../azure-functions/storage-considerations.md#storage-account-requirements). If you use a different type of storage account, Functions may show unexpected behavior during a zonal outage.
+- You must use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) for your function app's [storage account](../azure-functions/storage-considerations.md#storage-account-requirements). If you use a different type of storage account, Functions can show unexpected behavior during a zonal outage.
- Both Windows and Linux are supported.
- Must be hosted on an [Elastic Premium](../azure-functions/functions-premium-plan.md) or Dedicated hosting plan. To learn how to use zone redundancy with a Dedicated plan, see [Migrate App Service to availability zone support](../availability-zones/migrate-app-service.md).
- Availability zone support isn't currently available for function apps on [Consumption](../azure-functions/consumption-plan.md) plans.
- Function apps hosted on a Premium plan must have a minimum [always ready instances](../azure-functions/functions-premium-plan.md#always-ready-instances) count of three.
- - The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three.
+ - The platform enforces this minimum count behind the scenes if you specify an instance count fewer than three.
- If you aren't using a Premium plan or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](../reliability/migrate-functions.md).

### Pricing
-There's no additional cost associated with enabling availability zones. Pricing for a zone redundant Premium plan is the same as a single zone Premium plan. You'll be charged based on your Premium plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances.
+There's no extra cost associated with enabling availability zones. Pricing for a zone redundant Premium plan is the same as a single zone Premium plan. You are charged based on your Premium plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform enforces a minimum instance count of three and charges you for those three instances.
### Create a zone-redundant Premium plan and function app
There are currently two ways to deploy a zone-redundant Premium plan and functio
# [ARM template](#tab/arm-template)
-You can use an [ARM template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md) to deploy to a zone-redundant Premium plan. A guide to hosting Functions on Premium plans can be found [here](../azure-functions/functions-infrastructure-as-code.md#deploy-on-premium-plan).
+You can use an [ARM template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md) to deploy to a zone-redundant Premium plan. To learn how to deploy function apps to a Premium plan, see [Automate resource deployment in Azure Functions](../azure-functions/functions-infrastructure-as-code.md?pivots=premium-plan).
The only properties to be aware of while creating a zone-redundant hosting plan are the new `zoneRedundant` property and the plan's instance count (`capacity`) fields. The `zoneRedundant` property must be set to `true` and the `capacity` property should be set based on the workload requirement, but not less than `3`. Choosing the right capacity varies based on several factors and high availability/fault tolerance strategies. A good rule of thumb is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
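The two constraints can be expressed as a small validation sketch. This is illustrative only; the property names mirror the ARM `zoneRedundant` and `capacity` fields described above, not any Azure SDK type:

```typescript
// Illustrative check of the zone-redundant plan constraints described above:
// zoneRedundant must be true and capacity must be at least 3.
function isValidZoneRedundantPlan(plan: { zoneRedundant: boolean; capacity: number }): boolean {
  return plan.zoneRedundant && plan.capacity >= 3;
}
```

A plan with `zoneRedundant: true` and `capacity: 2` would fail this check, matching the platform's enforced minimum of three instances.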
After the zone-redundant plan is created and deployed, any function app hosted o
### Migrate your function app to a zone-redundant plan
-Azure Function Apps currently doesn't support in-place migration of existing function apps instances. For information on how to migrate the public multi-tenant Premium plan from non-availability zone to availability zone support, see [Migrate App Service to availability zone support](../reliability/migrate-functions.md).
+Azure Function Apps currently doesn't support in-place migration of existing function apps instances. For information on how to migrate the public multitenant Premium plan from non-availability zone to availability zone support, see [Migrate App Service to availability zone support](../reliability/migrate-functions.md).
### Zone down experience
When you run the same function code in multiple regions, there are two patterns
#### Active-active pattern for HTTP trigger functions
-With an active-active pattern, functions in both regions are actively running and processing events, either in a duplicate manner or in rotation. It's recommended that you use an active-active pattern in combination with [Azure Front Door](../frontdoor/front-door-overview.md) for your critical HTTP triggered functions, which can route and round-robin HTTP requests between functions running in multiple regions. Front door can also periodically checks the health of each endpoint. When a function in one region stops responding to health checks, Azure Front Door takes it out of rotation, and only forwards traffic to the remaining healthy functions.
+With an active-active pattern, functions in both regions are actively running and processing events, either in a duplicate manner or in rotation. It's recommended that you use an active-active pattern in combination with [Azure Front Door](../frontdoor/front-door-overview.md) for your critical HTTP triggered functions, which can route and round-robin HTTP requests between functions running in multiple regions. Front door can also periodically check the health of each endpoint. When a function in one region stops responding to health checks, Azure Front Door takes it out of rotation, and only forwards traffic to the remaining healthy functions.
![Architecture for Azure Front Door and Function](../azure-functions/media/functions-geo-dr/front-door.png)
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
For information on how to migrate your existing VMs to availability zone support
#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Migrate VMs using availability sets to Virtual Machine Scale Sets Flex**
-Availability sets will be retired soon. Modernize your workloads by migrating them from VMs to Virtual Machine Scale Sets Flex.
+Modernize your workloads by migrating them from VMs to Virtual Machine Scale Sets Flex.
With Virtual Machine Scale Sets Flex, you can deploy your VMs in one of two ways:
For deploying virtual machines, you can use [flexible orchestration](../virtual-
- [Express Route with Azure VM disaster recovery](../site-recovery/azure-vm-disaster-recovery-with-expressroute.md)
- [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml)
- [Reliability in Azure](/azure/reliability/availability-zones-overview)
role-based-access-control Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
This section contains the parameters related to the Azure infrastructure.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | -- | - |
+> | `custom_disk_sizes_filename` | Defines the disk sizing file name, See [Custom sizing](configure-extra-disks.md). | Optional |
> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks by using customer-provided keys. | Optional |
> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups. | |
> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups. | |
> | `resource_offset` | Provides an offset for resource naming. | Optional |
-> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations. | Optional |
+> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations | Optional |
> | `use_scalesets_for_deployment` | Use Flexible Virtual Machine Scale Sets for the deployment | Optional |
-> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster by using managed identities. | Optional |
-> | `use_simple_mount` | Specifies if simple mounts are used (applicable for SLES 15 SP# or newer). | Optional |
-> | `custom_disk_sizes_filename` | Defines the disk sizing file name, See [Custom sizing](configure-extra-disks.md). | Optional |
+> | `scaleset_id` | Azure resource identifier for the virtual machine scale set | Optional |
+> | `user_assigned_identity_id` | User assigned identity to assign to the virtual machines | Optional |
The `resource_offset` parameter controls the naming of resources. For example, if you set the `resource_offset` to 1, the first disk will be named `disk1`. The default value is 0.
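A hypothetical illustration of the offset behavior (the actual naming is performed by the deployment automation, not by this function):

```typescript
// Hypothetical illustration of resource_offset: resource names shift by the offset.
function diskName(index: number, resourceOffset: number = 0): string {
  return `disk${index + resourceOffset}`;
}
```

With `resource_offset` set to 1, the first disk (index 0) is named `disk1`; with the default of 0, it would be `disk0`.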
See [High-availability configuration](configure-system.md#high-availability-conf
- The virtual machine and the operating system image are defined by using the following structure: ```python
This section contains the parameters related to the cluster configuration.
> | `database_cluster_disk_size` | The size of the shared disk for the Database cluster. | Optional |
> | `database_cluster_type` | Cluster quorum type; AFA (Azure Fencing Agent), ASD (Azure Shared Disk), ISCSI | Optional |
> | `fencing_role_name` | Specifies the Azure role assignment to assign to enable fencing. | Optional |
+> | `idle_timeout_scs_ers` | Sets the idle timeout setting for the SCS and ERS loadbalancer. | Optional |
> | `scs_cluster_disk_lun` | Specifies the LUN of the shared disk for the Central Services cluster. | Optional |
> | `scs_cluster_disk_size` | The size of the shared disk for the Central Services cluster. | Optional |
> | `scs_cluster_type` | Cluster quorum type; AFA (Azure Fencing Agent), ASD (Azure Shared Disk), ISCSI | Optional |
-> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster by using managed identities. | Optional |
+> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster by using managed identities. | Optional |
> | `use_simple_mount` | Specifies if simple mounts are used (applicable for SLES 15 SP# or newer). | Optional |
-> | `idle_timeout_scs_ers` | Sets the idle timeout setting for the SCS and ERS loadbalancer. | Optional |
> [!NOTE]
> The highly available central services deployment requires using a shared file system for `sap_mnt`. You can use Azure Files or Azure NetApp Files by using the `NFS_provider` attribute. The default is Azure Files. To use Azure NetApp Files, set the `NFS_provider` attribute to `ANF`.
sap Acss Backup Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/acss-backup-integration.md
In this how-to guide, you'll learn to configure and monitor Azure Backup for your SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
-When you configure Azure Backup from the VIS resource, you get to enable Backup for all your **Central service and Application server virtual machines** and **HANA Database** in one go. For HANA Database, Azure Center for SAP solutions automates the step of running the [Pre-Registration script](/azure/backup/tutorial-backup-sap-hana-db#what-the-pre-registration-script-does).
+When you configure Azure Backup from the VIS resource, you can enable Backup for your SAP Central Services instance, Application server and Database virtual machines and HANA Database in one step. For the HANA Database, Azure Center for SAP solutions automates the step of running the [Pre-Registration script](/azure/backup/tutorial-backup-sap-hana-db#what-the-pre-registration-script-does).
Once backup is configured, you can monitor the status of your Backup Jobs for both virtual machines and HANA DB from the VIS.
Before you can go ahead and use this feature in preview, register for it from th
## Prerequisites

- A Virtual Instance for SAP solutions (VIS) resource representing your SAP system on Azure Center for SAP solutions.
- An Azure account with **Contributor** role access on the Subscription in which your SAP system exists.
-- Register **Microsoft.Features** Resource Provider on your subscription.
-- Register your subscription for this preview feature in Azure Center for SAP solutions.
-- After you have successfully registered for the Preview feature, re-register **Microsoft.Workloads** resource provider on the Subscription.
-- To be able to configure Backup from the VIS resource, assign the following roles to **Azure Workloads Connector Service** first-party app
+
+To be able to configure Backup from the VIS resource, assign the following roles to **Azure Workloads Connector Service** first-party app
1. **Backup Contributor** role access on the Subscription or specific Resource group which has the Recovery services vault that will be used for Backup.
2. **Virtual Machine Contributor** role access on the Subscription or Resource groups which have the Compute resources of the SAP systems.
- - You can skip this step if you have already configured Backup for your VMs and HANA DB using Azure Backup Center. You will be able to monitor Backup of your SAP system from the VIS.
- - Once you have completed configuring Backup from the VIS experience, it is recommended that you remove role access assigned to **Azure Workloads Connector Service** first-party app, as the access is no longer needed when monitoring backup status from VIS.
+You can skip this step if you have already configured Backup for your VMs and HANA DB using Azure Backup Center. You will be able to monitor Backup of your SAP system from the VIS.
+
+> [!IMPORTANT]
+> Once you have completed configuring Backup from the VIS experience, it is recommended that you remove role access assigned to **Azure Workloads Connector Service** first-party app, as the access is no longer needed when monitoring backup status from VIS.
+
- For HANA database backup, ensure the [prerequisites](/azure/backup/tutorial-backup-sap-hana-db#prerequisites) required by Azure Backup are in place.
-- For HANA database backup, create a HDB Userstore key that will be used for preparing HANA DB for configuring Backup.
+- For HANA database backup, create a **HDB Userstore key** that will be used for preparing HANA DB for configuring Backup. For a **highly available (HA)** HANA database, the Userstore key should be created in both **Primary** and **Secondary** databases.
> [!NOTE]
> If you are configuring backup for HANA database from the Virtual Instance for SAP solutions resource, you can skip running the [Backup pre-registration script](/azure/backup/tutorial-backup-sap-hana-db#what-the-pre-registration-script-does). Azure Center for SAP solutions runs this script before configuring HANA backup.
-## Register for Backup integration preview feature
-Before you can start configuring Backup from the VIS resource or viewing Backup status on VIS resource in case Backup is already configured, you need to register for the Backup integration feature in Azure Center for SAP solutions. Follow these steps to register for the feature:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as a user with **Contributor** role access.
-2. Search for **ACSS** and select **Azure Center for SAP solutions** from search results.
-3. On the left navigation, select **Virtual Instance for SAP solutions**.
-4. Select the **Backup (preview)** tab on the left navigation.
-5. Select the **Register for Preview** button.
-6. Registration for features can take upto 30 minutes and once it is complete, you can configure backup or view status of already configured backup.
-
## Configure Backup for your SAP system
-You can configure Backup for your Central service and Application server virtual machines and HANA database from the Virtual Instance for SAP solutions resource following these steps:
+You can configure Backup for your Central service, Application server and Database virtual machines and HANA database from the Virtual Instance for SAP solutions resource following these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
2. Search for **ACSS** and select **Azure Center for SAP solutions** from search results.
3. On the left navigation, select **Virtual Instance for SAP solutions**.
4. Select the **Backup (preview)** tab on the left navigation.
-5. If you have not registered for the preview feature, complete the registration process by selecting the **Register** button. This step is needed only once per Subscription.
-6. Select **Configure** button on the Backup (preview) page.
+5. Select the **Configure** button on the Backup (preview) page.
7. Select the checkboxes **Central service + App server VMs Backup** and **Database Backup**.
-8. For Central service + App server VMs Backup, select an existing Recovery Services vault or Create new.
- - Select a Backup policy that is to be used for backing up Central service and App server VMs.
+8. For **Central service + App server VMs Backup**, select an existing Recovery Services vault or **Create new**.
+ - Select a Backup policy that is to be used for backing up Central service, App server and Database VMs.
+ - Select **Include database servers for virtual machine backup** if you want to have Azure VM backup configured for database VMs. If this is not selected, only Central service and App server VMs will have VM backup configured.
+ - If you choose to include database VMs for backup, you can decide whether all disks attached to the VM are backed up or the **OS disk only**.
9. For Database Backup, select an existing Recovery Services vault or Create new.
    - Select a Backup policy that is to be used for backing up HANA database.
10. Provide a **HANA DB User Store** key name.
+ > [!IMPORTANT]
+ > If you are configuring backup for an HSR-enabled HANA database, then you must ensure the HANA DB user store key is available on both primary and secondary databases.
11. If SSL enforce is enabled for the HANA database, provide the key store and trust store paths, the SSL hostname, and the crypto provider details.

> [!NOTE]
-> If you are configuring backup for an HSR enabled HANA database from the Virtual Instance for SAP solutions resource, then the [Backup pre-registration script](/azure/backup/tutorial-backup-sap-hana-db#what-the-pre-registration-script-does) is run and backup configured only for the Primary HANA database node. In case of a failover, you will need to configure Backup on the new primary node.
+> If you are configuring backup for an HSR-enabled HANA database from the Virtual Instance for SAP solutions resource, then the [Backup pre-registration script](/azure/backup/tutorial-backup-sap-hana-db#what-the-pre-registration-script-does) is run on both the Primary and Secondary HANA VMs. This is in line with the Azure Backup configuration process for HSR-enabled HANA databases, which ensures that the Azure Backup service can connect to any new primary node automatically without manual intervention. [Learn more](/azure/backup/sap-hana-database-with-hana-system-replication-backup).
## Monitor Backup status of your SAP system

After you configure Backup for the Virtual Machines and HANA Database of your SAP system either from the Virtual Instance for SAP solutions resource or from the Backup Center, you can monitor the status of Backup from the Virtual Instance for SAP solutions resource.
To monitor Backup status:
2. Search for **ACSS** and select **Azure Center for SAP solutions** from search results.
3. On the left navigation, select **Virtual Instance for SAP solutions**.
4. Select the **Backup (preview)** tab on the left navigation.
-5. If you have not registered for the preview feature, complete the registration process by selecting the **Register** button. This step is needed only once per Subscription.
-6. For Central service + App server VMs and HANA Database, view protection status of **Backup instances** and status of **Backup jobs** in the last 24 hours.
-
-> [!NOTE]
-> For a highly available HANA database, if you have configured Backup using the HSR Backup feature from Backup Center, that would not be detected and displayed under Database Backup section.
+5. For **Central service + App server VMs** and **HANA Database**, view protection status of **Backup instances** and status of **Backup jobs** in the last 24 hours.
## Next steps

- [Monitor SAP system from the Azure portal](monitor-portal.md)
sap Quickstart Register System Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-register-system-powershell.md
To register an existing SAP system in Azure Center for SAP solutions:
| East US 2 | East US 2 |
| North Central US | South Central US |
| South Central US | South Central US |
- | West Central US | South Central US |
| Central US | South Central US |
| West US | West US 3 |
| West US 2 | West US 2 |
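The mapping rows shown above can be read as a simple lookup; for example (a hypothetical helper covering only the rows visible in this excerpt):

```python
# Partial region mapping taken from the table above (hypothetical helper;
# only the rows shown in this excerpt are included).
REGION_MAPPING = {
    "East US 2": "East US 2",
    "North Central US": "South Central US",
    "South Central US": "South Central US",
    "Central US": "South Central US",
    "West US": "West US 3",
    "West US 2": "West US 2",
}

def mapped_region(region: str) -> str:
    """Return the paired region for a given SAP system region."""
    return REGION_MAPPING[region]

print(mapped_region("Central US"))  # South Central US
```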
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
| [**SAP S/4HANA 2022 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/983008db-db92-4d4d-ac79-7e2afa95a2e0)| July 16 2023 |This appliance contains SAP S/4HANA 2022 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=983008db-db92-4d4d-ac79-7e2afa95a2e0&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP S/4HANA 2022 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3722f683-42af-4059-90db-4e6a52dc9f54) | April 20 2023 |This appliance contains SAP S/4HANA 2022 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3722f683-42af-4059-90db-4e6a52dc9f54&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/a954cc12-da16-4caa-897e-cf84bc74cf15)| April 26 2022 |This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. |[Create Appliance](https://cal.sap.com/registration?sguid=a954cc12-da16-4caa-897e-cf84bc74cf15&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP NetWeaver AS ABAP 7.51 SP02 on ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/56fea1da-3460-4398-bc75-c612a4bc345e)| January 03 2018 |The ABAP AS on ASE 16.0 provides a great platform for trying out the ABAP language and toolset. It is extensively pre-configured with Fiori launchpad, SAP Cloud Connector, SAP Java Virtual Machine, pre-configured backend /frontend connections, roles, and sample applications. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more | [Create Appliance](https://cal.sap.com/registration?sguid=56fea1da-3460-4398-bc75-c612a4bc345e&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8)
+| [**SAP BW/4HANA 2021 SP04 Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db) | March 23 2023 | This solution offers you an insight of SAP BW/4HANA2021 SP04. SAP BW/4HANA is the next generation Data Warehouse optimized for SAP HANA. Beside the basic BW/4HANA options the solution offers a bunch of SAP HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. | [Create Appliance](https://cal.sap.com/registration?sguid=1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/5a830213-f0cb-423e-ab5f-f7736e57f5a1)| May 10 2023 | The SAP ABAP Platform on SAP HANA gives you access to your own copy of SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements, including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=5a830213-f0cb-423e-ab5f-f7736e57f5a1&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP Solution Manager 7.2 SP17 & Focused Solutions SP12 (Baseline)**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/811a4b92-3ea1-4108-9661-c38e775ca488)| September 24 2023 |This template contains a partly configured SAP Solution Manager 7.2 SP17 (incl. Focused Build and Focused Insights 2.0 SP12). Only the Mandatory Configuration and Focused Build configuration are performed. The system is clean and does not contain pre-defined demo scenarios. | [Create Appliance](https://cal.sap.com/registration?sguid=811a4b92-3ea1-4108-9661-c38e775ca488&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
Previously updated : 10/03/2023
Last updated : 11/07/2023

# Index data from SharePoint document libraries
These are the limitations of this feature:
+ Indexing SharePoint .ASPX site content is not supported.
++ OneNote notebook files are not supported
+
+ [Private endpoint](search-indexer-howto-access-private.md) is not supported.
-+ SharePoint supports a granular authorization model that determines per-user access at the document level. The SharePoint indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should investigate [security filters to trim results](search-security-trimming-for-azure-search-with-aad.md) of unauthorized content.
++ SharePoint supports a granular authorization model that determines per-user access at the document level. The SharePoint indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should consider [security filters to trim results](search-security-trimming-for-azure-search-with-aad.md) and automate copying the permissions at a file level to the index.

These are the considerations when using this feature:
-+ If there is a requirement to implement a SharePoint content indexing solution with Cognitive Search in a production environment, consider create a custom connector using [Microsoft Graph Data Connect](/graph/data-connect-concept-overview) with [Blob indexer](search-howto-indexing-azure-blob-storage.md) and [Microsoft Graph API](/graph/use-the-api) for incremental indexing.
++ If there is a requirement to implement a SharePoint content indexing solution with Cognitive Search in a production environment, consider creating a custom connector with [SharePoint Webhooks](/sharepoint/dev/apis/webhooks/overview-sharepoint-webhooks) calling [Microsoft Graph API](/graph/use-the-api) to export the data to an Azure Blob container and use the [Azure Blob indexer](search-howto-indexing-azure-blob-storage.md) for incremental indexing.

+ There could be Microsoft 365 processes that update SharePoint file system-metadata (based on different configurations in SharePoint) and will cause the SharePoint indexer to trigger. Make sure that you test your setup and understand the document processing count prior to using any AI enrichment. Since this is a third-party connector to Azure (since SharePoint is located in Microsoft 365), SharePoint configuration is not checked by the indexer.
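Because the indexer does not carry SharePoint permissions into the index, query-time security trimming has to be applied by the caller. A minimal sketch of building such a filter, assuming the index has a `group_ids` collection field holding the Azure AD group IDs allowed to read each document (as in the linked security-trimming article):

```python
# Build an OData $filter for Cognitive Search security trimming.
# Assumption: each indexed document has a "group_ids" collection field
# listing the Azure AD groups permitted to read it.
def security_filter(group_ids: list[str]) -> str:
    """Keep only documents whose group_ids intersect the user's groups."""
    joined = ",".join(group_ids)
    return f"group_ids/any(g: search.in(g, '{joined}'))"

# Pass the result as the $filter parameter of the search request.
print(security_filter(["group1", "group2"]))
# group_ids/any(g: search.in(g, 'group1,group2'))
```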
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search
description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 10/23/2023
Last updated : 11/06/2023
service-bus-messaging Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging
description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 10/23/2023
Last updated : 11/06/2023
service-connector How To Integrate Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-confluent-kafka.md
Previously updated : 08/11/2022
Last updated : 11/07/2023

# Integrate Apache Kafka on Confluent Cloud with Service Connector
-This page shows the supported authentication types and client types of Apache kafka on Confluent Cloud with Service using Service Connector. You might still be able to connect to Apache kafka on Confluent Cloud in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients to connect Apache Kafka on Confluent Cloud to other cloud services using Service Connector. You might still be able to connect to Apache Kafka on Confluent Cloud in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties
-Use the connection details below to connect compute services to Kafka. For each example below, replace the placeholder texts `<server-name>`, `<Bootstrap-server-key>`, `<Bootstrap-server-secret>`, `<schema-registry-key>`, and `<schema-registry-secret>` with your server name, Bootstrap server key, Bootstrap server secret, schema registry key, and schema registry secret.
+Use the connection details below to connect compute services to Kafka. For each example below, replace the placeholder texts `<server-name>`, `<Bootstrap-server-key>`, `<Bootstrap-server-secret>`, `<schema-registry-key>`, and `<schema-registry-secret>` with your server name, Bootstrap server key, Bootstrap server secret, schema registry key, and schema registry secret. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article. Refer to [Kafka Client Examples](https://docs.confluent.io/cloud/current/client-apps/examples.html#) to build Kafka client applications on Confluent Cloud.
-### Azure App Service and Azure Container Apps
+### Secret / Connection String
-| Default environment variable name | Description | Example value |
-|||--|
-| AZURE_CONFLUENTCLOUDKAFKA_BOOTSTRAPSERVER | Your Kafka bootstrap server | `pkc-<server-name>.eastus.azure.confluent.cloud:9092` |
-| AZURE_CONFLUENTCLOUDKAFKA_KAFKASASLCONFIG | Your Kafka SASL configuration | `org.apache.kafka.common.security.plain.PlainLoginModule required username='<Bootstrap-server-key>' password='<Bootstrap-server-secret>';` |
-| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_URL | Your Confluent registry URL | `https://psrc-<server-name>.westus2.azure.confluent.cloud` |
-| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_USERINFO | Your Confluent registry user information | `<schema-registry-key>:<schema-registry-secret>` |
-
-### Azure Spring Apps
+#### SpringBoot client type
| Default environment variable name | Description | Example value |
|--|--|--|
Use the connection details below to connect compute services to Kafka. For each
| spring.kafka.properties.schema.registry.url | Your Confluent registry URL | `https://psrc-<server-name>.westus2.azure.confluent.cloud` | | spring.kafka.properties.schema.registry.basic.auth.user.info | Your Confluent registry user information | `<schema-registry-key>:<schema-registry-secret>` |
+#### Other client types
+
+| Default environment variable name | Description | Example value |
+|||--|
+| AZURE_CONFLUENTCLOUDKAFKA_BOOTSTRAPSERVER | Your Kafka bootstrap server | `pkc-<server-name>.eastus.azure.confluent.cloud:9092` |
+| AZURE_CONFLUENTCLOUDKAFKA_KAFKASASLCONFIG | Your Kafka SASL configuration | `org.apache.kafka.common.security.plain.PlainLoginModule required username='<Bootstrap-server-key>' password='<Bootstrap-server-secret>';` |
+| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_URL | Your Confluent registry URL | `https://psrc-<server-name>.westus2.azure.confluent.cloud` |
+| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_USERINFO | Your Confluent registry user information | `<schema-registry-key>:<schema-registry-secret>` |
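As an illustration of consuming the variables above from a non-Java client, the JAAS-format SASL value can be parsed into the flat settings that librdkafka-based clients (such as `confluent-kafka` for Python) expect. This is a hedged sketch, not an official mapping; verify the target settings against your client library:

```python
import re

# Map Service Connector environment variables to a librdkafka-style config.
# Parsing the JAAS string for username/password is an assumption for
# non-Java clients; adapt to your client library's requirements.
def kafka_config_from_env(env: dict) -> dict:
    sasl = env["AZURE_CONFLUENTCLOUDKAFKA_KAFKASASLCONFIG"]
    username = re.search(r"username='([^']*)'", sasl).group(1)
    password = re.search(r"password='([^']*)'", sasl).group(1)
    return {
        "bootstrap.servers": env["AZURE_CONFLUENTCLOUDKAFKA_BOOTSTRAPSERVER"],
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": username,
        "sasl.password": password,
    }

# Hypothetical values in the shape of the table above.
example_env = {
    "AZURE_CONFLUENTCLOUDKAFKA_BOOTSTRAPSERVER": "pkc-example.eastus.azure.confluent.cloud:9092",
    "AZURE_CONFLUENTCLOUDKAFKA_KAFKASASLCONFIG": (
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        "username='my-key' password='my-secret';"
    ),
}
print(kafka_config_from_env(example_env)["sasl.username"])  # my-key
```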
+
## Next steps

Follow the tutorials listed below to learn more about Service Connector.
service-connector How To Integrate Cosmos Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-table.md
Previously updated : 08/11/2022
Last updated : 11/01/2023

# Integrate the Azure Cosmos DB for Table with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB for Table using Service Connector. You might still be able to connect to the Azure Cosmos DB for Table in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect the Azure Cosmos DB for Table to other cloud services using Service Connector. You might still be able to connect to the Azure Cosmos DB for Table in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection.
## Supported compute services
This page shows the supported authentication types and client types for the Azur
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|--|--|--|--|--|
| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-## Default environment variable names or application properties
-
-Use the connection details below to connect your compute services to the Azure Cosmos DB for Table. For each example below, replace the placeholder texts `<account-name>`, `<table-name>`, `<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<client-secret>`, `<tenant-id>` with your own information.
+## Default environment variable names or application properties and sample code
-### Azure App Service and Azure Container Apps
+Use the connection details below to connect your compute services to Azure Cosmos DB for Table. For each example below, replace the placeholder texts `<account-name>`, `<table-name>`, `<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<client-secret>`, `<tenant-id>` with your own information. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
-#### Secret / Connection string
-
-| Default environment variable name | Description | Example value |
-|--|-|-|
-| AZURE_COSMOS_CONNECTIONSTRING | Azure Cosmos DB for Table connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;TableEndpoint=https://<table-name>.table.cosmos.azure.com:443/; ` |
#### System-assigned managed identity
Use the connection details below to connect your compute services to the Azure C
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<table-name>.documents.azure.com:443/` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Cosmos DB for Table using a system-assigned managed identity.
+
#### User-assigned managed identity

| Default environment variable name | Description | Example value |
Use the connection details below to connect your compute services to the Azure C
| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<table-name>.documents.azure.com:443/` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Cosmos DB for Table using a user-assigned managed identity.
+
+#### Connection string
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_COSMOS_CONNECTIONSTRING | Azure Cosmos DB for Table connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;TableEndpoint=https://<table-name>.table.cosmos.azure.com:443/; ` |
+
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Cosmos DB for Table using a connection string.
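As a sketch (with hypothetical account values), the connection string above can be split into its components; `partition` rather than `split('=')` is used because account keys are base64-encoded and may themselves contain `=`:

```python
# Parse an Azure Cosmos DB for Table connection string into key/value parts.
# The account name/key below are hypothetical placeholders.
def parse_connection_string(conn: str) -> dict:
    parts = {}
    for segment in conn.split(";"):
        if segment.strip():
            # partition keeps '=' characters inside the value (e.g. base64 keys)
            key, _, value = segment.partition("=")
            parts[key.strip()] = value
    return parts

conn = (
    "DefaultEndpointsProtocol=https;AccountName=myaccount;"
    "AccountKey=c2VjcmV0a2V5==;TableEndpoint=https://myaccount.table.cosmos.azure.com:443/;"
)
parts = parse_connection_string(conn)
print(parts["TableEndpoint"])  # https://myaccount.table.cosmos.azure.com:443/
```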
+
#### Service principal

| Default environment variable name | Description | Example value |
Use the connection details below to connect your compute services to the Azure C
| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` |
| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<table-name>.documents.azure.com:443/` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Cosmos DB for Table using a service principal.
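Before building a `ClientSecretCredential` (azure-identity), an app can sanity-check that the service principal variables are present. This sketch assumes Service Connector also sets `AZURE_COSMOS_CLIENTSECRET`, as the placeholder list at the top of this section suggests.

```python
import os

# Variables a service principal connection needs (AZURE_COSMOS_CLIENTSECRET
# is assumed from the placeholder list; verify against your connection).
REQUIRED_SP_VARS = (
    "AZURE_COSMOS_CLIENTID",
    "AZURE_COSMOS_CLIENTSECRET",
    "AZURE_COSMOS_TENANTID",
    "AZURE_COSMOS_RESOURCEENDPOINT",
)


def missing_service_principal_vars(env=None):
    """Return the names of any required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_SP_VARS if not env.get(name)]
```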
+
## Next steps

Follow the tutorials listed below to learn more about Service Connector.
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md
Previously updated : 08/11/2022 Last updated : 11/02/2023

# Integrate Azure Key Vault with Service Connector

> [!NOTE]
-> When you use Service Connector to connect your key vault or manage key vault connections, Service Connector use your token to perform the corresponding operations.
+> When you use Service Connector to connect your key vault or manage key vault connections, Service Connector uses your token to perform the corresponding operations.
-This page shows the supported authentication types and client types of Azure Key Vault using Service Connector. You might still be able to connect to Azure Key Vault in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication methods and clients, along with sample code you can use to connect Azure Key Vault to other cloud services using Service Connector. You might still be able to connect to Azure Key Vault in other programming languages without using Service Connector. This page also shows the default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
## Supported compute service
This page shows the supported authentication types and client types of Azure Key
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|--|--|--|-|--|
| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-### [Azure Container Apps](#tab/container-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|-|--|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+
-### [Azure Spring Apps](#tab/spring-apps)
+## Default environment variable names or application properties and sample code
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|-|--|
-| .NET | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
+Use the connection details below to connect compute services to Azure Key Vault. For each example below, replace the placeholder texts `<vault-name>`, `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your key vault name, client ID, client secret, and tenant ID. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
-
+### System-assigned managed identity
-## Default environment variable names or application properties
+#### SpringBoot client type
-Use the connection details below to connect compute services to Azure Key Vault. For each example below, replace the placeholder texts `<vault-name>`, `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your key vault name, client-ID, client secret and tenant ID.
+| Default environment variable name | Description | Example value |
+|--|--|-|
+| azure.keyvault.uri | Your Key Vault endpoint URL | `"https://<vault-name>.vault.azure.net/"` |
+| azure.keyvault.scope | Your Azure RBAC scope | `https://management.azure.com/.default` |
+| spring.cloud.azure.keyvault.secret.credential.managed-identity-enabled | Whether to enable managed identity for Spring Cloud Azure version 4.0 and above | `true` |
+| spring.cloud.azure.keyvault.secret.endpoint | Your Key Vault endpoint URL for Spring Cloud Azure version 4.0 and above | `"https://<vault-name>.vault.azure.net/"` |
-### System-assigned managed identity
+#### Other client types
| Default environment variable name | Description | Example value |
|--|-|--|
| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://<vault-name>.vault.azure.net/` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Key Vault using a system-assigned managed identity.
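As a minimal, dependency-free sketch of working with the variables above: derive the vault name from `AZURE_KEYVAULT_RESOURCEENDPOINT`. A real app would pass the endpoint straight to `SecretClient` from `azure-keyvault-secrets` with `DefaultAzureCredential` from `azure-identity` (hedged, not imported here).

```python
from urllib.parse import urlparse


def vault_name_from_endpoint(endpoint: str) -> str:
    """Extract the vault name from an endpoint such as
    https://<vault-name>.vault.azure.net/."""
    host = urlparse(endpoint).hostname or ""
    if not host.endswith(".vault.azure.net"):
        raise ValueError(f"not a Key Vault endpoint: {endpoint}")
    return host.split(".", 1)[0]
```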
+ ### User-assigned managed identity
+#### SpringBoot client type
+
+| Default environment variable name | Description | Example value |
+|--|--|-|
+| azure.keyvault.uri | Your Key Vault endpoint URL | `"https://<vault-name>.vault.azure.net/"` |
+| azure.keyvault.client-id | Your Client ID | `<client-ID>` |
+| azure.keyvault.scope | Your Azure RBAC scope | `https://management.azure.com/.default` |
+| spring.cloud.azure.keyvault.secret.credential.managed-identity-enabled | Whether to enable managed identity for Spring Cloud Azure version 4.0 and above | `true` |
+| spring.cloud.azure.keyvault.secret.endpoint | Your Key Vault endpoint URL for Spring Cloud Azure version 4.0 and above | `"https://<vault-name>.vault.azure.net/"` |
+| spring.cloud.azure.keyvault.secret.credential.client-id | Your Client ID for Spring Cloud Azure version 4.0 and above | `<client-ID>` |
+
+#### Other client types
| Default environment variable name | Description | Example value |
|--|-|--|
| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://<vault-name>.vault.azure.net/` |
| AZURE_KEYVAULT_CLIENTID | Your Client ID | `<client-ID>` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Key Vault using a user-assigned managed identity.
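As an illustrative sketch (not the official sample), gather the variables a user-assigned identity needs; the real secret access would typically use `SecretClient` from `azure-keyvault-secrets` with `ManagedIdentityCredential(client_id=...)` from `azure-identity`, noted in a comment only.

```python
def keyvault_user_mi_config(env: dict) -> dict:
    """Collect the Service Connector variables for a user-assigned identity.

    A real client would then be built roughly as:
        SecretClient(vault_url=cfg["endpoint"],
                     credential=ManagedIdentityCredential(client_id=cfg["client_id"]))
    """
    return {
        "endpoint": env["AZURE_KEYVAULT_RESOURCEENDPOINT"],
        "client_id": env["AZURE_KEYVAULT_CLIENTID"],
        "scope": env.get("AZURE_KEYVAULT_SCOPE", "https://management.azure.com/.default"),
    }
```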
+ ### Service principal
+#### SpringBoot client type
+
+| Default environment variable name | Description | Example value |
+|--|--|-|
+| azure.keyvault.uri | Your Key Vault endpoint URL | `"https://<vault-name>.vault.azure.net/"` |
+| azure.keyvault.client-id | Your Client ID | `<client-ID>` |
+| azure.keyvault.client-key | Your Client secret | `<client-secret>` |
+| azure.keyvault.tenant-id | Your Tenant ID | `<tenant-id>` |
+| azure.keyvault.scope | Your Azure RBAC scope | `https://management.azure.com/.default` |
+| spring.cloud.azure.keyvault.secret.endpoint | Your Key Vault endpoint URL for Spring Cloud Azure version 4.0 and above | `"https://<vault-name>.vault.azure.net/"` |
+| spring.cloud.azure.keyvault.secret.credential.client-id | Your Client ID for Spring Cloud Azure version 4.0 and above | `<client-ID>` |
+| spring.cloud.azure.keyvault.secret.credential.client-secret | Your Client secret for Spring Cloud Azure version 4.0 and above | `<client-secret>` |
+| spring.cloud.azure.keyvault.secret.profile.tenant-id | Your Tenant ID for Spring Cloud Azure version 4.0 and above | `<tenant-id>` |
+
+#### Other client types
| Default environment variable name | Description | Example value |
|--|-|--|
| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
Use the connection details below to connect compute services to Azure Key Vault.
| AZURE_KEYVAULT_CLIENTSECRET | Your Client secret | `<client-secret>` |
| AZURE_KEYVAULT_TENANTID | Your Tenant ID | `<tenant-id>` |
-### Java - Spring Boot service principal
+#### Sample code
-| Default environment variable name | Description | Example value |
-|--|--|-|
-| azure.keyvault.uri | Your Key Vault endpoint URL | `"https://<vault-name>.vault.azure.net/"` |
-| azure.keyvault.client-id | Your Client ID | `<client-ID>` |
-| azure.keyvault.client-key | Your Client secret | `<client-secret>` |
-| azure.keyvault.tenant-id | Your Tenant ID | `<tenant-id>` |
-| azure.keyvault.scope | Your Azure RBAC scope | `https://management.azure.com/.default` |
+Refer to the steps and code below to connect to Azure Key Vault using a service principal.
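The SpringBoot and "other client types" tables above document two namings for the same values. As a small sketch, this helper maps the service principal environment variables to the equivalent Spring Boot properties shown in those tables.

```python
# Mapping taken from the tables above (environment variable -> Spring property).
SPRING_PROPERTY_MAP = {
    "AZURE_KEYVAULT_RESOURCEENDPOINT": "azure.keyvault.uri",
    "AZURE_KEYVAULT_CLIENTID": "azure.keyvault.client-id",
    "AZURE_KEYVAULT_CLIENTSECRET": "azure.keyvault.client-key",
    "AZURE_KEYVAULT_TENANTID": "azure.keyvault.tenant-id",
}


def to_spring_properties(env: dict) -> dict:
    """Translate the service principal variables into Spring Boot properties."""
    return {prop: env[var] for var, prop in SPRING_PROPERTY_MAP.items() if var in env}
```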
## Next steps
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
Supported authentication and clients for App Service, Container Apps, and Azure
> [!NOTE]
> System-assigned managed identity, User-assigned managed identity and Service principal are only supported on Azure CLI.
-## Default environment variable names or application properties and Sample code
+## Default environment variable names or application properties and sample code
Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for PostgreSQL.
-### Connect with System-assigned Managed Identity
+### System-assigned Managed Identity
#### [.NET](#tab/dotnet)
Reference the connection details and sample code in the following tables, accord
-### Sample code
+#### Sample code
Refer to the steps and code below to connect to Azure Database for PostgreSQL.

[!INCLUDE [code sample for postgresql system mi](./includes/code-postgres-me-id.md)]
-### Connect with User-assigned Managed Identity
+### User-assigned Managed Identity
#### [.NET](#tab/dotnet)
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
-### Sample code
+#### Sample code
Refer to the steps and code below to connect to Azure Database for PostgreSQL.

[!INCLUDE [code sample for postgresql user mi](./includes/code-postgres-me-id.md)]
-### Connect with Connection String
+### Connection String
#### [.NET](#tab/dotnet)
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
-### Sample code
+#### Sample code
Refer to the steps and code below to connect to Azure Database for PostgreSQL.

[!INCLUDE [code sample for postgresql secrets](./includes/code-postgres-secret.md)]
-### Connect with Service Principal
+### Service Principal
#### [.NET](#tab/dotnet)
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
-### Sample code
+#### Sample code
Refer to the steps and code below to connect to Azure Database for PostgreSQL.

[!INCLUDE [code sample for postgresql service principal](./includes/code-postgres-me-id.md)]
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md
Previously updated : 08/11/2022 Last updated : 10/31/2023

- ignite-fall-2021
- kr2b-contr-experiment
# Integrate Azure SignalR Service with Service Connector
-This article shows the supported authentication types and client types of Azure SignalR Service using Service Connector. This article also shows default environment variable name and value or Spring Boot configuration that you get when you create the service connection. For more information, see [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This article shows the supported authentication methods and clients, along with sample code you can use to connect Azure SignalR Service to other cloud services using Service Connector. This article also shows the default environment variable names and values you get when you create the service connection. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
## Supported compute service
This article shows the supported authentication types and client types of Azure
Supported authentication and clients for App Service and Container Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|-|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|-|--|--|--|--|
| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service and Container Apps:
## Default environment variable names or application properties
-Use the connection details below to connect compute services to SignalR. For each example below, replace the placeholder texts
+Use environment variable names listed below to connect compute services to Azure SignalR Service. For each example below, replace the placeholder texts
`<SignalR-name>`, `<access-key>`, `<client-ID>`, `<tenant-ID>`, and `<client-secret>` with your own SignalR name, access key, client ID, tenant ID and client secret.
-### .NET
-
-#### Secret / Connection string
+### System-assigned Managed Identity
| Default environment variable name | Description | Example value |
|--|--|--|
- | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string | `Endpoint=https://<SignalR-name>.service.signalr.net;AccessKey=<access-key>;Version=1.0;` |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://<SignalR-name>.service.signalr.net;AuthType=aad;Version=1.0;` |
-#### System-assigned Managed Identity
+#### Sample code
+Refer to the steps and code below to connect to Azure SignalR Service using a system-assigned managed identity.
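As a dependency-free sketch, the helper below parses the `AZURE_SIGNALR_CONNECTIONSTRING` format shown above and checks that it uses Microsoft Entra (`AuthType=aad`); a real app would pass the whole string to the SignalR SDK rather than parse it by hand.

```python
def parse_signalr_connection_string(conn: str) -> dict:
    """Split a SignalR connection string of the form key=value;key=value; ..."""
    parts = {}
    for segment in conn.split(";"):
        if segment:
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts


def uses_aad(conn: str) -> bool:
    """True when the connection string is identity-based (AuthType=aad)."""
    return parse_signalr_connection_string(conn).get("AuthType") == "aad"
```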
+
+### User-assigned Managed Identity
| Default environment variable name | Description | Example value |
|--|--|--|
- | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://<SignalR-name>.service.signalr.net;AuthType=aad;<client-ID>;Version=1.0;` |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://<SignalR-name>.service.signalr.net;AuthType=aad;client-id=<client-id>;Version=1.0;` |
+
+#### Sample code
+Refer to the steps and code below to connect to Azure SignalR Service using a user-assigned managed identity.
+
-#### User-assigned Managed Identity
+### Connection string
| Default environment variable name | Description | Example value |
|--|--|--|
- | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://<SignalR-name>.service.signalr.net;AuthType=aad;client-id=<client-id>;Version=1.0;` |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string | `Endpoint=https://<SignalR-name>.service.signalr.net;AccessKey=<access-key>;Version=1.0;` |
-#### Service Principal
+#### Sample code
+Refer to the steps and code below to connect to Azure SignalR Service using a connection string.
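For illustration only, the function below assembles an access-key connection string in the exact format of the sample value above; in practice Service Connector injects this value for you.

```python
def build_signalr_connection_string(name: str, access_key: str) -> str:
    """Assemble the access-key connection string in the documented format."""
    return f"Endpoint=https://{name}.service.signalr.net;AccessKey={access_key};Version=1.0;"
```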
+
+### Service Principal
| Default environment variable name | Description | Example value |
|--|--|--|
| AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Service Principal | `Endpoint=https://<SignalR-name>.service.signalr.net;AuthType=aad;ClientId=<client-ID>;ClientSecret=<client-secret>;TenantId=<tenant-ID>;Version=1.0;` |
+#### Sample code
+Refer to the steps and code below to connect to Azure SignalR Service using a service principal.
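As a sketch mirroring the service principal sample value above, the helper below assembles the `AuthType=aad` connection string from its parts; Service Connector normally injects this for you, so this is illustrative only.

```python
def build_signalr_sp_connection_string(name: str, client_id: str,
                                       client_secret: str, tenant_id: str) -> str:
    """Assemble the service principal connection string in the documented format."""
    return (f"Endpoint=https://{name}.service.signalr.net;AuthType=aad;"
            f"ClientId={client_id};ClientSecret={client_secret};"
            f"TenantId={tenant_id};Version=1.0;")
```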
+
## Next steps

> [!div class="nextstepaction"]
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
Previously updated : 11/29/2022 Last updated : 10/26/2023

# Integrate Azure SQL Database with Service Connector
-This page shows all the supported compute services, clients, and authentication types to connect services to Azure SQL Database instances, using Service Connector. This page also shows the default environment variable names and application properties needed to create service connections. You might still be able to connect to an Azure SQL Database instance using other programming languages, without using Service Connector. Learn more about the [Service Connector environment variable naming conventions](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect compute services to Azure SQL Database using Service Connector. You might still be able to connect to Azure SQL Database using other methods. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
## Supported compute services
Supported authentication and clients for App Service, Container Apps, and Azure
> [!NOTE]
> System-assigned managed identity, User-assigned managed identity and Service principal are only supported on Azure CLI.
-## Default environment variable names or application properties
+## Default environment variable names or application properties and sample code
-Use the environment variable names and application properties listed below to connect compute services to Azure SQL Database. For each example below, replace the placeholder texts `<sql-server>`, `<sql-database>`, `<sql-username>`, and `<sql-password>` with your own server name, database name, user ID and password.
+Use the connection details below to connect compute services to Azure SQL Database. This page also shows default environment variable names and values you get when you create the service connection, as well as sample code. For each example below, replace the placeholder texts `<sql-server>`, `<sql-database>`, `<sql-username>`, and `<sql-password>` with your own server name, database name, user ID and password. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
-### .NET (sqlClient)
-
-#### .NET System-assigned managed identity
+### System-assigned Managed Identity
+#### [.NET](#tab/sql-me-id-dotnet)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|--|--|
-> | `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;Authentication=ActiveDirectoryManagedIdentity` |
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;Authentication=ActiveDirectoryManagedIdentity` |
-#### .NET User-assigned managed identity
+#### [Java](#tab/sql-me-id-java)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> | | | |
-> | `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;User ID=<identity-client-ID>;Authentication=ActiveDirectoryManagedIdentity` |
+> | Default environment variable name | Description | Sample value |
+> |--|--|--|
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;authentication=ActiveDirectoryMSI;` |
-#### .NET secret / connection string
+#### [SpringBoot](#tab/sql-me-id-spring)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> | | | |
-> | `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;Password=<sql-password>` |
+> | Default environment variable name | Description | Sample value |
+> |--|-|--|
+> | `spring.datasource.url` | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;authentication=ActiveDirectoryMSI;` |
-#### .NET Service principal
+#### [Python](#tab/sql-me-id-python)
> [!div class="mx-tdBreakAll"]
->| Default environment variable name | Description | Example value |
->|--|--||
->| `Azure_SQL_CLIENTID` | Your client ID | `<client-ID>` |
->| `Azure_SQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
->| `Azure_SQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
->| `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;User ID=a30eeedc-e75f-4301-b1a9-56e81e0ce99c;Password=asdfghwerty;Authentication=ActiveDirectoryServicePrincipal` |
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | `AZURE_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_AUTHENTICATION` | Azure SQL authentication | `ActiveDirectoryMsi` |
+#### [NodeJS](#tab/sql-me-id-nodejs)
-### Go (go-mssqldb)
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | `AZURE_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_AUTHENTICATIONTYPE` | Azure SQL Database authentication type | `azure-active-directory-default` |
-#### Go (go-mssqldb) secret / connection string
+
-> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|--||
-> | `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `server=<sql-server>.database.windows.net;port=1433;database=<sql-database>;user id=<sql-username>;password=<sql-password>;` |
+#### Sample code
+Refer to the steps and code below to connect to Azure SQL Database using a system-assigned managed identity.
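As a dependency-free sketch, the helper below assembles the ADO.NET connection string shown in the .NET tab above for a system-assigned managed identity; a real .NET app would pass it to `SqlConnection`, and other clients would use the driver-specific variables listed in their tabs.

```python
def sql_mi_connection_string(server: str, database: str) -> str:
    """ADO.NET connection string matching the .NET sample value above."""
    return (f"Data Source={server}.database.windows.net,1433;"
            f"Initial Catalog={database};"
            "Authentication=ActiveDirectoryManagedIdentity")
```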
-### Java Database Connectivity (JDBC)
+### User-assigned managed identity
-#### Java Database Connectivity (JDBC) System-assigned managed identity
+#### [.NET](#tab/sql-me-id-dotnet)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|--|--|
-> | `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;authentication=ActiveDirectoryMSI;` |
+> | Default environment variable name | Description | Sample value |
+> | | | |
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;User ID=<identity-client-ID>;Authentication=ActiveDirectoryManagedIdentity` |
-#### Java Database Connectivity (JDBC) User-assigned managed identity
+#### [Java](#tab/sql-me-id-java)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|--|--|
-> | `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;msiClientId=<msiClientId>;authentication=ActiveDirectoryMSI;` |
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;msiClientId=<msiClientId>;authentication=ActiveDirectoryMSI;` |
-#### Java Database Connectivity (JDBC) secret / connection string
+#### [SpringBoot](#tab/sql-me-id-spring)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> | | | |
-> | `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;user=<sql-username>;password=<sql-password>;` |
+> | Default environment variable name | Description | Sample value |
+> |--|-|--|
+> | `spring.datasource.url` | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;msiClientId=<msiClientId>;authentication=ActiveDirectoryMSI;` |
+
+#### [Python](#tab/sql-me-id-python)
-#### Java Database Connectivity (JDBC) Service principal
-
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|--|--|
-> | `Azure_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;user=<client-Id>;password=<client-secret>;authentication=ActiveDirectoryServicePrincipal;` |
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | `AZURE_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_USER` | Azure SQL Database user | `Object (principal) ID` |
+> | `AZURE_SQL_AUTHENTICATION` | Azure SQL authentication | `ActiveDirectoryMsi` |
+#### [NodeJS](#tab/sql-me-id-nodejs)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|-|-|
+> | `AZURE_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_AUTHENTICATIONTYPE` | Azure SQL Database authentication type | `azure-active-directory-default` |
+> | `AZURE_SQL_CLIENTID` | Azure SQL Database client ID | `<identity-client-ID>` |
++
-### Java Spring Boot (spring-boot-starter-jdbc)
+#### Sample code
-#### Java Spring Boot System-assigned managed identity
+Refer to the steps and code below to connect to Azure SQL Database using a user-assigned managed identity.
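Mirroring the .NET sample value for a user-assigned identity above, this sketch assembles the connection string with the identity's client ID in `User ID`; it is illustrative only, since Service Connector injects the value for you.

```python
def sql_user_mi_connection_string(server: str, database: str, client_id: str) -> str:
    """ADO.NET connection string for a user-assigned managed identity,
    matching the documented sample value (User ID carries the identity's
    client ID)."""
    return (f"Data Source={server}.database.windows.net,1433;"
            f"Initial Catalog={database};User ID={client_id};"
            "Authentication=ActiveDirectoryManagedIdentity")
```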
++
+### Connection String
+
+#### [.NET](#tab/sql-secret-dotnet)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|-|--|
-> | `spring.datasource.url` | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;authentication=ActiveDirectoryMSI;` |
+> | Default environment variable name | Description | Sample value |
+> | | | |
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;Password=<sql-password>` |
-#### Java Spring Boot User-assigned managed identity
+#### [Java](#tab/sql-secret-java)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|-|--|
-> | `spring.datasource.url` | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;msiClientId=<msiClientId>;authentication=ActiveDirectoryMSI;` |
+> | Default environment variable name | Description | Sample value |
+> | | | |
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;user=<sql-username>;password=<sql-password>;` |
-#### Java Spring Boot secret / connection string
+#### [SpringBoot](#tab/sql-secret-spring)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
Use the environment variable names and application properties listed below to co
> | `spring.datasource.username` | Azure SQL Database datasource username | `<sql-user>` |
> | `spring.datasource.password` | Azure SQL Database datasource password | `<sql-password>` |
-#### Java Spring Boot Service principal
+#### [Python](#tab/sql-secret-python)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|-|-|
-> | `spring.datasource.url` | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;authentication=ActiveDirectoryServicePrincipal;` |
-> | `spring.datasource.username` | Azure SQL Database datasource username | `<client-Id>` |
-> | `spring.datasource.password` | Azure SQL Database datasource password | `<client-Secret>` |
-
-### Node.js
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | `AZURE_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_USER` | Azure SQL Database user | `<sql-username>` |
+> | `AZURE_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
-#### Node.js System-assigned managed identity
+#### [Django](#tab/sql-secret-django)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|--|-|
-> | `Azure_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_AUTHENTICATIONTYPE` | Azure SQL Database authentication type | `azure-active-directory-default` |
+> | `AZURE_SQL_HOST` | Azure SQL Database host | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_NAME` | Azure SQL Database name | `<sql-database>` |
+> | `AZURE_SQL_USER` | Azure SQL Database user | `<sql-username>` |
+> | `AZURE_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
-#### Node.js User-assigned managed identity
+#### [Go](#tab/sql-secret-go)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|-|-|
-> | `Azure_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_AUTHENTICATIONTYPE` | Azure SQL Database authentication type | `azure-active-directory-default` |
-> | `Azure_SQL_CLIENTID` | Azure SQL Database client ID | `<identity-client-ID>` |
-
-#### Node.js secret / connection string
+> | Default environment variable name | Description | Sample value |
+> |--|--|--|
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `server=<sql-server>.database.windows.net;port=1433;database=<sql-database>;user id=<sql-username>;password=<sql-password>;` |
+
+#### [NodeJS](#tab/sql-secret-nodejs)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|--|-|
-> | `Azure_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_USERNAME` | Azure SQL Database username | `<sql-username>` |
-> | `Azure_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
-
-#### Node.js Service principal
-
-> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|-|-|
-> | `Azure_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_AUTHENTICATIONTYPE` | Azure SQL Database authentication type | `azure-active-directory-default` |
-> | `Azure_SQL_CLIENTID` | Azure SQL Database client ID | `<your Client ID>` |
-> | `Azure_SQL_CLIENTSECRET` | Azure SQL Database client Secret | `<your Client Secret >` |
-> | `Azure_SQL_TENANTID` | Azure SQL Database Tenant ID | `<your Tenant ID>` |
+> | `AZURE_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_USERNAME` | Azure SQL Database username | `<sql-username>` |
+> | `AZURE_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
-### PHP
-
-#### PHP secret / connection string
+#### [PHP](#tab/sql-secret-php)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|--|-|
-> | `Azure_SQL_SERVERNAME` | Azure SQL Database servername | `<sql-server>.database.windows.net,1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_UID` | Azure SQL Database unique identifier (UID) | `<sql-username>` |
-> | `Azure_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
-
-### Python (pyobdc)
+> | `AZURE_SQL_SERVERNAME` | Azure SQL Database servername | `<sql-server>.database.windows.net,1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_UID` | Azure SQL Database unique identifier (UID) | `<sql-username>` |
+> | `AZURE_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
-#### Python (pyobdc) system-assigned managed identity
+#### [Ruby](#tab/sql-secret-ruby)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|--|-|
-> | `Azure_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_AUTHENTICATION` | Azure SQL authentication | `ActiveDirectoryMsi` |
+> | `AZURE_SQL_HOST` | Azure SQL Database host | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_USERNAME` | Azure SQL Database username | `<sql-username>` |
+> | `AZURE_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
-#### Python (pyobdc) User-assigned managed identity
+
-> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|--|-|
-> | `Azure_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_USER` | Azure SQL Database user | `Object (principal) ID` |
-> | `Azure_SQL_AUTHENTICATION` | Azure SQL authentication | `ActiveDirectoryMsi` |
+#### Sample code
-#### Python (pyobdc) secret / connection string
+Refer to the steps and code below to connect to Azure SQL Database using a connection string.
++
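The connection-string variables above can be assembled into an ODBC connection string in application code. The sketch below is illustrative, not the article's own sample: the helper name `build_sql_connection_string` and the `ODBC Driver 18 for SQL Server` driver name are assumptions, and the environment variable names follow the Python tab above.

```python
import os


def build_sql_connection_string() -> str:
    """Assemble a pyodbc-style connection string from the Service
    Connector environment variables listed in the tables above."""
    server = os.environ["AZURE_SQL_SERVER"]
    port = os.environ.get("AZURE_SQL_PORT", "1433")
    database = os.environ["AZURE_SQL_DATABASE"]
    user = os.environ["AZURE_SQL_USER"]
    password = os.environ["AZURE_SQL_PASSWORD"]
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server},{port};"
        f"Database={database};"
        f"Uid={user};Pwd={password};"
        "Encrypt=yes;TrustServerCertificate=no;"
    )


# With pyodbc installed, you would then connect with:
# import pyodbc
# conn = pyodbc.connect(build_sql_connection_string())
```
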
+### Service principal
+
+#### [.NET](#tab/sql-me-id-dotnet)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|--|-|
-> | `Azure_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_USER` | Azure SQL Database user | `<sql-username>` |
-> | `Azure_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
+> | Default environment variable name | Description | Example value |
+> |--|--|--|
+> | `AZURE_SQL_CLIENTID` | Your client ID | `<client-ID>` |
+> | `AZURE_SQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+> | `AZURE_SQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;User ID=a30eeedc-e75f-4301-b1a9-56e81e0ce99c;Password=asdfghwerty;Authentication=ActiveDirectoryServicePrincipal` |
-#### Python (pyobdc) Service principal
+#### [Java](#tab/sql-me-id-java)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|--|-|
-> | `Azure_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_USER` | Azure SQL Database user | `your Client Id` |
-> | `Azure_SQL_AUTHENTICATION` | Azure SQL authentication | `ActiveDirectoryServerPrincipal` |
-> | `Azure_SQL_PASSWORD` | Azure SQL Database password | `your Client Secret` |
+> | Default environment variable name | Description | Sample value |
+> |--|--|--|
+> | `AZURE_SQL_CONNECTIONSTRING` | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;user=<client-Id>;password=<client-secret>;authentication=ActiveDirectoryServicePrincipal;` |
-### Python-Django (mssql-django)
+#### [SpringBoot](#tab/sql-me-id-spring)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|-|-|
+> | `spring.datasource.url` | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;authentication=ActiveDirectoryServicePrincipal;` |
+> | `spring.datasource.username` | Azure SQL Database datasource username | `<client-Id>` |
+> | `spring.datasource.password` | Azure SQL Database datasource password | `<client-Secret>` |
-#### Python-Django (mssql-django) secret / connection string
+
+#### [Python](#tab/sql-me-id-python)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|--|-|
-> | `Azure_SQL_HOST` | Azure SQL Database host | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_NAME` | Azure SQL Database name | `<sql-database>` |
-> | `Azure_SQL_USER` | Azure SQL Database user | `<sql-username>` |
-> | `Azure_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
+> | `AZURE_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_USER` | Azure SQL Database user | `<client-ID>` |
+> | `AZURE_SQL_AUTHENTICATION` | Azure SQL authentication | `ActiveDirectoryServicePrincipal` |
+> | `AZURE_SQL_PASSWORD` | Azure SQL Database password | `<client-secret>` |
-### Ruby
- #### Ruby secret / connection string
+#### [NodeJS](#tab/sql-me-id-nodejs)
> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|--|-|
-> | `Azure_SQL_HOST` | Azure SQL Database host | `<sql-server>.database.windows.net` |
-> | `Azure_SQL_PORT` | Azure SQL Database port | `1433` |
-> | `Azure_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
-> | `Azure_SQL_USERNAME` | Azure SQL Database username | `<sql-username>` |
-> | `Azure_SQL_PASSWORD` | Azure SQL Database password | `<sql-password>` |
+> | Default environment variable name | Description | Sample value |
+> |--|-|-|
+> | `AZURE_SQL_SERVER` | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | `AZURE_SQL_PORT` | Azure SQL Database port | `1433` |
+> | `AZURE_SQL_DATABASE` | Azure SQL Database database | `<sql-database>` |
+> | `AZURE_SQL_AUTHENTICATIONTYPE` | Azure SQL Database authentication type | `azure-active-directory-default` |
+> | `AZURE_SQL_CLIENTID` | Azure SQL Database client ID | `<client-ID>` |
+> | `AZURE_SQL_CLIENTSECRET` | Azure SQL Database client secret | `<client-secret>` |
+> | `AZURE_SQL_TENANTID` | Azure SQL Database tenant ID | `<tenant-ID>` |
+++
+#### Sample code
+
+Refer to the steps and code below to connect to Azure SQL Database using a service principal.
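As a minimal sketch of the service-principal path: the helper name `build_service_principal_connection_string` and the `ODBC Driver 18 for SQL Server` driver name are assumptions, and the environment variable names follow the Node.js table above. `Authentication=ActiveDirectoryServicePrincipal` is the ODBC keyword that makes the driver treat the user ID and password as a client ID and secret.

```python
import os


def build_service_principal_connection_string() -> str:
    """Assemble an ODBC connection string that authenticates to Azure SQL
    with a service principal (client ID + secret) instead of a SQL login."""
    server = os.environ["AZURE_SQL_SERVER"]
    port = os.environ.get("AZURE_SQL_PORT", "1433")
    database = os.environ["AZURE_SQL_DATABASE"]
    client_id = os.environ["AZURE_SQL_CLIENTID"]
    client_secret = os.environ["AZURE_SQL_CLIENTSECRET"]
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server},{port};"
        f"Database={database};"
        f"Uid={client_id};Pwd={client_secret};"
        "Authentication=ActiveDirectoryServicePrincipal;Encrypt=yes;"
    )
```
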
## Next steps
service-connector Quickstart Cli Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-container-apps.md
Previously updated : 04/13/2023 Last updated : 10/31/2023 ms.devlang: azurecli
-# Quickstart: Create a service connection in Container Apps with the Azure CLI
+# Quickstart: Create a service connection in Azure Container Apps with the Azure CLI
This quickstart shows you how to connect Azure Container Apps to other Cloud resources using the Azure CLI and Service Connector. Service Connector lets you quickly connect compute services to cloud services, while managing your connection's authentication and networking settings.
This quickstart shows you how to connect Azure Container Apps to other Cloud res
## Create a service connection
-You can create a connection using an access key or a managed identity.
+Create a connection using an access key or a managed identity.
### [Access key](#tab/using-access-key)
-1. Run the `az containerapp connection create` command to create a service connection between Container Apps and Azure Blob Storage with an access key.
+1. Run the `az containerapp connection create` command to create a service connection between Container Apps and Azure Blob Storage using an access key.
```azurecli
az containerapp connection create storage-blob --secret
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Previously updated : 04/13/2022 Last updated : 10/31/2022 ms.devlang: azurecli
Service Connector lets you quickly connect compute services to cloud services, w
- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- At least one application hosted by Azure Spring Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [deploy your first application to Azure Spring Apps](../spring-apps/quickstart.md).
+- At least one application hosted by Azure Spring Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [deploy your first application to Azure Spring Apps](../spring-apps/quickstart.md).
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -- Version 2.37.0 or higher of the Azure CLI must be installed. To upgrade to the latest version, run `az upgrade`. If using Azure Cloud Shell, the latest version is already installed.
+- Version 2.37.0 or higher of the Azure CLI. To upgrade to the latest version, run `az upgrade`. If using Azure Cloud Shell, the latest version is already installed.
- The Azure Spring Apps extension must be installed in the Azure CLI or the Cloud Shell. To install it, run `az extension add --name spring`.
Service Connector lets you quickly connect compute services to cloud services, w
> [!TIP]
> You can check if the resource provider has already been registered by running the command `az provider show -n "Microsoft.ServiceLinker" --query registrationState`. If the output is `Registered`, then Service Connector has already been registered.

1. Optionally, run the command [az spring connection list-support-types](/cli/azure/spring/connection#az-spring-connection-list-support-types) to get a list of supported target services for Azure Spring Apps.

    ```azurecli
Service Connector lets you quickly connect compute services to cloud services, w
## Create a service connection
-You can create a connection from Azure Spring Apps using an access key or a managed identity.
+Create a connection from Azure Spring Apps using an access key or a managed identity.
### [Access key](#tab/Using-access-key)
-1. Run the `az spring connection create` command to create a service connection between Azure Spring Apps and an Azure Blob Storage with an access key.
+1. Run the `az spring connection create` command to create a service connection between Azure Spring Apps and an Azure Blob Storage using an access key.
```azurecli
az spring connection create storage-blob --secret
You can create a connection from Azure Spring Apps using an access key or a mana
> [!TIP] > If you don't have a Blob Storage, you can run `az spring connection create storage-blob --new --secret` to provision a new Blob Storage and directly connect it to your application hosted by Azure Spring Apps using a connection string.
-### [Managed Identity](#tab/Using-Managed-Identity)
+### [Managed identity](#tab/Using-Managed-Identity)
> [!IMPORTANT]
-> To use Managed Identity, you must have the permission to modify [role assignments in Microsoft Entra ID](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Without this permission, your connection creation will fail. Ask your subscription owner to grant you a role assignment permission or use an access key to create the connection.
+> To use a managed identity, you must have the permission to modify [role assignments in Microsoft Entra ID](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Without this permission, your connection creation will fail. Ask your subscription owner to grant you a role assignment permission or use an access key to create the connection.
1. Run the `az spring connection create` command to create a service connection to Blob Storage with a system-assigned managed identity.
The output also displays the provisioning state of your connections: failed or s
## Next steps
-Check the guides below for more information about Service Connector and Azure Spring Apps:
+Check the guides below for more information about Service Connector and Azure Spring Apps.
> [!div class="nextstepaction"] > [Tutorial: Azure Spring Apps + MySQL](./tutorial-java-spring-mysql.md)
service-connector Quickstart Portal Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-container-apps.md
Title: Quickstart - Create a service connection in Container Apps from the Azure portal
-description: Quickstart showing how to create a service connection in Azure Container Apps from the Azure portal
+description: This quickstart shows how to create a service connection in Azure Container Apps from the Azure portal
Previously updated : 08/09/2022 Last updated : 10/31/2023
-#Customer intent: As an app developer, I want to connect a Container App to a storage account in the Azure portal using Service Connector.
+#Customer intent: As an app developer, I want to connect Azure Container Apps to a storage account in the Azure portal using Service Connector.
# Quickstart: Create a service connection in Azure Container Apps from the Azure portal This quickstart shows you how to connect Azure Container Apps to other Cloud resources using the Azure portal and Service Connector. Service Connector lets you quickly connect compute services to cloud services, while managing your connection's authentication and networking settings.
+> [!NOTE]
+> For information on connecting resources using Azure CLI, see [Create a service connection in Container Apps with the Azure CLI](./quickstart-cli-container-apps.md).
+
> [!IMPORTANT]
> This feature in Container Apps is currently in preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Create a new service connection
-You'll use Service Connector to create a new service connection in Container Apps.
+Use Service Connector to create a new service connection in Container Apps.
-1. To create a new service connection in Container Apps, select the **Search resources, services and docs (G +/)** search bar at the top of the Azure portal, type *Container Apps* in the filter and select **Container Apps**.
+1. Select the **Search resources, services and docs (G +/)** search bar at the top of the Azure portal, type *Container Apps* in the filter and select **Container Apps**.
:::image type="content" source="./media/container-apps-quickstart/select-container-apps.png" alt-text="Screenshot of the Azure portal, selecting Container Apps.":::
You'll use Service Connector to create a new service connection in Container App
| Setting | Example | Description |
| ------- | ------- | ----------- |
- | **Container** | *my-container* | The container of your container app. |
+ | **Container** | *my-container-app* | The container of your container app. |
| **Service type** | *Storage - Blob* | The type of service you're going to connect to your container. |
| **Subscription** | *my-subscription* | The subscription that contains your target service (the service you want to connect to). The default value is the subscription that this container app is in. |
| **Connection name** | *storageblob_700ae* | The connection name that identifies the connection between your container app and target service. Use the connection name provided by Service Connector or choose your own connection name. |
| **Storage account** | *my-storage-account* | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
- | **Client type** | *.NET* | The application stack that works with the target service you selected. The default value is None, which will generate a list of configurations. If you know about the app stack or the client SDK in the container you selected, select the same app stack for the client type. |
-
- :::image type="content" source="./media/container-apps-quickstart/basics.png" alt-text="Screenshot of the Azure portal, filling out the Basics tab.":::
+ | **Client type** | *.NET* | The application stack that works with the target service you selected. The default value is None, which generates a list of configurations. If you know about the app stack or the client SDK in the container you selected, select the same app stack for the client type. |
1. Select **Next: Authentication** to choose an authentication method: system-assigned managed identity (SMI), user-assigned managed identity (UMI), connection string, or service principal.
You'll use Service Connector to create a new service connection in Container App
1. Select **Managed identities** and select **Create**.
1. Enter a subscription, resource group, region, and instance name.
1. Select **Review + create** and then **Create**.
- 1. Once your managed identity has been deployed, go to your Service Connector tab, select **Previous** and then **Next** to refresh the form's data, and under **User-assigned managed identity**, select the identity you've created.
+ 1. Once your managed identity has been deployed, go to your Service Connector tab, select **Previous** and then **Next** to refresh the form's data, and under **User-assigned managed identity**, select the identity you created.
For more information, go to [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp).
You'll use Service Connector to create a new service connection in Container App
## View service connections
-1. Container Apps connections are displayed under **Settings > Service Connector**.
+1. Container Apps connections are displayed under **Settings > Service Connector (preview)**. Select **>** to expand the list and see the properties required by your application.
-1. Select **>** to expand the list and see the environment variables required by your application.
+1. Select your connection and then **Validate** to prompt Service Connector to check your connection.
-1. Select **Validate** check your connection status, and select **Learn more** to review the connection validation details.
+1. Select **Learn more** to review the connection validation details.
:::image type="content" source="./media/container-apps-quickstart/validation-result.png" alt-text="Screenshot of the Azure portal, get connection validation result.":::
service-connector Tutorial Portal Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-portal-key-vault.md
Title: Tutorial - Create a service connection and store secrets into Key Vault
-description: Tutorial showing how to create a service connection and store secrets into Key Vault
+ Title: Tutorial - Connect Azure services and store secrets in Key Vault
+description: Tutorial showing how to store your web app's secrets in Azure Key Vault using Service Connector
Previously updated : 05/23/2022 Last updated : 10/31/2023
-# Quickstart: Create a service connection and store secrets into Key Vault
+# Quickstart: Connect Azure services and store secrets in Azure Key Vault
Azure Key Vault is a cloud service that provides a secure store for secrets. You can securely store keys, passwords, certificates, and other secrets. When you create a service connection, you can securely store access keys and secrets into connected Key Vault. In this tutorial, you'll complete the following tasks using the Azure portal. Both methods are explained in the following procedures.
To create a service connection and store secrets in Key Vault with Service Conne
To store your connection access keys and secrets into a key vault, start by connecting your App Service to a key vault.
-1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use from the list.
+1. In the Azure portal, type **App Service** in the search menu and select the name of the App Service you want to use from the list.
1. Select **Service Connector** from the left table of contents. Then select **Create**.
1. Select or enter the following settings.

    | Setting | Suggested value | Description |
    | ------- | --------------- | ----------- |
- | **Service type** | Key Vault | Target service type. If you don't have a Key Vault, you need to [create one](../key-vault\general\quick-create-portal.md). |
+ | **Service type** | Key Vault | Target service type. If you don't have a Key Vault, [create one](../key-vault/general/quick-create-portal.md). |
| **Subscription** | One of your subscriptions. | The subscription in which your target service is deployed. The target service is the service you want to connect to. The default value is the subscription listed for the App Service. | | **Connection name** | Generated unique name | The connection name that identifies the connection between your App Service and target service |
- | **Key vault name** | Your Key vault name | The target Key Vault you want to connect to. |
+ | **Key vault name** | Your Key Vault name | The target Key Vault you want to connect to. |
| **Client type** | The same app stack on this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |

1. Select **Next: Authentication** to select the authentication type. Then select **System assigned managed identity** to connect your Key Vault.
To store your connection access keys and secrets into a key vault, start by conn
Now you can create a service connection to another target service and directly store access keys into a connected Key Vault when using a connection string/access key or a Service Principal for authentication. We'll use Blob Storage as an example below. Follow the same process for other target services.
-### [Connection string](#tab/connectionstring)
-
-1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use from the list.
+1. In the Azure portal, type **App Service** in the search menu and select the name of the App Service you want to use from the list.
1. Select **Service Connector** from the left table of contents. Then select **Create**.
1. Select or enter the following settings.

    | Setting | Suggested value | Description |
Now you can create a service connection to another target service and directly s
| **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
| **Client type** | The same app stack on this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
-1. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use an access key to connect your Blob storage account.
+1. Set up authentication
+
+ ### [Connection string](#tab/connectionstring)
+
+ Select **Next: Authentication** to select the authentication type and select **Connection string** to use an access key to connect your storage account.
| Setting | Suggested value | Description |
| ------- | --------------- | ----------- |
| **Store Secret to Key Vault** | Check | This option lets Service Connector store the connection string/access key into your Key Vault. |
| **Key Vault connection** | One of your Key Vault connections | Select the Key Vault in which you want to store your connection string/access key. |
-1. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update firewall allowlist in Key Vault so that your App Service can reach the Key Vault.
-
-1. Then select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. It might take one minute to complete the operation.
+ ### [Service principal](#tab/serviceprincipal)
-### [Service principal](#tab/serviceprincipal)
-
-1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use from the list.
-1. Select **Service Connector** from the left table of contents. Then select **Create**.
-1. Select or enter the following settings.
-
- | Setting | Suggested value | Description |
- | | - | -- |
- | **Service type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
- | **Subscription** | One of your subscriptions | The subscription in which your target service is deployed. The target service is the service you want to connect to. The default value is the subscription listed for the App Service. |
- | **Connection name** | Generated unique name | The connection name that identifies the connection between your App Service and target service. |
- | **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
- | **Client type** | The same app stack for this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
-
-1. Select **Next: Authentication** to select the authentication type and select **Service Principal** to use Service Principal to connect your Blob storage account.
+ Select **Next: Authentication** to select the authentication type and select **Service Principal** to use Service Principal to connect your storage account.
| Setting | Suggested value | Description | | | - | -- |
Now you can create a service connection to another target service and directly s
| **Store Secret to Key Vault** | Check | This option lets Service Connector store the service principal ID and secret into Key Vault. |
| **Key Vault connection** | One of your key vault connections | Select the Key Vault in which you want to store your service principal ID and secret. |
-1. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update firewall allowlist in Key Vault so that your App Service can reach the Key Vault.
+
-1. Then select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. It might take one minute to complete the operation.
+1. Select **Next: Network** and **Enable firewall settings** to update the firewall allowlist in Key Vault so that your App Service can reach the Key Vault.
-
+1. Then select **Next: Review + Create** to review the provided information.
+
+1. Select **Create** to create the service connection. It might take up to one minute to complete the operation.
## View your configuration in Key Vault
Now you can create a service connection to another target service and directly s
1. Select **Secrets** in the Key Vault left ToC, and select the blob storage secret name.
- > [!TIP]
- > Don't have permission to list secrets? Refer to [troubleshooting](../key-vault/general/troubleshooting-access-issues.md#im-not-able-to-list-or-get-secretskeyscertificate-im-seeing-a-something-went-wrong-error).
+ > [!TIP]
+ > Don't have permission to list secrets? Refer to [troubleshooting Azure Key Vault](../key-vault/general/troubleshooting-access-issues.md#im-not-able-to-list-or-get-secretskeyscertificate-im-seeing-a-something-went-wrong-error).
-4. Select a version ID from the Current Version list.
+1. Select a version ID from the Current Version list.
-5. Select **Show Secret Value** button and you'll see the actual connection string of this blob storage connection.
+1. Select **Show Secret Value** to get the connection string of this blob storage connection.
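As an alternative to the portal steps above, the same secret value can be read with the Azure CLI's `az keyvault secret show` command. The following sketch only assembles and prints the command so it stays self-contained; the vault and secret names are hypothetical placeholders:

```shell
# Hypothetical names; substitute your own Key Vault and secret.
VAULT_NAME=my-keyvault
SECRET_NAME=my-blob-connection

# --query value -o tsv extracts just the secret value from the JSON response.
CMD="az keyvault secret show --vault-name $VAULT_NAME --name $SECRET_NAME --query value -o tsv"
echo "$CMD"
```

Running the printed command requires the same list/get secret permissions mentioned in the tip above.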
## Clean up resources
service-fabric Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md
Previously updated : 10/23/2023
Last updated : 11/06/2023

# Azure Policy Regulatory Compliance controls for Azure Service Fabric
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
All the connection strings and credentials are injected as environment variables
For the default environment variable names, see the following articles:
-* [Azure Cosmos DB for Table](../service-connector/how-to-integrate-cosmos-table.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
+* [Azure Cosmos DB for Table](../service-connector/how-to-integrate-cosmos-table.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code)
* [Azure Cosmos DB for NoSQL](../service-connector/how-to-integrate-cosmos-sql.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code)
* [Azure Cosmos DB for MongoDB](../service-connector/how-to-integrate-cosmos-db.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
* [Azure Cosmos DB for Gremlin](../service-connector/how-to-integrate-cosmos-gremlin.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
spring-apps How To Configure Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-planned-maintenance.md
+
+ Title: How to configure planned maintenance for Azure Spring Apps
+description: Describes how to configure planned maintenance for Azure Spring Apps.
+Last updated : 11/07/2023
+# How to configure planned maintenance
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
+
+This article describes how to configure planned maintenance in Azure Spring Apps.
+
+Routine maintenance is necessary to keep the Azure Spring Apps platform up-to-date and secure. The maintenance, also called auto patching, includes security updates, bug fixes, new features, or performance improvements. Auto patching can be performed on components managed by Azure Spring Apps to support your Java applications, including JDK, APM, base OS image, managed middleware, and runtime infrastructure. For the maintenance to take effect, your applications restart within the maintenance window you specify, but the service quality and uptime guarantees continue to apply during this time.
+
+## Configure maintenance for Azure Spring Apps
+
+### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to configure planned maintenance in Azure Spring Apps:
+
+1. Go to the service **Overview** page and select **Planned Maintenance**.
+
+ :::image type="content" source="media/how-to-configure-planned-maintenance/maintenance-section.png" alt-text="Screenshot of Azure portal that shows the Azure Spring Apps sidebar with Planned Maintenance highlighted.":::
+
+1. Select **Choose your preferred time** to specify detailed configuration for the maintenance window.
+
+ :::image type="content" source="media/how-to-configure-planned-maintenance/maintenance-checkbox.png" alt-text="Screenshot of the Azure portal that shows the Planned maintenance page with the Choose your preferred time checkbox highlighted.":::
+
+1. Select **Day of the week** to schedule the maintenance.
+
+ :::image type="content" source="media/how-to-configure-planned-maintenance/maintenance-week.png" alt-text="Screenshot of Azure portal that shows the Planned maintenance page with the Day of week option highlighted.":::
+
+1. Select **Start time of upgrade**.
+
+ :::image type="content" source="media/how-to-configure-planned-maintenance/maintenance-time.png" alt-text="Screenshot of Azure portal that shows the Planned maintenance page with the Start time of upgrade option highlighted.":::
+
+1. Select **Apply** to submit your configuration for planned maintenance.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the following command to configure planned maintenance:
+
+```azurecli
+az spring update \
+ --resource-group <resource-group-name> \
+ --name <Azure-Spring-Apps-instance-name> \
+ --enable-planned-maintenance \
+ --planned-maintenance-day $DAY_OF_WEEK \
+ --planned-maintenance-start-hour $START_HOUR
+```
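The placeholders in the command above can be filled in like the following sketch (the resource names and schedule values are hypothetical; the day takes a weekday name and the start hour is in 24-hour time). The sketch prints the fully substituted command rather than invoking `az`, so it is self-contained:

```shell
# Hypothetical schedule: maintenance every Friday, starting at 10:00.
DAY_OF_WEEK=Friday
START_HOUR=10

# Print the substituted command instead of running it.
echo "az spring update --resource-group my-rg --name my-spring-app \
  --enable-planned-maintenance \
  --planned-maintenance-day $DAY_OF_WEEK \
  --planned-maintenance-start-hour $START_HOUR"
```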
+++
+Updating the configuration can take a few minutes. You get a notification when the configuration is complete.
+
+> [!NOTE]
+> If you don't configure planned maintenance, the maintenance takes place at a time chosen by the service team, with the best effort to minimize business risks for most customers.
+
+## Manage maintenance notification
+
+Notifications and messages are sent out before and during the maintenance. The following table describes the message types and time details:
+
+| Sequence number | Message type | Channel | Time the message is sent out |
+|--|--|--|--|
+| 1 | Release note | Activity Log | At the end of the release rollout. |
+| 2 | Maintenance announcement | Planned Maintenance | Two weeks before the first available maintenance window. |
+| 3 | Start of maintenance window | Activity Log | At the start of the execution of the entire maintenance. |
+| 4 | Changelog of components | Activity Log | At the end of upgrade for each managed component. |
+| 5 | End of maintenance window | Activity Log | At the end of the execution of the entire maintenance. |
+| 6 | Feature update | What's New article | After the new feature becomes available to the customers. |
+
+## Manage maintenance frequency
+
+Currently, Azure Spring Apps performs one regular planned maintenance to upgrade the underlying infrastructure every three months. For a detailed maintenance timeline, check the notifications on the [Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health) page.
+
+## Best practices
+
+- When you configure planned maintenance for multiple service instances in the same region, the maintenance takes place within the same week. For example, if maintenance for cluster A is set on Monday and cluster B on Sunday, then cluster A is maintained before cluster B, in the same week.
+- If you have two service instances that span across [Azure paired regions](../availability-zones/cross-region-replication-azure.md#azure-paired-regions), the maintenance takes place in different weeks for such service instances, but there's no guarantee which region is maintained first. Follow each maintenance announcement for the exact information.
+- The length of the time window for the planned maintenance is fixed to 8 hours. For example, if the start time is set to 10:00, then the maintenance job is executed at any time between 10:00 and 18:00. The service team tries its best to finish the maintenance within this time window, but sometimes it might take longer.
+- You can't exempt a maintenance job regardless of how or whether planned maintenance is configured. If you have special requests for a maintenance time that can't be met with this feature, open a support ticket.
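The fixed 8-hour window described in the best practices above can be sketched with a little shell arithmetic (the start hour is a hypothetical example; hours wrap at midnight):

```shell
# Hypothetical start hour in 24-hour time; the window length is fixed at 8 hours.
START_HOUR=10
WINDOW_HOURS=8
END_HOUR=$(( (START_HOUR + WINDOW_HOURS) % 24 ))
echo "Maintenance may run any time between ${START_HOUR}:00 and ${END_HOUR}:00"
```

For a start hour of 10, this prints a window of 10:00 to 18:00, matching the example above.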
+
+## Next steps
+
+- [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md)
spring-apps Quickstart Deploy Event Driven App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app.md
This article provides the following options for deploying to Azure Spring Apps:
::: zone pivot="sc-enterprise"

-- The Azure portal is the easiest and fastest way to create resources and deploy applications with a single click. This method is suitable for Spring developers who want to quickly deploy applications to Azure cloud services.
-- The Azure CLI is a powerful command line tool to manage Azure resources. It's suitable for Spring developers who are familiar with Azure cloud services.
-
+- The **Azure portal** option is the easiest and fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services.
+- The **Azure portal + Maven plugin** option is a more conventional way to create resources and deploy applications step by step. This option is suitable for Spring developers using Azure cloud services for the first time.
+- The **Azure CLI** option uses a powerful command line tool to manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services.
::: zone-end

## 1. Prerequisites
This article provides the following options for deploying to Azure Spring Apps:
### [Azure portal](#tab/Azure-portal)

-- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin)
This article provides the following options for deploying to Azure Spring Apps:
### [Azure portal](#tab/Azure-portal-ent)

-- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
### [Azure CLI](#tab/Azure-CLI)
Use the following steps to confirm that the event-driven app works correctly. Yo
::: zone pivot="sc-consumption-plan,sc-standard"
-1. Send a message to the `lower-case` queue with Service Bus Explorer. For more information, see the [Send a message to a queue or topic](../service-bus-messaging/explorer.md#send-a-message-to-a-queue-or-topic) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
-
-1. Confirm that there's a new message sent to the `upper-case` queue. For more information, see the [Peek a message](../service-bus-messaging/explorer.md#peek-a-message) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
-1. Go to the Azure Spring Apps instance **Overview** page and select **Logs** to check the app's logs.
+3. Go to the Azure Spring Apps instance **Overview** page and select **Logs** to check the app's logs.
:::image type="content" source="media/quickstart-deploy-event-driven-app/logs.png" alt-text="Screenshot of the Azure portal that shows the Azure Spring Apps Logs page." lightbox="media/quickstart-deploy-event-driven-app/logs.png":::
Use the following steps to confirm that the event-driven app works correctly. Yo
### [Azure portal](#tab/Azure-portal-ent)
-1. Send a message to the `lower-case` queue with Service Bus Explorer. For more information, see the [Send a message to a queue or topic](../service-bus-messaging/explorer.md#send-a-message-to-a-queue-or-topic) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
-1. Confirm that there's a new message sent to the `upper-case` queue. For more information, see the [Peek a message](../service-bus-messaging/explorer.md#peek-a-message) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
+3. Go to the Azure Spring Apps instance **Overview** page and select **Logs** to check the app's logs.
-1. Go to the Azure Spring Apps instance **Overview** page and select **Logs** to check the app's logs.
+ :::image type="content" source="media/quickstart-deploy-event-driven-app/logs.png" alt-text="Screenshot of the Azure portal that shows the Azure Spring Apps Logs page." lightbox="media/quickstart-deploy-event-driven-app/logs.png":::
+
+1. Check the details for each resource deployment, which are useful for investigating any deployment issues.
+
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
++
+3. Go to the Azure Spring Apps instance **Overview** page and select **Logs** to check the app's logs.
:::image type="content" source="media/quickstart-deploy-event-driven-app/logs.png" alt-text="Screenshot of the Azure portal that shows the Azure Spring Apps Logs page." lightbox="media/quickstart-deploy-event-driven-app/logs.png":::
Use the following steps to confirm that the event-driven app works correctly. Yo
### [Azure CLI](#tab/Azure-CLI)
-1. Send a message to the `lower-case` queue with Service Bus Explorer. For more information, see the [Send a message to a queue or topic](../service-bus-messaging/explorer.md#send-a-message-to-a-queue-or-topic) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
-
-1. Confirm that there's a new message sent to the `upper-case` queue. For more information, see the [Peek a message](../service-bus-messaging/explorer.md#peek-a-message) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
-1. Go to the Azure Spring Apps instance **Overview** page and select **Logs** to check the app's logs.
+3. Go to the Azure Spring Apps instance **Overview** page and select **Logs** to check the app's logs.
:::image type="content" source="media/quickstart-deploy-event-driven-app/logs.png" alt-text="Screenshot of the Azure portal that shows the Azure Spring Apps Logs page." lightbox="media/quickstart-deploy-event-driven-app/logs.png":::
Use the following steps to confirm that the event-driven app works correctly. Yo
> [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)

> [!div class="nextstepaction"]
-> [Set up Azure Spring Apps CI/CD with Azure DevOps](./how-to-cicd.md)
+> [Automate application deployments to Azure Spring Apps](./how-to-cicd.md)
> [!div class="nextstepaction"]
> [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md)
Use the following steps to confirm that the event-driven app works correctly. Yo
::: zone pivot="sc-standard, sc-consumption-plan"

> [!div class="nextstepaction"]
-> [Run microservice apps(Pet Clinic)](./quickstart-sample-app-introduction.md)
+> [Run microservice apps (Pet Clinic)](./quickstart-sample-app-introduction.md)
::: zone-end

::: zone pivot="sc-enterprise"

> [!div class="nextstepaction"]
-> [Run polyglot apps on Enterprise plan(ACME Fitness Store)](./quickstart-sample-app-acme-fitness-store-introduction.md)
+> [Introduction to the Fitness Store sample app](./quickstart-sample-app-acme-fitness-store-introduction.md)
::: zone-end
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
zone_pivot_groups: spring-apps-plan-selection
This article explains how to deploy a small application to run on Azure Spring Apps.
-The application code used in this tutorial is a simple app. When you've completed this example, the application is accessible online, and you can manage it through the Azure portal.
-
+The application code used in this tutorial is a simple app. When you complete this example, the application is accessible online, and you can manage it through the Azure portal.
[!INCLUDE [quickstart-tool-introduction](includes/quickstart/quickstart-tool-introduction.md)] --
-This article provides the following options for deploying to Azure Spring Apps:
-- The Azure portal is the easiest and fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services.
-- The Azure CLI is a powerful command line tool to manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services.
-- IntelliJ is a powerful Java IDE to easily manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services and IntelliJ IDEA.
-- Visual Studio Code is a lightweight but powerful source code editor, which can easily manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services and Visual Studio Code.
-

## 1. Prerequisites

::: zone pivot="sc-consumption-plan,sc-standard"

### [Azure portal](#tab/Azure-portal)

-- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin)
This article provides the following options for deploying to Azure Spring Apps:
### [Azure portal](#tab/Azure-portal-ent)

-- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
### [Azure CLI](#tab/Azure-CLI)
This section describes how to validate your application.
After the deployment finishes, find the application URL from the deployment outputs. Use the following steps to validate:
+1. Access the application URL from the **Outputs** page of the **Deployment**. When you open the app, you get the response `Hello World`.
-1. Access the application URL. When you open the app, you get the response `Hello World`.
+ :::image type="content" source="media/quickstart/hello-app-url.png" alt-text="Screenshot of the Azure portal that shows the Outputs page of the Deployment." border="false" lightbox="media/quickstart/hello-app-url.png":::
1. Check the details for each resource deployment, which are useful for investigating any deployment issues.
After the deployment finishes, find the application URL from the deployment outp
After the deployment finishes, access the application with the output application URL. Use the following steps to check the app's logs to investigate any deployment issue:
-1. Access the output application URL. When you open the app, you get the response `Hello World`.
+1. Access the output application URL from the **Outputs** page of the **Deployment**. When you open the app, you get the response `Hello World`.
+
+ :::image type="content" source="media/quickstart/hello-app-url.png" alt-text="Screenshot of the Azure portal that shows the Outputs page of the Deployment." border="false" lightbox="media/quickstart/hello-app-url.png":::
1. From the navigation pane of the Azure Spring Apps instance **Overview** page, select **Logs** to check the app's logs.
After the deployment finishes, access the application with the output endpoint.
After the deployment finishes, use the following steps to find the application URL from the deployment outputs:
+1. Access the application URL from the **Outputs** page of the **Deployment**. When you open the app, you get the response `Hello World`.
-1. Access the application URL. When you open the app, you get the response `Hello World`.
+ :::image type="content" source="media/quickstart/hello-app-url.png" alt-text="Screenshot of the Azure portal that shows the Outputs page of the Deployment." border="false" lightbox="media/quickstart/hello-app-url.png":::
1. Check the details for each resource deployment, which are useful for investigating any deployment issues.
After the deployment finishes, use the following steps to find the application U
After the deployment finishes, use the following steps to check the app's logs to investigate any deployment issue:
-1. Access the application with the output application URL. When you open the app, you get the response `Hello World`.
+1. Access the application URL from the **Outputs** page of the **Deployment**. When you open the app, you get the response `Hello World`.
+
+ :::image type="content" source="media/quickstart/hello-app-url.png" alt-text="Screenshot of the Azure portal that shows the Outputs page of the Deployment." border="false" lightbox="media/quickstart/hello-app-url.png":::
1. From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
After the deployment finishes, access the application with the output endpoint.
After the deployment finishes, use the following steps to find the application URL from the deployment outputs:
+1. Access the application URL from the **Outputs** page of the **Deployment**. When you open the app, you get the response `Hello World`.
-1. Access the application URL. When you open the app, you get the response `Hello World`.
+ :::image type="content" source="media/quickstart/hello-app-url.png" alt-text="Screenshot of the Azure portal that shows the Outputs page of the Deployment." border="false" lightbox="media/quickstart/hello-app-url.png":::
1. Check the details for each resource deployment, which are useful for investigating any deployment issues.
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+After the deployment finishes, use the following steps to validate the app:
+
+1. Access the application URL. When you open the app, you get the response `Hello World`.
+
+1. Check the console logs, which are useful for investigating any deployment issues.
+
### [Azure CLI](#tab/Azure-CLI)

After the deployment finishes, use the following steps to check the app's logs to investigate any deployment issue:
spring-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps
description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 10/23/2023
Last updated : 11/06/2023
static-web-apps Branch Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/branch-environments.md
# Create branch preview environments in Azure Static Web Apps
-You can configure your site to deploy every change made to branches that aren't a production branch. This preview deployment is published at a stable URL that includes the branch name. For example, if the branch is named `dev`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-dev.<LOCATION>.azurestaticapps.net`.
+You can configure your site to deploy every change made to branches that aren't a production branch. This preview deployment is published at a stable URL that includes the branch name. For example, if the branch is named `dev`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-dev.<LOCATION>.azurestaticapps.net`. You can delete a branch environment in the portal via the *Environments* tab of your static web app.
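The preview URL pattern described above can be sketched by composing its parts in shell (the host name, location, and branch values here are hypothetical examples):

```shell
# Hypothetical values for illustration only.
DEFAULT_HOST_NAME=happy-river-123
LOCATION=westus2
BRANCH=dev

# The branch name is appended to the default host name with a hyphen.
PREVIEW_URL="https://${DEFAULT_HOST_NAME}-${BRANCH}.${LOCATION}.azurestaticapps.net"
echo "$PREVIEW_URL"
```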
## Configuration
static-web-apps Preview Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/preview-environments.md
The following deployment types are available in Azure Static Web Apps.
- [**Pull requests**](review-publish-pull-requests.md): Pull requests against your production branch deploy to a temporary environment that disappears after the pull request is closed. The URL for this environment includes the PR number as a suffix. For example, if you make your first PR, the preview location looks something like `<DEFAULT_HOST_NAME>-1.<LOCATION>.azurestaticapps.net`.

-- [**Branch**](branch-environments.md): You can optionally configure your site to deploy every change made to branches that aren't a production branch. This preview deployment is published at a stable URL that includes the branch name. For example, if the branch is named `dev`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-dev.<LOCATION>.azurestaticapps.net`.
+- [**Branch**](branch-environments.md): You can optionally configure your site to deploy every change made to branches that aren't a production branch. This preview deployment is published at a stable URL that includes the branch name. For example, if the branch is named `dev`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-dev.<LOCATION>.azurestaticapps.net`. You can delete a branch environment in the portal via the *Environments* tab of your static web app.
- [**Named environment**](named-environments.md): You can configure your pipeline to deploy all changes to a named environment. This preview deployment is published at a stable URL that includes the environment name. For example, if the deployment environment is named `release`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-release.<LOCATION>.azurestaticapps.net`.
static-web-apps Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/private-endpoint.md
The default DNS resolution of the static web app still exists and routes to a pu
If you're connecting from on-premises or don't want to use a private DNS zone, manually configure the DNS records for your application so that requests are routed to the appropriate IP address of the private endpoint. For more information, see [Azure private endpoint DNS resolution](../private-link/private-endpoint-dns.md).
+> [!NOTE]
+> Private endpoints restrict the incoming traffic going to the website to a specific virtual network. They do not apply to deployments of new site assets.
+
## Prerequisites

- An Azure account with an active subscription.
static-web-apps Review Publish Pull Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/review-publish-pull-requests.md
This article shows you how to use pre-production environments to review changes to applications that are deployed with [Azure Static Web Apps](overview.md). A pre-production environment is a fully functional staged version of your application that includes changes not available in production.
-Azure Static Web Apps generates a GitHub Actions workflow in the repo. When a pull request is created against a branch that the workflow watches, the pre-production environment gets built. The pre-production environment stages the app, so you can review the changes before you push them to production.
+Azure Static Web Apps generates a GitHub Actions workflow in the repo. When a pull request is created against a branch that the workflow watches, the pre-production environment gets built. The pre-production environment stages the app, so you can review the changes before you push them to production. The lifecycle of a pre-production environment is tied to the pull request. Once the pull request is closed, the pre-production environment is automatically deleted.
You can do the following tasks within pre-production environments:
storage-mover Deployment Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/deployment-planning.md
# Plan a successful Azure Storage Mover deployment
-Deploying Azure Storage Mover in one of your Azure subscriptions is the first step in realizing your migration goals. Azure Storage Mover can help you with the migration of your files and folders into Azure Storage. This article discusses the important decisions and best practices for a Storage Mover deployment.
+Deploying Azure Storage Mover in one of your Azure subscriptions is the first step in realizing your migration goals. Azure Storage Mover can help you migrate your files and folders into Azure Storage. This article discusses the important decisions and best practices for a Storage Mover deployment.
## Make sure the service works for your scenario
-Azure Storage Mover aspires to work for a wide range of migration scenarios. However, the service is new and therefore supports a relatively limited number of migration scenarios. Ensure that the service works for you by consulting the [supported sources and targets section](service-overview.md#supported-sources-and-targets) in the [Azure Storage Mover overview article](service-overview.md).
+Azure Storage Mover aspires to work for a wide range of migration scenarios. However, the service is relatively new and therefore presently supports a limited number of migration scenarios. Ensure that the service works for your specific scenario by consulting the [supported sources and targets section](service-overview.md#supported-sources-and-targets) in the [Azure Storage Mover overview article](service-overview.md).
## Deployment basics
If you want to learn more about how the agent gets access to migrate the data, r
## Next steps

These articles can help you become more familiar with the Storage Mover service.

- [Understanding the Storage Mover resource hierarchy](resource-hierarchy.md)
- [Deploying a Storage Mover resource](storage-mover-create.md)
- [Deploying a Storage Mover agent](agent-deploy.md)
storage-mover Endpoint Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/endpoint-manage.md
While the term *endpoint* is often used in networking, it's used in the context of the Storage Mover service to describe a storage location with a high level of detail.
-A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition to define the source and target locations for a particular copy operation. Only certain types of endpoints may be used as a source or a target, respectively. For example, data contained within an NFS (Network File System) file share endpoint can only be copied to a blob storage container. Similarly, copy operations with an SMB-based (Server Message Block) file share target can only be migrated to an Azure file share,
+A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition to define the source and target locations for a particular copy operation. Only certain types of endpoints can be used as a source or a target, respectively. For example, data contained within an NFS (Network File System) file share endpoint can only be copied to a blob storage container. Similarly, copy operations from an SMB-based (Server Message Block) file share source can only target an Azure file share.
-This article guides you through the creation and management of Azure Storage Mover endpoints. To follow these examples, you need a top-level storage mover resource. If you haven't yet created one, follow the steps within the [Create a Storage Mover resource](storage-mover-create.md) article before continuing.
+This article guides you through the creation and management of Azure Storage Mover endpoints. To follow these examples, you need a top-level storage mover resource. If you haven't created one, follow the steps within the [Create a Storage Mover resource](storage-mover-create.md) article before continuing.
After you complete the steps within this article, you'll be able to create and manage endpoints using the Azure portal and Azure PowerShell.
Within the Azure Storage Mover resource hierarchy, a migration project is used to organize migration jobs into logical tasks or components. A migration project in turn contains at least one job definition, which describes both the source and target locations for your migration project. The [Understanding the Storage Mover resource hierarchy](resource-hierarchy.md) article contains more detailed information about the relationships between a Storage Mover, its endpoints, and its projects.
-Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource, the properties of each endpoint may vary based on its type. For example, NFS (Network File System) shares, SMB shares, and Azure Storage blob container endpoints each require fundamentally different information.
+Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource type, the properties of each individual endpoint might vary based on its type. For example, NFS (Network File System) shares, SMB shares, and Azure Storage blob container endpoints each require fundamentally different information.
[!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)]

### SMB endpoints
- SMB uses the ACL (access control list) concept and user-based authentication to provide access to shared files for selected users. To maintain security, Storage Mover relies on Azure Key Vault integration to securely store and tightly control access to user credentials and other secrets. During a migration, storage mover agent resources connect to your SMB endpoints with Key Vault secrets rather than with unsecure hard-coded credentials. This approach greatly reduces the chance that secrets may be accidentally leaked.
+ SMB uses the ACL (access control list) concept and user-based authentication to provide access to shared files for selected users. To maintain security, Storage Mover relies on Azure Key Vault integration to securely store and tightly control access to user credentials and other secrets. During a migration, storage mover agent resources connect to your SMB endpoints with Key Vault secrets rather than with unsecure hard-coded credentials. This approach greatly reduces the chance that secrets might be accidentally leaked.
After your local file share source is configured, add secrets for both a username and a password to your Key Vault. You need to supply both your Key Vault's name or Uniform Resource Identifier (URI), and the names or URIs of the credential secrets when creating your SMB endpoints.
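The secret-creation step above can be sketched with Azure PowerShell. This is a minimal sketch, not the article's own sample: the vault name (`mover-kv`) and secret names (`smb-username`, `smb-password`) are placeholder assumptions; the cmdlets come from the `Az.KeyVault` module.

```powershell
# Sketch: store the SMB share credentials as Key Vault secrets.
# Vault and secret names below are examples only - substitute your own.
$vaultName = 'mover-kv'

# Set-AzKeyVaultSecret requires SecureString values.
$userName = ConvertTo-SecureString 'MIGRATION\smbuser' -AsPlainText -Force
$password = Read-Host -Prompt 'SMB password' -AsSecureString

Set-AzKeyVaultSecret -VaultName $vaultName -Name 'smb-username' -SecretValue $userName
Set-AzKeyVaultSecret -VaultName $vaultName -Name 'smb-password' -SecretValue $password

# Display the secret URIs you'll need when creating the SMB endpoint:
(Get-AzKeyVaultSecret -VaultName $vaultName -Name 'smb-username').Id
(Get-AzKeyVaultSecret -VaultName $vaultName -Name 'smb-password').Id
```

Keep the secret URIs handy; the endpoint-creation step asks for either the secret names or these URIs.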
Azure Storage Mover supports migration scenarios using NFS and SMB protocols. Th
### Create a source endpoint
-Source endpoints identify locations from which your data is migrated. Source endpoints are used to define the origin the data specified within your migration project. Azure Storage Mover handles source locations in the form of file shares. These locations may reside on Network Attached Storage (NAS), a server, or even on a workstation. Common protocols for file shares are SMB (Server Message Block) and NFS (Network File System).
+Source endpoints identify locations from which your data is migrated. Source endpoints are used to define the origin the data specified within your migration project. Azure Storage Mover handles source locations in the form of file shares. These locations might reside on Network Attached Storage (NAS), a server, or even on a workstation. Common protocols for file shares are SMB (Server Message Block) and NFS (Network File System).
The following steps describe the process of creating a source endpoint.
> [!IMPORTANT]
> Depending on your DNS configuration, you might need to use your fully qualified domain name (FQDN) instead of your hostname.
- You may also add an optional **Description** value of up to 1024 characters in length. Next, select **Protocol version** to expand the protocol selection menu and select the appropriate option for your source target.
+ You can also add an optional **Description** value of up to 1024 characters in length. Next, select **Protocol version** to expand the protocol selection menu and select the appropriate option for your source.
Storage mover agents use secrets stored within Key Vault to connect to SMB endpoints. When you create an SMB source endpoint, you need to provide both the name of the Key Vault containing the secrets and the names of the secrets themselves.
:::image type="content" source="media/endpoint-manage/key-vault.png" alt-text="Screenshot of the Create Source pane showing the drop-down list containing a resource group's Key Vaults.":::
- After you've selected the appropriate Key Vault, you can supply values for the required **Select secret for username** and **Select secret for password** fields. These values can be supplied by providing the URI to the secrets, or by selecting the secrets from a list. Select the **Select secret** button to enable the menu and select the username and password values. Alternatively, you can enable the **Enter secret from URI** option and supply the appropriate URI to the username and password secret.
+ After you select the appropriate Key Vault, you can supply values for the required **Select secret for username** and **Select secret for password** fields. These values can be supplied by providing the URI to the secrets, or by selecting the secrets from a list. Select the **Select secret** button to enable the menu and select the username and password values. Alternatively, you can enable the **Enter secret from URI** option and supply the appropriate URI to the username and password secret.
- The values for host and share name are concatenated to form the full migration source path. The path value is displayed in the **Full source path** field. Copy the path provided and verify that you're able to access it before committing your changes. Finally, when you've confirmed that all values are correct and that you can access the source path, select **Create** to add your new endpoint.
+ The values for host and share name are concatenated to form the full migration source path. The path value is displayed in the **Full source path** field. Copy the path provided and verify that you're able to access it before committing your changes. Finally, after confirming that all values are correct and that you can access the source path, select **Create** to add your new endpoint.
:::image type="content" source="media/endpoint-manage/secrets.png" alt-text="Screenshot of the Create Endpoint pane showing the location of the Secrets options." lightbox="media/endpoint-manage/secrets-lrg.png":::
The `New-AzStorageMoverSmbEndpoint` and `New-AzStorageMoverNfsEndpoint` cmdlets are used to create a new endpoint within a [storage mover resource](storage-mover-create.md) you previously deployed.
- If you haven't yet installed the `Az.StorageMover` module:
+ If you haven't installed the `Az.StorageMover` module:
    ```powershell
    ## Ensure you are running the latest version of PowerShell 7
    Install-Module -Name Az.StorageMover -Scope CurrentUser -Repository PSGallery
    ```
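The endpoint-creation cmdlets named above can then be sketched as follows. The resource names, host names, and Key Vault secret URIs are placeholders, and the parameter names are assumptions based on the module's conventions — verify them with `Get-Help New-AzStorageMoverSmbEndpoint` before running.

```powershell
# Sketch only: create an SMB source endpoint that references Key Vault secrets.
New-AzStorageMoverSmbEndpoint `
    -ResourceGroupName 'demo-rg' `
    -StorageMoverName  'demo-mover' `
    -Name 'smb-source' `
    -Host 'fileserver01.contoso.com' `
    -ShareName 'share01' `
    -CredentialsUsernameUri 'https://mover-kv.vault.azure.net/secrets/smb-username' `
    -CredentialsPasswordUri 'https://mover-kv.vault.azure.net/secrets/smb-password'

# NFS sources don't use Key Vault credentials; a host and export suffice.
New-AzStorageMoverNfsEndpoint `
    -ResourceGroupName 'demo-rg' `
    -StorageMoverName  'demo-mover' `
    -Name 'nfs-source' `
    -Host 'nas01.contoso.com' `
    -Export '/mnt/share01'
```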
The following steps describe the process of creating a target endpoint.
[!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)]
- Depending on the target type you choose, select either your **Blob container** or your **File share** from the corresponding drop-down list. Finally, you may add an optional **Description** value for your target of up to 1024 characters in length and select **Create** to deploy your endpoint.
+ Depending on the target type you choose, select either your **Blob container** or your **File share** from the corresponding drop-down list. Finally, you can add an optional **Description** value for your target of up to 1024 characters in length and select **Create** to deploy your endpoint.
:::image type="content" source="media/endpoint-manage/endpoint-target-create.png" alt-text="Screenshot of the Create Endpoint pane showing the location of the required fields and Create button." lightbox="media/endpoint-manage/endpoint-target-create-lrg.png":::
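The equivalent target-endpoint creation can be sketched in Azure PowerShell. This is a hedged sketch: the cmdlet and parameter names follow the `Az.StorageMover` naming pattern but should be confirmed with `Get-Command -Module Az.StorageMover`; all resource names are placeholders.

```powershell
# Sketch only: create a blob container target endpoint.
# <sub-id> is deliberately left as a placeholder for your subscription ID.
$storageAccountId = '/subscriptions/<sub-id>/resourceGroups/demo-rg' +
    '/providers/Microsoft.Storage/storageAccounts/demotarget'

New-AzStorageMoverAzStorageContainerEndpoint `
    -ResourceGroupName 'demo-rg' `
    -StorageMoverName  'demo-mover' `
    -Name 'container-target' `
    -StorageAccountResourceId $storageAccountId `
    -BlobContainerName 'migrated-data' `
    -Description 'Target for the share01 migration'
```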
## View and edit an endpoint's properties
-Depending on your use case, you may need to retrieve either a specific endpoint, or a complete list of all your endpoint resources. You may also need to add or edit an endpoint's description.
+Depending on your use case, you might need to retrieve either a specific endpoint, or a complete list of all your endpoint resources. You might also need to add or edit an endpoint's description.
Follow the steps in this section to view endpoints accessible to your Storage Mover resource.
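In PowerShell, viewing endpoints can be sketched with `Get-AzStorageMoverEndpoint`; omitting `-Name` returns the complete list parented to the storage mover. Resource names are placeholders.

```powershell
# Sketch: list every endpoint parented to a storage mover resource...
Get-AzStorageMoverEndpoint `
    -ResourceGroupName 'demo-rg' `
    -StorageMoverName  'demo-mover'

# ...or fetch a single endpoint by name to inspect its properties.
Get-AzStorageMoverEndpoint `
    -ResourceGroupName 'demo-rg' `
    -StorageMoverName  'demo-mover' `
    -Name 'smb-source'
```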
## Delete an endpoint
-The removal of an endpoint resource should be a relatively rare occurrence in your production environment, though there may be occasions where it may be helpful. To delete a Storage Mover endpoint resource, follow the provided example.
+The removal of an endpoint resource should be a relatively rare occurrence in your production environment, though there might be occasions when it's helpful. To delete a Storage Mover endpoint resource, follow the provided example.
> [!WARNING]
> Deleting an endpoint is a permanent action and cannot be undone. Ensure that you're prepared to delete the endpoint, since you won't be able to restore it at a later time.
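In PowerShell, deletion can be sketched with `Remove-AzStorageMoverEndpoint`; the resource names below are placeholders.

```powershell
# Sketch: permanently delete a single endpoint resource.
# There is no undo - double-check the endpoint name before running.
Remove-AzStorageMoverEndpoint `
    -ResourceGroupName 'demo-rg' `
    -StorageMoverName  'demo-mover' `
    -Name 'smb-source'
```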
storage-mover Job Definition Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/job-definition-create.md
Previously updated : 08/04/2023 Last updated : 10/30/2023
There are three prerequisites to defining the migration of your source shares.
## Create and start a job definition
-A job definition is created within a project resource. Creating a job definition requires you to select or configure a project, a source and target storage endpoint, and a job name. If you've followed the examples contained in previous articles, you may have an existing project within a previously deployed storage mover resource. Follow the steps in this section to add a job definition to a project.
+A job definition is created within a project resource. Creating a job definition requires you to select or configure a project, a source and target storage endpoint, and a job name. If you've followed the examples contained in previous articles, you might have an existing project within a previously deployed storage mover resource. Follow the steps in this section to add a job definition to a project.
Storage endpoints are separate resources in your storage mover. You need to create a source and target endpoint before you can reference them within a job definition. The examples in this section describe the process of creating endpoints.
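Once both endpoints exist, creating the job definition itself can be sketched as follows. The parameter names (`SourceName`, `TargetName`, `CopyMode`, `AgentName`) and the `Mirror` copy-mode value are assumptions — verify them with `Get-Help New-AzStorageMoverJobDefinition`; all resource names are placeholders.

```powershell
# Sketch only: create a job definition inside a project, tying together a
# previously created source endpoint, target endpoint, and registered agent.
New-AzStorageMoverJobDefinition `
    -ResourceGroupName 'demo-rg' `
    -StorageMoverName  'demo-mover' `
    -ProjectName 'share01-migration' `
    -Name 'share01-job' `
    -SourceName 'smb-source' `
    -TargetName 'container-target' `
    -CopyMode 'Mirror' `
    -AgentName 'agent01'
```

Recall that the source and target names can't be changed after the job definition is created, so confirm both endpoints before running the sketch.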
Refer to the [resource naming convention](../azure-resource-manager/management/r
:::image type="content" source="media/job-definition-create/project-selected-sml.png" alt-text="Screen capture of the Project Explorer's Overview tab within the Azure portal highlighting the use of filters." lightbox="media/job-definition-create/project-selected-lrg.png":::
-1. In the **Basics** tab of the **Create a migration job** window, enter a value in the required **Name** field. You may also add an optional description value of less than 1024 characters. Finally, in the **Migration agent** section, select the agent to perform the data migration and then select **Next** to open the **Source** tab. You should choose an agent located as near your data source as possible. The selected agent should also have resources appropriate to the size and complexity of the job. You can assign a different agent to your job at a later time if desired.
+1. In the **Basics** tab of the **Create a migration job** window, enter a value in the required **Name** field. You can also add an optional description value of less than 1024 characters. Finally, in the **Migration agent** section, select the agent to perform the data migration and then select **Next** to open the **Source** tab. You should choose an agent located as near your data source as possible. The selected agent should also have resources appropriate to the size and complexity of the job. You can assign a different agent to your job at a later time if desired.
:::image type="content" source="media/job-definition-create/tab-basics-sml.png" alt-text="Screen capture of the migration job's Basics tab, showing the location of the data fields." lightbox="media/job-definition-create/tab-basics-lrg.png":::
Refer to the [resource naming convention](../azure-resource-manager/management/r
:::image type="content" source="media/job-definition-create/endpoint-source-existing-sml.png" alt-text="Screen capture of the Source tab illustrating the location of the Existing Source Endpoint field." border="false" lightbox="media/job-definition-create/endpoint-source-existing-lrg.png":::
- To define a new source endpoint from which to migrate your data, select the **Create a new endpoint** option. Next, provide values for the required **Host name or IP**, **Share name**, and **Protocol version** fields. You may also add an optional description value of less than 1024 characters.
+ To define a new source endpoint from which to migrate your data, select the **Create a new endpoint** option. Next, provide values for the required **Host name or IP**, **Share name**, and **Protocol version** fields. You can also add an optional description value of less than 1024 characters.
:::image type="content" source="media/job-definition-create/endpoint-source-new-sml.png" alt-text="Screen capture of the Source tab illustrating the location of the New Source Endpoint fields." lightbox="media/job-definition-create/endpoint-source-new-lrg.png":::
- Only certain types of endpoints may be used as a source or a target, respectively. The steps to create different endpoint types are similar, as are their corresponding data fields. The key differentiator between the creation of NFS- and SMB-enabled endpoints is the use of Azure Key Vault to store the shared credential for SMB resources. When you create an endpoint resource that supports the SMB protocol, you're required to provide values for the Key Vault name, and the names of the username and password secrets as well.
+ Only certain types of endpoints can be used as a source or a target, respectively. The steps to create different endpoint types are similar, as are their corresponding data fields. The key differentiator between the creation of NFS- and SMB-enabled endpoints is the use of Azure Key Vault to store the shared credential for SMB resources. When you create an endpoint resource that supports the SMB protocol, you're required to provide values for the Key Vault name, and the names of the username and password secrets as well.
Select the name of the Key Vault from the **Key Vault** drop-down lists. You can provide values for the **Secret for username** and **Secret for password** by selecting the relevant secret from the corresponding drop-down list. Alternatively, you can provide the URI to the secret as shown in the following screen capture.
- For more details on endpoint resources, see the [Managing Storage Mover endpoints](endpoint-manage.md) article.
+ For more information on endpoint resources, see the [Managing Storage Mover endpoints](endpoint-manage.md) article.
:::image type="content" source="media/job-definition-create/endpoint-smb-new-sml.png" alt-text="Screen capture of the fields required to create a new SMB source endpoint resource." lightbox="media/job-definition-create/endpoint-smb-new-lrg.png":::
Refer to the [resource naming convention](../azure-resource-manager/management/r
:::image type="content" source="media/job-definition-create/endpoint-target-existing-sml.png" alt-text="Screen capture of the Target tab illustrating the location of the Existing Target Endpoint field." border="false" lightbox="media/job-definition-create/endpoint-target-existing-lrg.png":::
- Similarly, to define a new target endpoint, choose the **Create a new endpoint** option. Next, select values from the drop-down lists for the required **Subscription** and **Storage account** fields. You may also add an optional description value of less than 1024 characters. Depending on your use case, select the appropriate ***Target type**.
+ Similarly, to define a new target endpoint, choose the **Create a new endpoint** option. Next, select values from the drop-down lists for the required **Subscription** and **Storage account** fields. You can also add an optional description value of less than 1024 characters. Depending on your use case, select the appropriate **Target type**.
- Recall that certain types of endpoints may be used as a source or a target, respectively.
+ Recall that certain types of endpoints can only be used as a source or a target, respectively.
[!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)]
- > [!IMPORTANT]
- > Support for the SMB protocol is currently in public preview and some functionality may not yet be available. Currently, the only supported migration path consists of an SMB mount source to an Azure file share destination.
- :::image type="content" source="media/job-definition-create/endpoint-target-new-sml.png" alt-text="Screen capture of the Target tab illustrating the location of the New Target Endpoint fields." lightbox="media/job-definition-create/endpoint-target-new-lrg.png"::: A target subpath value can be used to specify a location within the target container where your migrated data is to be copied. The subpath value is relative to the container's root. You can provide a unique value to generate a new subfolder. If you omit the subpath value, data is copied to the root.
storage-mover Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/release-notes.md
Previously updated : 08/04/2023 Last updated : 09/06/2023

# Release notes for the Azure Storage Mover service
-Azure Storage Mover is a hybrid service, which continuously introduces new features and improvements to its cloud service and the agent components. New features often require a matching agent version that supports them. This article provides a summary of key improvements for each service and agent version combination that is released. The article also points out limitations and if possible, workarounds for identified issues.
+Azure Storage Mover is a hybrid service, which continuously introduces new features and improvements to its cloud service and agent components. New features often require a matching agent version that supports them. This article provides a summary of key improvements for each service and agent version combination that is released. The article also points out limitations and, if possible, workarounds for identified issues.
## Supported agent versions
The following Azure Storage Mover agent versions are supported:
| Milestone | Version number | Release date | Status |
|-----------|----------------|--------------|--------|
+| Major refresh release | 2.0.358 | November 6, 2023 | Current |
| Refresh release | 2.0.287 | August 5, 2023 | Supported |
-| Refresh release | 1.1.256 | June 14, 2023 | Supported |
-| General availability release | 1.0.229 | April 17, 2023 | Supported |
+| Refresh release | 1.1.256 | June 14, 2023 | Functioning. No longer supported by Microsoft Azure Support teams.|
+| General availability release | 1.0.229 | April 17, 2023 | Functioning. No longer supported by Microsoft Azure Support teams.|
| Public preview release | 0.1.116 | September 15, 2022 | Functioning. No longer supported by Microsoft Azure Support teams.|

### Azure Storage Mover update policy
> [!IMPORTANT]
> Preview versions of the Storage Mover agent cannot update themselves. You must replace them manually by deploying the [latest available agent](https://aka.ms/StorageMover/agent).
+## 2023 November 6
+
+Major refresh release notes for:
+
+- Service version: November 6, 2023
+- Agent version: 2.0.358
+
+### Migration scenarios
+- Migrating your SMB shares to Azure file shares has become generally available.
+- The Storage Mover agent is now supported on VMware ESXi 6.7 hypervisors, as a public preview.
+- Migrating NFS shares to Azure Data Lake Gen2 storage is now available as a public preview.
+
+### Service
+
+- Migrations from NFS shares to Azure storage accounts with the hierarchical namespace service feature (HNS) enabled, are now supported and automatically leverage the ADLS Gen2 REST APIs for migration. This allows the migration of files and folders in a Data Lake compliant way. Full fidelity is preserved in just the same way as with the previously existing blob container migration path.
+- [Error codes and messages](status-code.md) have been improved.
+
+### Agent
+
+- Changes required for the previously mentioned migration paths.
+- Improved handling and logging of files that fail migration when they contain invalid characters or are in use during a migration.
+- Added support for file and folder security descriptors (ACLs) larger than 8 KiB.
+- Avoided a job error condition when the source is an empty SMB share.
+- Improvements to agent-local network configuration, such as applying a static IP address to the agent, and a fix for an error when listing certain network configurations.
+- Security improvements.
+- The same agent version is now supported on both Hyper-V and VMware ESXi 6.7 hypervisors.
+
+### Limitations
+
+> [!IMPORTANT]
+> Based on the previously described [Azure Storage Mover update policy](#azure-storage-mover-update-policy), agents are automatically updated to the latest version. However, some improvements require downloading and [provisioning](agent-deploy.md) a new agent VM, using the latest agent image from [Microsoft Download Center](https://aka.ms/StorageMover/agent). Provisioning a new agent VM is recommended for all customers with agent deployments prior to this release date.
+## 2023 August 5
+
+Refresh release notes for:
storage-mover Resource Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/resource-hierarchy.md
Your agents appear in your storage mover after they've been registered. Registra
> The proximity and network quality between your migration agent and the target storage in Azure determine migration velocity in early stages of your migration. The region of the storage mover resource you've deployed doesn't play a role for performance.

> [!NOTE]
-> In order to minimize downtime for your workload, you may decide to copy multiple times from source to target. In later copy runs, migration velocity is often influenced more by the speed at which the migration agent can evaluate if a file needs to be copied or not. That means local compute and memory resources on an agent can become more important to the migration velocity than network quality.
+> In order to minimize downtime for your workload, you might decide to copy multiple times from source to target. In later copy runs, migration velocity is often influenced more by the speed at which the migration agent can evaluate if a file needs to be copied or not. That means local compute and memory resources on an agent can become more important to the migration velocity than network quality.
## Migration project
A project allows you to organize your larger scale cloud migrations into smaller
The smallest unit of a migration can be defined as the contents of one source moving into one target, but data center migrations are rarely that simple. Often multiple sources support one workload and must be migrated together for timely failover of the workload to the new cloud storage locations in Azure.
-In a different example, one source may even need to be split into multiple target locations. The reverse is also possible, where you need to combine multiple sources into subpaths of the same target location in Azure.
+In a different example, one source might even need to be split into multiple target locations. The reverse is also possible, where you need to combine multiple sources into subpaths of the same target location in Azure.
:::image type="content" source="media/resource-hierarchy/project-illustration.png" alt-text="an image showing the nested relationship of a project into a storage mover resource. It also shows child objects of the resource, called job definitions, described later in this article." lightbox="media/resource-hierarchy/project-illustration-large.png":::
A Job definition is contained within a project. The job definition describes a s
> [!IMPORTANT]
> After a job definition is created, source and target information cannot be changed. However, migration settings can be changed any time. A change won't affect a running migration job, but will take effect the next time you start a migration job.
-It may not seem immediately logical that changing source and target information in an existing job definition isn't permitted. By way of example, imagine you define *Share A* as the migration source and that run several copy operations. Imagine also that you change the migration source to *Share B*. This change could have potentially dangerous consequences.
+It might not seem immediately logical that changing source and target information in an existing job definition isn't permitted. By way of example, imagine you define *Share A* as the migration source and then run several copy operations. Imagine also that you change the migration source to *Share B*. This change could have potentially dangerous consequences.
*Mirroring* is a common migration setting that creates a "mirror" image of a source within a target. If this setting is applied to our example, files from *Share A* might get deleted in the target when the copy operation begins migrating files from *Share B*. To prevent mistakes and maintain the integrity of a job run history, you can't edit a provisioned job definition's source or target. Source, target, and their optional subpath information are locked when a job definition is created. If you want to reuse the same target but use a different source (or vice versa), you're required to create a new job definition.
Learn more about telemetry, metrics, and logs in the job definition monitoring article.
Migrations require well defined source and target locations. While the term *endpoint* is often used in networking, here it describes a storage location to a high level of detail. An endpoint contains the path to the storage location and additional information.
-While there's a single endpoint resource, the properties of each endpoint may vary, based on the type of endpoint. For example, NFS shares, SMB shares, and Azure Storage blob container endpoints each require fundamentally different information.
+While there's only a single endpoint resource type, the properties of each individual endpoint can vary based on the type of endpoint. For example, NFS shares, SMB shares, and Azure Storage blob container endpoints each require fundamentally different information.
-Endpoints are used in the creation of a job definition. Only certain types of endpoints may be used as a source or a target, respectively. Refer to the [Supported sources and targets](service-overview.md#supported-sources-and-targets) section in the Azure Storage Mover overview article.
+Endpoints are used in the creation of a job definition. Only certain types of endpoints can be used as a source or a target, respectively. Refer to the [Supported sources and targets](service-overview.md#supported-sources-and-targets) section in the Azure Storage Mover overview article.
Endpoints are parented to the top-level storage mover resource and can be reused across different job definitions.

## Next steps
-After understanding the resources involved in an Azure Storage Mover deployment, it's a good idea to start a proof-of-concept deployment. These articles may be good, next reads:
+After understanding the resources involved in an Azure Storage Mover deployment, it's a good idea to start a proof-of-concept deployment. These articles are good next reads:
- [Deploy a storage mover resource in your subscription.](storage-mover-create.md)
- [Deploy an Azure Storage Mover agent VM.](agent-deploy.md)
storage-mover Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/service-overview.md
# What is Azure Storage Mover?
-Azure Storage Mover is a new, fully managed migration service that enables you to migrate your files and folders to Azure Storage while minimizing downtime for your workload. You can use Storage Mover for different migration scenarios such as *lift-and-shift*, and for cloud migrations that you have to repeat occasionally. Azure Storage Mover also helps maintain oversight and manage the migration of all your globally distributed file shares from a single storage mover resource.
+Azure Storage Mover is a relatively new, fully managed migration service that enables you to migrate your files and folders to Azure Storage while minimizing downtime for your workload. You can use Storage Mover for different migration scenarios such as *lift-and-shift*, and for cloud migrations that you have to repeat occasionally. Azure Storage Mover also helps maintain oversight and manage the migration of all your globally distributed file shares from a single storage mover resource.
## Supported sources and targets
An Azure blob container without the hierarchical namespace service feature doesn't have a traditional file system. A standard blob container uses "virtual" folders to mimic this functionality. When this approach is used, files in folders on the source get their path prepended to their name and placed in a flat list in the target blob container.
-When the SMB protocol is used during a data migration, Storage Mover supports the same level of file fidelity as the underlying Azure file share. Folder structure and metadata values such as file and folder timestamps, ACLs, and file attributes are maintained. When the NFS protocol is used, the Storage Mover service represents empty folders as an empty blob in the target. The metadata of the source folder is persisted in the custom metadata field of this blob, just as they are with files.
+When migrating data from a source endpoint using the SMB protocol, Storage Mover supports the same level of file fidelity as the underlying Azure file share. Folder structure and metadata values such as file and folder timestamps, ACLs, and file attributes are maintained. When migrating data from an NFS source, the Storage Mover service represents empty folders as an empty blob in the target. The metadata of the source folder is persisted in the custom metadata field of this blob, just as they are with files.
+
+However, migrating data from a source endpoint using the NFS protocol might require "virtual" folders during the migration. Because Azure blob containers without HNS support don't have a traditional file system, Storage Mover uses these folders to mimic a local file system. When files are found within folders on a source endpoint, Storage Mover prepends their paths to their names and places each file in a flat list in the target blob container.
:::image type="content" source="media/overview/source-to-target.png" alt-text="A screenshot illustrating a source NFS share migrated through an Azure Storage Mover agent VM to an Azure Storage blob container." lightbox="media/overview/source-to-target-lrg.png" :::
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-manage.md
Previously updated : 05/10/2023 Last updated : 11/07/2023
This article shows how to create an encryption scope. It also shows how to speci
## Create an encryption scope
-You can create an encryption scope that is protected with a Microsoft-managed key or with a customer-managed key that is stored in an Azure Key Vault or in an Azure Key Vault Managed Hardware Security Model (HSM). To create an encryption scope with a customer-managed key, you must first create a key vault or managed HSM and add the key you intend to use for the scope. The key vault or managed HSM must have purge protection enabled. The storage account and key vault can be in different regions.
+You can create an encryption scope that is protected with a Microsoft-managed key or with a customer-managed key that is stored in an Azure Key Vault or in an Azure Key Vault Managed Hardware Security Model (HSM). To create an encryption scope with a customer-managed key, you must first create a key vault or managed HSM and add the key you intend to use for the scope. The key vault or managed HSM must have purge protection enabled.
+
+The storage account and the key vault can be in the same tenant, or in different tenants. In either case, the storage account and key vault can be in different regions.
An encryption scope is automatically enabled when you create it. After you create the encryption scope, you can specify it when you create a blob. You can also specify a default encryption scope when you create a container, which automatically applies to all blobs in the container.
To create an encryption scope with PowerShell, install the [Az.Storage](https://
### Create an encryption scope protected by Microsoft-managed keys
-To create a new encryption scope that is protected by Microsoft-managed keys, call the [New-AzStorageEncryptionScope](/powershell/module/az.storage/new-azstorageencryptionscope) command with the `-StorageEncryption` parameter.
+To create an encryption scope that is protected by Microsoft-managed keys, call the [New-AzStorageEncryptionScope](/powershell/module/az.storage/new-azstorageencryptionscope) command with the `-StorageEncryption` parameter.
If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope. To create the new scope with infrastructure encryption enabled, include the `-RequireInfrastructureEncryption` parameter. Remember to replace the placeholder values in the example with your own values:
-```powershell
+```azurepowershell
$rgName = "<resource-group>"
$accountName = "<storage-account>"
-$scopeName1 = "customer1scope"
+$scopeName = "<encryption-scope>"
New-AzStorageEncryptionScope -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -EncryptionScopeName $scopeName `
    -StorageEncryption
```
-### Create an encryption scope protected by customer-managed keys
+### Create an encryption scope protected by customer-managed keys in the same tenant
-To create a new encryption scope that is protected by customer-managed keys stored in a key vault or managed HSM, first configure customer-managed keys for the storage account. You must assign a managed identity to the storage account and then use the managed identity to configure the access policy for the key vault or managed HSM so that the storage account has permissions to access it.
+To create an encryption scope that is protected by customer-managed keys stored in a key vault or managed HSM that is in the same tenant as the storage account, first configure customer-managed keys for the storage account. You must assign a managed identity to the storage account that has permissions to access the key vault. The managed identity can be either a user-assigned managed identity or a system-assigned managed identity. To learn more about configuring customer-managed keys, see [Configure customer-managed keys in the same tenant for an existing storage account](../common/customer-managed-keys-configure-existing-account.md).
-To configure customer-managed keys for use with an encryption scope, purge protection must be enabled on the key vault or managed HSM. The key vault or managed HSM can be in a different region from the storage account.
+To grant the managed identity permissions to access the key vault, assign the **Key Vault Crypto Service Encryption User** role to the managed identity.
-Remember to replace the placeholder values in the example with your own values:
+To configure customer-managed keys for use with an encryption scope, purge protection must be enabled on the key vault or managed HSM.
-```powershell
+The following example shows how to configure an encryption scope with a system-assigned managed identity. Remember to replace the placeholder values in the example with your own values:
+
+```azurepowershell
$rgName = "<resource-group>"
$accountName = "<storage-account>"
$keyVaultName = "<key-vault>"
-$keyUri = "<key-uri>"
-$scopeName2 = "customer2scope"
+$scopeName = "<encryption-scope>"
-# Assign a system managed identity to the storage account.
+# Assign a system-assigned managed identity to the storage account.
$storageAccount = Set-AzStorageAccount -ResourceGroupName $rgName `
    -Name $accountName `
    -AssignIdentity
-# Configure the access policy for the key vault.
-Set-AzKeyVaultAccessPolicy `
- -VaultName $keyVaultName `
- -ObjectId $storageAccount.Identity.PrincipalId `
- -PermissionsToKeys wrapkey,unwrapkey,get
+# Assign the necessary permissions to the managed identity
+# so that it can access the key vault.
+$principalId = $storageAccount.Identity.PrincipalId
+$keyVault = Get-AzKeyVault $keyVaultName
+
+New-AzRoleAssignment -ObjectId $storageAccount.Identity.PrincipalId `
+ -RoleDefinitionName "Key Vault Crypto Service Encryption User" `
+ -Scope $keyVault.ResourceId
```

Next, call the [New-AzStorageEncryptionScope](/powershell/module/az.storage/new-azstorageencryptionscope) command with the `-KeyvaultEncryption` parameter, and specify the key URI. Including the key version on the key URI is optional. If you omit the key version, then the encryption scope will automatically use the most recent key version. If you include the key version, then you must update the key version manually to use a different version.
+The format of the key URI is similar to the following examples, and can be constructed from the key vault's [VaultUri](/dotnet/api/microsoft.azure.commands.keyvault.models.pskeyvault.vaulturi) property and the key name:
+
+```http
+# Without the key version
+https://<key-vault>.vault.azure.net/keys/<key>
+
+# With the key version
+https://<key-vault>.vault.azure.net/keys/<key>/<version>
+```
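As a concrete sketch (not part of the article itself), the key URI can be assembled with plain string concatenation; the vault name, key name, and version below are hypothetical placeholders:

```bash
# Hypothetical values for illustration only; substitute your own.
vaultUri="https://contoso-vault.vault.azure.net/"
keyName="cmk-key"
keyVersion="0123456789abcdef"

# Without the key version: the scope tracks the latest key version automatically.
keyUri="${vaultUri}keys/${keyName}"
echo "$keyUri"

# With the key version: the scope is pinned until you update it manually.
keyUriPinned="${vaultUri}keys/${keyName}/${keyVersion}"
echo "$keyUriPinned"
```

Either form works: per the article, including the version segment is optional, and omitting it lets the scope follow the most recent key version.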
+ If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope. To create the new scope with infrastructure encryption enabled, include the `-RequireInfrastructureEncryption` parameter. Remember to replace the placeholder values in the example with your own values:
-```powershell
+```azurepowershell
+$keyUri = $keyVault.VaultUri + "keys/" + $keyName
+ New-AzStorageEncryptionScope -ResourceGroupName $rgName ` -StorageAccountName $accountName `
- -EncryptionScopeName $scopeName2 `
+ -EncryptionScopeName $scopeName `
+ -KeyUri $keyUri `
+ -KeyvaultEncryption
+```
+
+### Create an encryption scope protected by customer-managed keys in a different tenant
+
+To create an encryption scope that is protected by customer-managed keys stored in a key vault or managed HSM that is in a different tenant than the storage account, first configure customer-managed keys for the storage account. You must configure a user-assigned managed identity for the storage account that has permissions to access the key vault in the other tenant. To learn more about configuring cross-tenant customer-managed keys, see [Configure cross-tenant customer-managed keys for an existing storage account](../common/customer-managed-keys-configure-cross-tenant-existing-account.md).
+
+To configure customer-managed keys for use with an encryption scope, purge protection must be enabled on the key vault or managed HSM.
+
+After you have configured cross-tenant customer-managed keys for the storage account, you can create an encryption scope on the storage account in one tenant that is scoped to a key in a key vault in the other tenant. You will need the key URI to create the cross-tenant encryption scope.
+
+Remember to replace the placeholder values in the example with your own values:
+
+```azurepowershell
+$rgName = "<resource-group>"
+$accountName = "<storage-account>"
+$scopeName = "<encryption-scope>"
+
+# Construct the key URI from the key vault URI and key name.
+$keyUri = $kvUri + "keys/" + $keyName
+
+New-AzStorageEncryptionScope -ResourceGroupName $rgName `
+ -StorageAccountName $accountName `
+ -EncryptionScopeName $scopeName `
    -KeyUri $keyUri `
    -KeyvaultEncryption
```
To create an encryption scope with Azure CLI, first install Azure CLI version 2.
### Create an encryption scope protected by Microsoft-managed keys
-To create a new encryption scope that is protected by Microsoft-managed keys, call the [az storage account encryption-scope create](/cli/azure/storage/account/encryption-scope#az-storage-account-encryption-scope-create) command, specifying the `--key-source` parameter as `Microsoft.Storage`.
+To create an encryption scope that is protected by Microsoft-managed keys, call the [az storage account encryption-scope create](/cli/azure/storage/account/encryption-scope#az-storage-account-encryption-scope-create) command, specifying the `--key-source` parameter as `Microsoft.Storage`.
If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope. To create the new scope with infrastructure encryption enabled, include the `--require-infrastructure-encryption` parameter and set its value to `true`. Remember to replace the placeholder values with your own values:
-```azurecli-interactive
+```azurecli
az storage account encryption-scope create \
    --resource-group <resource-group> \
    --account-name <storage-account> \
- --name <scope> \
+ --name <encryption-scope> \
    --key-source Microsoft.Storage
```
-### Create an encryption scope protected by customer-managed keys
+### Create an encryption scope protected by customer-managed keys in the same tenant
-To create a new encryption scope that is protected by customer-managed keys in a key vault or managed HSM, first configure customer-managed keys for the storage account. You must assign a managed identity to the storage account and then use the managed identity to configure the access policy for the key vault so that the storage account has permissions to access it. For more information, see [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md).
+To create an encryption scope that is protected by customer-managed keys stored in a key vault or managed HSM that is in the same tenant as the storage account, first configure customer-managed keys for the storage account. You must assign a managed identity to the storage account that has permissions to access the key vault. The managed identity can be either a user-assigned managed identity or a system-assigned managed identity. To learn more about configuring customer-managed keys, see [Configure customer-managed keys in the same tenant for an existing storage account](../common/customer-managed-keys-configure-existing-account.md).
-To configure customer-managed keys for use with an encryption scope, purge protection must be enabled on the key vault or managed HSM. The key vault or managed HSM can be in a different region from the storage account.
+To grant the managed identity permissions to access the key vault, assign the **Key Vault Crypto Service Encryption User** role to the managed identity.
-Remember to replace the placeholder values in the example with your own values:
+To configure customer-managed keys for use with an encryption scope, purge protection must be enabled on the key vault or managed HSM.
-```azurecli-interactive
-az login
-az account set --subscription <subscription-id>
+The following example shows how to configure an encryption scope with a system-assigned managed identity. Remember to replace the placeholder values in the example with your own values:
+```azurecli
az storage account update \
    --name <storage-account> \
    --resource-group <resource_group> \
    --assign-identity
-storage_account_principal=$(az storage account show \
- --name <storage-account> \
- --resource-group <resource-group> \
+principalId=$(az storage account show --name <storage-account> \
+ --resource-group <resource_group> \
    --query identity.principalId \
    --output tsv)
-az keyvault set-policy \
+kvResourceId=$(az keyvault show \
+ --resource-group <resource-group> \
--name <key-vault> \
- --resource-group <resource_group> \
- --object-id $storage_account_principal \
- --key-permissions get unwrapKey wrapKey
+ --query id \
+ --output tsv)
+
+az role assignment create --assignee-object-id $principalId \
+ --role "Key Vault Crypto Service Encryption User" \
+ --scope $kvResourceId
```

Next, call the [az storage account encryption-scope](/cli/azure/storage/account/encryption-scope#az-storage-account-encryption-scope-create) command with the `--key-uri` parameter, and specify the key URI. Including the key version on the key URI is optional. If you omit the key version, then the encryption scope will automatically use the most recent key version. If you include the key version, then you must update the key version manually to use a different version.
+The format of the key URI is similar to the following examples, and can be constructed from the key vault's **vaultUri** property and the key name:
+
+```http
+# Without the key version
+https://<key-vault>.vault.azure.net/keys/<key>
+
+# With the key version
+https://<key-vault>.vault.azure.net/keys/<key>/<version>
+```
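Going the other way, a key URI can be split back into its key name and optional version. This bash sketch is illustrative only; the URI below is a hypothetical placeholder:

```bash
# Hypothetical key URI for illustration only.
keyUri="https://contoso-vault.vault.azure.net/keys/cmk-key/0123456789abcdef"

# Strip the scheme and host, leaving "keys/<key>[/<version>]".
path="${keyUri#https://*/}"

keyName=$(echo "$path" | cut -d'/' -f2)
keyVersion=$(echo "$path" | cut -d'/' -f3)   # empty if the version segment was omitted

echo "$keyName"
echo "${keyVersion:-latest}"
```

If the version segment is absent, `keyVersion` comes back empty, matching the rule above that an unversioned URI follows the most recent key version.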
+ If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope. To create the new scope with infrastructure encryption enabled, include the `--require-infrastructure-encryption` parameter and set its value to `true`. Remember to replace the placeholder values in the example with your own values:
-```azurecli-interactive
+```azurecli
az storage account encryption-scope create \
    --resource-group <resource-group> \
    --account-name <storage-account> \
- --name <scope> \
+ --name <encryption-scope> \
    --key-source Microsoft.KeyVault \
    --key-uri <key-uri>
```
+### Create an encryption scope protected by customer-managed keys in a different tenant
-To learn how to configure Azure Storage encryption with customer-managed keys in a key vault or managed HSM, see the following articles:
+To create an encryption scope that is protected by customer-managed keys stored in a key vault or managed HSM that is in a different tenant than the storage account, first configure customer-managed keys for the storage account. You must configure a user-assigned managed identity for the storage account that has permissions to access the key vault in the other tenant. To learn more about configuring cross-tenant customer-managed keys, see [Configure cross-tenant customer-managed keys for an existing storage account](../common/customer-managed-keys-configure-cross-tenant-existing-account.md).
-- [Configure encryption with customer-managed keys stored in Azure Key Vault](../common/customer-managed-keys-configure-key-vault.md)
-- [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](../common/customer-managed-keys-configure-key-vault-hsm.md)
+To configure customer-managed keys for use with an encryption scope, purge protection must be enabled on the key vault or managed HSM.
-To learn more about infrastructure encryption, see [Enable infrastructure encryption for double encryption of data](../common/infrastructure-encryption-enable.md).
+After you have configured cross-tenant customer-managed keys for the storage account, you can create an encryption scope on the storage account in one tenant that is scoped to a key in a key vault in the other tenant. You will need the key URI to create the cross-tenant encryption scope.
+
+Remember to replace the placeholder values in the example with your own values:
+
+```azurecli
+az storage account encryption-scope create \
+ --resource-group <resource-group> \
+ --account-name <storage-account> \
+ --name <encryption-scope> \
+ --key-source Microsoft.KeyVault \
+ --key-uri <key-uri>
+```
## List encryption scopes for storage account
To view details for a customer-managed key, including the key URI and version an
To list the encryption scopes available for a storage account with PowerShell, call the **Get-AzStorageEncryptionScope** command. Remember to replace the placeholder values in the example with your own values:
-```powershell
+```azurepowershell
Get-AzStorageEncryptionScope -ResourceGroupName $rgName `
    -StorageAccountName $accountName
```

To list all encryption scopes in a resource group by storage account, use the pipeline syntax:
-```powershell
+```azurepowershell
Get-AzStorageAccount -ResourceGroupName $rgName | Get-AzStorageEncryptionScope
```
To list the encryption scopes available for a storage account with Azure CLI, call the [az storage account encryption-scope list](/cli/azure/storage/account/encryption-scope#az-storage-account-encryption-scope-list) command. Remember to replace the placeholder values in the example with your own values:
-```azurecli-interactive
+```azurecli
az storage account encryption-scope list \
    --account-name <storage-account> \
    --resource-group <resource-group>
An individual blob can be created with its own encryption scope, unless the cont
To create a container with a default encryption scope in the Azure portal, first create the encryption scope as described in [Create an encryption scope](#create-an-encryption-scope). Next, follow these steps to create the container:
-1. Navigate to the list of containers in your storage account, and select the **Add** button to create a new container.
+1. Navigate to the list of containers in your storage account, and select the **Add** button to create a container.
1. Expand the **Advanced** settings in the **New Container** pane. 1. In the **Encryption scope** drop-down, select the default encryption scope for the container. 1. To require that all blobs in the container use the default encryption scope, select the checkbox to **Use this encryption scope for all blobs in the container**. If this checkbox is selected, then an individual blob in the container cannot override the default encryption scope.
To create a container with a default encryption scope in the Azure portal, first
To create a container with a default encryption scope with PowerShell, call the [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) command, specifying the scope for the `-DefaultEncryptionScope` parameter. To force all blobs in a container to use the container's default scope, set the `-PreventEncryptionScopeOverride` parameter to `true`.
-```powershell
+```azurepowershell
$containerName1 = "container1"
$ctx = New-AzStorageContext -StorageAccountName $accountName -UseConnectedAccount
To create a container with a default encryption scope with Azure CLI, call the [
The following example uses your Microsoft Entra account to authorize the operation to create the container. You can also use the account access key. For more information, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md).
-```azurecli-interactive
+```azurecli
az storage container create \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --name <container> \
- --default-encryption-scope <scope> \
+ --default-encryption-scope <encryption-scope> \
    --prevent-encryption-scope-override true \
    --auth-mode login
```
To upload a blob with an encryption scope via the Azure portal, first create the
To upload a blob with an encryption scope via PowerShell, call the [Set-AzStorageBlobContent](/powershell/module/az.storage/set-azstorageblobcontent) command and provide the encryption scope for the blob.
-```powershell
+```azurepowershell
$containerName2 = "container2"
$localSrcFile = "C:\temp\helloworld.txt"
$ctx = New-AzStorageContext -StorageAccountName $accountName -UseConnectedAccount
-# Create a new container with no default scope defined.
+# Create a container with no default scope defined.
New-AzStorageContainer -Name $containerName2 -Context $ctx

# Upload a block blob with an encryption scope specified.
To upload a blob with an encryption scope via Azure CLI, call the [az storage bl
If you are using Azure Cloud Shell, follow the steps described in [Upload a blob](storage-quickstart-blobs-cli.md#upload-a-blob) to create a file in the root directory. You can then upload this file to a blob using the following sample.
-```azurecli-interactive
+```azurecli
az storage blob upload \
    --account-name <storage-account> \
    --container-name <container> \
    --file <file> \
    --name <file> \
- --encryption-scope <scope>
+ --encryption-scope <encryption-scope>
```
To change the key that protects a scope in the Azure portal, follow these steps:
To change the key that protects an encryption scope from a customer-managed key to a Microsoft-managed key with PowerShell, call the **Update-AzStorageEncryptionScope** command and pass in the `-StorageEncryption` parameter:
-```powershell
+```azurepowershell
Update-AzStorageEncryptionScope -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -EncryptionScopeName $scopeName2 `
Update-AzStorageEncryptionScope -ResourceGroupName $rgName `
Next, call the **Update-AzStorageEncryptionScope** command and pass in the `-KeyUri` and `-KeyvaultEncryption` parameters:
-```powershell
+```azurepowershell
Update-AzStorageEncryptionScope -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -EncryptionScopeName $scopeName1 `
Update-AzStorageEncryptionScope -ResourceGroupName $rgName `
To change the key that protects an encryption scope from a customer-managed key to a Microsoft-managed key with Azure CLI, call the [az storage account encryption-scope update](/cli/azure/storage/account/encryption-scope#az-storage-account-encryption-scope-update) command and pass in the `--key-source` parameter with the value `Microsoft.Storage`:
-```azurecli-interactive
+```azurecli
az storage account encryption-scope update \
    --account-name <storage-account> \
    --resource-group <resource-group> \
- --name <scope> \
+ --name <encryption-scope> \
--key-source Microsoft.Storage ```
Next, call the **az storage account encryption-scope update** command, pass in t
az storage account encryption-scope update \
    --resource-group <resource-group> \
    --account-name <storage-account> \
- --name <scope> \
+ --name <encryption-scope> \
    --key-source Microsoft.KeyVault \
    --key-uri <key-uri>
```
To disable an encryption scope in the Azure portal, navigate to the **Encryption
To disable an encryption scope with PowerShell, call the Update-AzStorageEncryptionScope command and include the `-State` parameter with a value of `disabled`, as shown in the following example. To re-enable an encryption scope, call the same command with the `-State` parameter set to `enabled`. Remember to replace the placeholder values in the example with your own values:
-```powershell
+```azurepowershell
Update-AzStorageEncryptionScope -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -EncryptionScopeName $scopeName1 `
Update-AzStorageEncryptionScope -ResourceGroupName $rgName `
To disable an encryption scope with Azure CLI, call the [az storage account encryption-scope update](/cli/azure/storage/account/encryption-scope#az-storage-account-encryption-scope-update) command and include the `--state` parameter with a value of `Disabled`, as shown in the following example. To re-enable an encryption scope, call the same command with the `--state` parameter set to `Enabled`. Remember to replace the placeholder values in the example with your own values:
-```azurecli-interactive
+```azurecli
az storage account encryption-scope update \
    --account-name <storage-account> \
    --resource-group <resource-group> \
- --name <scope> \
+ --name <encryption-scope> \
--state Disabled ```
storage Customer Managed Keys Configure Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md
Previously updated : 06/07/2023 Last updated : 11/07/2023
# Configure customer-managed keys in the same tenant for an existing storage account
-Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can manage your own keys. Customer-managed keys must be stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM).
+Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For more control over encryption keys, you can manage your own keys. Customer-managed keys must be stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM).
This article shows how to configure encryption with customer-managed keys for an existing storage account when the storage account and key vault are in the same tenant. The customer-managed keys are stored in a key vault.
To learn how to configure encryption with customer-managed keys stored in a mana
## Choose a managed identity to authorize access to the key vault
-When you enable customer-managed keys for an existing storage account, you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault.
+When you enable customer-managed keys for an existing storage account, you must specify a managed identity to be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault.
-The managed identity that authorizes access to the key vault may be either a user-assigned or system-assigned managed identity. To learn more about system-assigned versus user-assigned managed identities, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+The managed identity that authorizes access to the key vault can be either a user-assigned or system-assigned managed identity. To learn more about system-assigned versus user-assigned managed identities, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
### Use a user-assigned managed identity to authorize access
az role assignment create --assignee-object-id $principalId \
When you configure encryption with customer-managed keys for an existing storage account, you can choose to automatically update the key version used for Azure Storage encryption whenever a new version is available in the associated key vault. Alternately, you can explicitly specify a key version to be used for encryption until the key version is manually updated.
-When the key version is changed, whether automatically or manually, the protection of the root encryption key changes, but the data in your Azure Storage account remains encrypted at all times. There is no additional action required on your part to ensure that your data is protected. Rotating the key version doesn't impact performance. There is no downtime associated with rotating the key version.
+When the key version is changed, whether automatically or manually, the protection of the root encryption key changes, but the data in your Azure Storage account remains encrypted at all times. There's no further action required on your part to ensure that your data is protected. Rotating the key version doesn't impact performance. There's no downtime associated with rotating the key version.
You can use either a system-assigned or user-assigned managed identity to authorize access to the key vault when you configure customer-managed keys for an existing storage account.
To configure customer-managed keys for an existing account with automatic updati
1. Save your changes.
-After you've specified the key, the Azure portal indicates that automatic updating of the key version is enabled and displays the key version currently in use for encryption. The portal also displays the type of managed identity used to authorize access to the key vault and the principal ID for the managed identity.
+After you specify the key, the Azure portal indicates that automatic updating of the key version is enabled and displays the key version currently in use for encryption. The portal also displays the type of managed identity used to authorize access to the key vault and the principal ID for the managed identity.
:::image type="content" source="media/customer-managed-keys-configure-existing-account/portal-auto-rotation-enabled.png" alt-text="Screenshot showing automatic updating of the key version enabled.":::
After you've specified the key, the Azure portal indicates that automatic updati
To configure customer-managed keys for an existing account with automatic updating of the key version with PowerShell, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later.
-Next, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings. Include the `KeyvaultEncryption` parameter to enable customer-managed keys for the storage account, and set `KeyVersion` to an empty string to enable automatic updating of the key version. If the storage account was previously configured for customer-managed keys with a specific key version, then setting the key version to an empty string will enable automatic updating of the key version going forward.
+Next, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings. Include the `KeyvaultEncryption` parameter to enable customer-managed keys for the storage account, and set `KeyVersion` to an empty string to enable automatic updating of the key version. If the storage account was previously configured for customer-managed keys with a specific key version, then setting the key version to an empty string enables automatic updating of the key version going forward.
```azurepowershell $accountName = "<storage-account>"
Set-AzStorageAccount -ResourceGroupName $rgName `
To configure customer-managed keys for an existing account with automatic updating of the key version with Azure CLI, install [Azure CLI version 2.4.0](/cli/azure/release-notes-azure-cli#april-21-2020) or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-Next, call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account, and set `encryption-key-version` to an empty string to enable automatic updating of the key version. If the storage account was previously configured for customer-managed keys with a specific key version, then setting the key version to an empty string will enable automatic updating of the key version going forward.
+Next, call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account, and set `encryption-key-version` to an empty string to enable automatic updating of the key version. If the storage account was previously configured for customer-managed keys with a specific key version, then setting the key version to an empty string enables automatic updating of the key version going forward.
```azurecli accountName="<storage-account>"
Set-AzStorageAccount -ResourceGroupName $rgName `
-KeyVersion $key.Version ```
-When you manually update the key version, you'll need to update the storage account's encryption settings to use the new version. First, call [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) to get the latest version of the key. Then call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
+When you manually update the key version, you then need to update the storage account's encryption settings to use the new version. First, call [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) to get the latest version of the key. Then call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
# [Azure CLI](#tab/azure-cli)
az storage account update \
--encryption-key-vault $keyVaultUri ```
-When you manually update the key version, you'll need to update the storage account's encryption settings to use the new version. First, query for the key vault URI by calling [az keyvault show](/cli/azure/keyvault#az-keyvault-show), and for the key version by calling [az keyvault key list-versions](/cli/azure/keyvault/key#az-keyvault-key-list-versions). Then call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
+When you manually update the key version, you then need to update the storage account's encryption settings to use the new version. First, query for the key vault URI by calling [az keyvault show](/cli/azure/keyvault#az-keyvault-show), and for the key version by calling [az keyvault key list-versions](/cli/azure/keyvault/key#az-keyvault-key-list-versions). Then call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
[!INCLUDE [storage-customer-managed-keys-change-include](../../../includes/storage-customer-managed-keys-change-include.md)]
-If the new key is in a different key vault, you must [grant the managed identity access to the key in the new vault](#choose-a-managed-identity-to-authorize-access-to-the-key-vault). If you opt for manual updating of the key version, you will also need to [update the key vault URI](#configure-encryption-for-manual-updating-of-key-versions).
+If the new key is in a different key vault, you must [grant the managed identity access to the key in the new vault](#choose-a-managed-identity-to-authorize-access-to-the-key-vault). If you opt for manual updating of the key version, you also need to [update the key vault URI](#configure-encryption-for-manual-updating-of-key-versions).
[!INCLUDE [storage-customer-managed-keys-revoke-include](../../../includes/storage-customer-managed-keys-revoke-include.md)]
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
Previously updated : 10/19/2022 Last updated : 11/06/2023
# Enable infrastructure encryption for double encryption of data
-Azure Storage automatically encrypts all data in a storage account at the service level using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level for double encryption. Double encryption of Azure Storage data protects against a scenario where one of the encryption algorithms or keys may be compromised. In this scenario, the additional layer of encryption continues to protect your data.
+Azure Storage automatically encrypts all data in a storage account at the service level using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level for double encryption. Double encryption of Azure Storage data protects against a scenario where one of the encryption algorithms or keys might be compromised. In this scenario, the additional layer of encryption continues to protect your data.
Infrastructure encryption can be enabled for the entire storage account, or for an encryption scope within an account. When infrastructure encryption is enabled for a storage account or an encryption scope, data is encrypted twice &mdash; once at the service level and once at the infrastructure level &mdash; with two different encryption algorithms and two different keys.
To use the Azure portal to create a storage account with infrastructure encrypti
1. On the **Encryption** tab, locate **Enable infrastructure encryption**, and select **Enabled**. 1. Select **Review + create** to finish creating the storage account.
- :::image type="content" source="media/infrastructure-encryption-enable/create-account-infrastructure-encryption-portal.png" alt-text="Screenshot showing how to enable infrastructure encryption when creating account":::
+ :::image type="content" source="media/infrastructure-encryption-enable/create-account-infrastructure-encryption-portal.png" alt-text="Screenshot showing how to enable infrastructure encryption when creating account.":::
To verify that infrastructure encryption is enabled for a storage account with the Azure portal, follow these steps: 1. Navigate to your storage account in the Azure portal.
-1. Under **Settings**, choose **Encryption**.
+1. Under **Security + networking**, choose **Encryption**.
- :::image type="content" source="media/infrastructure-encryption-enable/verify-infrastructure-encryption-portal.png" alt-text="Screenshot showing how to verify that infrastructure encryption is enabled for account":::
+ :::image type="content" source="media/infrastructure-encryption-enable/verify-infrastructure-encryption-portal.png" alt-text="Screenshot showing how to verify that infrastructure encryption is enabled for account.":::
# [PowerShell](#tab/powershell)
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
Previously updated : 12/12/2022 Last updated : 11/07/2023
# Configure an expiration policy for shared access signatures
-You can use a shared access signature (SAS) to delegate access to resources in your Azure Storage account. A SAS token includes the targeted resource, the permissions granted, and the interval over which access is permitted. Best practices recommend that you limit the interval for a SAS in case it is compromised. By setting a SAS expiration policy for your storage accounts, you can provide a recommended upper expiration limit when a user creates a service SAS or an account SAS.
+You can use a shared access signature (SAS) to delegate access to resources in your Azure Storage account. A SAS token includes the targeted resource, the permissions granted, and the interval over which access is permitted. Best practices recommend that you limit the interval for a SAS in case it's compromised. By setting a SAS expiration policy for your storage accounts, you can provide a recommended upper expiration limit when a user creates a service SAS or an account SAS.
For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md).
You can configure a SAS expiration policy on the storage account. The SAS expira
The validity interval for the SAS is calculated by subtracting the date/time value of the signed start field from the date/time value of the signed expiry field. If the resulting value is less than or equal to the recommended upper limit, then the SAS is in compliance with the SAS expiration policy.
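The compliance calculation described above can be sketched in Python. This is an illustrative sketch only (the function name and the example limit are hypothetical, not part of any Azure SDK); it shows the subtraction of the signed start field from the signed expiry field and the comparison against the policy's recommended upper limit:

```python
from datetime import datetime, timedelta

def is_compliant(signed_start: datetime, signed_expiry: datetime,
                 recommended_limit: timedelta) -> bool:
    """A SAS is in compliance when (signed expiry - signed start) is less
    than or equal to the policy's recommended upper limit."""
    validity_interval = signed_expiry - signed_start
    return validity_interval <= recommended_limit

# Hypothetical policy limit of 1 day, 12 hours, 5 minutes, 6 seconds.
limit = timedelta(days=1, hours=12, minutes=5, seconds=6)
start = datetime(2023, 11, 7, 0, 0, 0)
print(is_compliant(start, start + timedelta(hours=6), limit))  # within limit
print(is_compliant(start, start + timedelta(days=2), limit))   # exceeds limit
```

A SAS that exceeds the limit still works; the policy only flags it, as the following paragraphs explain.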
-After you configure the SAS expiration policy, then a user who creates a SAS with an interval that exceeds the recommended upper limit will see a warning.
+After you configure the SAS expiration policy, any user who creates a service SAS or account SAS with an interval that exceeds the recommended upper limit will see a warning.
-A SAS expiration policy does not prevent a user from creating a SAS with an expiration that exceeds the limit recommended by the policy. When a user creates a SAS that violates the policy, they'll see a warning, together with the recommended maximum interval. If you have configured a diagnostic setting for logging with Azure Monitor, then Azure Storage writes a message to the **SasExpiryStatus** property in the logs whenever a user creates or uses a SAS that expires after the recommended interval. The message indicates that the validity interval of the SAS exceeds the recommended interval.
+A SAS expiration policy doesn't prevent a user from creating a SAS with an expiration that exceeds the limit recommended by the policy. When a user creates a SAS that violates the policy, they see a warning, along with the recommended maximum interval. If you've configured a diagnostic setting for logging with Azure Monitor, then Azure Storage writes a message to the **SasExpiryStatus** property in the logs whenever a user *uses* a SAS that expires after the recommended interval. The message indicates that the validity interval of the SAS exceeds the recommended interval.
-When a SAS expiration policy is in effect for the storage account, the signed start field is required for every SAS. If the signed start field is not included on the SAS, and you have configured a diagnostic setting for logging with Azure Monitor, then Azure Storage writes a message to the **SasExpiryStatus** property in the logs whenever a user creates or uses a SAS without a value for the signed start field.
+When a SAS expiration policy is in effect for the storage account, the signed start field is required for every SAS. If the signed start field isn't included on the SAS, and you've configured a diagnostic setting for logging with Azure Monitor, then Azure Storage writes a message to the **SasExpiryStatus** property in the logs whenever a user *uses* a SAS without a value for the signed start field.
## Configure a SAS expiration policy
When you configure a SAS expiration policy on a storage account, the policy appl
### Do I need to rotate the account access keys first?
-Before you can configure a SAS expiration policy, you might need to rotate each of your account access keys at least once. If the **keyCreationTime** property of the storage account has a null value for either of the account access keys (key1 and key2), you will need to rotate them. To determine whether the **keyCreationTime** property is null, see [Get the creation time of the account access keys for a storage account](storage-account-get-info.md#get-the-creation-time-of-the-account-access-keys-for-a-storage-account). If you attempt to configure a SAS expiration policy and the keys need to be rotated first, the operation will fail.
+Before you can configure a SAS expiration policy, you might need to rotate each of your account access keys at least once. If the **keyCreationTime** property of the storage account has a null value for either of the account access keys (key1 and key2), you'll need to rotate them. To determine whether the **keyCreationTime** property is null, see [Get the creation time of the account access keys for a storage account](storage-account-get-info.md#get-the-creation-time-of-the-account-access-keys-for-a-storage-account). If you attempt to configure a SAS expiration policy and the keys need to be rotated first, the operation fails.
### How to configure a SAS expiration policy
To configure a SAS expiration policy in the Azure portal, follow these steps:
#### [PowerShell](#tab/azure-powershell)
-To configure a SAS expiration policy, use the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command, and then set the `-SasExpirationPeriod` parameter to the number of days, hours, minutes, and seconds that a SAS token can be active from the time that a SAS is signed. The string that you provide the `-SasExpirationPeriod` parameter uses the following format: `<days>.<hours>:<minutes>:<seconds>`. For example, if you wanted the SAS to expire 1 day, 12 hours, 5 minutes, and 6 seconds after it is signed, then you would use the string `1.12:05:06`.
+To configure a SAS expiration policy, use the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command, and then set the `-SasExpirationPeriod` parameter to the number of days, hours, minutes, and seconds that a SAS token can be active from the time that a SAS is signed. The string that you provide the `-SasExpirationPeriod` parameter uses the following format: `<days>.<hours>:<minutes>:<seconds>`. For example, if you wanted the SAS to expire 1 day, 12 hours, 5 minutes, and 6 seconds after it's signed, then you would use the string `1.12:05:06`.
```powershell $account = Set-AzStorageAccount -ResourceGroupName <resource-group> `
The SAS expiration period appears in the console output.
#### [Azure CLI](#tab/azure-cli)
-To configure a SAS expiration policy, use the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command, and then set the `--key-exp-days` parameter to the number of days, hours, minutes, and seconds that a SAS token can be active from the time that a SAS is signed. The string that you provide the `--key-exp-days` parameter uses the following format: `<days>.<hours>:<minutes>:<seconds>`. For example, if you wanted the SAS to expire 1 day, 12 hours, 5 minutes, and 6 seconds after it is signed, then you would use the string `1.12:05:06`.
+To configure a SAS expiration policy, use the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command, and then set the `--key-exp-days` parameter to the number of days, hours, minutes, and seconds that a SAS token can be active from the time that a SAS is signed. The string that you provide the `--key-exp-days` parameter uses the following format: `<days>.<hours>:<minutes>:<seconds>`. For example, if you wanted the SAS to expire 1 day, 12 hours, 5 minutes, and 6 seconds after it's signed, then you would use the string `1.12:05:06`.
```azurecli-interactive az storage account update \
The SAS expiration period appears in the console output.
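The `<days>.<hours>:<minutes>:<seconds>` string accepted by `-SasExpirationPeriod` (PowerShell) and `--key-exp-days` (Azure CLI) can be illustrated with a small Python parser. This is a hypothetical helper for clarity, not part of either tool:

```python
from datetime import timedelta

def parse_expiration_period(value: str) -> timedelta:
    """Parse the '<days>.<hours>:<minutes>:<seconds>' format used when
    configuring a SAS expiration policy."""
    days_part, time_part = value.split(".")
    hours, minutes, seconds = (int(p) for p in time_part.split(":"))
    return timedelta(days=int(days_part), hours=hours,
                     minutes=minutes, seconds=seconds)

# '1.12:05:06' -> 1 day, 12 hours, 5 minutes, 6 seconds after signing
period = parse_expiration_period("1.12:05:06")
print(period)  # 1 day, 12:05:06
```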
## Query logs for policy violations
-To log the creation of a SAS that is valid over a longer interval than the SAS expiration policy recommends, first create a diagnostic setting that sends logs to an Azure Log Analytics workspace. For more information, see [Send logs to Azure Log Analytics](../../azure-monitor/platform/diagnostic-settings.md).
+To log the use of a SAS that is valid over a longer interval than the SAS expiration policy recommends, first create a diagnostic setting that sends logs to an Azure Log Analytics workspace. For more information, see [Send logs to Azure Log Analytics](../../azure-monitor/platform/diagnostic-settings.md).
Next, use an Azure Monitor log query to monitor whether policy has been violated. Create a new query in your Log Analytics workspace, add the following query text, and press **Run**.
To monitor your storage accounts for compliance with the key expiration policy,
1. On the Azure Policy dashboard, locate the built-in policy definition for the scope that you specified in the policy assignment. You can search for `Storage accounts should have shared access signature (SAS) policies configured` in the **Search** box to filter for the built-in policy. 1. Select the policy name with the desired scope.
-1. On the **Policy assignment** page for the built-in policy, select **View compliance**. Any storage accounts in the specified subscription and resource group that do not meet the policy requirements appear in the compliance report.
+1. On the **Policy assignment** page for the built-in policy, select **View compliance**. Any storage accounts in the specified subscription and resource group that don't meet the policy requirements appear in the compliance report.
:::image type="content" source="media/sas-expiration-policy/policy-compliance-report-portal-inline.png" alt-text="Screenshot showing how to view the compliance report for the SAS expiration built-in policy." lightbox="media/sas-expiration-policy/policy-compliance-report-portal-expanded.png":::
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
description: Create a Linux-based Azure Kubernetes Service (AKS) cluster, instal
Previously updated : 11/03/2023 Last updated : 11/06/2023
[!INCLUDE [container-storage-prerequisites](../../../includes/container-storage-prerequisites.md)]
-> [!IMPORTANT]
-> This Quickstart will work for most use cases. An exception is if you plan to use Azure Elastic SAN Preview as backing storage for your storage pool and you don't have owner-level access to the Azure subscription. If both these statements apply to you, use the [manual installation steps](install-container-storage-aks.md) instead. Alternatively, you can complete this Quickstart with the understanding that a storage pool won't be automatically created, and then [create an Elastic SAN storage pool manually](use-container-storage-with-elastic-san.md).
- ## Getting started -- Take note of your Azure subscription ID. We recommend using a subscription on which you have an [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
+- Take note of your Azure subscription ID. We recommend using a subscription on which you have a [Kubernetes contributor](../../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) role if you want to use Azure Disks or Ephemeral Disk as data storage. If you want to use Azure Elastic SAN Preview as data storage, you'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the Azure subscription.
- [Launch Azure Cloud Shell](https://shell.azure.com), or if you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
Upgrade to the latest version of the `aks-preview` cli extension by running the
az extension add --upgrade --name aks-preview ```
-Add or upgrade to the latest version of k8s-extension by running the following command.
+Add or upgrade to the latest version of `k8s-extension` by running the following command.
```azurecli-interactive az extension add --upgrade --name k8s-extension
Before deploying Azure Container Storage, you'll need to decide which back-end s
You'll specify the storage pool type when you install Azure Container Storage.
+> [!NOTE]
+> For Azure Elastic SAN Preview and Azure Disks, Azure Container Storage will deploy the backing storage for you as part of the installation. You don't need to create your own Elastic SAN or Azure Disk.
+ ## Choose a VM type for your cluster If you intend to use Azure Elastic SAN Preview or Azure Disks as backing storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes. If you intend to use Ephemeral Disk, choose a [storage optimized VM type](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. In order to use Ephemeral Disk, the VMs must have NVMe drives. You'll specify the VM type when you create the cluster in the next section.
az aks create -n <cluster-name> -g <resource-group-name> --node-vm-size Standard
The deployment will take 10-15 minutes to complete.
+## Display available storage pools
+
+To get the list of available storage pools, run the following command:
+
+```azurecli-interactive
+kubectl get sp -n acstor
+```
+
+> [!IMPORTANT]
+> If you specified Azure Elastic SAN Preview as backing storage for your storage pool and you don't have owner-level access to the Azure subscription, only Azure Container Storage will be installed and a storage pool won't be created. In this case, you'll have to [create an Elastic SAN storage pool manually](use-container-storage-with-elastic-san.md).
+ ## Install Azure Container Storage on an existing AKS cluster If you already have an AKS cluster that meets the [VM requirements](#choose-a-vm-type-for-your-cluster), run the following command to install Azure Container Storage on the cluster and create a storage pool. Replace `<cluster-name>` and `<resource-group-name>` with your own values. Replace `<storage-pool-type>` with `azureDisk`, `ephemeraldisk`, or `elasticSan`.
If you want to install Azure Container Storage on specific node pools, follow th
```azurecli-interactive az aks update -n <cluster-name> -g <resource-group-name> --enable-azure-container-storage <storage-pool-type> --azure-container-storage-nodepools <comma separated values of nodepool names> ```+
+## Next steps
+
+To create persistent volumes, select the link for the backing storage type you selected.
+
+- [Create persistent volume claim with Azure managed disks](use-container-storage-with-managed-disks.md#create-a-persistent-volume-claim)
+- [Create persistent volume claim with Ephemeral Disk](use-container-storage-with-local-disk.md#create-a-persistent-volume-claim)
+- [Create persistent volume claim with Azure Elastic SAN Preview](use-container-storage-with-elastic-san.md#create-a-persistent-volume-claim)
storage Install Container Storage Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md
description: Learn how to install Azure Container Storage Preview for use with A
Previously updated : 10/27/2023 Last updated : 11/07/2023
## Getting started -- Take note of your Azure subscription ID. We recommend using a subscription on which you have an [Owner](../../role-based-access-control/built-in-roles.md#owner) role. If you don't have access to one, you can still proceed, but you'll need admin assistance to complete the steps in this article.
+- Take note of your Azure subscription ID. We recommend using a subscription on which you have a [Kubernetes contributor](../../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) role if you want to use Azure Disks or Ephemeral Disk as data storage. If you want to use Azure Elastic SAN Preview as data storage, you'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the Azure subscription.
- [Launch Azure Cloud Shell](https://shell.azure.com), or if you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
description: Learn how to deploy an Azure Elastic SAN Preview with the Azure por
Previously updated : 10/19/2023 Last updated : 11/07/2023 -+ # Deploy an Elastic SAN Preview
This article explains how to deploy and configure an elastic storage area networ
- If you're using Azure PowerShell, install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell). - If you're using Azure CLI, install the [latest version](/cli/azure/install-azure-cli). - Once you've installed the latest version, run `az extension add -n elastic-san` to install the extension for Elastic SAN.
-There are no additional registration steps required.
+There are no extra registration steps required.
## Limitations
There are no additional registration steps required.
1. Sign in to the Azure portal and search for **Elastic SAN**. 1. Select **+ Create a new SAN** 1. On the basics page, fill in the appropriate values.
- - **Elastic SAN name** must be between 3 and 24 characters long. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.
+ - **Elastic SAN name** must be between 3 and 24 characters long. The name can only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.
For best performance, your SAN should be in the same zone as your VM. 1. Specify the amount of base capacity you require, and any additional capacity, then select next.
Use one of these sets of sample code to create an Elastic SAN that uses locally
| Placeholder | Description | |-|-| | `<ResourceGroupName>` | The name of the resource group where the resources will be deployed. |
-| `<ElasticSanName>` | The name of the Elastic SAN to be created.<br>*The Elastic SAN name must be between 3 and 24 characters long. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.* |
+| `<ElasticSanName>` | The name of the Elastic SAN to be created.<br>*The Elastic SAN name must be between 3 and 24 characters long. The name can only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.* |
| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. | | `<VolumeName>` | The name of the Elastic SAN Volume to be created. | | `<Location>` | The region where the new resources will be created. |
Use one of these sets of sample code to create an Elastic SAN that uses locally
| Placeholder | Description | |-|-| | `<ResourceGroupName>` | The name of the resource group where the resources will be deployed. |
-| `<ElasticSanName>` | The name of the Elastic SAN to be created.<br>*The Elastic SAN name must be between 3 and 24 characters long. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.* |
+| `<ElasticSanName>` | The name of the Elastic SAN to be created.<br>*The Elastic SAN name must be between 3 and 24 characters long. The name can only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.* |
| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. | | `<VolumeName>` | The name of the Elastic SAN Volume to be created. | | `<Location>` | The region where the new resources will be created. |
-| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN will use locally-redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
+| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN uses locally-redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
The following command creates an Elastic SAN that uses **locally-redundant** storage.
Now that you've configured the basic settings and provisioned your storage, you
# [Portal](#tab/azure-portal) 1. Select **+ Create volume group** and name your volume group.
- - The name must be between 3 and 63 characters long. The name may only contain lowercase letters, numbers and hyphens, and must begin and end with a letter or a number. Each hyphen must be preceded and followed by an alphanumeric character. The volume group name can't be changed once created.
+ - The name must be between 3 and 63 characters long. The name can only contain lowercase letters, numbers and hyphens, and must begin and end with a letter or a number. Each hyphen must be preceded and followed by an alphanumeric character. The volume group name can't be changed once created.
1. Select **Next : Volumes**
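The naming rules above (3-24 characters with lowercase letters, numbers, hyphens, and underscores for the SAN; 3-63 characters without underscores for the volume group; separators always between alphanumerics) can be captured in a short validation sketch. This is an illustrative client-side check under the rules stated in this article, not an official SDK helper:

```python
import re

# Names must begin and end with a lowercase letter or digit; every hyphen
# (and, for SAN names, underscore) must sit between two alphanumerics.
_SAN_CHARS = re.compile(r"[a-z0-9](?:[-_]?[a-z0-9])*")
_VOLUME_GROUP_CHARS = re.compile(r"[a-z0-9](?:-?[a-z0-9])*")

def is_valid_san_name(name: str) -> bool:
    """Elastic SAN name: 3-24 chars; lowercase letters, digits, '-', '_'."""
    return 3 <= len(name) <= 24 and _SAN_CHARS.fullmatch(name) is not None

def is_valid_volume_group_name(name: str) -> bool:
    """Volume group name: 3-63 chars; lowercase letters, digits, '-'."""
    return 3 <= len(name) <= 63 and _VOLUME_GROUP_CHARS.fullmatch(name) is not None

print(is_valid_san_name("my-elastic-san1"))   # True
print(is_valid_san_name("My_SAN"))            # False: uppercase not allowed
print(is_valid_volume_group_name("vg--01"))   # False: consecutive hyphens
```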
storage Elastic San Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-encryption-overview.md
Last updated 11/06/2023
+ # Encrypt an Azure Elastic SAN Preview
The following list explains the numbered steps in the diagram:
1. Azure Elastic SAN wraps the data encryption key with the customer-managed key from the key vault. 1. For read/write operations, Azure Elastic SAN sends requests to Azure Key Vault to unwrap the account encryption key to perform encryption and decryption operations.
+### Regional availability
++ ## Next steps - [Configure customer-managed keys for an Elastic SAN volume group](elastic-san-configure-customer-managed-keys.md)
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
description: An overview of Azure Elastic SAN Preview, a service that enables yo
Previously updated : 08/15/2023 Last updated : 11/07/2023
Azure Elastic storage area network (SAN) is Microsoft's answer to the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Elastic SAN Preview is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability.
-Elastic SAN is designed for large scale IO-intensive workloads and top tier databases such as SQL, MariaDB, and support hosting the workloads on virtual machines, or containers such as Azure Kubernetes Service. Instead of having to deploy and manage individual storage options for each individual compute deployment, you can provision an Elastic SAN and use the SAN volumes as backend storage for all your workloads. Consolidating your storage like this can be more cost effective if you have a sizeable amount of large scale IO-intensive workloads and top tier databases.
+Elastic SAN is interoperable with multiple types of compute resources such as Azure Virtual Machines, Azure VMware Solutions, and Azure Kubernetes Service. Instead of having to deploy and manage individual storage options for each individual compute deployment, you can provision an Elastic SAN and use the SAN volumes as backend storage for all your workloads. Consolidating your storage like this can be more cost effective if you have a sizeable amount of large scale IO-intensive workloads and top tier databases.
## Benefits of Elastic SAN
The following diagram illustrates the relationship and mapping of an Azure Elast
When you configure an Elastic SAN, you select the redundancy of the entire SAN and provision storage. The storage you provision determines how much performance your SAN has, and the total capacity that can be distributed to each volume within the SAN.
-Your Elastic SAN's name has some requirements. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character. The name must be between 3 and 24 characters long.
+Your Elastic SAN's name has some requirements. The name can only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character. The name must be between 3 and 24 characters long.
### Volume groups Volume groups are management constructs that you use to manage volumes at scale. Any settings or configurations applied to a volume group, such as virtual network rules, are inherited by any volumes associated with that volume group.
-Your volume group's name has some requirements. The name may only contain lowercase letters, numbers and hyphens, and must begin and end with a letter or a number. Each hyphen must be preceded and followed by an alphanumeric character. The name must be between 3 and 63 characters long.
+Your volume group's name has some requirements. The name can only contain lowercase letters, numbers and hyphens, and must begin and end with a letter or a number. Each hyphen must be preceded and followed by an alphanumeric character. The name must be between 3 and 63 characters long.
### Volumes You partition the SAN's storage capacity into individual volumes. These individual volumes can be mounted to your clients with iSCSI.
-The name of your volume is part of their iSCSI IQN. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character. The name must also be between 3 and 63 characters long.
+The name of your volume is part of their iSCSI IQN. The name can only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character. The name must also be between 3 and 63 characters long.
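The naming rules above lend themselves to a programmatic check. The following is an illustrative sketch only (the function and regex are not an official Azure SDK validator), assuming the rules as stated: lowercase letters, numbers, hyphens (and underscores where allowed), alphanumeric at both ends, and every separator between alphanumerics:

```python
import re

# Illustrative validator for the naming rules described above; not an official
# Azure SDK check. Separators (- and _) must sit between alphanumerics, which
# also rules out consecutive separators.
def valid_name(name: str, min_len: int, max_len: int, allow_underscore: bool = True) -> bool:
    if not (min_len <= len(name) <= max_len):
        return False
    sep = "-_" if allow_underscore else "-"
    return re.fullmatch(rf"[a-z0-9](?:[{sep}]?[a-z0-9])*", name) is not None

print(valid_name("my-san-01", 3, 24))                        # Elastic SAN name: True
print(valid_name("my--san", 3, 24))                          # consecutive hyphens: False
print(valid_name("vg-prod", 3, 63, allow_underscore=False))  # volume group name: True
```

The same helper covers all three resource types by varying the length bounds and whether underscores are allowed.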
## Support for Azure Storage features The following table indicates support for Azure Storage features with Azure Elastic SAN.
-The status of items in this table may change over time.
+The status of items in this table might change over time.
| Storage feature | Supported for Elastic SAN | |--||
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
After configuring endpoints, you can configure network rules to further control
You can enable or disable public Internet access to your Elastic SAN endpoints at the SAN level. Enabling public network access for an Elastic SAN allows you to configure public access to individual volume groups in that SAN over storage service endpoints. By default, public access to individual volume groups is denied even if you allow it at the SAN level. If you disable public access at the SAN level, access to the volume groups within that SAN is only available over private endpoints.
+### Regional availability
++ ## Storage service endpoints [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services using an optimized route over the Azure backbone network. Service endpoints allow you to secure your critical Azure service resources so only specific virtual networks can access them.
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Deploying a private endpoint for an Elastic SAN Volume group using PowerShell in
1. Get the Elastic SAN Volume Group. 1. Create a private link service connection using the volume group as input. 1. Create the private endpoint using the subnet and the private link service connection as input.
-1. **(Optional** *if you're using the two-step process (creation, then approval))*: The Elastic SAN Network Admin approves the connection.
+1. **(Optional)** *If you're using the two-step process (creation, then approval)*: The Elastic SAN Network Admin approves the connection.
Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. Replace the values of `RgName`, `VnetName`, `SubnetName`, `EsanName`, `EsanVgName`, `PLSvcConnectionName`, `EndpointName`, and `Location` with your own values:
storage Elastic San Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-performance.md
description: Learn how your workload's performance is handled by Azure Elastic S
Previously updated : 10/19/2023 Last updated : 11/06/2023
The throughput of an Elastic SAN increases by 80 MB/s per base TiB. So if you ha
### Elastic SAN volumes
-The performance of an individual volume is determined by its capacity. The maximum IOPS of a volume increase by 750 per GiB, up to a maximum of 64,000 IOPS. The maximum throughput increases by 60 MB/s per GiB, up to a maximum of 1,024 MB/s. A volume needs at least 86 GiB to be capable of using 64,000 IOPS. A volume needs at least 18 GiB in order to be capable of using the maximum 1,024 MB/s. The combined IOPS and throughput of all your volumes can't exceed the IOPS and throughput of your SAN.
+The performance of an individual volume is determined by its capacity. The maximum IOPS of a volume increases by 750 per GiB, up to a maximum of 80,000 IOPS. The maximum throughput increases by 60 MB/s per GiB, up to a maximum of 1,280 MB/s. A volume needs at least 107 GiB to be capable of using 80,000 IOPS. A volume needs at least 22 GiB to be capable of using the maximum 1,280 MB/s. The combined IOPS and throughput of all your volumes can't exceed the IOPS and throughput of your SAN.
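The per-GiB scaling above can be sketched as a small calculation (function name hypothetical; it simply applies the stated scaling factors and caps):

```python
# Sketch of the per-volume limits stated above: IOPS scale at 750 per GiB
# capped at 80,000; throughput scales at 60 MB/s per GiB capped at 1,280 MB/s.
def volume_limits(capacity_gib: int) -> tuple[int, int]:
    max_iops = min(750 * capacity_gib, 80_000)
    max_throughput_mbps = min(60 * capacity_gib, 1_280)
    return max_iops, max_throughput_mbps

print(volume_limits(107))  # (80000, 1280) - large enough for both caps
print(volume_limits(22))   # (16500, 1280) - hits the throughput cap only
print(volume_limits(10))   # (7500, 600)
```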
## Example configuration
Each of the example scenarios in this article uses the following configuration f
|Resource |Capacity |IOPS | ||||
-|Elastic SAN |25 TiB |135,000 (provisioned) |
-|AKS SAN volume |3 TiB | Up to 64,000 |
-|Workload 1 SAN volume |10 TiB |Up to 64,000 |
-|Workload 2 SAN volume |4 TiB |Up to 64,000 |
-|Workload 3 SAN volume |2 TiB |Up to 64,000 |
+|Elastic SAN |27 TiB |135,000 (provisioned) |
+|AKS SAN volume |3 TiB | Up to 80,000 |
+|Workload 1 SAN volume |10 TiB |Up to 80,000 |
+|Workload 2 SAN volume |4 TiB |Up to 80,000 |
+|Workload 3 SAN volume |2 TiB |Up to 80,000 |
## Example scenarios
The following example scenarios depict how your Elastic SAN handles performance
|Workload 2 |8,000 |8,000 | |Workload 3 |20,000 |20,000 |
-In this scenario, no throttling occurs at either the VM or SAN level. The SAN itself has 135,000 IOPS, each volume is large enough to serve up to 64,000 IOPS, enough IOPS are available from the SAN, none of the VM's IOPS limits have been surpassed, and the total IOPS requested is 41,000. So the workloads all execute without any throttling.
+In this scenario, no throttling occurs at either the VM or SAN level. The SAN itself has 135,000 IOPS, each volume is large enough to serve up to 80,000 IOPS, enough IOPS are available from the SAN, none of the VM's IOPS limits have been surpassed, and the total IOPS requested is 41,000. So the workloads all execute without any throttling.
:::image type="content" source="media/elastic-san-performance/typical-workload.png" alt-text="Average scenario example diagram." lightbox="media/elastic-san-performance/typical-workload.png":::
In this scenario, no throttling occurs at either the VM or SAN level. The SAN it
|AKS workload |2,000 |2,000 |N/A | |Workload 1 |10,000 |10,000 |N/A | |Workload 2 |10,000 |10,000 |N/A |
-|Workload 3 |64,000 |64,000 |9:00 am |
+|Workload 3 |80,000 |80,000 |9:00 am |
-In this scenario, no throttling occurs. Workload 3 spiked at 9am, requesting 64,000 IOPS. None of the other workloads spiked and the SAN had enough free IOPS to distribute to the workload, so there was no throttling.
+In this scenario, no throttling occurs. Workload 3 spiked at 9:00 am, requesting 80,000 IOPS. None of the other workloads spiked, and the SAN had enough free IOPS to distribute to the workload, so there was no throttling.
Generally, this is the ideal configuration for a SAN sharing workloads. It's best to have enough performance to handle the normal operations of workloads, and occasional peaks.
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
When allocating storage for an Elastic SAN, consider how much storage you requir
You create volumes from the storage that you allocated to your Elastic SAN. When you create a volume, think of it like partitioning a section of the storage of your Elastic SAN. The maximum performance of an individual volume is determined by the amount of storage allocated to it. Individual volumes can have fairly high IOPS and throughput, but the total IOPS and throughput of all your volumes can't exceed the total IOPS and throughput your SAN has.
-Using the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s. Say this SAN had 100 1 TiB volumes. You could potentially have three of these volumes operating at their maximum performance (64,000 IOPS, 1,024 MB/s) since this would be below the SAN's limits. But if four or five volumes all needed to operate at maximum at the same time, they wouldn't be able to. Instead the performance of the SAN would be split evenly among them.
+Consider the same example of a 100 TiB SAN that has 500,000 IOPS and 20,000 MB/s. Say this SAN has 100 1-TiB volumes. You could potentially have six of these volumes operating at their maximum performance (80,000 IOPS, 1,280 MB/s) since this would be below the SAN's limits. But if seven volumes all needed to operate at maximum at the same time, they wouldn't be able to. Instead, the performance of the SAN would be split evenly among them.
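The arbitration described above can be modeled with a simplified, illustrative sketch (the service's actual behavior may differ in detail): each volume is limited by its own cap, and once total demand exceeds the SAN's provisioned IOPS, the SAN's performance is split evenly among the active volumes.

```python
# Simplified, illustrative model: volumes are capped individually, and when the
# combined demand exceeds the SAN's provisioned IOPS, each gets an even share.
def served_iops(demands, volume_cap=80_000, san_iops=500_000):
    capped = [min(d, volume_cap) for d in demands]
    if sum(capped) <= san_iops:
        return capped  # no contention: every volume gets what it asked for
    share = san_iops // len(capped)  # even split when the SAN is saturated
    return [min(c, share) for c in capped]

print(served_iops([80_000] * 6))  # 480,000 total fits within 500,000
print(served_iops([80_000] * 7))  # 560,000 requested: each limited to 71,428
```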
## Networking
stream-analytics Confluent Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/confluent-kafka-input.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 # Stream data from confluent cloud Kafka with Azure Stream Analytics
Use the following steps to grant special permissions to your stream analytics jo
| Input Alias | A friendly name used in queries to reference your input | | Bootstrap server addresses | A list of host/port pairs to establish the connection to your confluent cloud kafka cluster. Example: pkc-56d1g.eastus.azure.confluent.cloud:9092 | | Kafka topic | The name of your kafka topic in your confluent cloud kafka cluster.|
-| Security Protocol | Select **SASL_SSL** |
+| Security Protocol | Select **SASL_SSL**. The mechanism supported is PLAIN. The SASL_SSL protocol doesn't support SCRAM. |
| Event Serialization format | The serialization format (JSON, CSV, Avro, Parquet, Protobuf) of the incoming data stream. | > [!IMPORTANT]
stream-analytics Confluent Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/confluent-kafka-output.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 # Stream data from Azure Stream Analytics into confluent cloud
Use the following steps to grant special permissions to your stream analytics jo
| Output Alias | A friendly name used in queries to reference your input | | Bootstrap server addresses | A list of host/port pairs to establish the connection to your confluent cloud kafka cluster. Example: pkc-56d1g.eastus.azure.confluent.cloud:9092 | | Kafka topic | The name of your kafka topic in your confluent cloud kafka cluster.|
-| Security Protocol | Select **SASL_SSL** |
+| Security Protocol | Select **SASL_SSL**. The mechanism supported is PLAIN. The SASL_SSL protocol doesn't support SCRAM. |
| Event Serialization format | The serialization format (JSON, CSV, Avro, Parquet, Protobuf) of the incoming data stream. | | Partition key | Azure Stream Analytics assigns partitions using round partitioning. Keep blank if a key doesn't partition your input | | Kafka event compression type | The compression type used for outgoing data streams, such as Gzip, Snappy, Lz4, Zstd, or None. |
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
Previously updated : 11/02/2023 Last updated : 11/06/2023 # Kafka output from Azure Stream Analytics (Preview)
You can use four types of security protocols to connect to your Kafka clusters:
|Property name |Description | |-|--| |mTLS |encryption and authentication |
-|SASL_SSL |It combines two different security mechanisms - SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) - to ensure both authentication and encryption are in place for data transmission. |
+|SASL_SSL |It combines two different security mechanisms - SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) - to ensure both authentication and encryption are in place for data transmission. The mechanism supported is PLAIN. The SASL_SSL protocol doesn't support SCRAM. |
|SASL_PLAINTEXT |standard authentication with username and password without encryption | |None |No authentication and no encryption. | > [!IMPORTANT]
-> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics does not support authentication using OAuth or SAML single sign-on (SSO).
+> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics doesn't support authentication using OAuth or SAML single sign-on (SSO).
> You can connect to confluent cloud using an API Key that has topic-level access via the SASL_SSL security protocol. ### Connect to Confluent Cloud using API key
-The ASA Kafka output is a librdkafka-based client, and to connect to confluent cloud, you need TLS certificates that confluent cloud uses for server auth.
-Confluent uses TLS certificates from Let's Encrypt, an open certificate authority (CA) You can download the ISRG Root X1 certificate in PEM format on the site of [LetsEncrypt](https://letsencrypt.org/certificates/).
+Azure Stream Analytics is a librdkafka-based client, and to connect to confluent cloud, you need the TLS certificates that confluent cloud uses for server authentication. Confluent cloud uses TLS certificates from Let's Encrypt, an open certificate authority (CA).
+
+Download the ISRG Root X1 certificate in **PEM** format from the [Let's Encrypt](https://letsencrypt.org/certificates/) site.
+ > [!IMPORTANT] > You must use Azure CLI to upload the certificate as a secret to your key vault. You cannot use Azure Portal to upload a certificate that has multiline secrets to key vault.
+> The default timestamp type for a topic in a confluent cloud kafka cluster is **CreateTime**; make sure you update it to **LogAppendTime**.
+> Azure Stream Analytics supports only numerical decimal format.
To authenticate using the API Key confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows: | Setting | Value | | | |
- | Username | Key/ Username from API Key |
- | Password | Secret/ Password from API key |
 - | KeyVault | Name of Azure Key vault with Uploaded certificate from Let's Encrypt |
 - | Certificate | name of the certificate uploaded to KeyVault downloaded from Let's Encrypt (Download the ISRG Root X1 certificate in PEM format). Note: you must upload the certificate as a secret using Azure CLI. Refer to the **Key vault integration** guide below |
+ | Username | confluent cloud API key |
+ | Password | confluent cloud API secret |
+ | Key vault name | name of Azure Key vault with uploaded certificate |
+ | Truststore certificates | name of the Key Vault Secret that holds the ISRG Root X1 certificate |
+
+ :::image type="content" source="./media/kafka/kafka-input.png" alt-text="Screenshot showing how to configure kafka input for a stream analytics job." lightbox="./media/kafka/kafka-input.png" :::
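Because Azure Stream Analytics is librdkafka-based, the settings in the table above correspond to standard librdkafka configuration properties. A hedged sketch of an equivalent client configuration follows (the server address and file name are placeholders; in a Stream Analytics job these values are entered in the portal UI rather than as a dictionary):

```python
# Illustrative librdkafka-style configuration mirroring the table above.
# All values are placeholders; ssl.ca.location points at the downloaded
# ISRG Root X1 certificate in PEM format.
conf = {
    "bootstrap.servers": "pkc-56d1g.eastus.azure.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",  # SCRAM is not supported
    "sasl.username": "<confluent-cloud-api-key>",
    "sasl.password": "<confluent-cloud-api-secret>",
    "ssl.ca.location": "isrgrootx1.pem",
}
print(conf["security.protocol"], conf["sasl.mechanism"])
```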
> [!NOTE] > Depending on how your confluent cloud kafka cluster is configured, you may need a certificate different from the standard certificate confluent cloud uses for server authentication. Confirm with the admin of the confluent cloud kafka cluster to verify what certificate to use.
->
+
+For step-by-step tutorials on connecting to confluent cloud kafka, see the following documentation:
+
+Confluent cloud kafka input: [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md)
+Confluent cloud kafka output: [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md)
+ ## Key vault integration
Certificates are stored as secrets in the key vault and must be in PEM format.
### Configure Key vault with permissions You can create a key vault resource by following the documentation [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md)
-To be able to upload certificates, you must have "**Key Vault Administrator**" access to your Key vault. Follow the following to grant admin access.
+To upload certificates, you must have "**Key Vault Administrator**" access to your key vault. Use the following steps to grant admin access.
> [!NOTE] > You must have "**Owner**" permissions to grant other key vault permissions.
To be able to upload certificates, you must have "**Key Vault Administrator**"
> You must upload the certificate as a secret. You must use Azure CLI to upload certificates as secrets to your key vault. > Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job.
-Below are some steps you can follow to upload your certificate as a secret to Azure CLI using your PowerShell:
- Make sure you have Azure CLI configured locally with PowerShell. You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
-Below are some steps you can follow to upload your certificate as a secret to Azure CLI using your PowerShell
- **Login to Azure CLI:** ```PowerShell az login
az account set --subscription <subscription name>
**The following command can upload the certificate as a secret to your key vault:**
-The `<your key vault>` is the name of the key vault you want to upload the certificate to. `<name of the secret>` is any name you want to give to your secret and how it will show up in the keyvault. Note the name, you will use it to configure your kafka output in your ASA job. `<file path to certificate>` is the path to where you have downloaded your certificate.
+The `<your key vault>` is the name of the key vault you want to upload the certificate to. `<name of the secret>` is any name you want to give to your secret and how it shows up in the key vault. `<file path to certificate>` is the path to where your certificate is located. You can right-click the certificate file and copy its path.
```PowerShell az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to certificate> ```
+For example:
+
+```PowerShell
+az keyvault secret set --vault-name mykeyvault --name kafkasecret --file C:\Users\Downloads\certificatefile.pem
+```
+ ### Configure Managed identity Azure Stream Analytics requires you to configure managed identity to access key vault. You can configure your ASA job to use managed identity by navigating to the **Managed Identity** tab on the left side under **Configure**.
- ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity-new.png)
1. Click on the **managed identity tab** under **configure**. 2. Select **Switch Identity** and select the identity to use with the job: system-assigned identity or user-assigned identity.
You can configure your ASA job to use managed identity by navigating to the **Ma
4. Review and **save**. ### Grant the Stream Analytics job permissions to access the certificate in the key vault
-For your Azure Stream Analytics job to access the certificate in your key vault and read the secret for authentication using managed identity, the service principal you created when you configured managed identity for your Azure Stream Analytics job must have special permissions to the key vault.
+
+For your Azure Stream Analytics job to read the secret in your key vault, the job must have permission to access the key vault.
+Use the following steps to grant special permissions to your stream analytics job:
1. Select **Access control (IAM)**.
Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network docum
> [!NOTE] > For direct help with using the Azure Stream Analytics Kafka output, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com).
->
+>
## Next steps > [!div class="nextstepaction"] > [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+> [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md)
+> [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md)
<!--Link references--> [stream.analytics.developer.guide]: ../stream-analytics-developer-guide.md
stream-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
stream-analytics Sql Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-reference-data.md
When using the delta query, [temporal tables in Azure SQL Database](/azure/azure
Note that Stream Analytics runtime may periodically run the snapshot query in addition to the delta query to store checkpoints.
+ > [!IMPORTANT]
+ > When using reference data delta queries, do not make identical updates to the temporal reference data table multiple times. This could cause incorrect results to be produced.
+ > Here's an example that might cause reference data to produce incorrect results:
+ > ```SQL
+ > UPDATE myTable SET VALUE=2 WHERE ID = 1;
+ > UPDATE myTable SET VALUE=2 WHERE ID = 1;
+ > ```
+ > Correct example:
+ > ```SQL
+ > UPDATE myTable SET VALUE = 2 WHERE ID = 1 and not exists (select * from myTable where ID = 1 and value = 2);
+ > ```
+ > This ensures no duplicate updates are performed.
++ ## Test your query It is important to verify that your query is returning the expected dataset that the Stream Analytics job will use as reference data. To test your query, go to Input under the Job Topology section on the portal. You can then select Sample Data on your SQL Database Reference input. After the sample becomes available, you can download the file and check whether the data being returned is as expected. If you want to optimize your development and test iterations, it is recommended to use the [Stream Analytics tools for Visual Studio](./stream-analytics-tools-for-visual-studio-install.md). You can also use any other tool of your preference to first ensure the query is returning the right results from your Azure SQL Database and then use that in your Stream Analytics job.
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
Previously updated : 11/02/2023 Last updated : 11/06/2023 # Stream data from Kafka into Azure Stream Analytics (Preview)
-Kafka is a distributed streaming platform used to publish and subscribe to streams of records. Kafka is designed to allow your apps to process records as they occur. It is an open-source system developed by the Apache Software Foundation and written in Java and Scala.
+Kafka is a distributed streaming platform used to publish and subscribe to streams of records. Kafka is designed to allow your apps to process records as they occur. It's an open-source system developed by the Apache Software Foundation and written in Java and Scala.
The following are the major use cases: * Messaging
You can use four types of security protocols to connect to your Kafka clusters:
|Property name |Description | |-|--| |mTLS |encryption and authentication |
-|SASL_SSL |It combines two different security mechanisms - SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) - to ensure both authentication and encryption are in place for data transmission. |
+|SASL_SSL |It combines two different security mechanisms - SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) - to ensure both authentication and encryption are in place for data transmission. The mechanism supported is PLAIN. The SASL_SSL protocol doesn't support SCRAM. |
|SASL_PLAINTEXT |standard authentication with username and password without encryption | |None |No authentication and no encryption. | > [!IMPORTANT]
-> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics does not support authentication using OAuth or SAML single sign-on (SSO).
+> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics doesn't support authentication using OAuth or SAML single sign-on (SSO).
> You can connect to confluent cloud using an API Key that has topic-level access via the SASL_SSL security protocol. ### Connect to Confluent Cloud using API key
-The ASA Kafka input is a librdkafka-based client, and to connect to confluent cloud, you need TLS certificates that confluent cloud uses for server auth.
-Confluent uses TLS certificates from Let's Encrypt, an open certificate authority (CA). You can download the ISRG Root X1 certificate in PEM format on the site of [LetsEncrypt](https://letsencrypt.org/certificates/).
+Azure Stream Analytics is a librdkafka-based client, and to connect to confluent cloud, you need the TLS certificates that confluent cloud uses for server authentication. Confluent cloud uses TLS certificates from Let's Encrypt, an open certificate authority (CA).
+
+Download the ISRG Root X1 certificate in **PEM** format from the [Let's Encrypt](https://letsencrypt.org/certificates/) site.
+ > [!IMPORTANT] > You must use Azure CLI to upload the certificate as a secret to your key vault. You cannot use Azure Portal to upload a certificate that has multiline secrets to key vault.
To authenticate using the API Key confluent offers, you must use the SASL_SSL pr
| Setting | Value | | | |
- | Username | Key/ Username from API Key |
- | Password | Secret/ Password from API key |
 - | KeyVault | Name of Azure Key vault with Uploaded certificate from Let's Encrypt |
 - | Certificate | name of the certificate uploaded to KeyVault downloaded from Let's Encrypt (Download the ISRG Root X1 certificate in PEM format). Note: you must upload the certificate as a secret using Azure CLI. Refer to the **Key vault integration** guide below |
+ | Username | confluent cloud API key |
+ | Password | confluent cloud API secret |
+ | Key vault name | name of Azure Key vault with uploaded certificate |
+ | Truststore certificates | name of the Key Vault Secret that holds the ISRG Root X1 certificate |
+
+ :::image type="content" source="./media/kafka/kafka-input.png" alt-text="Screenshot showing how to configure kafka input for a stream analytics job." lightbox="./media/kafka/kafka-input.png" :::
> [!NOTE] > Depending on how your confluent cloud kafka cluster is configured, you may need a certificate different from the standard certificate confluent cloud uses for server authentication. Confirm with the admin of the confluent cloud kafka cluster to verify what certificate to use.
->
+
+For step-by-step tutorials on connecting to confluent cloud kafka, see the following documentation:
+
+Confluent cloud kafka input: [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md)
+Confluent cloud kafka output: [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md)
## Key vault integration
az account set --subscription <subscription name>
**The following command can upload the certificate as a secret to your key vault:**
-The `<your key vault>` is the name of the key vault you want to upload the certificate to. `<name of the secret>` is any name you want to give to your secret and how it will show up in the key vault. Note the name; you will use it to configure your kafka output in your ASA job. `<file path to certificate>` is the path to where you have downloaded your certificate.
+The `<your key vault>` is the name of the key vault you want to upload the certificate to. `<name of the secret>` is any name you want to give to your secret and how it shows up in the key vault. `<file path to certificate>` is the path to where your certificate is located. You can right-click the certificate file and copy its path.
```PowerShell az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to certificate> ```
+For example:
+```PowerShell
+az keyvault secret set --vault-name mykeyvault --name kafkasecret --file C:\Users\Downloads\certificatefile.pem
+```
### Configure Managed identity Azure Stream Analytics requires you to configure managed identity to access key vault. You can configure your ASA job to use managed identity by navigating to the **Managed Identity** tab on the left side under **Configure**.
- ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity-new.png)
1. Select the **Managed Identity** tab under **Configure**. 2. Select **Switch Identity** and choose the identity to use with the job: system-assigned identity or user-assigned identity.
You can configure your ASA job to use managed identity by navigating to the **Ma
4. Review and **save**. ### Grant the Stream Analytics job permissions to access the certificate in the key vault
-For your Azure Stream Analytics job to access the certificate in your key vault and read the secret for authentication using managed identity, the service principal you created when you configured managed identity for your Azure Stream Analytics job must have special permissions to the key vault.
+
+For your Azure Stream Analytics job to read the secret in your key vault, the job must have permission to access the key vault.
+Use the following steps to grant permissions to your Stream Analytics job:
1. Select **Access control (IAM)**.
Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network docum
### Limitations
-* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units or one (1) V2 streaming unit. .
+* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units or one (1) V2 streaming unit.
* When using mTLS or SASL_SSL with Azure Key vault, you must convert your Java Key Store to PEM format. * The minimum version of Kafka you can configure Azure Stream Analytics to connect to is version 0.10. * Azure Stream Analytics doesn't support authentication to Confluent Cloud using OAuth or SAML single sign-on (SSO). You must use an API key via the SASL_SSL protocol.
Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network docum
## Next steps > [!div class="nextstepaction"] > [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+> [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md)
+> [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md)
+ <!--Link references--> [stream.analytics.developer.guide]: ../stream-analytics-developer-guide.md
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
update-manager Whats Upcoming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-upcoming.md
Previously updated : 09/27/2023 Last updated : 11/07/2023 # What are the upcoming features in Azure Update Manager The article [What's new in Azure Update Manager](whats-new.md) contains updates of feature releases. This article lists all the upcoming features for Azure Update Manager.
+## Azure Stack HCI patching (preview)
+Azure Update Manager will allow you to patch Azure Stack HCI clusters.
+ ## Alerting Enable alerts to address events as captured in updates data.
virtual-desktop Configure Host Pool Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-load-balancing.md
Title: Configure Azure Virtual Desktop load-balancing - Azure
-description: How to configure the load-balancing method for a Azure Virtual Desktop environment.
+description: How to configure the load-balancing method for an Azure Virtual Desktop environment.
Last updated 10/12/2020
virtual-machines Automatic Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-extension-upgrade.md
Previously updated : 12/28/2022 Last updated : 11/7/2023 # Automatic Extension Upgrade for VMs and Scale Sets in Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
- Automatic Extension Upgrade is available for Azure VMs and Azure Virtual Machine Scale Sets. When Automatic Extension Upgrade is enabled on a VM or scale set, the extension is upgraded automatically whenever the extension publisher releases a new version for that extension. Automatic Extension Upgrade has the following features: - Supported for Azure VMs and Azure Virtual Machine Scale Sets.-- Upgrades are applied in an availability-first deployment model (detailed below).-- For a Virtual Machine Scale Set, no more than 20% of the scale set virtual machines will be upgraded in a single batch. The minimum batch size is one virtual machine.
+- Upgrades are applied in an availability-first deployment model.
+- For a Virtual Machine Scale Set, no more than 20% of the scale set virtual machines are upgraded in a single batch. The minimum batch size is one virtual machine.
- Works for all VM sizes, and for both Windows and Linux extensions. - You can opt out of automatic upgrades at any time. - Automatic extension upgrade can be enabled on a Virtual Machine Scale Sets of any size.
Automatic Extension Upgrade is available for Azure VMs and Azure Virtual Machine
- Supported in all public cloud regions. ## How does Automatic Extension Upgrade work?
-The extension upgrade process replaces the existing extension version on a VM with a new version of the same extension when published by the extension publisher. The health of the VM is monitored after the new extension is installed. If the VM is not in a healthy state within 5 minutes of the upgrade completion, the extension version is rolled back to the previous version.
+The extension upgrade process replaces the existing extension version on a VM with a new version of the same extension when published by the extension publisher. The health of the VM is monitored after the new extension is installed. If the VM isn't in a healthy state within 5 minutes of the upgrade completion, the extension version is rolled back to the previous version.
A failed extension update is automatically retried. A retry is attempted every few days automatically without user intervention. ### Availability-first Updates The availability-first model for platform orchestrated updates ensures that availability configurations in Azure are respected across multiple availability levels.
-For a group of virtual machines undergoing an update, the Azure platform will orchestrate updates:
+For a group of virtual machines undergoing an update, the Azure platform orchestrates updates:
**Across regions:**-- An update will move across Azure globally in a phased manner to prevent Azure-wide deployment failures.
+- An update moves across Azure globally in a phased manner to prevent Azure-wide deployment failures.
- A 'phase' can have one or more regions, and an update moves across phases only if eligible VMs in the previous phase update successfully.-- Geo-paired regions will not be updated concurrently and cannot be in the same regional phase.
+- Geo-paired regions won't be updated concurrently and can't be in the same regional phase.
- The success of an update is measured by tracking the health of a VM post update. VM health is tracked through platform health indicators for the VM. For Virtual Machine Scale Sets, the VM health is tracked through application health probes or the Application Health extension, if applied to the scale set. **Within a region:**-- VMs in different Availability Zones are not updated concurrently with the same update.-- Single VMs that are not part of an availability set are batched on a best effort basis to avoid concurrent updates for all VMs in a subscription.
+- VMs in different Availability Zones aren't updated concurrently with the same update.
+- Single VMs that aren't part of an availability set are batched on a best effort basis to avoid concurrent updates for all VMs in a subscription.
**Within a 'set':**-- All VMs in a common availability set or scale set are not updated concurrently. -- VMs in a common availability set are updated within Update Domain boundaries and VMs across multiple Update Domains are not updated concurrently.
+- All VMs in a common availability set or scale set aren't updated concurrently.
+- VMs in a common availability set are updated within Update Domain boundaries and VMs across multiple Update Domains aren't updated concurrently.
- VMs in a common virtual machine scale set are grouped in batches and updated within Update Domain boundaries. ### Upgrade process for Virtual Machine Scale Sets
-1. Before beginning the upgrade process, the orchestrator will ensure that no more than 20% of VMs in the entire scale set are unhealthy (for any reason).
+1. Before the upgrade process starts, the orchestrator ensures that no more than 20% of VMs in the entire scale set are unhealthy (for any reason).
2. The upgrade orchestrator identifies the batch of VM instances to upgrade. An upgrade batch can have a maximum of 20% of the total VM count, subject to a minimum batch size of one virtual machine.
-3. For scale sets with configured application health probes or Application Health extension, the upgrade waits up to 5 minutes (or the defined health probe configuration) for the VM to become healthy before upgrading the next batch. If a VM does not recover its health after an upgrade, then by default the previous extension version on the VM is reinstalled.
+3. For scale sets with configured application health probes or Application Health extension, the upgrade waits up to 5 minutes (or the defined health probe configuration) for the VM to become healthy before upgrading the next batch. If a VM doesn't recover its health after an upgrade, then by default the previous extension version on the VM is reinstalled.
-4. The upgrade orchestrator also tracks the percentage of VMs that become unhealthy after an upgrade. The upgrade will stop if more than 20% of upgraded instances become unhealthy during the upgrade process.
+4. The upgrade orchestrator also tracks the percentage of VMs that become unhealthy after an upgrade. The upgrade stops if more than 20% of upgraded instances become unhealthy during the upgrade process.
The above process continues until all instances in the scale set have been upgraded.
-The scale set upgrade orchestrator checks for the overall scale set health before upgrading every batch. While upgrading a batch, there could be other concurrent planned or unplanned maintenance activities that could impact the health of your scale set virtual machines. In such cases, if more than 20% of the scale set's instances become unhealthy, then the scale set upgrade stops at the end of current batch.
+The scale set upgrade orchestrator checks the overall scale set health before upgrading every batch. During a batch upgrade, there could be other concurrent planned or unplanned maintenance activities that impact the health of your scale set virtual machines. In such cases, if more than 20% of the scale set's instances become unhealthy, the scale set upgrade stops at the end of the current batch.
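The batching rules above (each batch holds at most 20% of the scale set with a minimum of one VM, and the rollout stops once more than 20% of upgraded instances become unhealthy) reduce to simple arithmetic. A minimal sketch, with hypothetical helper names that aren't part of any Azure SDK:

```python
def batch_size(total_vms: int) -> int:
    """A batch holds at most 20% of the scale set, with a minimum of one VM."""
    return max(1, (total_vms * 20) // 100)

def should_stop(upgraded: int, unhealthy: int) -> bool:
    """The rollout stops when more than 20% of upgraded instances are unhealthy."""
    return upgraded > 0 and unhealthy * 100 > upgraded * 20

# A 30-VM scale set is upgraded six VMs at a time; tiny scale sets
# still make progress one VM at a time.
assert batch_size(30) == 6
assert batch_size(3) == 1
assert should_stop(upgraded=10, unhealthy=3)
assert not should_stop(upgraded=10, unhealthy=2)
```

Integer arithmetic is used here to avoid floating-point edge cases around the 20% threshold.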
## Supported extensions Automatic Extension Upgrade supports the following extensions (and more are added periodically): - [Azure Automation Hybrid Worker extension](../automation/extension-based-hybrid-runbook-worker-install.md) - Linux and Windows - Dependency Agent ΓÇô [Linux](./extensions/agent-dependency-linux.md) and [Windows](./extensions/agent-dependency-windows.md) - [Application Health Extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md) ΓÇô Linux and Windows
+- [Guest Attestation Extension](../virtual-machines/boot-integrity-monitoring-overview.md) - Linux and Windows
- [Guest Configuration Extension](./extensions/guest-configuration.md) ΓÇô Linux and Windows - Key Vault ΓÇô [Linux](./extensions/key-vault-linux.md) and [Windows](./extensions/key-vault-windows.md) - [Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-overview.md)
Automatic Extension Upgrade supports the following extensions (and more are adde
To enable Automatic Extension Upgrade for an extension, you must ensure the property `enableAutomaticUpgrade` is set to `true` and added to every extension definition individually. ### REST API for Virtual Machines
-To enable automatic extension upgrade for an extension (in this example the Dependency Agent extension) on an Azure VM, use the following:
+To enable automatic extension upgrade for an extension (in this example the Dependency Agent extension) on an Azure VM, use the following call:
``` PUT on `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/extensions/<extensionName>?api-version=2019-12-01`
PUT on `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provi
``` ### REST API for Virtual Machine Scale Sets
-Use the following to add the extension to the scale set model:
+Use the following call to add the extension to the scale set model:
``` PUT on `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachineScaleSets/<vmssName>?api-version=2019-12-01`
az vmss extension set \
A VM or Virtual Machine Scale Set can have multiple extensions with automatic extension upgrade enabled. The same VM or scale set can also have other extensions without automatic extension upgrade enabled.
-If multiple extension upgrades are available for a virtual machine, the upgrades may be batched together, but each extension upgrade is applied individually on a virtual machine. A failure on one extension does not impact the other extension(s) that may be upgrading. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded.
+If multiple extension upgrades are available for a virtual machine, the upgrades may be batched together, but each extension upgrade is applied individually on a virtual machine. A failure on one extension doesn't impact the other extension(s) that may be upgrading. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded.
Automatic Extension Upgrades can also be applied when a VM or virtual machine scale set has multiple extensions configured with [extension sequencing](../virtual-machine-scale-sets/virtual-machine-scale-sets-extension-sequencing.md). Extension sequencing is applicable for the first-time deployment of the VM, and any future extension upgrades on an extension are applied independently.
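The `enableAutomaticUpgrade` property discussed above is set inside each extension definition individually. As a sketch, the PUT request body for the Dependency Agent example might be assembled as follows; the field names follow the common Microsoft.Compute extension schema, but verify the exact body shape against the REST reference for your api-version before relying on it:

```python
import json

# Hypothetical request body for the PUT call shown earlier in this article.
extension_definition = {
    "name": "DependencyAgentWindows",
    "location": "eastus",
    "properties": {
        "publisher": "Microsoft.Azure.Monitoring.DependencyAgent",
        "type": "DependencyAgentWindows",
        "typeHandlerVersion": "9.5",
        "autoUpgradeMinorVersion": True,
        "enableAutomaticUpgrade": True,  # the property this article enables
    },
}

body = json.dumps(extension_definition, indent=2)
assert '"enableAutomaticUpgrade": true' in body
```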
virtual-machines Boot Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-integrity-monitoring-overview.md
You can deploy the guest attestation extension for trusted launch VMs using a qu
### [CLI](#tab/cli)
-If Secure Boot and vTPM are ON, boot integrity will be ON.
-1. Create a virtual machine with Trusted Launch that has Secure Boot + vTPM capabilities through initial deployment of trusted launch virtual machine. To deploy guest attestation extension use (`-- enable_integrity_monitoring`). Configuration of virtual machines are customizable by virtual machine owner (`az vm create`).
-1. For existing VMs, you can enable boot integrity monitoring settings by updating to make sure enable integrity monitoring is turned on (`-- enable_integrity_monitoring`).
+1. Create a virtual machine with Trusted Launch that has Secure Boot + vTPM capabilities through initial deployment of a trusted launch virtual machine. To deploy the guest attestation extension, use (`--enable_integrity_monitoring`). Configuration of virtual machines is customizable by the virtual machine owner (`az vm create`).
+1. For existing VMs, you can enable boot integrity monitoring by updating the VM with integrity monitoring turned on (`--enable_integrity_monitoring`).
> [!NOTE] > The Guest Attestation Extension needs to be configured explicitly.
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
The following table provides a mapping of the version of the Log Analytics VM ex
| Log Analytics Linux VM extension version | Log Analytics Agent bundle version | |--|--|
-| 1.17.0 | [1.17.0](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.16.0-0) |
+| 1.17.1 | [1.17.1](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.17.1) |
| 1.16.0 | [1.16.0](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.16.0-0) | | 1.14.23 | [1.14.23](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.23-0) | | 1.14.20 | [1.14.20](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.20-0) |
virtual-machines Hibernate Resume Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hibernate-resume-troubleshooting.md
If you're unable to create a VM with hibernation enabled, ensure that you're usi
| OperationNotAllowed | User VM Image isn't supported for a VM with Hibernation capability. | Use a platform image or Shared Gallery Image if you want to use the hibernation feature | | OperationNotAllowed | Referencing a Dedicated Host isn't supported for a VM with Hibernation capability. | | | OperationNotAllowed | Referencing a Capacity Reservation Group isn't supported for a VM with Hibernation capability. | |
-| OperationNotAllowed | Enabling/disabling hibernation on an existing VM requires the VM to be stopped (deallocated) first. | Stop-deallocate the VM, patch with VM to enable hibernation and then start the VM |
| OperationNotAllowed | Hibernation can't be enabled on Virtual Machine since the OS Disk Size ({0} bytes) should at least be greater than the VM memory ({1} bytes). | Ensure the OS disk has enough space to be able to persist the RAM contents once the VM is hibernated | | OperationNotAllowed | Hibernation can't be enabled on Virtual Machines created in an Availability Set. | Hibernation is only supported for standalone VMs & Virtual Machine Scale Sets Flex VMs |
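The OS-disk-size requirement in the table above (the disk must be larger than the VM's memory so the RAM contents can be persisted on hibernate) is a straightforward comparison; `can_enable_hibernation` is a hypothetical helper for illustration, not an Azure API:

```python
def can_enable_hibernation(os_disk_bytes: int, vm_memory_bytes: int) -> bool:
    """Hibernation requires the OS disk to be strictly larger than VM memory."""
    return os_disk_bytes > vm_memory_bytes

GiB = 1024 ** 3
# A 127 GiB OS disk can persist the RAM of a 64 GiB VM...
assert can_enable_hibernation(127 * GiB, 64 * GiB)
# ...but not of a 128 GiB VM.
assert not can_enable_hibernation(127 * GiB, 128 * GiB)
```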
Commonly seen issues:
| Issue | Action | |--|--| | Guest fails to hibernate because Hyper-V Guest Shutdown Service is disabled. | [Ensure that Hyper-V Guest Shutdown Service isn't disabled.](/virtualization/hyper-v-on-windows/reference/integration-services#hyper-v-guest-shutdown-service) Enabling this service should resolve the issue. |
-| Guest fails to hibernate because HVCI (Memory integrity) is enabled. | Hibernation isn't supported with HVCI. Disabling HVCI should resolve the issue. |
+| Guest fails to hibernate because HVCI (Memory integrity) is enabled. | If Memory Integrity is enabled in the guest and you are trying to hibernate the VM, then ensure your guest is running the minimum OS build required to support hibernation with Memory Integrity. <br /> <br /> Win 11 22H2 ΓÇô Minimum OS Build - 22621.2134 <br /> Win 11 21H1 - Minimum OS Build - 22000.2295 <br /> Win 10 22H2 - Minimum OS Build - 19045.3324 |
Logs needed for troubleshooting:
If the guest OS isn't configured for hibernation, take the appropriate action to
| VMHibernateFailed | Hibernating the VM 'hiber_vm_res_5' failed due to an internal error. Retry later. | Retry after 5mins. If it continues to fail after multiple retries, check if the guest is correctly configured to support hibernation or contact Azure support. | | VMHibernateNotSupported | The VM 'Z0000ZYJ000' doesn't support hibernation. Ensure that the VM is correctly configured to support hibernation. | Hibernating a VM immediately after boot isn't supported. Retry hibernating the VM after a few minutes. |
+## Azure extensions disabled on Debian images
+Azure extensions are currently disabled by default for Debian images (more details here: https://lists.debian.org/debian-cloud/2023/07/msg00037.html). If you wish to enable hibernation for Debian-based VMs through the LinuxHibernationExtension, then you can re-enable support for VM extensions via cloud-init custom data:
+
+```bash
+#!/bin/sh
+sed -i -e 's/^Extensions\.Enabled.*$/Extensions.Enabled=y/' /etc/waagent.conf
+```
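The rewrite the cloud-init script performs can be illustrated off-box with a small sketch; the sample config content below is illustrative, not a complete waagent.conf:

```python
import re

# Illustrative excerpt of /etc/waagent.conf; not a complete file.
sample_conf = """\
# Microsoft Azure Linux Agent configuration
Extensions.Enabled=n
Provisioning.Agent=auto
"""

# The same rewrite the sed command performs: force Extensions.Enabled to 'y'.
patched = re.sub(
    r"^Extensions\.Enabled\s*=.*$",
    "Extensions.Enabled=y",
    sample_conf,
    flags=re.MULTILINE,
)

assert "Extensions.Enabled=y" in patched
assert "Extensions.Enabled=n" not in patched
```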
++
+Alternatively, you can enable hibernation on the guest by [installing the hibernation-setup-tool](hibernate-resume.md#option-2-hibernation-setup-tool).
+ ## Unable to resume a VM Starting a hibernated VM is similar to starting a stopped VM. For errors and troubleshooting steps related to starting a VM, refer to this guide
virtual-machines Hibernate Resume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hibernate-resume.md
In the Azure portal under 'Preview features', select 'Hibernation Preview'. The
### [PowerShell](#tab/checkhiberPS) ```powershell
-Get-AzProviderFeature -FeatureName " VMHibernationPreview " -ProviderNamespace "Microsoft.Compute"
+Get-AzProviderFeature -FeatureName "VMHibernationPreview" -ProviderNamespace "Microsoft.Compute"
``` ### [CLI](#tab/checkhiberCLI) ```azurecli
Once you've created a VM with hibernation enabled, you need to configure the gue
There are many ways you can configure the guest OS for hibernation in Linux VMs. #### Option 1: LinuxHibernateExtension
- You can install the [LinuxHibernateExtension](/cli/azure/azure-cli-extensions-overview) on your Linux VM to configure the guest OS for hibernation.
+When you create a Hibernation-enabled VM via the Azure portal, the LinuxHibernationExtension is automatically installed on the VM.
+
+If the extension is missing, you can [manually install the LinuxHibernateExtension](/cli/azure/azure-cli-extensions-overview) on your Linux VM to configure the guest OS for hibernation.
+
+>[!NOTE]
+> Azure extensions are currently disabled by default for Debian images. To re-enable extensions, [check the hibernation troubleshooting guide](hibernate-resume-troubleshooting.md#azure-extensions-disabled-on-debian-images).
##### [CLI](#tab/cliLHE)
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/instance-metadata-service.md
The IMDS API contains multiple endpoint categories representing different data s
## Versions
-> [!NOTE]
-> This feature was released alongside version 2020-10-01, which is currently being rolled out and may not yet be available in every region.
- ### List API versions Returns the set of supported API versions.
virtual-machines Nd H100 V5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-h100-v5-series.md
NVIDIA NVLink Interconnect: Supported <br>
The ND H100 v5 series supports the following kernel version: Ubuntu 20.04: 5.4.0-1046-azure
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU Memory GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max network bandwidth (Mbps) | Max NICs |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU Memory GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max network bandwidth | Max NICs |
|||||-|-|-|--||-|
-| Standard_ND96isr_v5 | 96 | 1900 | 1000 | 8 H100 80 GB GPUs(NVLink) | 80 | 32 | 40800/612 | 2400 | 8 |
+| Standard_ND96isr_v5 | 96 | 1900 | 1000 | 8 H100 80 GB GPUs(NVLink) | 80 | 32 | 40800/612 | 80,000 Mbps | 8 |
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Security Controls Policy Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy-image-builder.md
Title: Azure Policy Regulatory Compliance controls for Azure VM Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure VM Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
virtual-machines Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
virtual-network-manager Query Azure Resource Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/query-azure-resource-graph.md
+
+ Title: Query your Azure Virtual Network Manager using Azure Resource Graph (ARG)
+description: This article covers the usage of Azure Resource Graph with Azure Virtual Network Manager.
++++ Last updated : 11/02/2023++
+# Query your Azure Virtual Network Manager using Azure Resource Graph (ARG)
+
+This article covers the usage of [Azure Resource Graph](../governance/resource-graph/overview.md) with Azure Virtual Network Manager. Azure Resource Graph (ARG) extends Azure Resource Management, allowing you to query a given set of subscriptions for better governance of your environment. With ARG integration, you can query ARG to get insights into your Azure Virtual Network Manager (AVNM) configurations. Insights are provided through customized [Kusto queries](/azure/data-explorer/kusto/query/) offering status data at the resource level or at the regional level.
++
+## Using ARG with virtual network manager
+
+The following are some of the scenarios where Azure Resource Graph can be used for insight into Azure Virtual Network Manager:
+
+- Retrieve regional goal state data to understand the configurations that are deployed in each region and their status.
+- Discover all the resources that have a particular configuration applied.
+- Retrieve effective configurations that are applied to a virtual network and its provisioning state.
+- Identify the number of virtual networks that succeeded, failed, or are in progress during a deployment process.
+
+## Available resources
+
+The following resources are available for querying [security admin configurations](concept-security-admins.md) in ARG:
+
+- microsoft.network/effectivesecurityadminrules
+- microsoft.network/networkmanagers/securityadminconfigurations/rulecollections/snapshots
+- microsoft.network/networkmanagers/securityadminconfigurations/rulecollections/rules/snapshots
+- microsoft.network/networkmanagers/securityadminconfigurations/snapshots
+- microsoft.network/networkmanagers/securityadminregionalgoalstates
+
+## Get started
+
+To get started with querying your virtual network manager data in ARG, follow these steps:
+
+1. Search for *Resource Graph Explorer* in the Azure portal and select it to open the ARG query editor.
+
+ :::image type="content" source="media/query-azure-resource-graph/azure-resource-graph-editor.png" alt-text="Screenshot of Azure Resource Graph editor with virtual network manager query example in Kusto.":::
+
+1. Enter your Kusto queries in the query editor and select **Run Query**.
+
+You can download the output of these queries as CSV from the **Resource Graph Explorer**. You can also use these queries in custom automation using any automation clients supported by ARG, such as [PowerShell](../governance/resource-graph/first-query-powershell.md), [CLI](../governance/resource-graph/first-query-azurecli.md), or [SDK](../governance/resource-graph/first-query-python.md). You can also create [custom workbooks](../azure-monitor/visualize/workbooks-overview.md) in the Azure portal using ARG as a data source.
+
+> [!NOTE]
+> ARG allows you to query the resources for which you have the appropriate RBAC rights.
+
+## Sample queries
+
+The following are sample queries you can run on your virtual network manager data. You can use them in custom dashboards and automations. Listed with each query is the input involved and the output returned.
+
+#### List all virtual network managers impacting a given virtual network
+
+Input: Enter the **vnetId** of the virtual network. It uses the following syntax: *00000000-0000-0000-0000-000000000000*
+Output: List of virtual network manager IDs.
+
+```kusto
+
+networkresources
+| where type == "microsoft.network/effectivesecurityadminrules"
+| extend vnetId = "00000000-0000-0000-0000-000000000000"
+| where id == strcat(vnetId,"/providers/Microsoft.Network/effectiveSecurityAdminRules/default")
+| mv-expand properties.EffectiveSecurityAdminConfigurations
+| mv-expand properties.effectiveSecurityAdminConfigurations
+| extend configId = tolower(iff(properties_EffectiveSecurityAdminConfigurations.Id == "", properties_effectiveSecurityAdminConfigurations.id, properties_EffectiveSecurityAdminConfigurations.Id))
+| extend networkManagerId = substring(configId, 0, indexof(configId, "/securityadminconfigurations/"))
+| distinct networkManagerId
+
+```
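When driving these queries from automation (PowerShell, CLI, or an SDK, as noted earlier), the input ID is typically templated into the query text before submission. A stdlib-only sketch using a simplified variant of the query above; `build_query` is a hypothetical helper:

```python
# Simplified variant of the effectivesecurityadminrules query above.
QUERY_TEMPLATE = """\
networkresources
| where type == "microsoft.network/effectivesecurityadminrules"
| extend vnetId = "{vnet_id}"
| where id == strcat(vnetId, "/providers/Microsoft.Network/effectiveSecurityAdminRules/default")
| distinct id
"""

def build_query(vnet_id: str) -> str:
    """Template the target virtual network ID into the Kusto query text."""
    return QUERY_TEMPLATE.format(vnet_id=vnet_id)

query = build_query("00000000-0000-0000-0000-000000000000")
assert '"00000000-0000-0000-0000-000000000000"' in query
```

The resulting string can be passed as-is to any ARG client that accepts a Kusto query.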
+
+#### List commit details of latest security admin commit for a given network manager
+
+Input: Enter **id** of the virtual network manager. It uses the following syntax: */subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkManagers/myVirtualNetworkManager*
+
+Output: List of commit details for security admin configurations including *CommitId, CommitTimestamp, location, SecurityAdminConfigurationId, SecurityAdminRuleIds, SecurityAdminRuleCollectionIds, status, and errorMessage*.
+
+```kusto
+networkresources
+| where type == "microsoft.network/networkmanagers/securityadminregionalgoalstates"
+| where id contains tolower("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkManagers/myVirtualNetworkManager")
+| extend adminConfigurationId = tolower(iff(properties.securityAdminConfigurations[0].id == "", properties.SecurityAdminConfigurations[0].Id, properties.securityAdminConfigurations[0].id))
+| extend adminRuleCollectionIds = todynamic(iff(properties.securityAdminRuleCollections == "", properties.SecurityAdminRuleCollections, properties.securityAdminRuleCollections))
+| extend adminRuleIds = todynamic(iff(properties.securityAdminRules == "", properties.SecurityAdminRules, properties.securityAdminRules))
+| extend commitId = iff(properties.commitId == "", properties.CommitId, properties.commitId)
+| extend timeStamp = todatetime(iff(properties.commitTimestamp == "", properties.CommitTimestamp, properties.commitTimestamp))
+| extend status = iff(properties.status == "", properties.Status, properties.status)
+| extend errorMessage = iff(properties.errorMessage == "" and properties.ErrorMessage == "", "", iff(properties.errorMessage == "", properties.ErrorMessage, properties.errorMessage))
+| order by timeStamp desc
+| project commitId, timeStamp, location, adminConfigurationId, adminRuleCollectionIds, adminRuleIds, status, errorMessage
+```
+
+#### Count of virtual networks impacted by a given security admin configuration
+
+Input: Enter the **adminConfigurationID** of the security admin configuration snapshot. It uses the following syntax:
+`"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkManagers/myVirtualNetworkManager/securityAdminConfigurations/config_2023-05-15-15-07-27/snapshots/0"`
+
+Output: List of the virtual networks impacted, including *Region, successCount, and failedCount*.
+
+> [!NOTE]
+> You can get the **adminConfigurationId** of the security admin configuration snapshot from the output of the [List commit details](#list-commit-details-of-latest-security-admin-commit-for-a-given-network-manager) query.
+
+```kusto
+networkresources
+| where type == "microsoft.network/effectivesecurityadminrules"
+| extend snapshotConfigIdToCheck = tolower("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkManagers/myVirtualNetworkManager/securityAdminConfigurations/config_2023-05-15-15-07-27/snapshots/0")
+| mv-expand properties.effectiveSecurityAdminConfigurations
+| mv-expand properties.EffectiveSecurityAdminConfigurations
+| extend configurationId = tolower(iff(properties_effectiveSecurityAdminConfigurations.id == "", properties_EffectiveSecurityAdminConfigurations.Id, properties_effectiveSecurityAdminConfigurations.id))
+| extend provisioningState = tolower(iff(properties.ProvisioningState == "", properties.provisioningState, properties.ProvisioningState))
+| where configurationId == snapshotConfigIdToCheck
+| summarize count() by location, provisioningState
+```
+
+## Next steps
+
+Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.
virtual-network Configure Public Ip Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-application-gateway.md
Previously updated : 08/24/2023 Last updated : 11/13/2023
Application gateway doesn't support changing the public IP address after creation.
## Caveats
-* Public IPv6 addresses aren't supported on Application Gateways at this time.
+* Azure Application Gateway support for a [frontend public IPv6 address](../../application-gateway/ipv6-application-gateway-portal.md) is currently in public preview.
## Next steps
virtual-network Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/23/2023 Last updated : 11/06/2023
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureAttestation** | Azure Attestation. | Outbound | No | Yes | | **AzureBackup** |Azure Backup.<br/><br/>**Note**: This tag has a dependency on the **Storage** and **AzureActiveDirectory** tags. | Outbound | No | Yes | | **AzureBotService** | Azure Bot Service. | Outbound | No | Yes |
-| **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). Doesn't include IPv6. | Both | Yes | Yes |
+| **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). Includes IPv6. | Both | Yes | Yes |
| **AzureCognitiveSearch** | Azure Cognitive Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | Yes | | **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Both | Yes | Yes | | **AzureContainerAppsService** | Azure Container Apps Service | Both | Yes | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **MicrosoftAzureFluidRelay** | This tag represents the IP addresses used for Azure Microsoft Fluid Relay Server. </br> **Note**: This tag has a dependency on the **AzureFrontDoor.Frontend** tag. | Outbound | No | Yes | | **MicrosoftCloudAppSecurity** | Microsoft Defender for Cloud Apps. | Outbound | No | Yes | | **MicrosoftContainerRegistry** | Container registry for Microsoft container images. <br/><br/>**Note**: This tag has a dependency on the **AzureFrontDoor.FirstParty** tag. | Outbound | Yes | Yes |
-| **MicrosoftDefenderForEndpoint** | Microsoft Defender for Endpoint <br/></br>**Please note this service tag is currently not available and in progress. We will update once it is ready for use.**| Both | No | Yes |
+| **MicrosoftDefenderForEndpoint** | Microsoft Defender for Endpoint. </br> This service tag is available in public preview. </br> For more information, see [Onboarding devices using streamlined connectivity for Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-device-connectivity) | Both | No | Yes |
| **MicrosoftPurviewPolicyDistribution** | This tag should be used within the outbound security rules for a data source (e.g. Azure SQL MI) configured with private endpoint to retrieve policies from Microsoft Purview | Outbound| No | No | | **PowerBI** | Power BI platform backend services and API endpoints.<br/><br/>**Note:** does not include frontend endpoints at the moment (e.g., app.powerbi.com).<br/><br/>Access to frontend endpoints should be provided through AzureCloud tag (Outbound, HTTPS, can be regional). | Both | No | Yes | | **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Outbound | Yes | Yes |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **SqlManagement** | Management traffic for SQL-dedicated deployments. | Both | No | Yes | | **Storage** | Azure Storage. <br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure Storage service, but not a specific Azure Storage account. | Outbound | Yes | Yes | | **StorageSyncService** | Storage Sync Service. | Both | No | Yes |
+| **StorageMover** | Storage Mover. | Outbound | Yes | Yes |
| **WindowsAdminCenter** | Allow the Windows Admin Center backend service to communicate with customers' installation of Windows Admin Center. | Outbound | No | Yes | | **WindowsVirtualDesktop** | Azure Virtual Desktop (formerly Windows Virtual Desktop). | Both | No | Yes |
+| **VideoIndexer** | Video Indexer. </br> Used to allow customers opening up their NSG to Video Indexer service and receive callbacks to their service. | Both | No | Yes |
| **VirtualNetwork** | The virtual network address space (all IP address ranges defined for the virtual network), all connected on-premises address spaces, [peered](virtual-network-peering-overview.md) virtual networks, virtual networks connected to a [virtual network gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%3ftoc.json), the [virtual IP address of the host](./network-security-groups-overview.md#azure-platform-considerations), and address prefixes used on [user-defined routes](virtual-networks-udr-overview.md). This tag might also contain default routes. | Both | No | No | > [!NOTE]
web-application-firewall Waf Front Door Rate Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit.md
# What is rate limiting for Azure Front Door?
Rate limiting enables you to detect and block abnormally high levels of traffic from any socket IP address.
-By using Azure Web Application Firewall in Azure Front Door, you can mitigate some types of denial-of-service attacks. Rate limiting also protects you against clients that have accidentally been misconfigured to send large volumes of requests in a short time period.
+By using Azure Web Application Firewall in Azure Front Door, you can mitigate some types of denial-of-service attacks. Rate limiting also protects you against clients that were accidentally misconfigured to send large volumes of requests in a short time period.
The socket IP address is the address of the client that initiated the TCP connection to Azure Front Door. Typically, the socket IP address is the IP address of the user, but it might also be the IP address of a proxy server or another device that sits between the user and Azure Front Door.
If the threshold is low enough, the first request to the new Azure Front Door se
A few considerations to keep in mind while you determine threshold values and time windows for rate limiting:
-- Larger window size and smaller thresholds are most effective in preventing against DDoS attacks.
-- Setting larger time window sizes (for example, five minutes over one minute) and larger threshold values (for example, 200 over 100) tend to be more accurate in enforcing close to rate limits thresholds than using the shorter time window sizes and lower threshold values.
+- A larger window size with the smallest acceptable request count threshold is the most effective configuration for preventing DDoS attacks. This configuration is more effective because when an attacker reaches the threshold they're blocked for the remainder of the rate limit window. Therefore, if an attacker is blocked in the first 30 seconds of a one-minute window, they're only rate limited for the remaining 30 seconds. If an attacker is blocked in the first minute of a five-minute window, they're rate limited for the remaining four minutes.
+- Setting larger time window sizes (for example, five minutes over one minute) and larger threshold values (for example, 200 over 100) tends to enforce rates closer to the rate limit thresholds than using shorter time window sizes and lower threshold values.
+- Azure Front Door WAF rate limiting operates on a fixed time period. Once a rate limit threshold is breached, all traffic matching that rate limiting rule is blocked for the remainder of the fixed window.
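The fixed-window behavior described above can be modeled with a short sketch. This is a simplified illustration of the concept, not how the Front Door WAF is actually implemented; the class and method names are hypothetical:

```python
from collections import defaultdict

class FixedWindowRateLimiter:
    """Minimal model of fixed-window rate limiting: once a client exceeds the
    threshold within a window, its further requests are blocked until the
    window ends."""

    def __init__(self, threshold: int, window_seconds: int):
        self.threshold = threshold
        self.window = window_seconds
        # (client, window index) -> number of requests seen in that window
        self.counts = defaultdict(int)

    def allow(self, client_ip: str, now_seconds: float) -> bool:
        key = (client_ip, int(now_seconds // self.window))
        self.counts[key] += 1
        return self.counts[key] <= self.threshold

# 100 requests allowed per 60-second window
limiter = FixedWindowRateLimiter(threshold=100, window_seconds=60)

# 150 requests arriving within the first window (times 0.0 .. 14.9 seconds):
# the first 100 are allowed, the rest are blocked for the remainder of the window
results = [limiter.allow("203.0.113.7", 0.1 * i) for i in range(150)]
```

Note how the window index resets at the boundary: a request at `now_seconds=60.0` falls into a fresh window and is allowed again, which matches the fixed-window blocking behavior described in the bullet above.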
## Next steps