Updates from: 03/30/2023 01:19:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/billing.md
Previously updated : 01/17/2022 Last updated : 03/06/2023
# Billing model for Azure Active Directory B2C
-Azure Active Directory B2C (Azure AD B2C) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This billing model applies to both Azure AD B2C tenants and [Azure AD guest user collaboration (B2B)](../active-directory/external-identities/external-identities-pricing.md). MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing. In this article, learn about MAU billing, linking Azure AD B2C tenants to a subscription, and changing the pricing tier.
+Azure Active Directory B2C (Azure AD B2C) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This billing model applies to both Azure AD B2C tenants and [Azure AD guest user collaboration (B2B)](../active-directory/external-identities/external-identities-pricing.md). MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing.
+
+In this article, learn about MAU and Go Local billing, linking Azure AD B2C tenants to a subscription, and changing the pricing tier.
## MAU overview
-A monthly active user (MAU) is a unique user that performs an authentication within a given month. A user that authenticates multiple times within a given month is counted as one MAU. Customers are not charged for a MAU's subsequent authentications during the month, nor for inactive users. Authentications may include:
+A monthly active user (MAU) is a unique user that performs an authentication within a given month. A user that authenticates multiple times within a given month is counted as one MAU. Customers aren't charged for a MAU's subsequent authentications during the month, nor for inactive users. Authentications may include:
+
+- Active, interactive sign in by the user. For example, [sign-up or sign in](add-sign-up-and-sign-in-policy.md), [self-service password reset](add-password-reset-policy.md), or any type of [user flow](user-flow-overview.md) or [custom policy](custom-policy-overview.md).
+- Passive, non-interactive sign in such as [single sign-on (SSO)](session-behavior.md), or any type of token acquisition. For example, authorization code flow, token refresh, or [resource owner password credentials flow](add-ropc-policy.md).
-- Active, interactive sign-in by the user. For example, [sign-up or sign-in](add-sign-up-and-sign-in-policy.md), [self-service password reset](add-password-reset-policy.md), or any type of [user flow](user-flow-overview.md) or [custom policy](custom-policy-overview.md).
-- Passive, non-interactive sign-in such as [single sign-on (SSO)](session-behavior.md), or any type of token acquisition. For example, authorization code flow, token refresh, or [resource owner password credentials flow](add-ropc-policy.md).
+If the Azure AD B2C [Go-Local add-on](data-residency.md#go-local-add-on) is available in your country/region and you enable it, you'll be charged per MAU in addition to your Azure AD B2C [Premium P1 or P2 pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/) license. Learn more in [About Go-Local add-on](#about-go-local-add-on).
-If you choose to provide higher levels of assurance using Multi-factor Authentication (MFA) for Voice and SMS, you will continue to be charged a worldwide flat fee for each MFA attempt that month, whether the sign-in is successful or unsuccessful.
+Also, if you choose to provide higher levels of assurance by using Multi-factor Authentication (MFA) for Voice and SMS, you'll be charged a worldwide flat fee for each MFA attempt that month, whether the sign-in is successful or unsuccessful.
+
> [!IMPORTANT]
> This article does not contain pricing details. For the latest information about usage billing and pricing, see [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/). See also [Azure AD B2C region availability and data residency](data-residency.md) for details about where the Azure AD B2C service is available and where user data is stored.
MAU billing went into effect for Azure AD B2C tenants on **November 1, 2019**. A
- If you have an existing Azure AD B2C tenant that was linked to a subscription before November 1, 2019, upgrade to the monthly active users (MAU) billing model. You can also choose to stay on the per-authentication billing model. Your Azure AD B2C tenant must also be linked to the appropriate Azure pricing tier based on the features you want to use. Premium features require Azure AD B2C [Premium P1 or P2 pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/). You might need to upgrade your pricing tier as you use new features. For example, for risk-based Conditional Access policies, you’ll need to select the Azure AD B2C Premium P2 pricing tier for your tenant.
+> [!NOTE]
+> Your first 50,000 MAUs per month are free for both Premium P1 and Premium P2 features, but the **free tier doesn't apply to free trial, credit-based, or sponsorship subscriptions**. Once the free trial period or credits expire for these types of subscriptions, you'll begin to be charged for Azure AD B2C MAUs. To determine the total number of MAUs, we combine MAUs from all your tenants (both Azure AD and Azure AD B2C) that are linked to the same subscription.
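The pooling rule in the note — MAUs from all tenants linked to one subscription are combined before the free tier is applied — can be sketched as follows. This is a hypothetical illustration only: the tenant names and counts are made up, and real eligibility depends on your subscription type.

```python
# Hypothetical sketch: MAUs from all tenants linked to the same subscription
# are combined before the 50,000-MAU monthly free tier is applied.
FREE_TIER = 50_000

def billable_maus(tenant_maus: dict, free_tier_eligible: bool = True) -> int:
    """Return the number of MAUs billed for the month across linked tenants."""
    total = sum(tenant_maus.values())
    if not free_tier_eligible:
        # Free trial, credit-based, and sponsorship subscriptions get no free tier.
        return total
    return max(0, total - FREE_TIER)

# Two tenants on one subscription: 30,000 + 35,000 = 65,000 MAUs, 15,000 billable.
print(billable_maus({"contoso-aad": 30_000, "contoso-b2c": 35_000}))  # 15000
```

The same function shows why a tenant with 10,000 MAUs on a paid subscription is billed nothing, while the identical tenant on a credit-based subscription is billed for all 10,000.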
+
+## About Go-Local add-on
+
+Azure AD B2C's [Go-Local add-on](data-residency.md#go-local-add-on) enables you to create an Azure AD B2C tenant in the country you choose when you [create your Azure AD B2C tenant](tutorial-create-tenant.md). *Go-Local* refers to Microsoft's commitment to allow some customers to configure some services to store their data at rest in the Geo of the customer's choice, typically a country. This feature isn't available in all countries.
+ > [!NOTE]
-> Your first 50,000 MAUs per month are free for both Premium P1 and Premium P2 features, but the **free tier doesn't apply to free trial, credit-based, or sponsorship subscriptions**. Once the free trial period or credits expire for these types of subscriptions, you'll begin to be charged for Azure AD B2C MAUs. To determine the total number of MAUs, we combine MAUs from all your tenants (both Azure AD and Azure AD B2C) that are linked to the same subscription.
+> If you enable the Go-Local add-on, the 50,000 free MAUs per month included with your Azure AD B2C subscription don't apply to the add-on. You'll incur a per-MAU charge for the Go-Local add-on starting from the first MAU. However, you'll continue to get 50,000 free MAUs per month for the other features available with your Azure AD B2C [Premium P1 or P2 pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
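The note above can be sketched numerically: the add-on is billed from the first MAU, while the base Premium features keep the free tier. The rates below are made-up placeholders, not real prices — see the Azure AD B2C pricing page for actual figures.

```python
# Hypothetical sketch of the Go-Local billing rule: the add-on charge starts
# at the first MAU; the base Premium charge still gets the monthly free tier.
FREE_TIER = 50_000

def monthly_charge(maus: int, base_rate: float, golocal_rate: float,
                   golocal_enabled: bool) -> float:
    base = max(0, maus - FREE_TIER) * base_rate            # free tier applies
    golocal = maus * golocal_rate if golocal_enabled else 0.0  # no free tier
    return round(base + golocal, 2)

# 40,000 MAUs: base charge is 0 (under the free tier), add-on bills all 40,000.
print(monthly_charge(40_000, 0.01, 0.005, golocal_enabled=True))  # 200.0
```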
+ ## Link an Azure AD B2C tenant to a subscription
-Usage charges for Azure Active Directory B2C (Azure AD B2C) are billed to an Azure subscription. You need to explicitly link an Azure AD B2C tenant to an Azure subscription by creating an Azure AD B2C *resource* within the target Azure subscription. Several Azure AD B2C resources can be created in a single Azure subscription, along with other Azure resources like virtual machines, and storage accounts. You can see all of the resources within a subscription by going to the Azure Active Directory (Azure AD) tenant that the subscription is associated with.
+Usage charges for Azure AD B2C are billed to an Azure subscription. You need to explicitly link an Azure AD B2C tenant to an Azure subscription by creating an Azure AD B2C *resource* within the target Azure subscription. Several Azure AD B2C resources can be created in a single Azure subscription, along with other Azure resources like virtual machines, and storage accounts. You can see all of the resources within a subscription by going to the Azure Active Directory (Azure AD) tenant that the subscription is associated with.
A subscription linked to an Azure AD B2C tenant can be used for the billing of Azure AD B2C usage or other Azure resources, including additional Azure AD B2C resources. It can't be used to add other Azure license-based services or Office 365 licenses within the Azure AD B2C tenant.
A subscription linked to an Azure AD B2C tenant can be used for the billing of A
1. Select **Create a resource**, and then, in the **Search services and Marketplace** field, search for and select **Azure Active Directory B2C**.
1. Select **Create**.
1. Select **Link an existing Azure AD B2C Tenant to my Azure subscription**.
-1. Select an **Azure AD B2C Tenant** from the dropdown. Only tenants for which you're a global administrator and that are not already linked to a subscription are shown. The **Azure AD B2C Resource name** field is populated with the domain name of the Azure AD B2C tenant you select.
+1. Select an **Azure AD B2C Tenant** from the dropdown. Only tenants for which you're a global administrator and that aren't already linked to a subscription are shown. The **Azure AD B2C Resource name** field is populated with the domain name of the Azure AD B2C tenant you select.
1. Select an active Azure **Subscription** of which you're an owner.
1. Under **Resource group**, select **Create new**, and then specify the **Resource group location**. The resource group settings here have no impact on your Azure AD B2C tenant location, performance, or billing status.
1. Select **Create**.
In some cases, you'll need to upgrade your pricing tier as you use new features.
To change your pricing tier, follow these steps:
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. To select the Azure AD directory that contains the Azure subscription your Azure B2C tenant is linked to and not the Azure AD B2C tenant itself, select the **Directories + subscriptions** icon in the portal toolbar.
+1. Make sure you're using the Azure AD directory that contains the subscription that your Azure AD B2C tenant is linked to, and not the Azure AD B2C tenant itself:
+ 1. In the Azure portal toolbar, select the **Directories + subscriptions** (:::image type="icon" source="./../active-directory/develop/media/common/portal-directory-subscription-filter.png" border="false":::) icon.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select the **Switch** button next to it.
1. In the search box at the top of the portal, enter the name of your Azure AD B2C tenant. Then select the tenant in the search results under **Resources**.
To mitigate this issue, relax your regional restrictions.
Azure Cloud Solution Providers (CSP) subscriptions are supported in Azure AD B2C. The functionality is available using APIs or the Azure portal for Azure AD B2C and for all Azure resources. CSP subscription administrators can link, move, and delete relationships with Azure AD B2C as done with other Azure resources.
-The management of Azure AD B2C using role-based access control is not affected by the association between the Azure AD B2C tenant and an Azure CSP subscription. Role-based access control is achieved by using tenant-based roles, not subscription-based roles.
+The management of Azure AD B2C using role-based access control isn't affected by the association between the Azure AD B2C tenant and an Azure CSP subscription. Role-based access control is achieved by using tenant-based roles, not subscription-based roles.
## Change the Azure AD B2C tenant billing subscription
If the source and destination subscriptions are associated with different Azure
1. In the Azure AD B2C directory itself, [invite a guest user](user-overview.md#guest-user) from the destination Azure AD tenant (the one that the destination Azure subscription is linked to) and ensure this user has the **Global administrator** role in Azure AD B2C.
1. Navigate to the *Azure resource* representing Azure AD B2C in your source Azure subscription as explained in the [Manage your Azure AD B2C tenant resources](#manage-your-azure-ad-b2c-tenant-resources) section above. Don't switch to the actual Azure AD B2C tenant.
-1. Select the **Delete** button on the **Overview** page. This does *not* delete the related Azure AD B2C tenant's users or applications. It merely removes the billing link from the source subscription.
+1. Select the **Delete** button on the **Overview** page. This action *doesn't* delete the related Azure AD B2C tenant's users or applications. It merely removes the billing link from the source subscription.
1. Sign in to the Azure portal with the user account that was added as an administrator in Azure AD B2C in step 1. Then navigate to the destination Azure subscription, which is linked to the destination Azure Active Directory tenant.
1. Re-establish the billing link in the destination subscription by following the [Create the link](#create-the-link) procedure above.
1. Your Azure AD B2C resource has now moved to the destination Azure subscription (linked to the target Azure Active Directory) and will be billed through this subscription from now on.
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 11/08/2022 Last updated : 03/06/2023
The following table summarizes the Security Assertion Markup Language (SAML) app
| [Application Insights user journey logs](troubleshoot-with-application-insights.md) | Preview | Used for troubleshooting during development. |
| [Application Insights event logs](analytics-with-application-insights.md) | Preview | Used to monitor user flows in production. |
+## Other features
+
+| Feature | Status | Notes |
+| - | :--: | -- |
+| [Go-Local add-on](data-residency.md#go-local-add-on) | Preview | Azure AD B2C's [Go-Local add-on](data-residency.md#go-local-add-on) enables you to create an Azure AD B2C tenant in the country you choose when you [create your Azure AD B2C tenant](tutorial-create-tenant.md). |
+
## Responsibilities of custom policy feature-set developers

Manual policy configuration grants lower-level access to the underlying platform of Azure AD B2C and results in the creation of a unique trust framework. The many possible permutations of custom identity providers, trust relationships, integrations with external services, and step-by-step workflows require a methodical approach to design and configuration.
active-directory-b2c Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md
Title: Region availability and data residency
+ Title: "Azure AD B2C: Region availability & data residency"
description: Region availability, data residency, high availability, SLA, and information about Azure Active Directory B2C preview tenants.
Previously updated : 12/12/2022 Last updated : 03/06/2023
Azure Active Directory B2C (Azure AD B2C) stores customer data in a geographic location based on how a tenant was created and provisioned. For the Azure portal or Azure AD API, the location is defined when a customer selects a location from the pre-defined list.
-Region availability and data residency are two different concepts that apply to Azure AD B2C. This article explains the differences between these two concepts, and compares how they apply to Azure versus Azure AD B2C.
+Region availability and data residency are two different concepts that apply to Azure AD B2C. This article explains the differences between these two concepts, and compares how they apply to Azure versus Azure AD B2C. [Region availability](#region-availability) refers to where a service is available for use whereas [Data residency](#data-residency) refers to where user data is stored.
+
Azure AD B2C is **generally available worldwide** with the option for **data residency** in the **United States, Europe, Asia Pacific, or Australia**. [Region availability](#region-availability) refers to where a service is available for use. [Data residency](#data-residency) refers to where customer data is stored. For customers in the EU and EFTA, see [EU Data Boundary](#eu-data-boundary).
+If you enable [Go-Local add-on](#go-local-add-on), you can store your data exclusively in a specific country.
## Region availability
-Azure AD B2C is available worldwide via the Azure public cloud. You can see availability of this service in both Azure's [Products Available By Region](https://azure.microsoft.com/regions/services/) page and the [Active Directory B2C pricing calculator](https://azure.microsoft.com/pricing/details/active-directory-b2c/). Also, Azure AD B2C service is highly available. Learn more about [Service Level Agreement (SLA) for Azure Active Directory B2C](https://azure.microsoft.com/support/legal/sla/active-directory-b2c/v1_1).
+Azure AD B2C service is available worldwide via the Azure public cloud. You can see availability of this service in both Azure's [Products Available By Region](https://azure.microsoft.com/regions/services/) page and the [Active Directory B2C pricing calculator](https://azure.microsoft.com/pricing/details/active-directory-b2c/). Also, Azure AD B2C service is highly available. Learn more about [Service Level Agreement (SLA) for Azure Active Directory B2C](https://azure.microsoft.com/support/legal/sla/active-directory-b2c/v1_1).
+ ## Data residency
-Azure AD B2C stores customer data in the United States, Europe, the Asia Pacific region, or Australia.
+Azure AD B2C stores customer data in the United States, Europe, the Asia Pacific region, Japan, or Australia.
-Data residency is determined by the country/region you select when you [create an Azure AD B2C tenant](tutorial-create-tenant.md):
+Data residency is determined by the location you select when you [create an Azure AD B2C tenant](tutorial-create-tenant.md):
![Screenshot of a Create Tenant form, choosing country or region.](./media/data-residency/data-residency-b2c-tenant.png)
The following locations are in the process of being added to the list. For now,
> Argentina, Brazil, Chile, Colombia, Ecuador, Iraq, Paraguay, Peru, Uruguay, and Venezuela
+To find the exact location where your data is located per region or country, refer to [where Azure Active Directory data is located](https://aka.ms/aaddatamap).
+### Go-Local add-on
+
+*Go-Local* refers to Microsoft's commitment to allow some customers to configure some services to store their data at rest in the Geo of the customer's choice, typically a country. Go-Local is a way of fulfilling corporate policies and compliance requirements. You choose the country where you want to store your data when you [create your Azure AD B2C tenant](tutorial-create-tenant.md).
+
+The Go-Local add-on is an optional, paid add-on. If you choose to use it, you'll incur an extra charge in addition to your Azure AD B2C Premium P1 or P2 licenses. For more information, see [Billing model](billing.md).
+
+At the moment, the following countries have the local data residency option:
+
+- Japan
+
+- Australia
+
+#### What do I need to do?
+
+|If you're in | What to do |
+|-||
+| Australia | If you have an existing Azure AD B2C tenant that you created since **April 2021**, then your data is resident in Australia. You need to opt in to start using the Go-Local add-on. <br> If you're creating a new Azure AD B2C tenant, you can enable the Go-Local add-on when you create it.|
+| Japan | You can enable the Go-Local add-on when you create a new Azure AD B2C tenant. |
## EU Data Boundary

> [!IMPORTANT]
> For comprehensive details about Microsoft's EU Data Boundary commitment, see [Microsoft's EU Data Boundary documentation](/privacy/eudb/eu-data-boundary-learn).

## Remote profile solution

With Azure AD B2C [custom policies](custom-policy-overview.md), you can integrate with [RESTful API services](api-connectors-overview.md), which allow you to store and read user profiles from a remote database (such as a marketing database, CRM system, or any line-of-business application).

- During the sign-up and profile editing flows, Azure AD B2C calls a custom REST API to persist the user profile to the remote data source. The user's credentials are stored in Azure AD B2C directory.
-- Upon sign-in, after credentials validation with a local or social account, Azure AD B2C invokes the REST API, which sends the user's unique identifier as a user primary key (email address or user objectId). The REST API reads the data from the remote database and returns the user profile.
+- Upon sign in, after credentials validation with a local or social account, Azure AD B2C invokes the REST API, which sends the user's unique identifier as a user primary key (email address or user objectId). The REST API reads the data from the remote database and returns the user profile.
-After sign-up, profile editing, or sign-in is complete, Azure AD B2C includes the user profile in the access token that is returned to the application. For more information, see the [Azure AD B2C Remote profile sample solution](https://github.com/azure-ad-b2c/samples/tree/master/policies/remote-profile) in GitHub.
+After sign-up, profile editing, or sign-in action is complete, Azure AD B2C includes the user profile in the access token that is returned to the application. For more information, see the [Azure AD B2C Remote profile sample solution](https://github.com/azure-ad-b2c/samples/tree/master/policies/remote-profile) in GitHub.
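The read path described above can be thought of as a function that maps the user's primary key to stored profile claims. The sketch below is a non-normative illustration: the `REMOTE_DB` dict, the field names, and the response shapes are assumptions for the example, not Azure AD B2C's actual REST contract (see the linked sample solution for that).

```python
# Illustrative sketch of a remote-profile read endpoint. Azure AD B2C sends the
# user's unique identifier (objectId or email); the API looks the user up in a
# remote store and returns profile claims to include in the issued token.
REMOTE_DB = {  # stands in for a remote database (CRM, marketing DB, ...)
    "00000000-0000-0000-0000-000000000001": {
        "displayName": "Casey",
        "loyaltyTier": "gold",
    },
}

def read_profile(request_body: dict) -> dict:
    """Return profile claims for the user identified by objectId or email."""
    key = request_body.get("objectId") or request_body.get("email")
    profile = REMOTE_DB.get(key)
    if profile is None:
        # Hypothetical error shape surfaced back to the user flow.
        return {"status": 404, "userMessage": "Profile not found."}
    return {"status": 200, **profile}
```

In a real deployment this function would sit behind an HTTPS endpoint that the custom policy's RESTful technical profile calls, and the returned claims would flow into the access token.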
## Next steps
active-directory-b2c Supported Azure Ad Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/supported-azure-ad-features.md
Title: Supported Azure AD features
-description: Learn about Azure AD features, which are still supported in Azure AD B2C.
+ Title: Supported Azure Active Directory features
+description: Learn about Azure Active Directory features, which are still supported in Azure AD B2C.
-# Supported Azure AD features
+# Supported Azure Active Directory features
-An Azure AD B2C tenant is different than an Azure Active Directory tenant, which you may already have, but it relies on it. The following Azure AD features can be used in your Azure AD B2C tenant.
+An Azure Active Directory B2C (Azure AD B2C) tenant is different than an Azure Active Directory (Azure AD) tenant, which you may already have, but it relies on it. The following Azure AD features can be used in your Azure AD B2C tenant.
|Feature |Azure AD | Azure AD B2C |
||||
An Azure AD B2C tenant is different than an Azure Active Directory tenant, which
| [Premium P1](https://azure.microsoft.com/pricing/details/active-directory) | Fully supported for Azure AD premium P1 features. For example, [Password Protection](../active-directory/authentication/concept-password-ban-bad.md), [Hybrid Identities](../active-directory/hybrid/whatis-hybrid-identity.md), [Conditional Access](../active-directory/roles/permissions-reference.md#), [Dynamic groups](../active-directory/enterprise-users/groups-create-rule.md), and more. | Azure AD B2C uses [Azure AD B2C Premium P1 license](https://azure.microsoft.com/pricing/details/active-directory/external-identities/), which is different from Azure AD premium P1. A subset of Azure AD Conditional Access features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md).|
| [Premium P2](https://azure.microsoft.com/pricing/details/active-directory/) | Fully supported for Azure AD premium P2 features. For example, [Identity Protection](../active-directory/identity-protection/overview-identity-protection.md), and [Identity Governance](../active-directory/governance/identity-governance-overview.md). | Azure AD B2C uses [Azure AD B2C Premium P2 license](https://azure.microsoft.com/pricing/details/active-directory/external-identities/), which is different from Azure AD premium P2. A subset of Azure AD Identity Protection features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to [Investigate risk with Identity Protection](identity-protection-investigate-risk.md) and configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md). |
|[Data retention policy](../active-directory/reports-monitoring/reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data)|Data retention period for both audit and sign-in logs depends on your subscription. Learn more about [how long Azure AD stores reporting data](../active-directory/reports-monitoring/reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data).|Sign-in and audit logs are only retained for **seven (7) days**. If you require a longer retention period, use [Azure Monitor](azure-monitor.md).|
+| [Go-Local add-on](data-residency.md#go-local-add-on) | The Azure AD Go-Local add-on enables you to store data in the country you choose when you create your Azure AD tenant.| Just like Azure AD, Azure AD B2C supports the [Go-Local add-on](data-residency.md#go-local-add-on). |
> [!NOTE]
-> **Other Azure resources in your tenant:** <br>In an Azure AD B2C tenant, you can't provision other Azure resources such as virtual machines, Azure web apps, or Azure functions. You must create these resources in your Azure AD tenant.
+> **Other Azure resources in your tenant:** <br>In an Azure AD B2C tenant, you can't provision other Azure resources such as virtual machines, Azure web apps, or Azure functions. You must create these resources in your Azure AD tenant.
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Before you create your Azure AD B2C tenant, you need to take the following consi
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your subscription:
- 1. In the Azure portal toolbar, select the **Directories + subscriptions** filter icon.
-
- ![Directories + subscriptions filter icon](media/tutorial-create-tenant/directories-subscription-filter-icon.png)
-
- 1. Find the directory that contains your subscription and select the **Switch** button next to it. Switching a directory reloads the portal. If the directory that contains your subscription has the **Current** label next to it, you don't need to do anything.
+1. Make sure you're using the Azure Active Directory (Azure AD) tenant that contains your subscription:
+ 1. In the Azure portal toolbar, select the **Directories + subscriptions** (:::image type="icon" source="./../active-directory/develop/media/common/portal-directory-subscription-filter.png" border="false":::) icon.
+
+ 1. On the **Portal settings | Directories + subscriptions** page, find the Azure AD directory that contains your subscription in the **Directory name** list, and then select the **Switch** button next to it.
+
   ![Screenshot of the directories and subscriptions window.](media/tutorial-create-tenant/switch-directory.png)

1. Add **Microsoft.AzureActiveDirectory** as a resource provider for the Azure subscription you're using ([learn more](../azure-resource-manager/management/resource-providers-and-types.md?WT.mc_id=Portal-Microsoft_Azure_Support#register-resource-provider-1)):
   1. On the Azure portal, search for and select **Subscriptions**.
- 2. Select your subscription, and then in the left menu, select **Resource providers**. If you don't see the left menu, select the **Show the menu for < name of your subscription >** icon at the top left part of the page to expand it.
- 3. Make sure the **Microsoft.AzureActiveDirectory** row shows a status of **Registered**. If it doesn't, select the row, and then select **Register**.
+ 1. Select your subscription, and then in the left menu, select **Resource providers**. If you don't see the left menu, select the **Show the menu for < name of your subscription >** icon at the top left part of the page to expand it.
+ 1. Make sure the **Microsoft.AzureActiveDirectory** row shows a status of **Registered**. If it doesn't, select the row, and then select **Register**.
1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
Before you create your Azure AD B2C tenant, you need to take the following consi
![Create a new Azure AD B2C tenant selected in Azure portal](media/tutorial-create-tenant/portal-02-create-tenant.png)
-1. On the **Create a directory** page, enter the following:
+1. On the **Create a directory** page:
- - **Organization name** - Enter a name for your Azure AD B2C tenant.
- - **Initial domain name** - Enter a domain name for your Azure AD B2C tenant.
- - **Country or region** - Select your country or region from the list. This selection can't be changed later.
- - **Subscription** - Select your subscription from the list.
- - **Resource group** - Select or search for the resource group that will contain the tenant.
+ - For **Organization name**, enter a name for your Azure AD B2C tenant.
+ - For **Initial domain name**, enter a domain name for your Azure AD B2C tenant.
+ - For **Location**, select your country from the list. If the country you select has a [Go-Local add-on](data-residency.md#go-local-add-on) option, such as Japan or Australia, and you want to store your data exclusively within that country, select the **Store Azure AD Core Store data, components and service data in the location selected above** checkbox. The Go-Local add-on is a paid add-on whose charge is added to your Azure AD B2C Premium P1 or P2 license charges; see [Billing model](billing.md#about-go-local-add-on). You can't change the data residency region after you create your Azure AD B2C tenant.
+ - For **Subscription**, select your subscription from the list.
+ - For **Resource group**, select or search for the resource group that will contain the tenant.
- ![Create tenant form in with example values in Azure portal](media/tutorial-create-tenant/review-and-create-tenant.png)
+ :::image type="content" source="media/tutorial-create-tenant/review-and-create-tenant.png" alt-text="Screenshot of the create tenant form with example values in the Azure portal.":::
1. Select **Review + create**.
1. Review your directory settings. Then select **Create**. Learn more about [troubleshooting deployment errors](../azure-resource-manager/templates/common-deployment-errors.md).
You can link multiple Azure AD B2C tenants to a single Azure subscription for bi
## Select your B2C tenant directory

To start using your new Azure AD B2C tenant, you need to switch to the directory that contains the tenant:
-1. In the Azure portal toolbar, select the **Directories + subscriptions** filter icon.
+1. In the Azure portal toolbar, select the **Directories + subscriptions** filter icon (:::image type="icon" source="./../active-directory/develop/media/common/portal-directory-subscription-filter.png" border="false":::).
1. On the **All Directories** tab, find the directory that contains your Azure AD B2C tenant and then select the **Switch** button next to it.

   If at first you don't see your new Azure B2C tenant in the list, refresh your browser window or sign out and sign back in. Then in the Azure portal toolbar, select the **Directories + subscriptions** filter again.
active-directory-b2c Tutorial Delete Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-delete-tenant.md
Previously updated : 08/30/2022 Last updated : 03/06/2023
active-directory Concepts Azure Multi Factor Authentication Prompts Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md
Previously updated : 01/29/2023 Last updated : 03/28/2023
Devices joined to Azure AD using Azure AD Join or Hybrid Azure AD Join receive a
### Show option to remain signed-in
-When a user selects **Yes** on the *Stay signed in?* option during sign-in, a persistent cookie is set on the browser. This persistent cookie remembers both first and second factor, and it applies only for authentication requests in the browser.
+When a user selects **Yes** on the *Stay signed in?* prompt option during sign-in, a persistent cookie is set on the browser. This persistent cookie remembers both first and second factor, and it applies only for authentication requests in the browser.
![Screenshot of example prompt to remain signed in](./media/concepts-azure-multi-factor-authentication-prompts-session-lifetime/stay-signed-in-prompt.png) If you have an Azure AD Premium 1 license, we recommend using Conditional Access policy for *Persistent browser session*. This policy overwrites the *Stay signed in?* setting and provides an improved user experience. If you don't have an Azure AD Premium 1 license, we recommend enabling the stay signed in setting for your users.
-For more information on configuring the option to let users remain signed-in, see [Customize your Azure AD sign-in page](../fundamentals/active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt).
+For more information on configuring the option to let users remain signed-in, see [How to manage the 'Stay signed in?' prompt](../fundamentals/how-to-manage-stay-signed-in-prompt.md).
### Remember Multi-Factor Authentication
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
To enable Azure AD CBA and configure user bindings in the Azure portal, complete
1. Click **Ok** to save any custom rule.
+>[!IMPORTANT]
+>The PolicyOID must be in object identifier format as defined in https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.4. For example, if the certificate policy says "All Issuance Policies", enter the OID 2.5.29.32.0 in the add rules editor. Entering the string "All Issuance Policies" in the rules editor is invalid and won't take effect.
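The dotted-OID requirement can be checked mechanically before a rule is saved. A minimal sketch (the regex and helper name are ours, not part of the portal):

```python
import re

# Dotted-decimal OID: digit arcs separated by single dots, e.g. 2.5.29.32.0.
_OID_RE = re.compile(r"^\d+(\.\d+)*$")

def is_dotted_oid(value: str) -> bool:
    """Return True if value looks like an OID in dotted notation."""
    return bool(_OID_RE.fullmatch(value))

print(is_dotted_oid("2.5.29.32.0"))           # True: the "All Issuance Policies" OID
print(is_dotted_oid("All Issuance Policies"))  # False: display names are rejected
```

A check like this catches the common mistake the note describes: pasting the human-readable policy name instead of its OID.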
+ ## Step 4: Configure username binding policy The username binding policy helps validate the certificate of the user. By default, we map Principal Name in the certificate to UserPrincipalName in the user object to determine the user. An admin can override the default and create a custom mapping.
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
# How to enable Microsoft Authenticator Lite for Outlook mobile (preview)
+>[!NOTE]
+>Rollout has not yet completed across Outlook applications. If this feature is enabled in your tenant, your users may not yet be prompted for the experience. To minimize user disruption, we recommend enabling this feature when the rollout completes.
+ Microsoft Authenticator Lite is another surface for Azure Active Directory (Azure AD) users to complete multifactor authentication by using push notifications or time-based one-time passcodes (TOTP) on their Android or iOS device. With Authenticator Lite, users can satisfy a multifactor authentication requirement from the convenience of a familiar app. Authenticator Lite is currently enabled in [Outlook mobile](https://www.microsoft.com/microsoft-365/outlook-mobile-for-android-and-ios). Users receive a notification in Outlook mobile to approve or deny sign-in, or they can copy a TOTP to use during sign-in.
Users receive a notification in Outlook mobile to approve or deny sign-in, or th
## Enable Authenticator Lite
+>[!NOTE]
+>Rollout has not yet completed across Outlook applications. If this feature is enabled in your tenant, your users may not yet be prompted for the experience. To minimize user disruption, we recommend enabling this feature when the rollout completes.
+ By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings) and disabled during preview. After general availability, the Microsoft managed state default value will change to enable Authenticator Lite.
+### Enable Authenticator Lite in the Azure portal
+
+To enable Authenticator Lite in the Azure portal, complete the following steps:
+
+ 1. In the Azure portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
+
+ 2. On the **Enable and Target** tab, click **Yes** and **All users** to enable the policy for everyone, or add selected users and groups. Set the **Authentication mode** for these users/groups to **Any** or **Push**.
+
+ Only users who are enabled for Microsoft Authenticator here can be enabled to use Authenticator Lite for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator installed on the same device as Outlook won't be prompted to register for Authenticator Lite in Outlook.
+
+<img width="1112" alt="Entra portal Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png">
+ 3. On the **Configure** tab, for **Microsoft Authenticator on companion applications**, change **Status** to **Enabled**, choose who to include or exclude from Authenticator Lite, and click **Save**.
+
+<img width="664" alt="Authenticator Lite configuration settings" src="https://user-images.githubusercontent.com/108090297/228603364-53f2581f-a4e0-42ee-8016-79b23e5eff6c.png">
+
+### Enable Authenticator Lite via Graph APIs
+ | Property | Type | Description |
+ |----------|------|-------------|
+ | excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group from Authenticator Lite, which can be a dynamic or nested group. |
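To make the `excludeTarget` shape concrete, here is a sketch of a request body for the Microsoft Authenticator authentication method configuration in Microsoft Graph. The `companionAppAllowedState` property name is our assumption for the setting that governs Authenticator Lite, and the group ID is a placeholder:

```python
import json

# Hypothetical PATCH body excluding one group from Authenticator Lite.
# excludeTarget follows the featureTarget shape described in the table above.
body = {
    "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
    "featureSettings": {
        "companionAppAllowedState": {  # assumption: Authenticator Lite feature setting
            "state": "enabled",
            "excludeTarget": {
                "targetType": "group",
                "id": "00000000-0000-0000-0000-000000000000",  # placeholder group ID
            },
        }
    },
}

payload = json.dumps(body, indent=2)
```

Only one group can appear in `excludeTarget`, which is why it is a single object rather than a list.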
If the sign-in was done by phone app notification, under **authenticationAppDeiv
If a user has registered Authenticator Lite, the user's registered authentication methods include **Microsoft Authenticator (in Outlook)**. ## Push notifications in Authenticator Lite
-Push notifications sent by Authenticator Lite aren't configurable and don't depend on the Authenticator feature settings. The settings for features included in the Authenticator Lite experience are listed in the following table.
+Push notifications sent by Authenticator Lite aren't configurable and don't depend on the Authenticator feature settings. The settings for features included in the Authenticator Lite experience are listed in the following table. Every authentication includes a number matching prompt and doesn't include app and location context, regardless of Microsoft Authenticator feature settings.
| Authenticator Feature | Authenticator Lite Experience| |::|:-:|
Users can only register for Authenticator Lite from mobile Outlook. Authenticato
### Can users register Microsoft Authenticator and Authenticator Lite?
-Users that have Microsoft Authenticator on their device can't register Authenticator Lite. If a user has an Authenticator Lite registration and then later downloads Microsoft Authenticator, they can register both. If a user has two devices, they can register Authenticator Lite on one and Microsoft Authenticator on the other.
+Users that have Microsoft Authenticator on their device can't register Authenticator Lite on that same device. If a user has an Authenticator Lite registration and then later downloads Microsoft Authenticator, they can register both. If a user has two devices, they can register Authenticator Lite on one and Microsoft Authenticator on the other.
## Known Issues (Public preview)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 02/16/2023 Last updated : 03/28/2023
AD FS adapter will require number matching on supported versions of Windows Serv
### NPS extension
-Although NPS doesn't support number matching, the latest NPS extension does support One-Time Password (OTP) methods such as the OTP available in Microsoft Authenticator, other software tokens, and hardware FOBs. OTP sign-in provides better security than the alternative **Approve**/**Deny** experience. Make sure you run the latest version of the [NPS extension](https://www.microsoft.com/download/details.aspx?id=54688).
+Although NPS doesn't support number matching, the latest NPS extension does support time-based one-time password (TOTP) methods such as the TOTP available in Microsoft Authenticator, other software tokens, and hardware FOBs. TOTP sign-in provides better security than the alternative **Approve**/**Deny** experience. Make sure you run the latest version of the [NPS extension](https://www.microsoft.com/download/details.aspx?id=54688).
-After May 8, 2023, when number matching is enabled for all users, anyone who performs a RADIUS connection with NPS extension version 1.2.2216.1 or later will be prompted to sign in with an OTP method instead.
+After May 8, 2023, when number matching is enabled for all users, anyone who performs a RADIUS connection with NPS extension version 1.2.2216.1 or later will be prompted to sign in with a TOTP method instead.
-Users must have an OTP authentication method registered to see this behavior. Without an OTP method registered, users continue to see **Approve**/**Deny**.
+Users must have a TOTP authentication method registered to see this behavior. Without a TOTP method registered, users continue to see **Approve**/**Deny**.
-Prior to the release of NPS extension version 1.2.2216.1 after May 8, 2023, organizations that run any of these earlier versions of NPS extension can modify the registry to require users to enter an OTP:
+Prior to the release of NPS extension version 1.2.2216.1 after May 8, 2023, organizations that run any of these earlier versions of NPS extension can modify the registry to require users to enter a TOTP:
- 1.2.2131.2 - 1.2.1959.1
Prior to the release of NPS extension version 1.2.2216.1 after May 8, 2023, orga
- 1.0.1.40 >[!NOTE]
->NPS extensions versions earlier than 1.0.1.40 don't support OTP enforced by number matching. These versions will continue to present users with **Approve**/**Deny**.
+>NPS extensions versions earlier than 1.0.1.40 don't support TOTP enforced by number matching. These versions will continue to present users with **Approve**/**Deny**.
-To create the registry entry to override the **Approve**/**Deny** options in push notifications and require an OTP instead:
+To create the registry entry to override the **Approve**/**Deny** options in push notifications and require a TOTP instead:
1. On the NPS Server, open the Registry Editor. 1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa. 1. Create the following String/Value pair:
- Name: OVERRIDE_NUMBER_MATCHING_WITH_OTP
- Value = TRUE
+ - Name: OVERRIDE_NUMBER_MATCHING_WITH_OTP
+ - Value = TRUE
1. Restart the NPS Service. In addition: -- Users who perform OTP must have either Microsoft Authenticator registered as an authentication method, or some other hardware or software OATH token. A user who can't use an OTP method will always see **Approve**/**Deny** options with push notifications if they use a version of NPS extension earlier than 1.2.2216.1.
+- Users who perform TOTP must have either Microsoft Authenticator registered as an authentication method, or some other hardware or software OATH token. A user who can't use a TOTP method will always see **Approve**/**Deny** options with push notifications if they use a version of NPS extension earlier than 1.2.2216.1.
- Users must be [enabled for number matching](#enable-number-matching-in-the-portal). - The NPS Server where the NPS extension is installed must be configured to use PAP protocol. For more information, see [Determine which authentication methods your users can use](howto-mfa-nps-extension.md#determine-which-authentication-methods-your-users-can-use). >[!IMPORTANT]
- >MSCHAPv2 doesn't support OTP. If the NPS Server isn't configured to use PAP, user authorization will fail with events in the **AuthZOptCh** log of the NPS Extension server in Event Viewer:<br>
+ >MSCHAPv2 doesn't support TOTP. If the NPS Server isn't configured to use PAP, user authorization will fail with events in the **AuthZOptCh** log of the NPS Extension server in Event Viewer:<br>
>NPS Extension for Azure MFA: Challenge requested in Authentication Ext for User npstesting_ap. >You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications.
-If your organization uses Remote Desktop Gateway and the user is registered for OTP code along with Microsoft Authenticator push notifications, the user won't be able to meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to **Approve**/**Deny** push notifications with Microsoft Authenticator.
+If your organization uses Remote Desktop Gateway and the user is registered for a TOTP code along with Microsoft Authenticator push notifications, the user won't be able to meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to **Approve**/**Deny** push notifications with Microsoft Authenticator.
### Apple Watch supported for Microsoft Authenticator
Here are differences in sign-in scenarios that Microsoft Authenticator users wil
- Authentication flows will require users to do number match when using Microsoft Authenticator. If their version of Microsoft Authenticator doesn't support number match, their authentication will fail. - Self-service password reset (SSPR) and combined registration will also require number match when using Microsoft Authenticator. - AD FS adapter will require number matching on [supported versions of Windows Server](#ad-fs-adapter). On earlier versions, users will continue to see the **Approve**/**Deny** experience and won't see number matching until you upgrade. -- NPS extension versions beginning 1.2.2131.2 will require users to do number matching. Because the NPS extension can't show a number, the user will be asked to enter a One-Time Passcode (OTP). The user must have an OTP authentication method such as Microsoft Authenticator or software OATH tokens registered to see this behavior. If the user doesn't have an OTP method registered, they'll continue to get the **Approve**/**Deny** experience.
+- NPS extension versions beginning 1.2.2131.2 will require users to do number matching. Because the NPS extension can't show a number, the user will be asked to enter a TOTP. The user must have a TOTP authentication method such as Microsoft Authenticator or software OATH tokens registered to see this behavior. If the user doesn't have a TOTP method registered, they'll continue to get the **Approve**/**Deny** experience.
To create a registry entry that overrides this behavior and prompts users with **Approve**/**Deny**: 1. On the NPS Server, open the Registry Editor. 1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa. 1. Create the following String/Value:
- Name: OVERRIDE_NUMBER_MATCHING_WITH_OTP
- Value = FALSE
+ - Name: OVERRIDE_NUMBER_MATCHING_WITH_OTP
+ - Value = FALSE
1. Restart the NPS Service. - Apple Watch will remain unsupported for number matching. We recommend you uninstall the Microsoft Authenticator Apple Watch app because you have to approve notifications on your phone.
-### How can users enter an OTP with the NPS extension?
+### How can users enter a TOTP with the NPS extension?
-The VPN and NPS server must be using PAP protocol for OTP prompts to appear. If they're using a protocol that doesn't support OTP, such as MSCHAPv2, they'll continue to see the **Approve/Deny** notifications.
+The VPN and NPS server must be using PAP protocol for TOTP prompts to appear. If they're using a protocol that doesn't support TOTP, such as MSCHAPv2, they'll continue to see the **Approve/Deny** notifications.
-### Will users get a prompt similar to a number matching prompt, but will need to enter an OTP?
+### Will users get a prompt similar to a number matching prompt, but will need to enter a TOTP?
They'll see a prompt to supply a verification code. They must select their account in Microsoft Authenticator and enter the randomly generated code that appears there.
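The verification code in question is a standard RFC 6238 time-based one-time passcode. A minimal sketch of how such a code is derived (an illustration of the algorithm, not Microsoft's implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 TOTP (SHA-1 variant) for the given Unix time."""
    counter = struct.pack(">Q", unix_time // step)               # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, time 59 -> 94287082 (8 digits).
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code is derived from the current time window, the server and the app only need a shared secret and loosely synchronized clocks, which is why no push channel is required.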
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
Previously updated : 03/16/2023 Last updated : 03/28/2023
A VPN server may send repeated requests to the NPS server if the timeout value i
For more information on why you see discarded packets in the NPS server logs, see [RADIUS protocol behavior and the NPS extension](#radius-protocol-behavior-and-the-nps-extension) at the start of this article. ### How do I get Microsoft Authenticator number matching to work with NPS?
-Make sure you run the latest version of the NPS extension. NPS extension versions beginning with 1.0.1.40 support number matching.
+Although NPS doesn't support number matching, the latest NPS extension does support time-based one-time password (TOTP) methods such as the TOTP available in Microsoft Authenticator, other software tokens, and hardware FOBs. TOTP sign-in provides better security than the alternative **Approve**/**Deny** experience. Make sure you run the latest version of the [NPS extension](https://www.microsoft.com/download/details.aspx?id=54688).
-Because the NPS extension can't show a number, a user who is enabled for number matching will still be prompted to Approve/Deny. However, you can create a registry key that overrides push notifications to ask a user to enter a One-Time Passcode (OTP). The user must have an OTP authentication method registered to see this behavior. Common OTP authentication methods include the OTP available in the Authenticator app, other software tokens, and so on.
+After May 8, 2023, when number matching is enabled for all users, anyone who performs a RADIUS connection with NPS extension version 1.2.2216.1 or later will be prompted to sign in with a TOTP method instead.
-If the user doesn't have an OTP method registered, they'll continue to get the Approve/Deny experience. A user with number matching disabled will always see the Approve/Deny experience.
-
-To create the registry key that overrides push notifications:
-1. On the NPS Server, open the Registry Editor.
-2. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa.
-3. Set the following Key Value Pair: Key: OVERRIDE_NUMBER_MATCHING_WITH_OTP Value = TRUE
-4. Restart the NPS Service.
+Users must have a TOTP authentication method registered to see this behavior. Without a TOTP method registered, users continue to see **Approve**/**Deny**.
+
+Prior to the release of NPS extension version 1.2.2216.1 after May 8, 2023, organizations that run earlier versions of NPS extension can modify the registry to require users to enter a TOTP. For more information, see [NPS extension](how-to-mfa-number-match.md#nps-extension).
## Managing the TLS/SSL Protocols and Cipher Suites
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Title: Location condition in Azure Active Directory Conditional Access
-description: Learn about creating location-based Conditional Access policies using Azure AD.
+ Title: Using networks and countries in Azure Active Directory
+description: Use GPS locations and public IPv4 and IPv6 networks in Conditional Access policy to make access decisions.
Previously updated : 02/23/2023 Last updated : 03/17/2023
Conditional Access policies are at their most basic an if-then statement combini
Organizations can use this location for common tasks like: - Requiring multifactor authentication for users accessing a service when they're off the corporate network.-- Blocking access for users accessing a service from specific countries or regions.
+- Blocking access for users accessing a service from specific countries or regions your organization never operates from.
The location is found using the public IP address a client provides to Azure Active Directory, or the GPS coordinates provided by the Microsoft Authenticator app. Conditional Access policies by default apply to all IPv4 and IPv6 addresses. For more information about IPv6 support, see the article [IPv6 support in Azure Active Directory](/troubleshoot/azure/active-directory/azure-ad-ipv6-support).
+> [!TIP]
+> Conditional Access policies are enforced after first-factor authentication is completed. Conditional Access isn't intended to be an organization's first line of defense for scenarios like denial-of-service (DoS) attacks, but it can use signals from these events to determine access.
+ ## Named locations Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions.
-![Named locations in the Azure portal](./media/location-condition/new-named-location.png)
+> [!VIDEO https://www.youtube.com/embed/P80SffTIThY]
### IPv4 and IPv6 address ranges
If you select **Determine location by IP address**, the system collects the IP a
If you select **Determine location by GPS coordinates**, the user needs to have the Microsoft Authenticator app installed on their mobile device. Every hour, the system contacts the user's Microsoft Authenticator app to collect the GPS location of the user's mobile device.
-The first time the user must share their location from the Microsoft Authenticator app, the user receives a notification in the app. The user needs to open the app and grant location permissions.
-
-Every hour the user is accessing resources covered by the policy they need to approve a push notification from the app.
+The first time the user needs to share their location from the Microsoft Authenticator app, they receive a notification in the app. The user needs to open the app and grant location permissions. For every hour the user is accessing resources covered by the policy, they need to approve a push notification from the app.
Every time the user shares their GPS location, the app does jailbreak detection (Using the same logic as the Intune MAM SDK). If the device is jailbroken, the location isn't considered valid, and the user isn't granted access.
You can also find the client IP by clicking a row in the report, and then going
## What you should know
+### Cloud proxies and VPNs
+
+When you use a cloud hosted proxy or VPN solution, the IP address Azure AD uses while evaluating a policy is the IP address of the proxy. The X-Forwarded-For (XFF) header that contains the user's public IP address isn't used because there's no validation that it comes from a trusted source, so it would present a method for faking an IP address.
+
+When a cloud proxy is in place, a policy that requires a [hybrid Azure AD joined or compliant device](howto-conditional-access-policy-compliant-device.md#create-a-conditional-access-policy) can be easier to manage. Keeping a list of IP addresses used by your cloud hosted proxy or VPN solution up to date can be nearly impossible.
+ ### When is a location evaluated? Conditional Access policies are evaluated when:
By default, Azure AD issues a token on an hourly basis. After users move off the
The IP address used in policy evaluation is the public IPv4 or IPv6 address of the user. For devices on a private network, this IP address isn't the client IP of the user's device on the intranet, it's the address used by the network to connect to the public internet.
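The evaluation described above amounts to a membership test of the client's public address against a named location's CIDR ranges. A sketch using Python's `ipaddress` module (the ranges and helper name are illustrative):

```python
import ipaddress

# Illustrative named-location ranges; policies apply to IPv4 and IPv6 alike.
named_location = [ipaddress.ip_network(r) for r in ("203.0.113.0/24", "2001:db8::/32")]

def in_named_location(client_ip: str) -> bool:
    """Return True if the public client IP falls inside any configured range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in named_location)

print(in_named_location("203.0.113.7"))   # True
print(in_named_location("198.51.100.1"))  # False
print(in_named_location("2001:db8::1"))   # True
```

Note the test runs against the public egress address, which for a device on a private network is the NAT or proxy address, exactly as the paragraph above describes.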
-### Bulk uploading and downloading of named locations
+### When might you block locations?
-When you create or update named locations, for bulk updates, you can upload or download a CSV file with the IP ranges. An upload replaces the IP ranges in the list with those ranges from the file. Each row of the file contains one IP Address range in CIDR format.
+A policy that uses the location condition to block access is considered restrictive, and should be created with care after thorough testing. Some instances of using the location condition to block authentication may include:
-### Cloud proxies and VPNs
+- Blocking countries where your organization never does business.
+- Blocking specific IP ranges like:
+  - Known malicious IPs, before a firewall policy can be changed.
+  - Highly sensitive or privileged actions and cloud applications.
+  - User-specific IP ranges, like access to accounting or payroll applications.
-When you use a cloud hosted proxy or VPN solution, the IP address Azure AD uses while evaluating a policy is the IP address of the proxy. The X-Forwarded-For (XFF) header that contains the user's public IP address isn't used because there's no validation that it comes from a trusted source, so would present a method for faking an IP address.
+### User exclusions
+
-When a cloud proxy is in place, a policy that requires a hybrid Azure AD joined device can be used, or the inside corpnet claim from AD FS.
+### Bulk uploading and downloading of named locations
+
+When you create or update named locations, for bulk updates, you can upload or download a CSV file with the IP ranges. An upload replaces the IP ranges in the list with those ranges from the file. Each row of the file contains one IP Address range in CIDR format.
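The one-range-per-row CSV format can be validated locally before upload, which avoids a failed bulk replace. A sketch (the helper name is ours; strict CIDR parsing rejects malformed rows):

```python
import csv
import io
import ipaddress

def parse_named_location_csv(text: str) -> list:
    """Parse a CSV where each row holds one IP range in CIDR format; raise on bad rows."""
    ranges = []
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue  # tolerate blank lines
        ranges.append(ipaddress.ip_network(row[0].strip()))
    return ranges

sample = "198.51.100.0/24\n203.0.113.0/27\n2001:db8::/48\n"
print([str(n) for n in parse_named_location_csv(sample)])
```

Because an upload replaces the entire range list, validating the whole file up front is safer than discovering a bad row after the existing ranges have been overwritten.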
### API support and PowerShell
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Some claims are used to help the Microsoft identity platform secure tokens for r
### Payload claims
-| Claim | Format | Description |
-|-|--|-|
-| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. The API must validate this value and reject the token if the value doesn't match. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. |
-| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
-|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. If the claim isn't present, the value of `iss` can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. |
-| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. |
-| `nbf` | int, a Unix timestamp | Specifies the time before which the JWT must not be accepted for processing. |
-| `exp` | int, a Unix timestamp | Specifies the expiration time on or after which the JWT must not be accepted for processing. A resource may reject the token before this time as well. The rejection can occur when a change in authentication is required or a token revocation has been detected. |
-| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. |
-| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. |
-| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies how the subject of the token was authenticated. |
-| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. |
-| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. |
-| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates how the client was authenticated. For a public client, the value is `0`. If client ID and client secret are used, the value is `1`. If a client certificate was used for authentication, the value is `2`. |
-| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates how the client was authenticated. For a public client, the value is `0`. If client ID and client secret are used, the value is `1`. If a client certificate was used for authentication, the value is `2`. |
-| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. The value is mutable and might change over time. Since the value is mutable, it must not be used to make authorization decisions. The value can be used for username hints, however, and in human-readable UI as a username. The `profile` scope is required in order to receive this claim. |
-| `name` | String | Provides a human-readable value that identifies the subject of the token. The value isn't guaranteed to be unique, it's mutable, and is only used for display purposes. The `profile` scope is required in order to receive this claim. |
-| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. Only included for user tokens. |
-| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. For application tokens, this set of permissions is used during the [client credential flow](v2-oauth2-client-creds-grant-flow.md) in place of user scopes. For user tokens, this set of values is populated with the roles the user was assigned to on the target application. |
-| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). This claim is configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). Setting it to `All` or `DirectoryRole` is required. May not be present in tokens obtained through the implicit flow due to token length concerns. |
-| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. These values are unique and can be safely used for managing access, such as enforcing authorization to access a resource. The groups included in the groups claim are configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. |
-| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. |
-| `groups:src1` | JSON object | For token requests that aren't length limited (see `hasgroups`) but still too large for the token, a link to the full groups list for the user is included. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources`: `"src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` |
-| `sub` | String | The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. It can be used to perform authorization checks safely, such as when the token is used to access a resource, and can be used as a key in database tables. Because the subject is always present in the tokens that Azure AD issues, use this value in a general-purpose authorization system. The subject is, however, a pairwise identifier that is unique to a particular application ID. If a single user signs into two different applications using two different client IDs, those applications receive two different values for the subject claim. Two different values may or may not be desired depending on architecture and privacy requirements. See also the `oid` claim (which does remain the same across applications within a tenant). |
-| `oid` | String, a GUID | The immutable identifier for the requestor, which is the user or service principal whose identity has been verified. It can also be used to perform authorization checks safely and as a key in database tables. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, the `profile` scope is required in order to receive this claim for users. If a single user exists in multiple tenants, the user contains a different object ID in each tenant. The accounts are considered different, even though the user logs into each account with the same credentials. |
-|`tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. |
-| `unique_name` | String, only present in v1.0 tokens | Provides a human readable value that identifies the subject of the token. This value isn't guaranteed to be unique within a tenant and should be used only for display purposes. |
-| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. |
-| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. |
-| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. |
+| Claim | Format | Description | Authorization considerations |
+|-|--|-||
+| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. | This value must be validated; reject the token if the value doesn't match the intended audience. |
+| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. | The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
+|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. If the claim isn't present, the value of `iss` can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. | |
+| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. | |
+| `nbf` | int, a Unix timestamp | Specifies the time before which the JWT must not be accepted for processing. | |
+| `exp` | int, a Unix timestamp | Specifies the expiration time on or after which the JWT must not be accepted for processing. A resource may reject the token before this time as well. The rejection can occur when a change in authentication is required or a token revocation has been detected. | |
+| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. | |
+| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. | |
+| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies how the subject of the token was authenticated. | |
+| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `appid` may be used in authorization decisions. |
+| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `azp` may be used in authorization decisions. |
+| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates how the client was authenticated. For a public client, the value is `0`. If client ID and client secret are used, the value is `1`. If a client certificate was used for authentication, the value is `2`. | |
+| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates how the client was authenticated. For a public client, the value is `0`. If client ID and client secret are used, the value is `1`. If a client certificate was used for authentication, the value is `2`. | |
+| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. The value is mutable and might change over time. The value can be used for username hints and in human-readable UI as a username. The `profile` scope is required in order to receive this claim. | Since this value is mutable, it must not be used to make authorization decisions. |
+| `name` | String | Provides a human-readable value that identifies the subject of the token. The value isn't guaranteed to be unique, it's mutable, and is only used for display purposes. The `profile` scope is required in order to receive this claim. | This value must not be used to make authorization decisions. |
+| `scp` | String, a space-separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. Only included for user tokens. | The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. |
+| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. For application tokens, this set of permissions is used during the [client credential flow](v2-oauth2-client-creds-grant-flow.md) in place of user scopes. For user tokens, this set of values is populated with the roles the user was assigned to on the target application. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). This claim is configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). Setting it to `All` or `DirectoryRole` is required. May not be present in tokens obtained through the implicit flow due to token length concerns. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. The groups included in the groups claim are configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `hasgroups` | Boolean | If present, the value is always `true` and indicates that the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. | |
+| `groups:src1` | JSON object | For token requests that aren't length limited (see `hasgroups`) but still too large for the token, a link to the full groups list for the user is included. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources`: `"src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` | |
+| `sub` | String | The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. The subject is a pairwise identifier that is unique to a particular application ID. If a single user signs into two different applications using two different client IDs, those applications receive two different values for the subject claim. Two different values may or may not be desired depending on architecture and privacy requirements. See also the `oid` claim (which does remain the same across applications within a tenant). | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
+| `oid` | String, a GUID | The immutable identifier for the requestor, which is the user or service principal whose identity has been verified. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, the `profile` scope is required in order to receive this claim for users. If a single user exists in multiple tenants, the user contains a different object ID in each tenant. The accounts are considered different, even though the user logs into each account with the same credentials. | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
+| `tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. | This value should be considered in combination with other claims in authorization decisions. |
+| `unique_name` | String, only present in v1.0 tokens | Provides a human-readable value that identifies the subject of the token. | This value isn't guaranteed to be unique within a tenant and should be used only for display purposes. |
+| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. | |
+| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. | |
+| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. | |
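The "Authorization considerations" column above can be turned into concrete checks. Below is a minimal, illustrative Python sketch, not Microsoft's validation library; a real resource must first verify the token's signature with a JWT library before trusting any claim, and the function and claim values here are assumptions for demonstration:

```python
import time

def check_token_claims(claims, expected_audience, allowed_tenants=None):
    """Apply the authorization considerations from the claims table above."""
    # aud must match the intended audience; reject the token otherwise.
    if claims.get("aud") != expected_audience:
        return False
    # Honor nbf/exp: the token must be inside its validity window.
    now = int(time.time())
    if not (claims.get("nbf", 0) <= now < claims.get("exp", 0)):
        return False
    # Optionally restrict which tenants (tid) may sign in to the application.
    if allowed_tenants is not None and claims.get("tid") not in allowed_tenants:
        return False
    return True

# Hypothetical decoded claims for demonstration only.
claims = {
    "aud": "11111111-2222-3333-4444-555555555555",
    "tid": "9188040d-6c67-4c5b-b112-36a304b66dad",
    "nbf": int(time.time()) - 60,
    "exp": int(time.time()) + 3600,
}
print(check_token_claims(claims, "11111111-2222-3333-4444-555555555555"))  # True
```

Mutable display claims such as `preferred_username`, `name`, and `unique_name` are deliberately absent from the checks, matching the table's guidance.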
#### Groups overage claim
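As the `groups`, `hasgroups`, and `groups:src1` rows above describe, large group memberships are replaced by an overage pointer in `_claim_names`/`_claim_sources`. A minimal, illustrative Python sketch of detecting that pattern from already-decoded claims (the function name is an assumption; fetching the full list still requires an authenticated Microsoft Graph call):

```python
def resolve_groups(claims):
    """Return group IDs directly, or the Graph endpoint to fetch them from."""
    # Normal case: the groups claim is present as a JSON array of GUIDs.
    if isinstance(claims.get("groups"), list):
        return {"groups": claims["groups"]}
    # Overage case: hasgroups, or a distributed claim pointing at a source.
    if claims.get("hasgroups") or "groups" in claims.get("_claim_names", {}):
        src = claims.get("_claim_names", {}).get("groups")
        endpoint = claims.get("_claim_sources", {}).get(src, {}).get("endpoint")
        return {"overage": True, "endpoint": endpoint}
    return {"groups": []}
```

With the example JWT value from the table, `resolve_groups` reports the overage and surfaces the `getMemberObjects` endpoint instead of a group list.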
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
The endpoint URIs for your app are generated automatically when you register or
Two commonly used endpoints are the [authorization endpoint](v2-oauth2-auth-code-flow.md#request-an-authorization-code) and [token endpoint](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). Here are examples of the `authorize` and `token` endpoints:
-```Bash
+```
# Authorization endpoint - used by client to obtain authorization from the resource owner.
https://login.microsoftonline.com/<issuer>/oauth2/v2.0/authorize
# Token endpoint - used by client to exchange an authorization grant or refresh token for an access token.
active-directory Howto Restrict Your App To A Set Of Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
Previously updated : 12/19/2022 Last updated : 03/28/2023 + #Customer intent: As a tenant administrator, I want to restrict an application that I have registered in Azure AD to a select set of users available in my Azure AD tenant
Applications registered in an Azure Active Directory (Azure AD) tenant are, by d
Similarly, in a [multi-tenant](howto-convert-app-to-be-multi-tenant.md) application, all users in the Azure AD tenant where the application is provisioned can access the application once they successfully authenticate in their respective tenant.
-Tenant administrators and developers often have requirements where an application must be restricted to a certain set of users. There are two ways to restrict an application to a certain set of users or security groups:
+Tenant administrators and developers often have requirements where an application must be restricted to a certain set of users or apps (services). There are two ways to restrict an application to a certain set of users, apps, or security groups:
- Developers can use popular authorization patterns like [Azure role-based access control (Azure RBAC)](howto-implement-rbac-for-apps.md). - Tenant administrators and developers can use the built-in features of Azure AD. ## Supported app configurations
-The option to restrict an app to a specific set of users or security groups in a tenant works with the following types of applications:
+The option to restrict an app to a specific set of users, apps, or security groups in a tenant works with the following types of applications:
- Applications configured for federated single sign-on with SAML-based authentication.-- Application proxy applications that use Azure AD pre-authentication.
+- Application proxy applications that use Azure AD preauthentication.
- Applications built directly on the Azure AD application platform that use OAuth 2.0/OpenID Connect authentication after a user or admin has consented to that application. ## Update the app to require user assignment
To update an application to require user assignment, you must be owner of the ap
1. Sign in to the [Azure portal](https://portal.azure.com/) 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **Enterprise Applications** > **All applications**.
+1. Under **Manage**, select **Enterprise Applications**, and then select **All applications**.
1. Select the application you want to configure to require assignment. Use the filters at the top of the window to search for a specific application. 1. On the application's **Overview** page, under **Manage**, select **Properties**. 1. Locate the setting **Assignment required?** and set it to **Yes**. When this option is set to **Yes**, users and services attempting to access the application or services must first be assigned for this application, or they won't be able to sign-in or obtain an access token.
To update an application to require user assignment, you must be owner of the ap
When an application requires assignment, user consent for that application isn't allowed. This is true even if users consent for that app would have otherwise been allowed. Be sure to [grant tenant-wide admin consent](../manage-apps/grant-admin-consent.md) to apps that require assignment.
-## Assign the app to users and groups
+## Assign the app to users and groups to restrict access
Once you've configured your app to enable user assignment, you can go ahead and assign the app to users and groups.
-1. Under **Manage**, select the **Users and groups** > **Add user/group** .
+1. Under **Manage**, select **Users and groups**, and then select **Add user/group**.
1. Select the **Users** selector.
- A list of users and security groups will be shown along with a textbox to search and locate a certain user or group. This screen allows you to select multiple users and groups in one go.
+    A list of users and security groups is shown, along with a text box to search for and locate a certain user or group. This screen allows you to select multiple users and groups in one go.
1. Once you're done selecting the users and groups, select **Select**. 1. (Optional) If you have defined app roles in your application, you can use the **Select role** option to assign the app role to the selected users and groups. 1. Select **Assign** to complete the assignments of the app to the users and groups. 1. Confirm that the users and groups you added are showing up in the updated **Users and groups** list.
+## Restrict access to an app (resource) by assigning other services (client apps)
+
+Follow the steps in this section to secure app-to-app authentication access for your tenant.
+
+1. Navigate to the service principal sign-in logs in your tenant to find services that are authenticating to access resources.
+1. Using the app ID, check whether a service principal exists in your tenant for both the resource and client apps whose access you wish to manage:
+ ```powershell
+ Get-MgServicePrincipal `
+ -Filter "AppId eq '$appId'"
+ ```
+1. If a service principal doesn't exist, create one by using the app ID:
+ ```powershell
+ New-MgServicePrincipal `
+ -AppId $appId
+ ```
+1. Explicitly assign client apps to resource apps (this functionality is available only through the API, not in the Azure AD portal):
+ ```powershell
+    $clientAppId = "[guid]"
+ $clientId = (Get-MgServicePrincipal -Filter "AppId eq '$clientAppId'").Id
+ New-MgServicePrincipalAppRoleAssignment `
+ -ServicePrincipalId $clientId `
+ -PrincipalId $clientId `
+ -ResourceId (Get-MgServicePrincipal -Filter "AppId eq '$appId'").Id `
+ -AppRoleId "00000000-0000-0000-0000-000000000000"
+ ```
+1. Require assignment for the resource application to restrict access to only the explicitly assigned users or services:
+ ```powershell
+    Update-MgServicePrincipal `
+        -ServicePrincipalId (Get-MgServicePrincipal -Filter "AppId eq '$appId'").Id `
+        -AppRoleAssignmentRequired:$true
+ ```
+ > [!NOTE]
+ > If you don't want tokens to be issued for an application or if you want to block an application from being accessed by users or services in your tenant, create a service principal for the application and [disable user sign-in](../manage-apps/disable-user-sign-in-portal.md) for it.
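For environments that call Microsoft Graph directly rather than through the PowerShell SDK, the assignment step above corresponds to a `POST` on the service principal's `appRoleAssignments` relationship. A hedged Python sketch that only constructs the request (the IDs are placeholders, and actually sending it requires an access token with appropriate Graph permissions such as `AppRoleAssignment.ReadWrite.All`):

```python
import json

# Hypothetical service principal object IDs for illustration only.
client_sp_id = "aaaaaaaa-0000-0000-0000-000000000001"    # client service principal
resource_sp_id = "bbbbbbbb-0000-0000-0000-000000000002"  # resource (protected app)

# Graph endpoint for creating an app role assignment on the client principal.
url = f"https://graph.microsoft.com/v1.0/servicePrincipals/{client_sp_id}/appRoleAssignments"
body = json.dumps({
    "principalId": client_sp_id,    # who is being assigned
    "resourceId": resource_sp_id,   # the app whose access is restricted
    # All-zero GUID = default access with no specific app role, as in the
    # New-MgServicePrincipalAppRoleAssignment call above.
    "appRoleId": "00000000-0000-0000-0000-000000000000",
})
print(url)
```

The payload mirrors the `-PrincipalId`, `-ResourceId`, and `-AppRoleId` parameters of the PowerShell cmdlet used earlier in this section.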
+ ## More information For more information about roles and security groups, see:
active-directory Msal Net Client Assertions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-client-assertions.md
Previously updated : 03/18/2021 Last updated : 03/29/2023
static string GetSignedClientAssertion(X509Certificate2 certificate, string tena
### Alternative method
-You also have the option of using [Microsoft.IdentityModel.JsonWebTokens](https://www.nuget.org/packages/Microsoft.IdentityModel.JsonWebTokens/) to create the assertion for you. The code will be a more elegant as shown in the example below:
+You also have the option of using [Microsoft.IdentityModel.JsonWebTokens](https://www.nuget.org/packages/Microsoft.IdentityModel.JsonWebTokens/) to create the assertion for you. The code will be more elegant as shown in the example below:
```csharp string GetSignedClientAssertionAlt(X509Certificate2 certificate)
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
After you've acquired the necessary authorization for your application, proceed
```HTTP POST /{tenant}/oauth2/v2.0/token HTTP/1.1 //Line breaks for clarity
-Host: login.microsoftonline.com
+Host: login.microsoftonline.com:443
Content-Type: application/x-www-form-urlencoded client_id=535fb089-9ff3-47b6-9bfb-4f1264799865
curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id=
```HTTP POST /{tenant}/oauth2/v2.0/token HTTP/1.1 // Line breaks for clarity
-Host: login.microsoftonline.com
+Host: login.microsoftonline.com:443
Content-Type: application/x-www-form-urlencoded scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
The parameters for the certificate-based request differ in only one way from the
```HTTP POST /{tenant}/oauth2/v2.0/token HTTP/1.1 // Line breaks for clarity
-Host: login.microsoftonline.com
+Host: login.microsoftonline.com:443
Content-Type: application/x-www-form-urlencoded scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
An error response (400 Bad Request) looks like this:
Now that you've acquired a token, use the token to make requests to the resource. When the token expires, repeat the request to the `/token` endpoint to acquire a fresh access token. ```HTTP
-GET /v1.0/users
-Host: https://graph.microsoft.com
+GET /v1.0/users HTTP/1.1
+Host: graph.microsoft.com:443
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q... ``` Try the following command in your terminal, ensuring to replace the token with your own.
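The form-encoded token request bodies shown earlier in this article can be assembled programmatically. A minimal Python sketch using the same client ID and scope as the examples above (the secret is a placeholder; never hard-code real secrets):

```python
from urllib.parse import urlencode

# Build the client credentials grant body for POST /{tenant}/oauth2/v2.0/token.
form = urlencode({
    "client_id": "535fb089-9ff3-47b6-9bfb-4f1264799865",
    "scope": "https://graph.microsoft.com/.default",
    "client_secret": "placeholder-secret",   # placeholder; load from a vault
    "grant_type": "client_credentials",
})
print(form)  # scope is percent-encoded, matching the HTTP examples above
```

`urlencode` produces the same `scope=https%3A%2F%2Fgraph.microsoft.com%2F.default` encoding seen in the raw HTTP requests.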
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
To uninstall old packages:
1. Log in as a local user with admin privileges. 1. Make sure there are no logged-in Azure AD users. Call the `who -u` command to see who is logged in. Then use `sudo kill <pid>` for all session processes that the previous command reported.
-1. Run `sudo apt remove --purge aadlogin` (Ubuntu/Debian), `sudo yum erase aadlogin` (RHEL or CentOS), or `sudo zypper remove aadlogin` (openSUSE or SLES).
+1. Run `sudo apt remove --purge aadlogin` (Ubuntu/Debian), `sudo yum remove aadlogin` (RHEL or CentOS), or `sudo zypper remove aadlogin` (openSUSE or SLES).
1. If the command fails, try the low-level tools with scripts disabled: 1. For Ubuntu/Debian, run `sudo dpkg --purge aadlogin`. If it's still failing because of the script, delete the `/var/lib/dpkg/info/aadlogin.prerm` file and try again.
- 1. For everything else, run `rpm -e ΓÇônoscripts aadogin`.
+    1. For everything else, run `rpm -e --noscripts aadlogin`.
1. Repeat steps 3-4 for package `aadlogin-selinux`. ### Extension installation errors
One solution is to remove `AllowGroups` and `DenyGroups` statements from *sshd_c
Another solution is to move `AllowGroups` and `DenyGroups` to a `match user` section in *sshd_config*. Make sure the match template excludes Azure AD users.
+### Permission denied when connecting from Azure Cloud Shell to a Red Hat/Oracle/CentOS 7.X Linux VM
+
+The OpenSSH server version in the target VM (7.4) is too old and is incompatible with OpenSSH client version 8.8. For more information, see [RSA SHA256 certificates no longer work](https://bugzilla.mindrot.org/show_bug.cgi?id=3351).
+
+Workaround:
+
+- Add the option `"PubkeyAcceptedKeyTypes= +ssh-rsa-cert-v01@openssh.com"` to the `az ssh vm` command.
+
+```azurecli-interactive
+az ssh vm -n myVM -g MyResourceGroup -- -A -o "PubkeyAcceptedKeyTypes= +ssh-rsa-cert-v01@openssh.com"
+```
+- Add the option `PubkeyAcceptedKeyTypes +ssh-rsa-cert-v01@openssh.com` to the client configuration file, `/home/<user>/.ssh/config`:
+
+
+```config
+Host *
+PubkeyAcceptedKeyTypes +ssh-rsa-cert-v01@openssh.com
+```
+ ## Next steps - [What is a device identity?](overview.md)
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
A few enterprise applications can't be deleted in the Azure portal and might blo
5. Run the following commands to set the tenant context. DO NOT skip these steps or you run the risk of deleting enterprise apps from the wrong tenant. `Clear-AzContext -Scope CurrentUser`+ `Connect-AzAccount -Tenant \<object id of the tenant you are attempting to delete\>`
+
`Get-AzContext` >[!WARNING]
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 03/23/2023 Last updated : 03/28/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on March 23rd, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on March 28th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Azure Information Protection Plan 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | | Azure Information Protection Premium P1 for Government | RIGHTSMANAGEMENT_CE_GOV | 78362de1-6942-4bb8-83a1-a32aa67e6e2c | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597) | | Business Apps (free) | SMB_APPS | 90d8b3f8-712e-4f7b-aa1e-62e7ae6cbe96 | DYN365BC_MS_INVOICING (39b5c996-467e-4e60-bd62-46066f572726)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) | Microsoft Invoicing (39b5c996-467e-4e60-bd62-46066f572726)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) |
+| Common Data Service for Apps File Capacity | CDS_FILE_CAPACITY | 631d5fb1-a668-4c2a-9427-8830665a742e | CDS_FILE_CAPACITY (dd12a3a8-caec-44f8-b4fb-2f1a864b51e3)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps File Capacity (dd12a3a8-caec-44f8-b4fb-2f1a864b51e3)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Common Data Service Database Capacity | CDS_DB_CAPACITY | e612d426-6bc3-4181-9658-91aa906b0ac0 | CDS_DB_CAPACITY (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Database Capacity (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Common Data Service Database Capacity for Government | CDS_DB_CAPACITY_GOV | eddf428b-da0e-4115-accf-b29eb0b83965 | CDS_DB_CAPACITY_GOV (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Common Data Service for Apps Database Capacity for Government (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)| | Common Data Service Log Capacity | CDS_LOG_CAPACITY | 448b063f-9cc6-42fc-a0e6-40e08724a395 | CDS_LOG_CAPACITY (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Log Capacity (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Dynamics 365 Customer Engagement Plan | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | D365_CSI_EMBED_CE (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>D365_ProjectOperations (69f07c66-bee4-4222-b051-195095efee5b)<br/>D365_ProjectOperationsCDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Forms_Pro_CE (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_FOR_PROJECT_OPERATIONS (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 Customer Service Insights for CE Plan (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>Dynamics 365 P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>Dynamics 365 Project Operations (69f07c66-bee4-4222-b051-195095efee5b)<br/>Dynamics 365 Project Operations CDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Microsoft Dynamics 365 Customer Voice for Customer Engagement Plan (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>Microsoft Social Engagement Enterprise (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Dynamics 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>Project for Project Operations (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
| Dynamics 365 Customer Insights vTrial | DYN365_CUSTOMER_INSIGHTS_VIRAL | 036c2481-aa8a-47cd-ab43-324f0c157c2d | CDS_CUSTOMER_INSIGHTS_TRIAL (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>DYN365_CUSTOMER_INSIGHTS_ENGAGEMENT_INSIGHTS_BASE_TRIAL (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>DYN365_CUSTOMER_INSIGHTS_VIRAL (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Forms_Pro_Customer_Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) | Common Data Service for Customer Insights Trial (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>Dynamics 365 Customer Insights Engagement Insights Viral (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>Dynamics 365 Customer Insights Viral Plan (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) |
| Dynamics 365 Customer Service Enterprise Viral Trial | Dynamics_365_Customer_Service_Enterprise_viral_trial | 1e615a51-59db-4807-9957-aa83c3657351 | CUSTOMER_VOICE_DYN365_VIRAL_TRIAL (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>DYN365_CS_MESSAGING_VIRAL_TRIAL (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>DYN365_CS_ENTERPRISE_VIRAL_TRIAL (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>DYNB365_CSI_VIRAL_TRIAL (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>DYN365_CS_VOICE_VIRAL_TRIAL (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_APPS_DYN365_VIRAL_TRIAL (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb) | Customer Voice for Dynamics 365 vTrial (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Dynamics 365 Customer Service Digital Messaging vTrial (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>Dynamics 365 Customer Service Enterprise vTrial (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>Dynamics 365 Customer Service Insights vTrial (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>Dynamics 365 Customer Service Voice vTrial (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb) |
+| Dynamics 365 for Customer Service Enterprise Attach to Qualifying Dynamics 365 Base Offer A | D365_CUSTOMER_SERVICE_ENT_ATTACH | eb18b715-ea9d-4290-9994-2ebf4b5042d2 | D365_CUSTOMER_SERVICE_ENT_ATTACH (61a2665f-1873-488c-9199-c3d0bc213fdf)<br/>Power_Pages_Internal_User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Customer Service Enterprise Attach (61a2665f-1873-488c-9199-c3d0bc213fdf)<br/>Power Pages Internal User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Customer Service Insights Trial | DYN365_AI_SERVICE_INSIGHTS | 61e6bd70-fbdb-4deb-82ea-912842f39431 | DYN365_AI_SERVICE_INSIGHTS (4ade5aa6-5959-4d2c-bf0a-f4c9e2cc00f2) | Dynamics 365 AI for Customer Service Trial (4ade5aa6-5959-4d2c-bf0a-f4c9e2cc00f2) |
| Dynamics 365 Customer Voice Trial | FORMS_PRO | bc946dac-7877-4271-b2f7-99d2db13cd2c | DYN365_CDS_FORMS_PRO (363430d1-e3f7-43bc-b07b-767b6bb95e4b)<br/>FORMS_PRO (17efdd9f-c22c-4ad8-b48e-3b1f3ee1dc9a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>FLOW_FORMS_PRO (57a0746c-87b8-4405-9397-df365a9db793) | Common Data Service (363430d1-e3f7-43bc-b07b-767b6bb95e4b)<br/>Dynamics 365 Customer Voice (17efdd9f-c22c-4ad8-b48e-3b1f3ee1dc9a)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Power Automate for Dynamics 365 Customer Voice (57a0746c-87b8-4405-9397-df365a9db793) |
| Dynamics 365 Customer Service Professional | DYN365_CUSTOMER_SERVICE_PRO | 1439b6e2-5d59-4873-8c59-d60e2a196e92 | DYN365_CUSTOMER_SERVICE_PRO (6929f657-b31b-4947-b4ce-5066c3214f54)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_CUSTOMER_SERVICE_PRO (c507b04c-a905-4940-ada6-918891e6d3ad)<br/>FLOW_CUSTOMER_SERVICE_PRO (0368fc9c-3721-437f-8b7d-3d0f888cdefc)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Customer Service Pro (6929f657-b31b-4947-b4ce-5066c3214f54)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Customer Service Pro (c507b04c-a905-4940-ada6-918891e6d3ad)<br/>Power Automate for Customer Service Pro (0368fc9c-3721-437f-8b7d-3d0f888cdefc)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
| Dynamics 365 Finance | DYN365_FINANCE | 55c9eb4e-c746-45b4-b255-9ab6b19d5c62 | DYN365_CDS_FINANCE (e95d7060-d4d9-400a-a2bd-a244bf0b609e)<br/>DYN365_REGULATORY_SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>D365_Finance (9f0e1b4e-9b33-4300-b451-b2c662cd4ff7)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba) | Common Data Service for Dynamics 365 Finance (e95d7060-d4d9-400a-a2bd-a244bf0b609e)<br/>Dynamics 365 for Finance and Operations, Enterprise edition - Regulatory Service (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics 365 for Finance (9f0e1b4e-9b33-4300-b451-b2c662cd4ff7)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba) |
| Dynamics 365 for Case Management Enterprise Edition | DYN365_ENTERPRISE_CASE_MANAGEMENT | d39fb075-21ae-42d0-af80-22a2599749e0 | DYN365_ENTERPRISE_CASE_MANAGEMENT (2822a3a1-9b8f-4432-8989-e11669a60dc8)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba) | Dynamics 365 for Case Management (2822a3a1-9b8f-4432-8989-e11669a60dc8)<br/>Microsoft Social Engagement (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba) |
| Dynamics 365 for Customer Service Enterprise Edition | DYN365_ENTERPRISE_CUSTOMER_SERVICE | 749742bf-0d37-4158-a120-33567104deeb | D365_CSI_EMBED_CSEnterprise (5b1e5982-0e88-47bb-a95e-ae6085eda612)<br/>DYN365_ENTERPRISE_CUSTOMER_SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>Forms_Pro_Service (67bf4812-f90b-4db9-97e7-c0bbbf7b2d09)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba) | Dynamics 365 Customer Service Insights for CS Enterprise (5b1e5982-0e88-47bb-a95e-ae6085eda612)<br/>Dynamics 365 for Customer Service (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>Microsoft Dynamics 365 Customer Voice for Customer Service Enterprise (67bf4812-f90b-4db9-97e7-c0bbbf7b2d09)<br/>Microsoft Social Engagement (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba) |
+| Dynamics 365 for Customer Service Chat | DYN365_CS_CHAT | 7d7af6c2-0be6-46df-84d1-c181b0272909 |DYN365_CS_CHAT_FPA (426ec19c-d5b1-4548-b894-6fe75028c30d)<br/>DYN365_CS_CHAT (f69129db-6dc1-4107-855e-0aaebbcd9dd4)<br/>POWER_VIRTUAL_AGENTS_D365_CS_CHAT (19e4c3a8-3ebe-455f-a294-4f3479873ae3)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Customer Service Chat Application Integration (426ec19c-d5b1-4548-b894-6fe75028c30d)<br/>Dynamics 365 for Customer Service Chat (f69129db-6dc1-4107-855e-0aaebbcd9dd4)<br/>Power Virtual Agents for Chat (19e4c3a8-3ebe-455f-a294-4f3479873ae3)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 for Field Service Attach to Qualifying Dynamics 365 Base Offer | D365_FIELD_SERVICE_ATTACH | a36cdaa2-a806-4b6e-9ae0-28dbd993c20e | D365_FIELD_SERVICE_ATTACH (55c9148b-d5f0-4101-b5a0-b2727cfc0916)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Field Service Attach (55c9148b-d5f0-4101-b5a0-b2727cfc0916)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 for Field Service Enterprise Edition | DYN365_ENTERPRISE_FIELD_SERVICE | c7d15985-e746-4f01-b113-20b575898250 | DYN365_ENTERPRISE_FIELD_SERVICE (8c66ef8a-177f-4c0d-853c-d4f219331d09)<br/>Forms_Pro_FS (9c439259-63b0-46cc-a258-72be4313a42d)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba) | Dynamics 365 for Field Service (8c66ef8a-177f-4c0d-853c-d4f219331d09)<br/>Microsoft Dynamics 365 Customer Voice for Field Service (9c439259-63b0-46cc-a258-72be4313a42d)<br/>Microsoft Social Engagement (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba) |
| Dynamics 365 for Financials Business Edition | DYN365_FINANCIALS_BUSINESS_SKU | cc13a803-544e-4464-b4e4-6d6169a138fa | DYN365_FINANCIALS_BUSINESS (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR FINANCIALS (920656a2-7dd8-4c83-97b6-a356414dbd36) |
+| Dynamics 365 Hybrid Connector | CRM_HYBRIDCONNECTOR | de176c31-616d-4eae-829a-718918d7ec23 | CRM_HYBRIDCONNECTOR (0210d5c8-49d2-4dd1-a01b-a91c7c14e0bf)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | CRM Hybrid Connector (0210d5c8-49d2-4dd1-a01b-a91c7c14e0bf)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 for Marketing Additional Application | DYN365_MARKETING_APPLICATION_ADDON | 99c5688b-6c75-4496-876f-07f0fbd69add | DYN365_MARKETING_APPLICATION_ADDON (51cf0638-4861-40c0-8b20-1161ab2f80be)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Marketing Additional Application (51cf0638-4861-40c0-8b20-1161ab2f80be)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 for Marketing Additional Non-Prod Application | DYN365_MARKETING_SANDBOX_APPLICATION_ADDON | c393e9bd-2335-4b46-8b88-9e2a86a85ec1 | DYN365_MARKETING_SANDBOX_APPLICATION_ADDON (1599de10-5250-4c95-acf2-491f74edce48) | Dynamics 365 Marketing Sandbox Application AddOn (1599de10-5250-4c95-acf2-491f74edce48) |
+| Dynamics 365 for Marketing Addnl Contacts Tier 5 | DYN365_MARKETING_CONTACT_ADDON_T5 | d8eec316-778c-4f14-a7d1-a0aca433b4e7 | DYN365_MARKETING_50K_CONTACT_ADDON (e626a4ec-1ba2-409e-bf75-9bc0bc30cca7)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Marketing 50K Addnl Contacts (e626a4ec-1ba2-409e-bf75-9bc0bc30cca7)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 for Marketing Attach | DYN365_MARKETING_APP_ATTACH | 85430fb9-02e8-48be-9d7e-328beb41fa29 | DYN365_MARKETING_APP (a3a4fa10-5092-401a-af30-0462a95a7ac8)<br/>Forms_Pro_Marketing_App (22b657cf-0a9e-467b-8a91-5e31f21bc570)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Marketing (a3a4fa10-5092-401a-af30-0462a95a7ac8)<br/>Microsoft Dynamics 365 Customer Voice for Marketing Application (22b657cf-0a9e-467b-8a91-5e31f21bc570)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 for Marketing Business Edition | DYN365_BUSINESS_MARKETING | 238e2f8d-e429-4035-94db-6926be4ffe7b | DYN365_BUSINESS_Marketing (393a0c96-9ba1-4af0-8975-fa2f853a25ac)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Marketing (393a0c96-9ba1-4af0-8975-fa2f853a25ac)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 for Marketing USL | D365_MARKETING_USER | 4b32a493-9a67-4649-8eb9-9fc5a5f75c12 | DYN365_MARKETING_MSE_USER (2824c69a-1ac5-4397-8592-eae51cb8b581)<br/>DYN365_MARKETING_USER (5d7a6abc-eebd-46ab-96e1-e4a2f54a2248)<br/>Forms_Pro_Marketing (76366ba0-d230-47aa-8087-b6d55dae454f)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Dynamics 365 for Marketing MSE User (2824c69a-1ac5-4397-8592-eae51cb8b581)<br/>Dynamics 365 for Marketing USL (5d7a6abc-eebd-46ab-96e1-e4a2f54a2248)<br/>Microsoft Dynamics 365 Customer Voice for Marketing (76366ba0-d230-47aa-8087-b6d55dae454f)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) |
| Dynamics 365 for Sales and Customer Service Enterprise Edition | DYN365_ENTERPRISE_SALES_CUSTOMERSERVICE | 8edc2cf8-6438-4fa9-b6e3-aa1660c640cc | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
| Dynamics 365 for Sales Enterprise Edition | DYN365_ENTERPRISE_SALES | 1e1a282c-9c54-43a2-9310-98ef728faace | DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
+| Dynamics 365 Sales Premium | DYN365_SALES_PREMIUM | 2edaa1dc-966d-4475-93d6-8ee8dfd96877 | DYN365_SALES_INSIGHTS (fedc185f-0711-4cc0-80ed-0a92da1a8384)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Microsoft_Viva_Sales_PowerAutomate (a933a62f-c3fb-48e5-a0b7-ac92b94b4420)<br/>Microsoft_Viva_Sales_PremiumTrial (8ba1ff15-7bf6-4620-b65c-ecedb6942766)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power_Pages_Internal_User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>Forms_Pro_SalesEnt (8839ef0e-91f1-4085-b485-62e06e7c7987)<br/>DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7) | Dynamics 365 AI for Sales (Embedded) (fedc185f-0711-4cc0-80ed-0a92da1a8384)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Microsoft Viva Sales Premium with Power Automate (a933a62f-c3fb-48e5-a0b7-ac92b94b4420)<br/>Microsoft Viva Sales Premium & Trial (8ba1ff15-7bf6-4620-b65c-ecedb6942766)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Pages Internal User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>Microsoft Dynamics 365 Customer Voice for Sales Enterprise (8839ef0e-91f1-4085-b485-62e06e7c7987)<br/>Dynamics 365 for Sales (2da8e897-7791-486b-b08f-cc63c8129df7) |
| Dynamics 365 for Sales Professional | D365_SALES_PRO | be9f9771-1c64-4618-9907-244325141096 | DYN365_SALES_PRO (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_SALES_PRO (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>FLOW_SALES_PRO (f944d685-f762-4371-806d-a1f48e5bea13)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Sales Professional (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Sales Pro (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>Power Automate for Sales Pro (f944d685-f762-4371-806d-a1f48e5bea13)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
| Dynamics 365 for Sales Professional Trial | D365_SALES_PRO_IW | 9c7bff7a-3715-4da7-88d3-07f57f8d0fb6 | D365_SALES_PRO_IW (73f205fc-6b15-47a5-967e-9e64fdf72d0a)<br/>D365_SALES_PRO_IW_Trial (db39a47e-1f4f-462b-bf5b-2ec471fb7b88) | Dynamics 365 for Sales Professional Trial (73f205fc-6b15-47a5-967e-9e64fdf72d0a)<br/>Dynamics 365 for Sales Professional Trial (db39a47e-1f4f-462b-bf5b-2ec471fb7b88) |
| Dynamics 365 for Sales Professional Attach to Qualifying Dynamics 365 Base Offer | D365_SALES_PRO_ATTACH | 245e6bf9-411e-481e-8611-5c08595e2988 | D365_SALES_PRO_ATTACH (065f3c64-0649-4ec7-9f47-ef5cf134c751)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Sales Pro Attach (065f3c64-0649-4ec7-9f47-ef5cf134c751)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Microsoft Intune SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| Microsoft PowerApps for Developer | POWERAPPS_DEV | 5b631642-bd26-49fe-bd20-1daaa972ef80 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>DYN365_CDS_DEV_VIRAL (d8c638e2-9508-40e3-9877-feb87603837b)<br/>FLOW_DEV_VIRAL (c7ce3f26-564d-4d3a-878d-d8ab868c85fe)<br/>POWERAPPS_DEV_VIRAL (a2729df7-25f8-4e63-984b-8a8484121554) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Common Data Service (d8c638e2-9508-40e3-9877-feb87603837b)<br/>Flow for Developer (c7ce3f26-564d-4d3a-878d-d8ab868c85fe)<br/>PowerApps for Developer (a2729df7-25f8-4e63-984b-8a8484121554) |
| Microsoft Power Apps Plan 2 (Qualified Offer) | POWERFLOW_P2 | ddfae3e3-fcb2-4174-8ebd-3023cb213c8b | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_P2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps (Plan 2) (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) |
+| Microsoft Relationship Sales solution | DYN365_ENTERPRISE_RELATIONSHIP_SALES | 4f05b1a3-a978-462c-b93f-781c6bee998f | Forms_Pro_Relationship_Sales (507172c0-6001-4f4f-80e7-f350507af3e5)<br/>DYN365_ENTERPRISE_RELATIONSHIP_SALES (56e3d4ca-2e31-4c3f-8d57-89c1d363503b)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft_Viva_Sales_PremiumTrial (8ba1ff15-7bf6-4620-b65c-ecedb6942766)<br/>Microsoft_Viva_Sales_PowerAutomate (a933a62f-c3fb-48e5-a0b7-ac92b94b4420)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba) | Microsoft Dynamics 365 Customer Voice for Relationship Sales (507172c0-6001-4f4f-80e7-f350507af3e5)<br/>Microsoft Relationship Sales solution (56e3d4ca-2e31-4c3f-8d57-89c1d363503b)<br/>Retired - Microsoft Social Engagement (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Viva Sales Premium & Trial (8ba1ff15-7bf6-4620-b65c-ecedb6942766)<br/>Microsoft Viva Sales Premium with Power Automate (a933a62f-c3fb-48e5-a0b7-ac92b94b4420)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba) |
| Microsoft Stream | STREAM | 1f2f344a-700d-42c9-9427-5cea1d5d7ba6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFTSTREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT STREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) |
| Microsoft Stream Plan 2 | STREAM_P2 | ec156933-b85b-4c50-84ec-c9e5603709ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_P2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Plan 2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) |
| Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) |
| Power Apps Plan 1 for Government | POWERAPPS_P1_GOV | eca22b68-b31f-4e9c-a20c-4d40287bc5dd | DYN365_CDS_P1_GOV (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>FLOW_P1_GOV (774da41c-a8b3-47c1-8322-b9c1ab68be9f)<br/>POWERAPPS_P1_GOV (5ce719f1-169f-4021-8a64-7d24dcaec15f) | Common Data Service for Government (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Automate (Plan 1) for Government (774da41c-a8b3-47c1-8322-b9c1ab68be9f)<br/>PowerApps Plan 1 for Government (5ce719f1-169f-4021-8a64-7d24dcaec15f) |
| Power Apps Portals login capacity add-on Tier 2 (10 unit min) | POWERAPPS_PORTALS_LOGIN_T2 | 57f3babd-73ce-40de-bcb2-dadbfbfff9f7 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CDS_POWERAPPS_PORTALS_LOGIN (32ad3a4e-2272-43b4-88d0-80d284258208)<br/>POWERAPPS_PORTALS_LOGIN (084747ad-b095-4a57-b41f-061d84d69f6f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Common Data Service Power Apps Portals Login Capacity (32ad3a4e-2272-43b4-88d0-80d284258208)<br/>Power Apps Portals Login Capacity Add-On (084747ad-b095-4a57-b41f-061d84d69f6f) |
| Power Apps Portals login capacity add-on Tier 2 (10 unit min) for Government | POWERAPPS_PORTALS_LOGIN_T2_GCC | 26c903d5-d385-4cb1-b650-8d81a643b3c4 | CDS_POWERAPPS_PORTALS_LOGIN_GCC (0f7b9a29-7990-44ff-9d05-a76be778f410)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PORTALS_LOGIN_GCC (bea6aef1-f52d-4cce-ae09-bed96c4b1811) | Common Data Service Power Apps Portals Login Capacity for GCC (0f7b9a29-7990-44ff-9d05-a76be778f410)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps Portals Login Capacity Add-On for Government (bea6aef1-f52d-4cce-ae09-bed96c4b1811) |
+| Power Apps Portals login capacity add-on Tier 3 (50 unit min) | POWERAPPS_PORTALS_LOGIN_T3 | 927d8402-8d3b-40e8-b779-34e859f7b497 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CDS_POWERAPPS_PORTALS_LOGIN (32ad3a4e-2272-43b4-88d0-80d284258208)<br/>POWERAPPS_PORTALS_LOGIN (084747ad-b095-4a57-b41f-061d84d69f6f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Common Data Service Power Apps Portals Login Capacity (32ad3a4e-2272-43b4-88d0-80d284258208)<br/>Power Apps Portals Login Capacity Add-On (084747ad-b095-4a57-b41f-061d84d69f6f) |
+| Power Apps Portals page view capacity add-on | POWERAPPS_PORTALS_PAGEVIEW | a0de5e3a-2500-4a19-b8f4-ec1c64692d22 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CDS_POWERAPPS_PORTALS_PAGEVIEW (72c30473-7845-460a-9feb-b58f216e8694)<br/>POWERAPPS_PORTALS_PAGEVIEW (1c5a559a-ec06-4f76-be5b-6a315418495f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CDS PowerApps Portals page view capacity add-on (72c30473-7845-460a-9feb-b58f216e8694)<br/>Power Apps Portals Page View Capacity Add-On (1c5a559a-ec06-4f76-be5b-6a315418495f) |
| Power Apps Portals page view capacity add-on for Government | POWERAPPS_PORTALS_PAGEVIEW_GCC | 15a64d3e-5b99-4c4b-ae8f-aa6da264bfe7 | CDS_POWERAPPS_PORTALS_PAGEVIEW_GCC (352257a9-db78-4217-a29d-8b8d4705b014)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PORTALS_PAGEVIEW_GCC (483d5646-7724-46ac-ad71-c78b7f099d8d) | CDS PowerApps Portals page view capacity add-on for GCC (352257a9-db78-4217-a29d-8b8d4705b014)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps Portals Page View Capacity Add-On for Government (483d5646-7724-46ac-ad71-c78b7f099d8d) |
| Power Automate per flow plan | FLOW_BUSINESS_PROCESS | b3a42176-0a8c-4c3f-ba4e-f2b37fe5be6b | CDS_Flow_Business_Process (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_BUSINESS_PROCESS (7e017b61-a6e0-4bdc-861a-932846591f6e) | Common data service for Flow per business process plan (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per business process plan (7e017b61-a6e0-4bdc-861a-932846591f6e) |
| Power Automate per user plan | FLOW_PER_USER | 4a51bf65-409c-4a91-b845-1121b571cc9d | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_PER_USER (c5002c70-f725-4367-b409-f0eff4fee6c0) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per user plan (c5002c70-f725-4367-b409-f0eff4fee6c0) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Power BI Pro Dept | POWER_BI_PRO_DEPT | 3a6a908c-09c5-406a-8170-8ebb63c42882 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro for Faculty | POWER_BI_PRO_FACULTY | de5f128b-46d7-4cfc-b915-a89ba060ea56 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro for GCC | POWERBI_PRO_GOV | f0612879-44ea-47fb-baf0-3d76d9235576 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76) |
+| Power Pages vTrial for Makers | Power_Pages_vTrial_for_Makers | 3f9f06f5-3c31-472c-985f-62d9c10ec167 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>POWER_PAGES_VTRIAL (6817d093-2d30-4249-8bd6-774f01efa78c) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Common Data Service (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Power Pages vTrial for Makers (6817d093-2d30-4249-8bd6-774f01efa78c) |
| Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) |
+| Power Virtual Agent User License | VIRTUAL_AGENT_USL | 4b74a65c-8b4a-4fc8-9f6b-5177ed11ddfa | CDS_VIRTUAL_AGENT_USL (cb867b3c-7f38-4d0d-99ce-e29cd69812c8)<br/>FLOW_VIRTUAL_AGENT_USL (82f141c9-2e87-4f43-8cb2-12d2701dc6b3)<br/>VIRTUAL_AGENT_USL (1263586c-59a4-4ad0-85e1-d50bc7149501) | Common Data Service (cb867b3c-7f38-4d0d-99ce-e29cd69812c8)<br/>Power Automate for Virtual Agent (82f141c9-2e87-4f43-8cb2-12d2701dc6b3)<br/>Virtual Agent (1263586c-59a4-4ad0-85e1-d50bc7149501) |
| Power Virtual Agents Viral Trial | CCIBOTS_PRIVPREV_VIRAL | 606b54a9-78d8-4298-ad8b-df6ef4481c80 | DYN365_CDS_CCI_BOTS (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>FLOW_CCI_BOTS (5d798708-6473-48ad-9776-3acc301c40af) | Common Data Service for CCI Bots (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Flow for CCI Bots (5d798708-6473-48ad-9776-3acc301c40af) | | Privacy Management – risk | PRIVACY_MANAGEMENT_RISK | e42bc969-759a-4820-9283-6b73085b68e6 | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) | | Privacy Management - risk for EDU | PRIVACY_MANAGEMENT_RISK_EDU | dcdbaae7-d8c9-40cb-8bb1-62737b9e5a86 | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) |
| Privacy Management - risk_USGOV_GCCHIGH | PRIVACY_MANAGEMENT_RISK_USGOV_GCCHIGH | 787d7e75-29ca-4b90-a3a9-0b780b35367c | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) | | Privacy Management - subject rights request (1) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2 | d9020d1c-94ef-495a-b6de-818cbbcaa3b8 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (MIP_S_EXCHANGE_CO)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (PRIVACY_MANGEMENT_DSR_EXCHANGE_1)<br/>Privacy Management - Subject Rights Request (1) (PRIVACY_MANGEMENT_DSR_1) | | Privacy Management - subject rights request (1) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_EDU_V2 | 475e3e81-3c75-4e07-95b6-2fed374536c8 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) |
-| Privacy Management - subject rights request (1) GCC | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_GCC | 017fb6f8-00dd-4025-be2b-4eff067cae72 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) |
+| Privacy Management - subject rights request (1) GCC | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_GCC | 017fb6f8-00dd-4025-be2b-4eff067cae72 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) |
| Privacy Management - subject rights request (1) USGOV_DOD | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_USGOV_DOD | d3c841f3-ea93-4da2-8040-6f2348d20954 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | | Privacy Management - subject rights request (1) USGOV_GCCHIGH | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_USGOV_GCCHIGH | 706d2425-6170-4818-ba08-2ad8f1d2d078 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | | Privacy Management - subject rights request (10) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2 | 78ea43ac-9e5d-474f-8537-4abb82dafe27 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) |
active-directory How To Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md
Previously updated : 03/24/2023 Last updated : 03/28/2023
When users authenticate into your corporate intranet or web-based applications,
The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding appears in your sign-in pages. You can customize this default experience with a custom background image and/or color, favicon, layout, header, and footer. You can also upload a custom CSS.
+The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ > [!NOTE]
-> Instructions for the legacy company branding customization process can be found in the **[Customize branding](customize-branding.md)** article.<br><br>The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
+> Instructions for the legacy company branding customization process can be found in the **[Customize branding](customize-branding.md)** article. Instructions for how to manage the **'Stay signed in prompt?'** can be found in the **[Manage the 'Stay signed in?' prompt](how-to-manage-stay-signed-in-prompt.md)** article.
## License requirements
In the following examples replace the contoso.com with your own tenant name, or
- Self-service password reset `https://passwordreset.microsoftonline.com/?whr=contoso.com` > [!NOTE]
-> The settings to manage the 'Stay signed in?' prompt can now be found in the User settings area of Azure AD. Go to **Azure AD** > **Users** > **User settings**.
-<br><br>
-For more information on the 'Stay signed in?' prompt, see [How to manage user profile information](how-to-manage-user-profile-info.md#learn-about-the-stay-signed-in-prompt).
+> To manage the settings of the 'Stay signed in?' prompt, go to **Azure AD** > **Users** > **User settings**.
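The home-realm (`whr`) URL pattern shown in the examples above can be sketched as a small helper. The function name is illustrative only, not part of any Microsoft library:

```python
def tenant_branded_url(base: str, tenant_domain: str) -> str:
    """Append the whr home-realm hint so users land on the tenant's branded sign-in page."""
    return f"{base}/?whr={tenant_domain}"

# Self-service password reset link for the contoso.com tenant:
print(tenant_branded_url("https://passwordreset.microsoftonline.com", "contoso.com"))
# → https://passwordreset.microsoftonline.com/?whr=contoso.com
```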
## How to navigate the company branding process
Azure AD supports right-to-left functionality for languages such as Arabic and H
- [View the CSS template reference guide](reference-company-branding-css-template.md). - [Learn more about default user permissions in Azure AD](../fundamentals/users-default-permissions.md)-- [Manage the 'stay signed in' prompt](how-to-manage-user-profile-info.md#learn-about-the-stay-signed-in-prompt)
+- [Manage the 'stay signed in' prompt](how-to-manage-stay-signed-in-prompt.md)
active-directory How To Manage Stay Signed In Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-stay-signed-in-prompt.md
+
+ Title: Manage the 'Stay signed in' prompt - Azure AD - Microsoft Entra
+description: Instructions about how to set up the 'Stay signed in' prompt for Azure AD users.
++++++++ Last updated : 03/28/2023+++++
+# Manage the 'Stay signed in?' prompt
+
+The **Stay signed in?** prompt appears after a user successfully signs in. This process is known as **Keep me signed in** (KMSI) and was previously part of the [customize branding](how-to-customize-branding.md) process.
+
+This article covers how the KMSI process works, how to enable it for customers, and how to troubleshoot KMSI issues.
+
+## How does it work?
+
+If a user answers **Yes** to the **'Stay signed in?'** prompt, a persistent authentication cookie is issued. The cookie must be stored in session for KMSI to work. KMSI won't work with locally stored cookies. If KMSI isn't enabled, a non-persistent cookie is issued and lasts for 24 hours or until the browser is closed.
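The cookie behavior described above can be modeled with a minimal sketch. This is illustrative Python only; Azure AD's actual implementation isn't exposed as an API, and the field names are invented for the example:

```python
from datetime import datetime, timedelta

def issue_auth_cookie(kmsi_enabled: bool, stay_signed_in: bool, now: datetime) -> dict:
    """Model the described behavior: persistent cookie on 'Yes', otherwise a 24-hour session cookie."""
    if kmsi_enabled and stay_signed_in:
        # Persistent authentication cookie issued when the user answers Yes.
        return {"persistent": True, "expires": None}
    # Non-persistent cookie: lasts 24 hours or until the browser is closed.
    return {"persistent": False, "expires": now + timedelta(hours=24)}
```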
+
+The following diagram shows the user sign-in flow for a managed tenant and a federated tenant using the KMSI prompt. This flow contains smart logic so that the **Stay signed in?** option won't be displayed if the machine learning system detects a high-risk sign-in or a sign-in from a shared device. For federated tenants, the prompt will show after the user successfully authenticates with the federated identity service.
+
+Some features of SharePoint Online and Office 2010 depend on users being able to choose to remain signed in. If you clear the **Show option to remain signed in** setting, your users may see unexpected prompts during the sign-in process.
+
+![Diagram showing the user sign-in flow for a managed vs. federated tenant.](media/how-to-manage-stay-signed-in-prompt/kmsi-workflow.png)
+
+## License and role requirements
+
+Configuring the 'keep me signed in' (KMSI) option requires one of the following licenses:
+
+- Azure AD Premium 1
+- Azure AD Premium 2
+- Office 365 (for Office apps)
+- Microsoft 365
+
+You must have the **Global Administrator** role to enable the 'Stay signed in?' prompt.
+
+## Enable the 'Stay signed in?' prompt
+
+The KMSI setting is managed in the **User settings** of Azure Active Directory (Azure AD).
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Go to **Azure Active Directory** > **Users** > **User settings**.
+1. Set the **Show keep user signed in** toggle to **Yes**.
+
+ ![Screenshot of the Show keep user signed in prompt.](media/how-to-manage-stay-signed-in-prompt/show-keep-user-signed-in.png)
+
+## Troubleshoot 'Stay signed in?' issues
+
+If a user doesn't act on the **Stay signed in?** prompt but abandons the sign-in attempt, a sign-in log entry appears in the Azure AD **Sign-ins** page. The prompt the user sees is called an "interrupt."
+
+![Sample 'Stay signed in?' prompt](media/how-to-manage-stay-signed-in-prompt/kmsi-stay-signed-in-prompt.png)
+
+Details about the sign-in error are found in the **Sign-in logs** in Azure AD. Select the impacted user from the list and locate the following details in the **Basic info** section.
+
+* **Sign in error code**: 50140
+* **Failure reason**: This error occurred due to "Keep me signed in" interrupt when the user was signing in.
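If you export sign-in log entries (for example, as JSON), the interrupted attempts can be picked out by error code. A hedged sketch; the entry field names below are assumptions for illustration, not a documented log schema:

```python
def kmsi_interrupts(entries):
    """Return sign-in entries abandoned at the 'Stay signed in?' interrupt (error code 50140)."""
    return [entry for entry in entries if entry.get("errorCode") == 50140]
```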
+
+You can stop users from seeing the interrupt by setting the **Show option to remain signed in** setting to **No** in the user settings. This setting disables the KMSI prompt for all users in your Azure AD directory.
+
+You can also use the [persistent browser session controls in Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to prevent users from seeing the KMSI prompt. This option allows you to disable the KMSI prompt for a select group of users (such as the global administrators) without affecting sign-in behavior for everyone else in the directory.
+
+To ensure that the KMSI prompt is shown only when it can benefit the user, the KMSI prompt is intentionally not shown in the following scenarios:
+
+* User is signed in via seamless SSO and integrated Windows authentication (IWA)
+* User is signed in via Active Directory Federation Services and IWA
+* User is a guest in the tenant
+* User's risk score is high
+* Sign-in occurs during user or admin consent flow
+* Persistent browser session control is configured in a conditional access policy
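The suppression scenarios above amount to a simple guard. A hedged sketch of that decision; the flag names are invented for illustration:

```python
def should_show_kmsi_prompt(ctx: dict) -> bool:
    """Return False for any scenario in which the KMSI prompt is intentionally suppressed."""
    suppressed = (
        ctx.get("seamless_sso_with_iwa")     # seamless SSO + integrated Windows authentication
        or ctx.get("adfs_with_iwa")          # AD FS + IWA
        or ctx.get("is_guest")               # guest user in the tenant
        or ctx.get("high_risk")              # high user risk score
        or ctx.get("consent_flow")           # user or admin consent flow
        or ctx.get("persistent_session_ca")  # persistent browser session in Conditional Access
    )
    return not suppressed
```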
+
+## Next steps
+
+- [Learn how to customize branding for sign-in experiences](how-to-customize-branding.md)
+- [Manage user settings in Azure AD](how-to-manage-user-profile-info.md)
active-directory How To Manage User Profile Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-user-profile-info.md
Previously updated : 03/23/2023 Last updated : 03/28/2023
When new users are created, only some details are added to their user profile. I
1. There are two ways to edit user profile details. Either select **Edit properties** from the top of the page or select **Properties**.
- ![Screenshot of the overview page for a selected user, with the edit options highlighted.](media/active-directory-users-profile-azure-portal/user-profile-overview.png)
+ ![Screenshot of the overview page for a selected user, with the edit options highlighted.](media/how-to-manage-user-profile-info/user-profile-overview.png)
1. After making any changes, select the **Save** button.
If you selected the **Edit properties option**:
- To edit properties based on the category, select a category from the top of the page. - Select the **Save** button at the bottom of the page to save any changes.
- ![Screenshot a selected user's details, with the detail categories and save button highlighted.](media/active-directory-users-profile-azure-portal/user-profile-properties-tabbed-view.png)
+ ![Screenshot a selected user's details, with the detail categories and save button highlighted.](media/how-to-manage-user-profile-info/user-profile-properties-tabbed-view.png)
If you selected the **Properties tab option**: - The full list of properties appears for you to review. - To edit a property, select the pencil icon next to the category heading. - Select the **Save** button at the bottom of the page to save any changes.
- ![Screenshot the Properties tab, with the edit options highlighted.](media/active-directory-users-profile-azure-portal/user-profile-properties-single-page-view.png)
+ ![Screenshot the Properties tab, with the edit options highlighted.](media/how-to-manage-user-profile-info/user-profile-properties-single-page-view.png)
### Profile categories There are six categories of profile details you may be able to edit.
There are six categories of profile details you may be able to edit.
- **On-premises:** Accounts synced from Windows Server Active Directory include other values not applicable to Azure AD accounts.
- >[!Note]
- >You must use Windows Server Active Directory to update the identity, contact info, or job info for users whose source of authority is Windows Server Active Directory. After you complete your update, you must wait for the next synchronization cycle to complete before you'll see the changes.
+> [!Note]
+> You must use Windows Server Active Directory to update the identity, contact info, or job info for users whose source of authority is Windows Server Active Directory. After you complete your update, you must wait for the next synchronization cycle to complete before you'll see the changes.
### Add or edit the profile picture On the user's overview page, select the camera icon in the lower-right corner of the user's thumbnail. If no image has been added, the user's initials appear here. This picture appears in Azure Active Directory and on the user's personal pages, such as the myapps.microsoft.com page. All your changes are saved for the user.
->[!Note]
+> [!Note]
> If you're having issues updating a user's profile picture, please ensure that your Office 365 Exchange Online Enterprise App is Enabled for users to sign in. ## Manage settings for all users
-In the **User settings** area of Azure AD, you can adjust several settings that affect all users, such as restricting access to the Azure AD administration portal, how external collaboration is managed, and providing users the option to connect their LinkedIn account. Some settings are managed in a separate area of Azure AD and linked from this page.
+In the **User settings** area of Azure AD, you can adjust several settings that affect all users. Some settings are managed in a separate area of Azure AD and linked from this page. These settings require the Global Administrator role.
-Go to **Azure AD** > **User settings**.
+Go to **Azure AD** > **User settings**.
-### Learn about the 'Stay signed in?' prompt
+![Screenshot of the Azure AD user settings options.](media/how-to-manage-user-profile-info/user-settings-options.png)
-The **Stay signed in?** prompt appears after a user successfully signs in. This process is known as **Keep me signed in** (KMSI). If a user answers **Yes** to this prompt, a persistent authentication cookie is issued. The cookie must be stored in session for KMSI to work. KMSI won't work with locally stored cookies. If KMSI isn't enabled, a non-persistent cookie is issued and lasts for 24 hours or until the browser is closed.
+The following settings can be managed from Azure AD **User settings**.
-The following diagram shows the user sign-in flow for a managed tenant and federated tenant using the KMSI in prompt. This flow contains smart logic so that the **Stay signed in?** option won't be displayed if the machine learning system detects a high-risk sign-in or a sign-in from a shared device. For federated tenants, the prompt will show after the user successfully authenticates with the federated identity service.
-
-The KMSI setting is available in **User settings**. Some features of SharePoint Online and Office 2010 depend on users being able to choose to remain signed in. If you uncheck the **Show option to remain signed in** option, your users may see other unexpected prompts during the sign-in process.
-
-![Diagram showing the user sign-in flow for a managed vs. federated tenant](media/customize-branding/kmsi-workflow.png)
-
-Configuring the 'keep me signed in' (KMSI) option requires one of the following licenses:
--- Azure AD Premium 1-- Azure AD Premium 2-- Office 365 (for Office apps)-- Microsoft 365-
-#### Troubleshoot 'Stay signed in?' issues
-
-If a user doesn't act on the **Stay signed in?** prompt but abandons the sign-in attempt, a sign-in log entry appears in the Azure AD **Sign-ins** page. The prompt the user sees is called an "interrupt."
-
-![Sample 'Stay signed in?' prompt](media/customize-branding/kmsi-stay-signed-in-prompt.png)
-
-Details about the sign-in error are found in the **Sign-in logs** in Azure AD. Select the impacted user from the list and locate the following error code details in the **Basic info** section.
-
-* **Sign in error code**: 50140
-* **Failure reason**: This error occurred due to "Keep me signed in" interrupt when the user was signing in.
-
-You can stop users from seeing the interrupt by setting the **Show option to remain signed in** setting to **No** in the user settings. This setting disables the KMSI prompt for all users in your Azure AD directory.
-
-You also can use the [persistent browser session controls in Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to prevent users from seeing the KMSI prompt. This option allows you to disable the KMSI prompt for a select group of users (such as the global administrators) without affecting sign-in behavior for everyone else in the directory.
-
-To ensure that the KMSI prompt is shown only when it can benefit the user, the KMSI prompt is intentionally not shown in the following scenarios:
-
-* User is signed in via seamless SSO and integrated Windows authentication (IWA)
-* User is signed in via Active Directory Federation Services and IWA
-* User is a guest in the tenant
-* User's risk score is high
-* Sign-in occurs during user or admin consent flow
-* Persistent browser session control is configured in a conditional access policy
+- Manage how end users launch and view their applications
+- Allow users to register their own applications
+- [Prevent non-admins from creating their own tenants](users-default-permissions.md#restrict-member-users-default-permissions)
+- Restrict access to the Azure AD administration portal
+- [Allow users to connect their work or school account with LinkedIn](../enterprise-users/linkedin-user-consent.md)
+- [Enable the "Stay signed in?" prompt](how-to-manage-stay-signed-in-prompt.md)
+- Manage external collaboration settings
+ - [Guest user access](../enterprise-users/users-restrict-guest-permissions.md)
+ - [Guest invite setting](../external-identities/external-collaboration-settings-configure.md)
+ - [External user leave settings](../external-identities/self-service-sign-up-user-flow.md#enable-self-service-sign-up-for-your-tenant)
+ - Collaboration restrictions
+- Manage user feature settings
## Next steps+ - [Add or delete users](add-users-azure-active-directory.md) - [Assign roles to users](active-directory-users-assign-role-azure-portal.md)
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md
Previously updated : 11/24/2022 Last updated : 03/29/2023
To enable self-service application access to an application, follow the steps be
1. Select the application from the list. If you don't see the application, start typing its name in the search box. Or use the filter controls to select the application type, status, or visibility, and then select **Apply**. 1. In the left navigation menu, select **Self-service**.-
+ > [!NOTE]
+ > The **Self-service** menu item isn't available if your app registration's setting for public client flows is enabled. To access this menu item, select **Authentication** in the left navigation, then set **Allow public client flows** to **No**.
1. To enable Self-service application access for this application, set **Allow users to request access to this application?** to **Yes.** 1. Next to **To which group should assigned users be added?**, select **Select group**. Choose a group, and then select **Select**. When a user's request is approved, they'll be added to this group. When viewing this group's membership, you'll be able to see who has been granted access to the application through self-service access.
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
Title: Activate your group membership or ownership in Privileged Identity Management
+ Title: Activate your group membership or ownership in Privileged Identity Management (Preview)
description: Learn how to activate your group membership or ownership in Privileged Identity Management (PIM). documentationcenter: ''
na Previously updated : 3/15/2023 Last updated : 3/29/2023
-# Activate your group membership or ownership in Privileged Identity Management
+# Activate your group membership or ownership in Privileged Identity Management (preview)
In Azure Active Directory (Azure AD), part of Microsoft Entra, you can use Privileged Identity Management (PIM) to have just-in-time membership in the group or just-in-time ownership of the group.
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
na Previously updated : 3/23/2023 Last updated : 3/29/2023 -+
Alert | Severity | Trigger | Recommendation
**Duplicate role created** | Medium | Multiple roles have the same criteria. | Use only one of these roles. **Roles are being assigned outside of Privileged Identity Management** | High | A role is managed directly through the Azure IAM resource, or the Azure Resource Manager API. | Review the users in the list and remove them from privileged roles assigned outside of Privilege Identity Management.
+>[!NOTE]
+> For the **Roles are being assigned outside of Privileged Identity Management** alert, you may encounter duplicate notifications. These duplicates may be related to a live site incident in which notifications are sent again.
+ ### Severity - **High**: Requires immediate action because of a policy violation.
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Previously updated : 01/23/2023 Last updated : 03/24/2023
Use the following table to better understand how to resolve errors that you find
> | AzureDirectoryB2BManagementPolicyCheckFailure | The cross-tenant synchronization policy allowing automatic redemption failed.<br/><br/>The synchronization engine checks to ensure that the administrator of the target tenant has created an inbound cross-tenant synchronization policy allowing automatic redemption. The synchronization engine also checks if the administrator of the source tenant has enabled an outbound policy for automatic redemption. | Ensure that the automatic redemption setting has been enabled for both the source and target tenants. For more information, see [Automatic redemption setting](../multi-tenant-organizations/cross-tenant-synchronization-overview.md#automatic-redemption-setting). | > | AzureActiveDirectoryQuotaLimitExceeded | The number of objects in the tenant exceeds the directory limit.<br/><br/>Azure AD has limits for the number of objects that can be created in a tenant. | Check whether the quota can be increased. For information about the directory limits and steps to increase the quota, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). | > |InvitationCreationFailure| The Azure AD provisioning service attempted to invite the user in the target tenant. That invitation failed.| Navigate to the user settings page in Azure AD > external users > collaboration restrictions and ensure that collaboration with that tenant is enabled.|
+> |AzureActiveDirectoryInsufficientRights|A B2B user in the target tenant who has a role other than User, Helpdesk Admin, or User Account Admin can't be deleted.| Remove these roles from the user in the target tenant to successfully delete the user in the target tenant.|
## Next steps
active-directory Boxcryptor Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/boxcryptor-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* Boxcryptor Single sign-on enabled [subscription](https://www.boxcryptor.com/pricing/for-teams).
+* Boxcryptor Single sign-on enabled [subscription](https://www.boxcryptor.com/for-teams/).
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Connecter Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/connecter-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Connecter for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Connecter.
++
+writer: twimmers
+
+ms.assetid: 6e60505a-f8c8-46f6-8e6f-525e7c8416b7
++++ Last updated : 03/24/2023+++
+# Tutorial: Configure Connecter for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Connecter and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Connecter](https://www.designconnected.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Connecter.
+> * Remove users in Connecter when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Connecter.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Connecter (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An admin account for Connecter Server's [Team Portal](https://teamwork.connecterapp.com/)
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Connecter](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Connecter to support provisioning with Azure AD
+
+### Roles
+
+There are two main roles involved in the configuration:
+
+1. **Team Portal admin** - the sole administrator of user and permissions management in Connecter Server. The Connecter Server subscription owner can change this role from the Team Portal.
+1. **Azure AD admin** - a person with full access to the administrative backend of Azure AD who can install new services.
+
+### Step-by-step guide
+#### Actions that must be done by the Team Portal admin:
+1. Log in to Connecter's [Team Portal](https://teamwork.connecterapp.com/).
+1. Select your team.
+1. Click on the **Features tab**.
+
+ ![Screenshot of navigating to features tab.](media/connecter-provisioning-tutorial/feature-tab.png)
+
+6. *Optional*: If you want your team members to be automatically added to a workspace when they're synchronized from Azure AD, select the **Workspace configuration** action, then choose the workspace and the permissions.
+
+ ![Screenshot of selecting workspace configuration.](media/connecter-provisioning-tutorial/workspace-configuration.png)
+
+7. Click the **Authenticate** button to open the sign-in page. Sign in with your **Azure AD admin** account to add Connecter to your enterprise applications.
+
+ ![Screenshot of Azure AD admin sign-in page.](media/connecter-provisioning-tutorial/azure-sign-in-page.png)
++
+8. Click on **Get SCIM token**.
+9. Use the button to copy the token to your clipboard and save it for later use.
+
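Before moving on to the Azure AD side, you can sanity-check the token against the SCIM endpoint (the same Tenant URL used later in the provisioning configuration). The following is a minimal sketch using Python's standard library; the token value is a placeholder, and it assumes the endpoint follows standard SCIM 2.0 conventions:

```python
import json
import urllib.request

SCIM_BASE = "https://teamwork.connecterapp.com/scim/v2"  # Tenant URL used in the provisioning form
TOKEN = "<your-scim-token>"  # placeholder: paste the token copied from the Team Portal

def build_users_request(base: str, token: str) -> urllib.request.Request:
    """Build a GET /Users request carrying the SCIM bearer token."""
    return urllib.request.Request(
        f"{base}/Users",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/scim+json"},
    )

def list_scim_users(base: str = SCIM_BASE, token: str = TOKEN) -> dict:
    """Call the SCIM endpoint and return the parsed ListResponse."""
    with urllib.request.urlopen(build_users_request(base, token)) as resp:
        return json.load(resp)
```

With a valid token, `list_scim_users()` should return a SCIM `ListResponse`; a 401 response suggests the token wasn't copied correctly.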
+## Step 3. Add Connecter from the Azure AD application gallery
+
+Add Connecter from the Azure AD application gallery to start managing provisioning to Connecter. If you have previously set up Connecter for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Connecter
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Connecter based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Connecter in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Connecter**.
+
+ ![Screenshot of the Connecter link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Connecter Tenant URL as `https://teamwork.connecterapp.com/scim/v2` and corresponding Secret Token obtained from step 2. Click **Test Connection** to ensure Azure AD can connect to Connecter. If the connection fails, ensure your Connecter account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Connecter**.
+
+1. Review the user attributes that are synchronized from Azure AD to Connecter in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Connecter for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Connecter API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Connecter|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |displayName|String||&check;
+ |externalId|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Connecter, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Connecter by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
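For reference, each provisioning cycle sends SCIM requests carrying the attributes mapped earlier (userName, active, displayName, externalId). The following sketch shows a hypothetical `POST /Users` body built from that mapping; the values are invented, and the exact request shape is determined by the Azure AD SCIM client:

```python
import json

# Illustrative SCIM 2.0 user payload covering the mapped attributes.
# All values here are invented for demonstration.
user_payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@contoso.com",   # matching attribute, required by Connecter
    "active": True,
    "displayName": "Alice Example",    # required by Connecter
    "externalId": "00000000-0000-0000-0000-000000000000",
}

body = json.dumps(user_payload, indent=2)
print(body)
```

Because `userName` is the matching attribute, Azure AD uses it to decide whether to create a new user or update an existing one in Connecter.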
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Maptician Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maptician-provisioning-tutorial.md
# Tutorial: Configure Maptician for automatic user provisioning
-This tutorial describes the steps you need to perform in both Maptician and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Maptician](https://maptician.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Maptician and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Maptician](https://www.maptician.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A [Maptician](https://maptician.com/) tenant.
+* A [Maptician](https://www.maptician.com/) tenant.
* A user account in Maptician with Admin permissions. ## Step 1. Plan your provisioning deployment
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
For more information on the Azure AD automatic user provisioning service, see [A
1. Obtain credentials for an admin in ServiceNow. Go to the user profile in ServiceNow and verify that the user has the admin role. ![Screenshot that shows a ServiceNow admin role.](media/servicenow-provisioning-tutorial/servicenow-admin-role.png)
-
-1. Enable the SCIM v2 Plugin using the steps outlined by this [ServiceNow doc](https://docs.servicenow.com/en-US/bundle/utah-platform-security/page/integrate/authentication/task/activate-scim-plugin.html)
- ## Step 3: Add ServiceNow from the Azure AD application gallery
After you've configured provisioning, use the following resources to monitor you
- When an update to the *active* attribute in ServiceNow is provisioned, the attribute *locked_out* is also updated accordingly, even if *locked_out* is not mapped in the Azure provisioning service.
-## Update a ServiceNow application to use the ServiceNow SCIM 2.0 endpoint
-In March 2023, ServiceNow released a SCIM 2.0 connector. Completing the steps below will update applications configured to use the non-SCIM endpoint to the use the SCIM 2.0 endpoint. These steps will remove any customizations previously made to the ServiceNow application, including:
-* Authentication details
-* Scoping filters
-* Custom attribute mappings
-
-> [!NOTE]
-> Be sure to note any changes that have been made to the settings listed above before completing the steps below. Failure to do so will result in the loss of customized settings.
-
-1. Sign into the Azure portal at https://portal.azure.com
-2. Navigate to your current ServiceNow app under Azure Active Directory > Enterprise Applications
-3. In the Properties section of your new custom app, copy the Object ID.
-
- ![Screenshot of ServiceNow app in the Azure portal.](./media/servicenow-provisioning-tutorial/app-properties.png)
-
-4. In a new web browser window, go to https://developer.microsoft.com/graph/graph-explorer and sign in as the administrator for the Azure AD tenant where your app is added.
-
- ![Screenshot of Microsoft Graph explorer sign in page.](./media/workplace-by-facebook-provisioning-tutorial/permissions.png)
-
-5. Check to make sure the account being used has the correct permissions. The permission "Directory.ReadWrite.All" is required to make this change.
-
- ![Screenshot of Microsoft Graph settings option.](./media/workplace-by-facebook-provisioning-tutorial/permissions-2.png)
-
- ![Screenshot of Microsoft Graph permissions.](./media/workplace-by-facebook-provisioning-tutorial/permissions-3.png)
-
-6. Using the ObjectID selected from the app previously, run the following command:
-
-```
-GET https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/
-```
-
-7. Taking the "id" value from the response body of the GET request from above, run the command below, replacing "[job-id]" with the id value from the GET request. The value should have the format of "ServiceNowOutDelta.xxxxxxxxxxxxxxx.xxxxxxxxxxxxxxx":
-```
-DELETE https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/[job-id]
-```
-8. In the Graph Explorer, run the command below. Replace "[object-id]" with the service principal ID (object ID) copied from the third step.
-```
-POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs { "templateId": "serviceNowScim" }
-```
-
-![Screenshot of Microsoft Graph request.](./media/servicenow-provisioning-tutorial/graph-request.png)
-
-9. Return to the first web browser window and select the Provisioning tab for your application. Your configuration will have been reset. You can confirm the upgrade has taken place by confirming the Job ID starts with "serviceNowScim".
-
-10. The new SCIM app uses OAuth2 to authenticate with the SCIM endpoint. Enter the required fields and authenticate with the new SCIM endpoint. [This ServiceNow documentation](https://docs.servicenow.com/bundle/utah-platform-security/page/administer/security/task/t_CreateEndpointforExternalClients.html) outlines how to generate these values.
-
-11. Restore any previous changes you made to the application (Authentication details, Scoping filters, Custom attribute mappings) and re-enable provisioning.
-
-> [!NOTE]
-> Failure to restore the previous settings may result in attributes (*name.formatted*, for example) updating in ServiceNow unexpectedly. Be sure to check the configuration before enabling provisioning.
- ## Additional resources - [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 03/09/2023 Last updated : 03/21/2023 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa
## IP address planning
-* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so verify you have a subnet large enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
+- **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so verify you have a subnet large enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
-* **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
+- **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
The following are additional factors to consider when planning pods IP address space:
- * Pod CIDR space must not overlap with the cluster subnet range.
- * Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
- * The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
+ - Pod CIDR space must not overlap with the cluster subnet range.
+ - Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
+ - The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
-* **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range should also not overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks.
+- **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range should also not overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks.
-* **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address.
+- **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address.
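The planning rules above can be checked mechanically. The following sketch uses Python's `ipaddress` module with illustrative CIDR values (not prescriptive ones) to compute the maximum node count a pod CIDR supports and to verify the ranges don't overlap:

```python
import ipaddress

node_subnet = ipaddress.ip_network("10.224.0.0/24")   # cluster node subnet (illustrative)
pod_cidr = ipaddress.ip_network("192.168.0.0/16")     # private pod CIDR (illustrative)
service_cidr = ipaddress.ip_network("10.0.0.0/16")    # service CIDR (illustrative)

# Each node consumes one fixed /24 from the pod CIDR, so the maximum node
# count this pod CIDR supports is the number of /24 blocks it contains.
max_nodes = 2 ** (24 - pod_cidr.prefixlen)

# None of the three ranges may overlap.
ranges = [node_subnet, pod_cidr, service_cidr]
overlaps = any(a.overlaps(b) for i, a in enumerate(ranges) for b in ranges[i + 1:])

print(max_nodes, overlaps)
```

With a `/16` pod CIDR, 256 fixed `/24` blocks are available, so the cluster can grow to at most 256 nodes before the pod address space is exhausted.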
## Network security groups Pod to pod traffic with Azure CNI Overlay is not encapsulated and subnet [network security group][nsg] rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all [AKS egress requirements][aks-egress]):
-* Traffic from the node CIDR to the node CIDR on all ports and protocols
-* Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
-* Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)
+- Traffic from the node CIDR to the node CIDR on all ports and protocols
+- Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
+- Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)
Traffic from a pod to any destination outside of the pod CIDR block will utilize SNAT to set the source IP to the IP of the node where the pod is running.
Azure CNI offers two IP addressing options for pods - the traditional configurat
Use overlay networking when:
-* You would like to scale to a large number of pods, but have limited IP address space in your VNet.
-* Most of the pod communication is within the cluster.
-* You don't need advanced AKS features, such as virtual nodes.
+- You would like to scale to a large number of pods, but have limited IP address space in your VNet.
+- Most of the pod communication is within the cluster.
+- You don't need advanced AKS features, such as virtual nodes.
Use the traditional VNet option when:
-* You have available IP address space.
-* Most of the pod communication is to resources outside of the cluster.
-* Resources outside the cluster need to reach pods directly.
-* You need AKS advanced features, such as virtual nodes.
+- You have available IP address space.
+- Most of the pod communication is to resources outside of the cluster.
+- Resources outside the cluster need to reach pods directly.
+- You need AKS advanced features, such as virtual nodes.
## Limitations with Azure CNI Overlay Azure CNI Overlay has the following limitations:
-* You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
-* Windows Server 2019 node pools are not supported for overlay.
+- You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
+- Windows Server 2019 node pools are not supported for overlay.
+- Traffic from host network pods is not able to reach Windows overlay pods.
## Install the aks-preview Azure CLI extension
location="westcentralus"
az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 ```
+## Upgrade an existing cluster to CNI Overlay
+
+You can update an existing Azure CNI cluster to Overlay if the cluster meets certain criteria. A cluster must:
+
+- be on Kubernetes version 1.22+
+- **not** be using the dynamic pod IP allocation feature
+- **not** have network policies enabled
+- **not** be using any Windows node pools with docker as the container runtime
+
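The criteria above can be expressed as a simple eligibility check. This is an illustrative sketch only — the parameter names are invented, and in practice you'd read these values from the cluster's properties (for example, via `az aks show`):

```python
def eligible_for_overlay_upgrade(kubernetes_version: str,
                                 uses_dynamic_pod_ip: bool,
                                 network_policy_enabled: bool,
                                 windows_docker_pools: bool) -> bool:
    """Apply the four documented criteria for upgrading to CNI Overlay."""
    major, minor = (int(p) for p in kubernetes_version.split(".")[:2])
    return ((major, minor) >= (1, 22)
            and not uses_dynamic_pod_ip
            and not network_policy_enabled
            and not windows_docker_pools)
```

For example, a cluster on Kubernetes 1.24 with none of the blocking features is eligible, while one on 1.21 is not.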
+The upgrade process triggers each node pool to be re-imaged simultaneously (upgrading each node pool separately to Overlay isn't supported). Any disruptions to cluster networking are similar to a node image upgrade or a Kubernetes version upgrade, where each node in a node pool is re-imaged.
+
+> [!WARNING]
+> Because Windows overlay pods incorrectly SNAT packets from host network pods, this limitation has a more detrimental effect on clusters upgrading to Overlay.
+
+While nodes are being upgraded to CNI Overlay, pods on nodes that haven't been upgraded yet can't communicate with pods on Windows nodes that have already been upgraded. In other words, Windows overlay pods can't reply to any traffic from pods still running with an IP from the node subnet.
+
+This network disruption will only occur during the upgrade. Once the migration to overlay has completed for all node pools, all overlay pods will be able to communicate successfully with the Windows pods.
+
+> [!NOTE]
+> The upgrade completion doesn't change the existing limitation that host network pods **cannot** communicate with Windows overlay pods.
+ ## Next steps To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
aks Control Kubeconfig Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-kubeconfig-access.md
Title: Limit access to kubeconfig in Azure Kubernetes Service (AKS)
description: Learn how to control access to the Kubernetes configuration file (kubeconfig) for cluster administrators and cluster users Previously updated : 05/06/2020 Last updated : 03/28/2023 # Use Azure role-based access control to define access to the Kubernetes configuration file in Azure Kubernetes Service (AKS)
-You can interact with Kubernetes clusters using the `kubectl` tool. The Azure CLI provides an easy way to get the access credentials and configuration information to connect to your AKS clusters using `kubectl`. To limit who can get that Kubernetes configuration (*kubeconfig*) information and to limit the permissions they then have, you can use Azure role-based access control (Azure RBAC).
+You can interact with Kubernetes clusters using the `kubectl` tool. The Azure CLI provides an easy way to get the access credentials and *kubeconfig* configuration file to connect to your AKS clusters using `kubectl`. You can use Azure role-based access control (Azure RBAC) to limit who can get access to the *kubeconfig* file and the permissions they have.
This article shows you how to assign Azure roles that limit who can get the configuration information for an AKS cluster. ## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+* This article assumes that you have an existing AKS cluster. If you need an AKS cluster, create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal].
+* This article also requires that you're running Azure CLI version 2.0.65 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-This article also requires that you are running the Azure CLI version 2.0.65 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+## Available permissions for cluster roles
-## Available cluster roles permissions
+When you interact with an AKS cluster using the `kubectl` tool, a configuration file, called *kubeconfig*, defines cluster connection information. This configuration file is typically stored in *~/.kube/config*. Multiple clusters can be defined in this *kubeconfig* file. You can switch between clusters using the [`kubectl config use-context`][kubectl-config-use-context] command.
-When you interact with an AKS cluster using the `kubectl` tool, a configuration file is used that defines cluster connection information. This configuration file is typically stored in *~/.kube/config*. Multiple clusters can be defined in this *kubeconfig* file. You switch between clusters using the [kubectl config use-context][kubectl-config-use-context] command.
+The [`az aks get-credentials`][az-aks-get-credentials] command lets you get the access credentials for an AKS cluster and merges these credentials into the *kubeconfig* file. You can use Azure RBAC to control access to these credentials. These Azure roles let you define who can retrieve the *kubeconfig* file and what permissions they have within the cluster.
-The [az aks get-credentials][az-aks-get-credentials] command lets you get the access credentials for an AKS cluster and merges them into the *kubeconfig* file. You can use Azure role-based access control (Azure RBAC) to control access to these credentials. These Azure roles let you define who can retrieve the *kubeconfig* file, and what permissions they then have within the cluster.
+There are two Azure roles you can apply to an Azure Active Directory (Azure AD) user or group:
-The two built-in roles are:
+- **Azure Kubernetes Service Cluster Admin Role**
-* **Azure Kubernetes Service Cluster Admin Role**
- * Allows access to *Microsoft.ContainerService/managedClusters/listClusterAdminCredential/action* API call. This API call [lists the cluster admin credentials][api-cluster-admin].
- * Downloads *kubeconfig* for the *clusterAdmin* role.
-* **Azure Kubernetes Service Cluster User Role**
- * Allows access to *Microsoft.ContainerService/managedClusters/listClusterUserCredential/action* API call. This API call [lists the cluster user credentials][api-cluster-user].
- * Downloads *kubeconfig* for *clusterUser* role.
+ * Allows access to `Microsoft.ContainerService/managedClusters/listClusterAdminCredential/action` API call. This API call [lists the cluster admin credentials][api-cluster-admin].
+ * Downloads *kubeconfig* for the *clusterAdmin* role.
-These Azure roles can be applied to an Azure Active Directory (AD) user or group.
+- **Azure Kubernetes Service Cluster User Role**
+
+ * Allows access to `Microsoft.ContainerService/managedClusters/listClusterUserCredential/action` API call. This API call [lists the cluster user credentials][api-cluster-user].
+ * Downloads *kubeconfig* for *clusterUser* role.
> [!NOTE]
-> On clusters that use Azure AD, users with the *clusterUser* role have an empty *kubeconfig* file that prompts a log in. Once logged in, users have access based on their Azure AD user or group settings. Users with the *clusterAdmin* role have admin access.
+> On clusters that use Azure AD, users with the *clusterUser* role have an empty *kubeconfig* file that prompts a login. Once logged in, users have access based on their Azure AD user or group settings. Users with the *clusterAdmin* role have admin access.
>
-> On clusters that do not use Azure AD, the *clusterUser* role has same effect of *clusterAdmin* role.
+> On clusters that don't use Azure AD, the *clusterUser* role has the same effect as the *clusterAdmin* role.
## Assign role permissions to a user or group
-To assign one of the available roles, you need to get the resource ID of the AKS cluster and the ID of the Azure AD user account or group. The following example commands:
+To assign one of the available roles, you need to get the resource ID of the AKS cluster and the ID of the Azure AD user account or group using the following steps:
-* Get the cluster resource ID using the [az aks show][az-aks-show] command for the cluster named *myAKSCluster* in the *myResourceGroup* resource group. Provide your own cluster and resource group name as needed.
-* Use the [az account show][az-account-show] and [az ad user show][az-ad-user-show] commands to get your user ID.
-* Finally, assign a role using the [az role assignment create][az-role-assignment-create] command.
+1. Get the cluster resource ID using the [`az aks show`][az-aks-show] command for the cluster named *myAKSCluster* in the *myResourceGroup* resource group. Provide your own cluster and resource group name as needed.
+2. Use the [`az account show`][az-account-show] and [`az ad user show`][az-ad-user-show] commands to get your user ID.
+3. Assign a role using the [`az role assignment create`][az-role-assignment-create] command.
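The three steps above can be sketched as shell variables feeding the assignment. This is a non-authoritative sketch using the article's example names; the exact `az ad user show` identifier parameter and the object ID property name (`objectId` vs. `id`) vary across Azure CLI versions:

```azurecli-interactive
# 1. Get the AKS cluster resource ID.
AKS_CLUSTER=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)

# 2. Resolve the signed-in user's name to an Azure AD object ID.
ACCOUNT_UPN=$(az account show --query user.name -o tsv)
ACCOUNT_ID=$(az ad user show --id $ACCOUNT_UPN --query objectId -o tsv)
```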
The following example assigns the *Azure Kubernetes Service Cluster Admin Role* to an individual user account:
```azurecli-interactive
az role assignment create \
    --assignee $ACCOUNT_ID \
    --scope $AKS_CLUSTER \
    --role "Azure Kubernetes Service Cluster Admin Role"
```
+If you want to assign permissions to an Azure AD group, update the `--assignee` parameter shown in the previous example with the object ID for the *group* rather than the *user*.
+
+To get the object ID for a group, use the [`az ad group show`][az-ad-group-show] command. The following command gets the object ID for the Azure AD group named *appdev*:
+
+```azurecli-interactive
+az ad group show --group appdev --query objectId -o tsv
+```
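To assign a role to that group, the two commands can be combined as in the following sketch, reusing the `$AKS_CLUSTER` scope variable pattern shown elsewhere in this article:

```azurecli-interactive
# Resolve the group's object ID, then assign a cluster role to the group.
GROUP_ID=$(az ad group show --group appdev --query objectId -o tsv)

az role assignment create \
    --assignee $GROUP_ID \
    --scope $AKS_CLUSTER \
    --role "Azure Kubernetes Service Cluster User Role"
```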
+ > [!IMPORTANT]
-> In some cases, the *user.name* in the account is different than the *userPrincipalName*, such as with Azure AD guest users:
+> In some cases, such as Azure AD guest users, the *user.name* in the account is different than the *userPrincipalName*.
>
-> ```output
+> ```azurecli-interactive
> $ az account show --query user.name -o tsv
> user@contoso.com
+>
> $ az ad user list --query "[?contains(otherMails,'user@contoso.com')].{UPN:userPrincipalName}" -o tsv
> user_contoso.com#EXT#@contoso.onmicrosoft.com
> ```
>
-> In this case, set the value of *ACCOUNT_UPN* to the *userPrincipalName* from the Azure AD user. For example, if your account *user.name* is *user\@contoso.com*:
->
+> In this case, set the value of *ACCOUNT_UPN* to the *userPrincipalName* from the Azure AD user. For example, if your account *user.name* is *user\@contoso.com*, this action would look like the following example:
+>
> ```azurecli-interactive
> ACCOUNT_UPN=$(az ad user list --query "[?contains(otherMails,'user@contoso.com')].{UPN:userPrincipalName}" -o tsv)
> ```
-> [!TIP]
-> If you want to assign permissions to an Azure AD group, update the `--assignee` parameter shown in the previous example with the object ID for the *group* rather than a *user*. To obtain the object ID for a group, use the [az ad group show][az-ad-group-show] command. The following example gets the object ID for the Azure AD group named *appdev*: `az ad group show --group appdev --query objectId -o tsv`
-
-You can change the previous assignment to the *Cluster User Role* as needed.
-
-The following example output shows the role assignment has been successfully created:
-
-```
-{
- "canDelegate": null,
- "id": "/subscriptions/<guid>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/providers/Microsoft.Authorization/roleAssignments/b2712174-5a41-4ecb-82c5-12b8ad43d4fb",
- "name": "b2712174-5a41-4ecb-82c5-12b8ad43d4fb",
- "principalId": "946016dd-9362-4183-b17d-4c416d1f8f61",
- "resourceGroup": "myResourceGroup",
- "roleDefinitionId": "/subscriptions/<guid>/providers/Microsoft.Authorization/roleDefinitions/0ab01a8-8aac-4efd-b8c2-3ee1fb270be8",
- "scope": "/subscriptions/<guid>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
- "type": "Microsoft.Authorization/roleAssignments"
-}
-```
- ## Get and verify the configuration information
-With Azure roles assigned, use the [az aks get-credentials][az-aks-get-credentials] command to get the *kubeconfig* definition for your AKS cluster. The following example gets the *--admin* credentials, which work correctly if the user has been granted the *Cluster Admin Role*:
+Once the roles are assigned, use the [`az aks get-credentials`][az-aks-get-credentials] command to get the *kubeconfig* definition for your AKS cluster. The following example gets the *--admin* credentials, which works correctly if the user has been granted the *Cluster Admin Role*:
```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --admin
```
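To get the non-admin *clusterUser* credentials instead, omit the `--admin` flag; on Azure AD-enabled clusters this kubeconfig prompts for sign-in on first use, as described in the note earlier:

```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```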
-You can then use the [kubectl config view][kubectl-config-view] command to verify that the *context* for the cluster shows that the admin configuration information has been applied:
+You can then use the [`kubectl config view`][kubectl-config-view] command to verify that the *context* for the cluster shows that the admin configuration information has been applied.
-```
+```azurecli-interactive
$ kubectl config view
+```
+Your output should look similar to the following example output:
+
+```azurecli-interactive
apiVersion: v1
clusters:
- cluster:
users:
## Remove role permissions
-To remove role assignments, use the [az role assignment delete][az-role-assignment-delete] command. Specify the account ID and cluster resource ID, as obtained in the previous commands. If you assigned the role to a group rather than a user, specify the appropriate group object ID rather than account object ID for the `--assignee` parameter:
+To remove role assignments, use the [`az role assignment delete`][az-role-assignment-delete] command. Specify the account ID and cluster resource ID that you obtained in the previous steps. If you assigned the role to a group rather than a user, specify the appropriate group object ID rather than account object ID for the `--assignee` parameter.
```azurecli-interactive
az role assignment delete --assignee $ACCOUNT_ID --scope $AKS_CLUSTER
```
For enhanced security on access to AKS clusters, [integrate Azure Active Directo
[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[azure-rbac]: ../role-based-access-control/overview.md
[api-cluster-admin]: /rest/api/aks/managedclusters/listclusteradmincredentials
[api-cluster-user]: /rest/api/aks/managedclusters/listclusterusercredentials
[az-aks-show]: /cli/azure/aks#az_aks_show
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Included among these solutions are Kubernetes application-based container offers
This feature is currently supported only in the following regions:
+- East US
+- West US
+- Central US
- West Central US
+- South Central US
+- East US 2
+- West US 2
- West Europe
-- East US
+- North Europe
+- Canada Central
+- Southeast Asia
+- Australia East
+- Central India
Kubernetes application-based container offers cannot be deployed on AKS for Azure Stack HCI or AKS Edge Essentials.
az provider register --namespace Microsoft.KubernetesConfiguration --wait
1. In the [Azure portal](https://portal.azure.com/), search for **Marketplace** on the top search bar. In the results, under **Services**, select **Marketplace**.
-1. You can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, use the **Product Type** filter for **Azure Containers**.
+1. You can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, on the left side under **Categories** select **Containers**.
- :::image type="content" source="./media/deploy-marketplace/browse-marketplace-inline.png" alt-text="Screenshot of Azure Marketplace offers in the Azure portal, with the filter for product type set to Azure containers." lightbox="./media/deploy-marketplace/browse-marketplace-full.png":::
+ :::image type="content" source="./media/deploy-marketplace/containers-inline.png" alt-text="Screenshot of Azure Marketplace offers in the Azure portal, with the container category on the left side highlighted." lightbox="./media/deploy-marketplace/containers.png":::
> [!IMPORTANT]
- > The **Azure Containers** category includes both Kubernetes applications and standalone container images. This walkthrough is specific to Kubernetes applications. If you find that the steps to deploy an offer differ in some way, you're most likely trying to deploy a container image-based offer instead of a Kubernetes application-based offer.
- >
- > To ensure that you're searching for Kubernetes applications, include the term **KubernetesApps** in your search.
+ > The **Containers** category includes both Kubernetes applications and standalone container images. This walkthrough is specific to Kubernetes applications. If you find that the steps to deploy an offer differ in some way, you're most likely trying to deploy a container image-based offer instead of a Kubernetes application-based offer.
+
+1. You will see several Kubernetes application offers displayed on the page. To view all of the Kubernetes application offers, select **See more**.
+
+ :::image type="content" source="./media/deploy-marketplace/see-more-inline.png" alt-text="Screenshot of Azure Marketplace K8s offers in the Azure portal" lightbox="./media/deploy-marketplace/see-more.png":::
1. After you decide on an application, select the offer.
aks Deployment Center Launcher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-center-launcher.md
- Title: Deployment Center for Azure Kubernetes
-description: Deployment Center in Azure DevOps simplifies setting up a robust Azure DevOps pipeline for your application
-- Previously updated : 07/12/2019---
-# Deployment Center for Azure Kubernetes
-
-> [!IMPORTANT]
-> Deployment Center for Azure Kubernetes Service will be retired on March 31, 2023. [Learn more](#retirement)
-
-Deployment Center in Azure DevOps simplifies setting up a robust Azure DevOps pipeline for your application. By default, Deployment Center configures an Azure DevOps pipeline to deploy your application updates to the Kubernetes cluster. You can extend the default configured Azure DevOps pipeline and also add richer capabilities: the ability to gain approval before deploying, provision additional Azure resources, run scripts, upgrade your application, and even run more validation tests.
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
-> * Configure an Azure DevOps pipeline to deploy your application updates to the Kubernetes cluster.
-> * Examine the continuous integration (CI) pipeline.
-> * Examine the continuous delivery (CD) pipeline.
-> * Clean up the resources.
-
-## Prerequisites
-
-* An Azure subscription. You can get one free through [Visual Studio Dev Essentials](https://visualstudio.microsoft.com/dev-essentials/).
-
-* An Azure Kubernetes Service (AKS) cluster.
-
-## Create an AKS cluster
-
-1. Sign in to your [Azure portal](https://portal.azure.com/).
-
-1. Select the [Cloud Shell](../cloud-shell/overview.md) option on the right side of the menu bar in the Azure portal.
-
-1. To create the AKS cluster, run the following commands:
-
- ```azurecli
- # Create a resource group in the South India location:
-
- az group create --name azooaks --location southindia
-
- # Create a cluster named azookubectl with one node.
-
- az aks create --resource-group azooaks --name azookubectl --node-count 1 --enable-addons monitoring --generate-ssh-keys
- ```
-
-## Deploy application updates to a Kubernetes cluster
-
-1. Go to the resource group that you created in the previous section.
-
-1. Select the AKS cluster, and then select **Deployment Center (preview)** on the left blade. Select **Get started**.
-
- ![Screenshot shows the Azure portal with an arrow pointing to the Deployment center.](media/deployment-center-launcher/settings.png)
-
-1. Choose the location of the code and select **Next**. Then, select one of the currently supported repositories: **[Azure Repos](/azure/devops/repos/index)** or **GitHub**.
-
- Azure Repos is a set of version control tools that help you manage your code. Whether your software project is large or small, using version control as early as possible is a good idea.
-
- - **Azure Repos**: Choose a repository from your existing project and organization.
-
- ![Azure Repos](media/deployment-center-launcher/azure-repos.gif)
-
- - **GitHub**: Authorize and select the repository for your GitHub account.
-
- ![Animation shows a process in GitHub of selecting GitHub as the source and then selecting your repository.](media/deployment-center-launcher/github.gif)
--
-1. Deployment Center analyzes the repository and detects your Dockerfile. If you want to update the Dockerfile, you can edit the identified port number.
-
- ![Application Settings](media/deployment-center-launcher/application-settings.png)
-
- If the repository doesn't contain the Dockerfile, the system displays a message to commit one.
-
- ![Screenshot shows the Deployment center with a message Could not find Dockerfile in the repository.](media/deployment-center-launcher/dockerfile.png)
-
-1. Select an existing container registry or create one, and then select **Finish**. The pipeline is created automatically and queues a build in [Azure Pipelines](/azure/devops/pipelines/index).
-
- Azure Pipelines is a cloud service that you can use to automatically build and test your code project and make it available to other users. Azure Pipelines combines continuous integration and continuous delivery to constantly and consistently test and build your code and ship it to any target.
-
- ![Container Registry](media/deployment-center-launcher/container-registry.png)
-
-1. Select the link to see the ongoing pipeline.
-
-1. You'll see the successful logs after deployment is complete.
-
- ![Screenshot shows Deployment center with Release-1 marked with a green check mark icon.](media/deployment-center-launcher/logs.png)
-
-## Examine the CI pipeline
-
-Deployment Center automatically configures your Azure DevOps organization's CI/CD pipeline. The pipeline can be explored and customized.
-
-1. Go to the Deployment Center dashboard.
-
-1. Select the build number from the list of successful logs to view the build pipeline for your project.
-
-1. Select the ellipsis (...) in the upper-right corner. A menu shows several options, such as queuing a new build, retaining a build, and editing the build pipeline. Select **Edit pipeline**.
-
-1. You can examine the different tasks for your build pipeline in this pane. The build performs various tasks, such as collecting sources from the Git repository, creating an image, pushing an image to the container registry, and publishing outputs that are used for deployments.
-
-1. Select the name of the build pipeline at the top of the pipeline.
-
-1. Change your build pipeline name to something more descriptive, select **Save & queue**, and then select **Save**.
-
-1. Under your build pipeline, select **History**. This pane shows an audit trail of your recent build changes. Azure DevOps monitors any changes made to the build pipeline and allows you to compare versions.
-
-1. Select **Triggers**. You can include or exclude branches from the CI process.
-
-1. Select **Retention**. You can specify policies to keep or remove a number of builds, depending on your scenario.
-
-## Examine the CD pipeline
-
-Deployment Center automatically creates and configures the relationship between your Azure DevOps organization and your Azure subscription. The steps involved include setting up an Azure service connection to authenticate your Azure subscription with Azure DevOps. The automated process also creates a release pipeline, which provides continuous delivery to Azure.
-
-1. Select **Pipelines**, and then select **Releases**.
-
-1. To edit the release pipeline, select **Edit**.
-
-1. Select **Drop** from the **Artifacts** list. In the previous steps, the construction pipeline you examined produces the output used for the artifact.
-
-1. Select the **Continuous deployment** trigger on the right of the **Drop** option. This release pipeline has an enabled CD trigger that runs a deployment whenever a new build artifact is available. You can also disable the trigger to require manual execution for your deployments.
-
-1. To examine all the tasks for your pipeline, select **Tasks**. The release sets the tiller environment, configures the `imagePullSecrets` parameter, installs Helm tools, and deploys the Helm charts to the Kubernetes cluster.
-
-1. To view the release history, select **View releases**.
-
-1. To see the summary, select **Release**. Select any of the stages to explore multiple menus, such as a release summary, associated work items, and tests.
-
-1. Select **Commits**. This view shows code commits related to this deployment. Compare releases to see the commit differences between deployments.
-
-1. Select **Logs**. The logs contain useful deployment information, which you can view during and after deployments.
-
-## Clean up resources
-
-You can delete the related resources that you created when you don't need them anymore. Use the delete functionality on the DevOps Projects dashboard.
-
-## Next steps
-
-You can modify these build and release pipelines to meet the needs of your team. Or, you can use this CI/CD model as a template for your other pipelines.
-
-## Retirement
-
-Deployment Center for Azure Kubernetes will be retired on March 31, 2023 in favor of [Automated deployments](./automated-deployments.md). We encourage you to switch to enjoy similar capabilities.
-
-#### Migration Steps
-
-There is no migration required as AKS Deployment center experience does not store any information itself, it just helps users with their Day 0 getting started experience on Azure. Moving forward, the recommended way for users to get started on CI/CD for AKS will be using [Automated deployments](./automated-deployments.md) feature.
-
-For existing pipelines, users will still be able to perform all operations from GitHub Actions or Azure DevOps after the retirement of this experience. Only the ability to create and view pipelines from Azure portal will be removed. See [GitHub Actions](https://docs.github.com/en/actions) or [Azure DevOps](/azure/devops/pipelines/get-started/pipelines-get-started) to learn how to get started.
-
-For new application deployments to AKS, instead of using Deployment center users can get the same capabilities by using Automated deployments.
-
-#### FAQ
-
-1. Where can I manage my CD pipeline after this experience is deprecated?
-
-Post retirement, you will not be able to view or create CD pipelines from Azure portal's AKS blade. However, as with the current experience, you can go to GitHub Actions or Azure DevOps portal and view or update the configured pipelines there.
-
-2. Will I lose my earlier configured pipelines?
-
-No. All the created pipelines will still be available and functional in GitHub or Azure DevOps. Only the experience of creating and viewing pipelines from Azure portal will be retired.
-
-3. How can I still configure CD pipelines directly through Azure portal?
-
-You can use Automated deployments available in the AKS blade in Azure portal.
aks Egress Outboundtype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md
Previously updated : 06/29/2020 Last updated : 03/28/2023
#Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.

# Customize cluster egress with outbound types in Azure Kubernetes Service (AKS)
-Egress from an AKS cluster can be customized to fit specific scenarios. By default, AKS will provision a Standard SKU Load Balancer to be set up and used for egress. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or additional hops are required for egress.
+You can customize egress for an AKS cluster to fit specific scenarios. By default, AKS will provision a standard SKU load balancer to be set up and used for egress. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or additional hops are required for egress.
-This article covers the various types of outbound connectivity that are available in AKS Clusters.
+This article covers the various types of outbound connectivity that are available in AKS clusters.
+
+> [!NOTE]
+> You can now update the `outboundType` after cluster creation. This feature is in preview. See [Updating `outboundType` after cluster creation (preview)](#updating-outboundtype-after-cluster-creation-preview).
## Limitations
-* Outbound type can only be defined at cluster create time and can't be updated afterwards.
- * Reconfiguring outbound type is now supported in preview; see below.
-* Setting `outboundType` requires AKS clusters with a `vm-set-type` of `VirtualMachineScaleSets` and `load-balancer-sku` of `Standard`.
-## Overview of outbound types in AKS
+* Setting `outboundType` requires AKS clusters with a `vm-set-type` of `VirtualMachineScaleSets` and `load-balancer-sku` of `Standard`.
-An AKS cluster can be configured with three different categories of outbound type: load balancer, NAT gateway, or user-defined routing.
+## Outbound types in AKS
-> [!IMPORTANT]
-> Outbound type impacts only the egress traffic of your cluster. For more information, see [setting up ingress controllers](ingress-basic.md).
+You can configure an AKS cluster using the following outbound types: load balancer, NAT gateway, or user-defined routing. The outbound type impacts only the egress traffic of your cluster. For more information, see [setting up ingress controllers](ingress-basic.md).
> [!NOTE]
-> You can use your own [route table][byo-route-table] with UDR and kubenet networking. Make sure your cluster identity (service principal or managed identity) has Contributor permissions to the custom route table.
+> You can use your own [route table][byo-route-table] with UDR and [kubenet networking](../aks/configure-kubenet.md). Make sure your cluster identity (service principal or managed identity) has Contributor permissions to the custom route table.
-### Outbound type of loadBalancer
+### Outbound type of `loadBalancer`
-If `loadBalancer` is set, AKS completes the following configuration automatically. The load balancer is used for egress through an AKS assigned public IP. An outbound type of `loadBalancer` supports Kubernetes services of type `loadBalancer`, which expect egress out of the load balancer created by the AKS resource provider.
+The load balancer is used for egress through an AKS-assigned public IP. An outbound type of `loadBalancer` supports Kubernetes services of type `loadBalancer`, which expect egress out of the load balancer created by the AKS resource provider.
-The following configuration is done by AKS.
- * A public IP address is provisioned for cluster egress.
- * The public IP address is assigned to the load balancer resource.
- * Backend pools for the load balancer are set up for agent nodes in the cluster.
+If `loadBalancer` is set, AKS automatically completes the following configuration:
-Below is a network topology deployed in AKS clusters by default, which use an `outboundType` of `loadBalancer`.
+* A public IP address is provisioned for cluster egress.
+* The public IP address is assigned to the load balancer resource.
+* Backend pools for the load balancer are set up for agent nodes in the cluster.
![Diagram shows ingress I P and egress I P, where the ingress I P directs traffic to a load balancer, which directs traffic to and from an internal cluster and other traffic to the egress I P, which directs traffic to the Internet, M C R, Azure required services, and the A K S Control Plane.](media/egress-outboundtype/outboundtype-lb.png)
-For more information, see [using a standard load balancer in AKS](load-balancer-standard.md) for more information.
+For more information, see [using a standard load balancer in AKS](load-balancer-standard.md).
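As a sketch, requesting this (default) egress type explicitly at creation time uses the `--outbound-type` parameter; the resource names below are illustrative:

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type loadBalancer
```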
### Outbound type of `managedNatGateway` or `userAssignedNatGateway`
-If `managedNatGateway` or `userAssignedNatGateway` are selected for `outboundType`, AKS relies on [Azure Networking NAT gateway](../virtual-network/nat-gateway/manage-nat-gateway.md) for cluster egress.
-- `managedNatGateway` is used when using managed virtual networks, and tells AKS to provision a NAT gateway and attach it to the cluster subnet.
-- `userAssignedNatGateway` is used when using bring-your-own virtual networking, and requires that a NAT gateway has been provisioned before cluster creation.
+If `managedNatGateway` or `userAssignedNatGateway` are selected for `outboundType`, AKS relies on [Azure Networking NAT gateway](../virtual-network/nat-gateway/manage-nat-gateway.md) for cluster egress.
-NAT gateway has significantly improved handling of SNAT ports when compared to Standard Load Balancer.
+* Select `managedNatGateway` when using managed virtual networks. AKS will provision a NAT gateway and attach it to the cluster subnet.
+* Select `userAssignedNatGateway` when using bring-your-own virtual networking. This option requires that you have provisioned a NAT gateway before cluster creation.
-For more information, see [using NAT Gateway with AKS](nat-gateway.md) for more information.
+For more information, see [using NAT gateway with AKS](nat-gateway.md).
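For the managed case, a minimal creation sketch (names illustrative; a bring-your-own virtual network would use `userAssignedNATGateway` with a preprovisioned NAT gateway instead):

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type managedNATGateway
```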
-### Outbound type of userDefinedRouting
+### Outbound type of `userDefinedRouting`
> [!NOTE]
-> Using outbound type is an advanced networking scenario and requires proper network configuration.
+> The `userDefinedRouting` outbound type is an advanced networking scenario and requires proper network configuration.
If `userDefinedRouting` is set, AKS won't automatically configure egress paths. The egress setup must be done by you.
-The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
+You must deploy the AKS cluster into an existing virtual network with a subnet that has been previously configured. Since you're not using a standard load balancer (SLB) architecture, you must establish explicit egress. This architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow NAT to be done by a public IP assigned to the standard load balancer or appliance.
-For more information, see [configuring cluster egress via user-defined routing](egress-udr.md) for more information.
+For more information, see [configuring cluster egress via user-defined routing](egress-udr.md).
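A hedged sketch of creating such a cluster follows; the subnet ID placeholder must point at a subnet whose route table already sends egress to your appliance or public IP, per the requirements above:

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type userDefinedRouting \
    --vnet-subnet-id <preconfigured-subnet-resource-id>
```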
-## Updating `outboundType` after cluster creation (PREVIEW)
+## Updating `outboundType` after cluster creation (preview)
Changing the outbound type after cluster creation will deploy or remove resources as required to put the cluster into the new egress configuration. Migration is only supported between `loadBalancer`, `managedNATGateway` (if using a managed virtual network), and `userDefinedNATGateway` (if using a custom virtual network). > [!WARNING]
-> Changing the outbound type on a cluster is disruptive to network connectivity and will result in a change of the cluster's egress IP address. If any firewall rules have been configured to restrict traffic from the cluster, they will need to be updated to match the new egress IP address.
+> Changing the outbound type on a cluster is disruptive to network connectivity and will result in a change of the cluster's egress IP address. If any firewall rules have been configured to restrict traffic from the cluster, you need to update them to match the new egress IP address.
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-### Install the aks-preview Azure CLI extension
+### Install the `aks-preview` Azure CLI extension
`aks-preview` version 0.5.113 is required.
-To install the `aks-preview` extension, run the following command:
-
-```azurecli
-az extension add --name aks-preview
-```
+* Install and update the `aks-preview` extension.
-Run the following command to update to the latest version of the extension released:
+ ```azurecli
+ # Install aks-preview extension
+ az extension add --name aks-preview
-```azurecli
-az extension update --name aks-preview
-```
+ # Update aks-preview extension
+ az extension update --name aks-preview
+ ```
-### Register the 'AKS-OutBoundTypeMigrationPreview' feature flag
+### Register the `AKS-OutBoundTypeMigrationPreview` feature flag
-Register the `AKS-OutBoundTypeMigrationPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+1. Register the `AKS-OutBoundTypeMigrationPreview` feature flag using the [`az feature register`][az-feature-register] command. It takes a few minutes for the status to show *Registered*.
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-OutBoundTypeMigrationPreview"
-```
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "AKS-OutBoundTypeMigrationPreview"
+ ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "AKS-OutBoundTypeMigrationPreview"
-```
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "AKS-OutBoundTypeMigrationPreview"
+ ```
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
-### Update a cluster to use a new outbound type
+### Update cluster to use a new outbound type
-Run the following command to change a cluster's outbound configuration:
+* Update the outbound configuration of your cluster using the [`az aks update`][az-aks-update] command.
-```azurecli-interactive
-az aks update -g <resourceGroup> -n <clusterName> --outbound-type <loadBalancer|managedNATGateway|userAssignedNATGateway>
-```
+ ```azurecli-interactive
+ az aks update -g <resourceGroup> -n <clusterName> --outbound-type <loadBalancer|managedNATGateway|userAssignedNATGateway>
+ ```
## Next steps

-- [Configure standard load balancing in an AKS cluster](load-balancer-standard.md)
-- [Configure NAT gateway in an AKS cluster](nat-gateway.md)
-- [Configure user-defined routing in an AKS cluster](egress-udr.md)
-- [NAT gateway documentation](./nat-gateway.md)
-- [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md).
-- [Manage route tables](../virtual-network/manage-route-table.md).
+* [Configure standard load balancing in an AKS cluster](load-balancer-standard.md)
+* [Configure NAT gateway in an AKS cluster](nat-gateway.md)
+* [Configure user-defined routing in an AKS cluster](egress-udr.md)
+* [NAT gateway documentation](./nat-gateway.md)
+* [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md)
+* [Manage route tables](../virtual-network/manage-route-table.md)
<!-- LINKS - internal -->
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[byo-route-table]: configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-show]: /cli/azure/feature#az_feature_show
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-aks-update]: /cli/azure/aks#az_aks_update
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
Title: Upgrade Azure Kubernetes Service (AKS) node images
description: Learn how to upgrade the images on AKS cluster nodes and node pools. Previously updated : 11/25/2020- Last updated : 03/28/2023
-# Azure Kubernetes Service (AKS) node image upgrade
+# Upgrade Azure Kubernetes Service (AKS) node images
-AKS supports upgrading the images on a node so you're up to date with the newest OS and runtime updates. AKS regularly provides new images with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest AKS features. Linux node images are updated weekly, and Windows node images updated monthly. Although customers will be notified of image upgrades via the AKS release notes, it might take up to a week for updates to be rolled out in all regions. This article shows you how to upgrade AKS cluster node images and how to update node pool images without upgrading the version of Kubernetes.
+Azure Kubernetes Service (AKS) regularly provides new node images, so it's beneficial to upgrade your node images frequently to use the latest AKS features. Linux node images are updated weekly, and Windows node images are updated monthly. Image upgrade announcements are included in the [AKS release notes](https://github.com/Azure/AKS/releases), and it can take up to a week for these updates to be rolled out across all regions. Node image upgrades can also be performed automatically and scheduled using planned maintenance. For more details, see [Automatically upgrade node images][auto-upgrade-node-image].
-For more information about the latest images provided by AKS, see the [AKS release notes](https://github.com/Azure/AKS/releases).
-
-For information on upgrading the Kubernetes version for your cluster, see [Upgrade an AKS cluster][upgrade-cluster].
-
-Node image upgrades can also be performed automatically, and scheduled by using planned maintenance. For more details, see [Automatically upgrade node images][auto-upgrade-node-image].
+This article shows you how to upgrade AKS cluster node images and how to update node pool images without upgrading the Kubernetes version. For information on upgrading the Kubernetes version for your cluster, see [Upgrade an AKS cluster][upgrade-cluster].
> [!NOTE]
> The AKS cluster must use virtual machine scale sets for the nodes.
-## Check if your node pool is on the latest node image
+## Check for available node image upgrades
-You can see what is the latest node image version available for your node pool with the following command:
+Check for available node image upgrades using the [`az aks nodepool get-upgrades`][az-aks-nodepool-get-upgrades] command.
```azurecli
az aks nodepool get-upgrades \
    --resource-group myResourceGroup
```
-In the output you can see the `latestNodeImageVersion` like on the example below:
+The output will show the `latestNodeImageVersion`, like in the following example:
```output
{
- "id": "/subscriptions/XXXX-XXX-XXX-XXX-XXXXX/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/agentPools/nodepool1/upgradeProfiles/default",
+ "id": "/subscriptions/XXXX-XXX-XXX-XXX-XXXXX/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/agentPools/mynodepool/upgradeProfiles/default",
  "kubernetesVersion": "1.17.11",
  "latestNodeImageVersion": "AKSUbuntu-1604-2020.10.28",
  "name": "default",
In the output you can see the `latestNodeImageVersion` like on the example below
}
```
-So for `nodepool1` the latest node image available is `AKSUbuntu-1604-2020.10.28`. You can now compare it with the current node image version in use by your node pool by running:
+The example output shows `AKSUbuntu-1604-2020.10.28` as the `latestNodeImageVersion`.
+
+Compare the latest version with your current node image version using the [`az aks nodepool show`][az-aks-nodepool-show] command.
```azurecli
az aks nodepool show \
    --query nodeImageVersion
```
-An example output would be:
+Your output should look similar to the following example:
```output
"AKSUbuntu-1604-2020.10.08"
```
-So in this example you could upgrade from the current `AKSUbuntu-1604-2020.10.08` image version to the latest version `AKSUbuntu-1604-2020.10.28`.
+In this example, there's an available node image upgrade from version `AKSUbuntu-1604-2020.10.08` to version `AKSUbuntu-1604-2020.10.28`.
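The comparison itself is just a string equality check. As a quick sketch (versions hardcoded here from the example output above; in a script you'd capture them from the output of the two commands):

```shell
# Sketch: decide whether a node image upgrade is available by comparing the
# two versions shown in the example output (hardcoded for illustration).
latest="AKSUbuntu-1604-2020.10.28"    # from az aks nodepool get-upgrades
current="AKSUbuntu-1604-2020.10.08"   # from az aks nodepool show
if [ "$latest" != "$current" ]; then
  msg="upgrade available: $current -> $latest"
else
  msg="node pool is already on the latest image"
fi
echo "$msg"
```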
-## Upgrade all nodes in all node pools
+## Upgrade all node images in all node pools
-Upgrading the node image is done with `az aks upgrade`. To upgrade the node image, use the following command:
+Upgrade the node image using the [`az aks upgrade`][az-aks-upgrade] command with the `--node-image-only` flag.
```azurecli
az aks upgrade \
    --node-image-only
```
-During the upgrade, check the status of the node images with the following `kubectl` command to get the labels and filter out the current node image information:
+You can check the status of the node images using the `kubectl get nodes` command.
>[!NOTE]
> This command may differ slightly depending on the shell you use. See the [Kubernetes JSONPath documentation][kubernetes-json-path] for more information on Windows/PowerShell environments.
During the upgrade, check the status of the node images with the following `kube
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
```
-When the upgrade is complete, use `az aks show` to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property.
+When the upgrade is complete, use the [`az aks show`][az-aks-show] command to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property.
```azurecli
az aks show \
## Upgrade a specific node pool
-Upgrading the image on a node pool is similar to upgrading the image on a cluster.
-
-To update the OS image of the node pool without doing a Kubernetes cluster upgrade, use the `--node-image-only` option in the following example:
+To update the OS image of a node pool without doing a Kubernetes cluster upgrade, use the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command with the `--node-image-only` flag.
```azurecli
az aks nodepool upgrade \
    --node-image-only
```
-During the upgrade, check the status of the node images with the following `kubectl` command to get the labels and filter out the current node image information:
+You can check the status of the node images with the `kubectl get nodes` command.
>[!NOTE]
> This command may differ slightly depending on the shell you use. See the [Kubernetes JSONPath documentation][kubernetes-json-path] for more information on Windows/PowerShell environments.
During the upgrade, check the status of the node images with the following `kube
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
```
-When the upgrade is complete, use `az aks nodepool show` to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property.
+When the upgrade is complete, use the [`az aks nodepool show`][az-aks-nodepool-show] command to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property.
```azurecli
az aks nodepool show \
To speed up the node image upgrade process, you can upgrade your node images using a customizable node surge value. By default, AKS uses one additional node to configure upgrades.
-If you'd like to increase the speed of upgrades, use the `--max-surge` value to configure the number of nodes to be used for upgrades so they complete faster. To learn more about the trade-offs of various `--max-surge` settings, see [Customize node surge upgrade][max-surge].
-
-The following command sets the max surge value for performing a node image upgrade:
+If you'd like to increase the speed of upgrades, use the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--max-surge` flag to configure the number of nodes used for upgrades. To learn more about the trade-offs of various `--max-surge` settings, see [Customize node surge upgrade][max-surge].
```azurecli
az aks nodepool update \
    --no-wait
```
-During the upgrade, check the status of the node images with the following `kubectl` command to get the labels and filter out the current node image information:
+You can check the status of the node images with the `kubectl get nodes` command.
```azurecli
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
az aks nodepool show \
- See the [AKS release notes](https://github.com/Azure/AKS/releases) for information about the latest node images.
- Learn how to upgrade the Kubernetes version with [Upgrade an AKS cluster][upgrade-cluster].
-- [Automatically apply cluster and node pool upgrades with GitHub Actions][github-schedule]
+- [Automatically apply cluster and node pool upgrades with GitHub Actions][github-schedule].
- Learn more about multiple node pools and how to upgrade node pools with [Create and manage multiple node pools][use-multiple-node-pools].

<!-- LINKS - external -->
az aks nodepool show \
[github-schedule]: node-upgrade-github-actions.md
[use-multiple-node-pools]: use-multiple-node-pools.md
[max-surge]: upgrade-cluster.md#customize-node-surge-upgrade
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
[auto-upgrade-node-image]: auto-upgrade-node-image.md
+[az-aks-nodepool-get-upgrades]: /cli/azure/aks/nodepool#az_aks_nodepool_get_upgrades
+[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show
+[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade
+[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update
+[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
+[az-aks-show]: /cli/azure/aks#az_aks_show
aks Spot Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spot-node-pool.md
Title: Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster description: Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster. Previously updated : 01/21/2022 Last updated : 03/29/2023 #Customer intent: As a cluster operator or developer, I want to learn how to add an Azure Spot node pool to an AKS Cluster. # Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster
-A Spot node pool is a node pool backed by an [Azure Spot Virtual machine scale set][vmss-spot]. Using Spot VMs for nodes with your AKS cluster allows you to take advantage of unutilized capacity in Azure at a significant cost savings. The amount of available unutilized capacity will vary based on many factors, including node size, region, and time of day.
+A Spot node pool is a node pool backed by an [Azure Spot Virtual machine scale set][vmss-spot]. With Spot VMs in your AKS cluster, you can take advantage of unutilized Azure capacity with significant cost savings. The amount of available unutilized capacity varies based on many factors, such as node size, region, and time of day.
-When you deploy a Spot node pool, Azure will allocate the Spot nodes if there's capacity available. There's no SLA for the Spot nodes. A Spot scale set that backs the Spot node pool is deployed in a single fault domain and offers no high availability guarantees. At any time when Azure needs the capacity back, the Azure infrastructure will evict Spot nodes.
+When you deploy a Spot node pool, Azure allocates the Spot nodes if there's capacity available and deploys a Spot scale set that backs the Spot node pool in a single fault domain. There's no SLA for the Spot nodes and no high availability guarantees. If Azure needs capacity back, the Azure infrastructure will evict the Spot nodes.
Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates to schedule on a Spot node pool.

In this article, you add a secondary Spot node pool to an existing Azure Kubernetes Service (AKS) cluster.
-This article assumes a basic understanding of Kubernetes and Azure Load Balancer concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- ## Before you begin
-When you create a cluster to use a Spot node pool, that cluster must use Virtual Machine Scale Sets for node pools and the *Standard* SKU load balancer. You must also add another node pool after you create your cluster, which is covered in a later step.
-
-This article requires that you're running the Azure CLI version 2.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* This article assumes a basic understanding of Kubernetes and Azure Load Balancer concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* When you create a cluster to use a Spot node pool, the cluster must use Virtual Machine Scale Sets for node pools and the *Standard* SKU load balancer. You must also add another node pool after you create your cluster, which is covered later in this article.
+* This article requires that you're running the Azure CLI version 2.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
### Limitations

The following limitations apply when you create and manage AKS clusters with a Spot node pool:
-* A Spot node pool can't be the cluster's default node pool. A Spot node pool can only be used for a secondary pool.
+* A Spot node pool can't be a cluster's default node pool; it can only be used as a secondary pool.
* The control plane and node pools can't be upgraded at the same time. You must upgrade them separately or remove the Spot node pool to upgrade the control plane and remaining node pools at the same time. * A Spot node pool must use Virtual Machine Scale Sets.
-* You can't change ScaleSetPriority or SpotMaxPrice after creation.
-* When setting SpotMaxPrice, the value must be -1 or a positive value with up to five decimal places.
-* A Spot node pool will have the label *kubernetes.azure.com/scalesetpriority:spot*, the taint *kubernetes.azure.com/scalesetpriority=spot:NoSchedule*, and system pods will have anti-affinity.
+* You can't change `ScaleSetPriority` or `SpotMaxPrice` after creation.
+* When setting `SpotMaxPrice`, the value must be *-1* or a *positive value with up to five decimal places*.
+* A Spot node pool will have the `kubernetes.azure.com/scalesetpriority:spot` label, the taint `kubernetes.azure.com/scalesetpriority=spot:NoSchedule`, and the system pods will have anti-affinity.
* You must add a [corresponding toleration][spot-toleration] and affinity to schedule workloads on a Spot node pool. ## Add a Spot node pool to an AKS cluster
-You must add a Spot node pool to an existing cluster that has multiple node pools enabled. For more details on creating an AKS cluster with multiple node pools, see [use multiple node pools][use-multiple-node-pools].
-
-Create a node pool using the [az aks nodepool add][az-aks-nodepool-add] command:
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name spotnodepool \
- --priority Spot \
- --eviction-policy Delete \
- --spot-max-price -1 \
- --enable-cluster-autoscaler \
- --min-count 1 \
- --max-count 3 \
- --no-wait
-```
+You can only add a Spot node pool to an existing cluster that has multiple node pools enabled. When you create an AKS cluster with multiple node pools enabled, you create a node pool with a `priority` of `Regular` by default. To add a Spot node pool, you must specify `Spot` as the value for `priority`. For more details on creating an AKS cluster with multiple node pools, see [use multiple node pools][use-multiple-node-pools].
+
+* Create a node pool with a `priority` of `Spot` using the [az aks nodepool add][az-aks-nodepool-add] command.
-By default, you create a node pool with a *priority* of *Regular* in your AKS cluster when you create a cluster with multiple node pools. The above command adds an auxiliary node pool to an existing AKS cluster with a *priority* of *Spot*. The *priority* of *Spot* makes the node pool a Spot node pool. The *eviction-policy* parameter is set to *Delete* in the above example, which is the default value. When you set the [eviction policy][eviction-policy] to *Delete*, nodes in the underlying scale set of the node pool are deleted when they're evicted. You can also set the eviction policy to *Deallocate*. When you set the eviction policy to *Deallocate*, nodes in the underlying scale set are set to the stopped-deallocated state upon eviction. Nodes in the stopped-deallocated state count against your compute quota and can cause issues with cluster scaling or upgrading. The *priority* and *eviction-policy* values can only be set during node pool creation. Those values can't be updated later.
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name spotnodepool \
+ --priority Spot \
+ --eviction-policy Delete \
+ --spot-max-price -1 \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3 \
+ --no-wait
+ ```
-The command also enables the [cluster autoscaler][cluster-autoscaler], which is recommended to use with Spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales up and scales down the number of nodes in the node pool. For Spot node pools, the cluster autoscaler will scale up the number of nodes after an eviction if more nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the `maxCount` value associated with the cluster autoscaler. If you don't use a cluster autoscaler, upon eviction, the Spot pool will eventually decrease to zero and require a manual operation to receive any additional Spot nodes.
+In the previous command, the `priority` of `Spot` makes the node pool a Spot node pool. The `eviction-policy` parameter is set to `Delete`, which is the default value. When you set the [eviction policy][eviction-policy] to `Delete`, nodes in the underlying scale set of the node pool are deleted when they're evicted.
+
+You can also set the eviction policy to `Deallocate`, which means that the nodes in the underlying scale set are set to the *stopped-deallocated* state upon eviction. Nodes in the *stopped-deallocated* state count against your compute quota and can cause issues with cluster scaling or upgrading. The `priority` and `eviction-policy` values can only be set during node pool creation. Those values can't be updated later.
+
+The previous command also enables the [cluster autoscaler][cluster-autoscaler], which we recommend using with Spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales the number of nodes up and down. For Spot node pools, the cluster autoscaler will scale up the number of nodes after an eviction if more nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the `maxCount` value associated with the cluster autoscaler. If you don't use a cluster autoscaler, upon eviction, the Spot pool will eventually decrease to *0* and require a manual operation to receive any additional Spot nodes.
> [!IMPORTANT]
-> Only schedule workloads on Spot node pools that can handle interruptions, such as batch processing jobs and testing environments. It is recommended that you set up [taints and tolerations][taints-tolerations] on your Spot node pool to ensure that only workloads that can handle node evictions are scheduled on a Spot node pool. For example, the above command by default adds a taint of *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* so only pods with a corresponding toleration are scheduled on this node.
+> Only schedule workloads on Spot node pools that can handle interruptions, such as batch processing jobs and testing environments. We recommend you set up [taints and tolerations][taints-tolerations] on your Spot node pool to ensure that only workloads that can handle node evictions are scheduled on a Spot node pool. For example, the above command adds a taint of `kubernetes.azure.com/scalesetpriority=spot:NoSchedule`, so only pods with a corresponding toleration are scheduled on this node.
-## Verify the Spot node pool
+### Verify the Spot node pool
-To verify your node pool has been added as a Spot node pool:
+* Verify your node pool has been added using the [`az aks nodepool show`][az-aks-nodepool-show] command and confirm that the `scaleSetPriority` is `Spot`.
-```azurecli
-az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name spotnodepool
-```
+ ```azurecli
+ az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name spotnodepool
+ ```
-Confirm *scaleSetPriority* is *Spot*.
+### Schedule a pod to run on the Spot node
-To schedule a pod to run on a Spot node, add a toleration and node affinity that corresponds to the taint applied to your Spot node. The following example shows a portion of a yaml file that defines a toleration that corresponds to the *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* taint and a node affinity that corresponds to the *kubernetes.azure.com/scalesetpriority=spot* label used in the previous step.
+To schedule a pod to run on a Spot node, you can add a toleration and node affinity that corresponds to the taint applied to your Spot node.
+
+The following example shows a portion of a YAML file that defines a toleration corresponding to the `kubernetes.azure.com/scalesetpriority=spot:NoSchedule` taint and a node affinity corresponding to the `kubernetes.azure.com/scalesetpriority=spot` label used in the previous step.
```yaml
spec:
...
```
-When a pod with this toleration and node affinity is deployed, Kubernetes will successfully schedule the pod on the nodes with the taint and label applied.
+When you deploy a pod with this toleration and node affinity, Kubernetes will successfully schedule the pod on the nodes with the taint and label applied.
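For reference, a complete pod spec fragment combining such a toleration and node affinity might look like the following sketch. The container name and image are placeholders; only the taint key `kubernetes.azure.com/scalesetpriority` and value `spot` come from the article.

```yaml
spec:
  containers:
  - name: myapp                  # placeholder container name
    image: myregistry/myapp:1.0  # placeholder image
  tolerations:
  # Tolerate the taint applied to Spot nodes
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      # Require scheduling onto nodes carrying the Spot label
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "kubernetes.azure.com/scalesetpriority"
            operator: In
            values:
            - "spot"
```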
## Upgrade a Spot node pool
-Upgrading Spot node pools was previously unsupported, but is now an available operation. When Upgrading a Spot node pool, AKS will internally issue a cordon and an eviction notice, but no drain is applied. There are no surge nodes available for Spot node pool upgrades. Outside of these changes, behavior when upgrading Spot node pools is consistent with other node pool types.
+When you upgrade a Spot node pool, AKS internally issues a cordon and an eviction notice, but no drain is applied. There are no surge nodes available for Spot node pool upgrades. Outside of these changes, the behavior when upgrading Spot node pools is consistent with that of other node pool types.
-For more information on upgrading, see [Upgrade an AKS cluster][upgrade-cluster] and the Azure CLI command [az aks upgrade][az-aks-upgrade].
+For more information on upgrading, see [Upgrade an AKS cluster][upgrade-cluster].
## Max price for a Spot pool
-[Pricing for Spot instances is variable][pricing-spot], based on region and SKU. For more information, see pricing for [Linux][pricing-linux] and [Windows][pricing-windows].
+[Pricing for Spot instances is variable][pricing-spot], based on region and SKU. For more information, see pricing information for [Linux][pricing-linux] and [Windows][pricing-windows].
-With variable pricing, you have option to set a max price, in US dollars (USD), using up to five decimal places. For example, the value *0.98765* would be a max price of $0.98765 USD per hour. If you set the max price to *-1*, the instance won't be evicted based on price. The price for the instance will be the current price for Spot or the price for a standard instance, whichever is less, as long as there's capacity and quota available.
+With variable pricing, you have the option to set a max price, in US dollars (USD), using up to five decimal places. For example, the value *0.98765* would be a max price of *$0.98765 USD per hour*. If you set the max price to *-1*, the instance won't be evicted based on price. As long as there's capacity and quota available, the price for the instance will be the lower of the current price for a Spot instance or the price for a standard instance.
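To illustrate the format rule, here's a hypothetical helper (not part of the Azure CLI) that accepts *-1* or a number with at most five decimal places. It checks only the textual format, so it doesn't reject a literal zero:

```shell
# Hypothetical format check for a spot max price value: valid values are -1
# or a positive number with at most five decimal places. Format-only sketch;
# it does not reject the non-positive value "0".
is_valid_spot_max_price() {
  case "$1" in
    -1) return 0 ;;
  esac
  printf '%s\n' "$1" | grep -Eq '^[0-9]+(\.[0-9]{1,5})?$'
}

is_valid_spot_max_price "0.98765" && echo "0.98765 is a valid max price"
is_valid_spot_max_price "0.123456" || echo "0.123456 has too many decimal places"
```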
## Next steps

In this article, you learned how to add a Spot node pool to an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
-<!-- LINKS - External -->
-[kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
- <!-- LINKS - Internal -->
-[aks-support-policies]: support-policies.md
-[aks-faq]: faq.md
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add
+[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show
[cluster-autoscaler]: cluster-autoscaler.md
[eviction-policy]: ../virtual-machine-scale-sets/use-spot.md#eviction-policy
[kubernetes-concepts]: concepts-clusters-workloads.md
In this article, you learned how to add a Spot node pool to an AKS cluster. For
[use-multiple-node-pools]: use-multiple-node-pools.md
[vmss-spot]: ../virtual-machine-scale-sets/use-spot.md
[upgrade-cluster]: upgrade-cluster.md
-[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-ultra-disks.md
Title: Enable Ultra Disk support on Azure Kubernetes Service (AKS) description: Learn how to enable and configure Ultra Disks in an Azure Kubernetes Service (AKS) cluster Previously updated : 1/9/2022 Last updated : 03/28/2023 # Use Azure ultra disks on Azure Kubernetes Service
-[Azure ultra disks](../virtual-machines/disks-enable-ultra-ssd.md) offer high throughput, high IOPS, and consistent low latency disk storage for your stateful applications. One major benefit of ultra disks is the ability to dynamically change the performance of the SSD along with your workloads without the need to restart your agent nodes. Ultra disks are suited for data-intensive workloads.
+[Azure ultra disks](../virtual-machines/disks-enable-ultra-ssd.md) offer high throughput, high IOPS, and consistent low latency disk storage for your stateful applications. With ultra disks, you can dynamically change the performance of the SSD along with your workloads without the need to restart your agent nodes. Ultra disks are suited for data-intensive workloads.
## Before you begin
-This feature can only be set at cluster creation or node pool creation time.
+This feature can only be set at cluster or node pool creation time.
> [!IMPORTANT]
-> Azure ultra disks require nodepools deployed in availability zones and regions that support these disks as well as only specific VM series. See the [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations).
+> Azure ultra disks require node pools deployed in availability zones and regions that support these disks and specific VM series. For more information, see the [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations).
### Limitations
-- Ultra disks can't be used with some features and functionality, such as availability sets or Azure Disk Encryption. Review [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations) before proceeding.
-- The supported size range for ultra disks is between 100 and 1500.
+- Ultra disks can't be used with certain features, such as availability sets or Azure Disk encryption. Review [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations) before proceeding.
+- The supported size range for ultra disks is between *100* and *1500*.
-## Create a new cluster that can use ultra disks
+## Create a cluster that can use ultra disks
-Create an AKS cluster that is able to leverage Azure ultra Disks by using the following CLI commands. Use the `--enable-ultra-ssd` flag to set the `EnableUltraSSD` feature.
+Create an AKS cluster that can use ultra disks by enabling the `EnableUltraSSD` feature.
-Create an Azure resource group:
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
-```azurecli-interactive
-az group create --name myResourceGroup --location westus2
-```
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location westus2
+ ```
-Create an AKS-managed Azure AD cluster with support for ultra disks.
+2. Create an AKS-managed Azure AD cluster with support for ultra disks using the [`az aks create`][az-aks-create] command with the `--enable-ultra-ssd` flag.
-```azurecli-interactive
-az aks create -g MyResourceGroup -n myAKSCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
-```
-
-If you want to create clusters without ultra disk support, you can do so by omitting the `--enable-ultra-ssd` parameter.
+ ```azurecli-interactive
+ az aks create -g MyResourceGroup -n myAKSCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
+ ```
## Enable ultra disks on an existing cluster
-You can enable ultra disks on existing clusters by adding a new node pool to your cluster that support ultra disks. Configure a new node pool to use ultra disks by using the `--enable-ultra-ssd` flag.
+You can enable ultra disks on existing clusters by adding a new node pool that supports ultra disks to your cluster.
-```azurecli
-az aks nodepool add --name ultradisk --cluster-name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
-```
+- Configure a new node pool to use ultra disks using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-ultra-ssd` flag.
-If you want to create new node pools without support for ultra disks, you can do so by omitting the `--enable-ultra-ssd` parameter.
+ ```azurecli
+ az aks nodepool add --name ultradisk --cluster-name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
+ ```
## Use ultra disks dynamically with a storage class
-To use ultra disks in our deployments or stateful sets you can use a [storage class for dynamic provisioning][azure-disk-volume].
+To use ultra disks in your deployments or stateful sets, you can use a [storage class for dynamic provisioning][azure-disk-volume].
### Create the storage class
-A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
+A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes]. In this case, we'll create a storage class that references ultra disks.
-In this case, we'll create a storage class that references ultra disks. Create a file named `azure-ultra-disk-sc.yaml`, and copy in the following manifest.
+1. Create a file named `azure-ultra-disk-sc.yaml` and copy in the following manifest:
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: ultra-disk-sc
-provisioner: disk.csi.azure.com # replace with "kubernetes.io/azure-disk" if aks version is less than 1.21
-volumeBindingMode: WaitForFirstConsumer # optional, but recommended if you want to wait until the pod that will use this disk is created
-parameters:
- skuname: UltraSSD_LRS
- kind: managed
- cachingMode: None
- diskIopsReadWrite: "2000" # minimum value: 2 IOPS/GiB
- diskMbpsReadWrite: "320" # minimum value: 0.032/GiB
-```
+ ```yaml
+ kind: StorageClass
+ apiVersion: storage.k8s.io/v1
+ metadata:
+ name: ultra-disk-sc
+ provisioner: disk.csi.azure.com # replace with "kubernetes.io/azure-disk" if aks version is less than 1.21
+ volumeBindingMode: WaitForFirstConsumer # optional, but recommended if you want to wait until the pod that will use this disk is created
+ parameters:
+ skuname: UltraSSD_LRS
+ kind: managed
+ cachingMode: None
+ diskIopsReadWrite: "2000" # minimum value: 2 IOPS/GiB
+ diskMbpsReadWrite: "320" # minimum value: 0.032/GiB
+ ```
-Create the storage class with the [kubectl apply][kubectl-apply] command and specify your *azure-ultra-disk-sc.yaml* file:
+2. Create the storage class using the [`kubectl apply`][kubectl-apply] command and specify your `azure-ultra-disk-sc.yaml` file.
-```console
-kubectl apply -f azure-ultra-disk-sc.yaml
-```
+ ```console
+ kubectl apply -f azure-ultra-disk-sc.yaml
+ ```
-The output from the command resembles the following example:
+ Your output should resemble the following example output:
-```console
-storageclass.storage.k8s.io/ultra-disk-sc created
-```
+ ```console
+ storageclass.storage.k8s.io/ultra-disk-sc created
+ ```
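As a quick sanity check, the per-GiB minimums noted in the manifest comments can be evaluated with simple arithmetic (illustrative only; this isn't an Azure command, and the 1000 GiB size is just an example matching the claim created later in this article):

```bash
# Illustrative arithmetic only: the per-GiB performance floors noted in the
# storage class manifest comments, evaluated for a 1000 GiB ultra disk.
SIZE_GIB=1000
echo "Minimum diskIopsReadWrite: $((SIZE_GIB * 2))"   # 2 IOPS per GiB
awk -v s="$SIZE_GIB" 'BEGIN { printf "Minimum diskMbpsReadWrite: %d\n", s * 0.032 }'
```

For this size, the manifest's `diskIopsReadWrite: "2000"` sits exactly at the floor, while `diskMbpsReadWrite: "320"` is well above it.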
## Create a persistent volume claim

A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use the previously created storage class to create an ultra disk.
-Create a file named `azure-ultra-disk-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `ultra-disk` that is *1000 GB* in size with *ReadWriteOnce* access. The *ultra-disk-sc* storage class is specified as the storage class.
+1. Create a file named `azure-ultra-disk-pvc.yaml` and copy in the following manifest:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: ultra-disk
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: ultra-disk-sc
+ resources:
+ requests:
+ storage: 1000Gi
+ ```
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: ultra-disk
-spec:
- accessModes:
- - ReadWriteOnce
- storageClassName: ultra-disk-sc
- resources:
- requests:
- storage: 1000Gi
-```
+ The claim requests a disk named `ultra-disk` that is *1000 GiB* in size with *ReadWriteOnce* access. The *ultra-disk-sc* storage class is specified as the storage class.
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-ultra-disk-pvc.yaml* file:
+2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command and specify your `azure-ultra-disk-pvc.yaml` file.
-```console
-kubectl apply -f azure-ultra-disk-pvc.yaml
-```
+ ```console
+ kubectl apply -f azure-ultra-disk-pvc.yaml
+ ```
-The output from the command resembles the following example:
+ Your output should resemble the following example output:
-```console
-persistentvolumeclaim/ultra-disk created
-```
+ ```console
+ persistentvolumeclaim/ultra-disk created
+ ```
## Use the persistent volume

Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named *ultra-disk* to mount the Azure disk at the path `/mnt/azure`.
-Create a file named `nginx-ultra.yaml`, and copy in the following manifest.
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: nginx-ultra
-spec:
- containers:
- - name: nginx-ultra
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: ultra-disk
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
-
-```console
-kubectl apply -f nginx-ultra.yaml
-```
-
-The output from the command resembles the following example:
-
-```console
-pod/nginx-ultra created
-```
-
-You now have a running pod with your Azure disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod via `kubectl describe pod nginx-ultra`, as shown in the following condensed example:
-
-```console
-kubectl describe pod nginx-ultra
-
-[...]
-Volumes:
- volume:
- Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
- ClaimName: azure-managed-disk
- ReadOnly: false
- default-token-smm2n:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-smm2n
- Optional: false
-[...]
-Events:
- Type Reason Age From Message
- - - - -
- Normal Scheduled 2m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
- Normal SuccessfulMountVolume 2m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-smm2n"
- Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
-[...]
-```
+1. Create a file named `nginx-ultra.yaml` and copy in the following manifest:
+
+ ```yaml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: nginx-ultra
+ spec:
+ containers:
+ - name: nginx-ultra
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: ultra-disk
+ ```
+
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify your `nginx-ultra.yaml` file.
+
+ ```console
+ kubectl apply -f nginx-ultra.yaml
+ ```
+
+ Your output should resemble the following example output:
+
+ ```console
+ pod/nginx-ultra created
+ ```
+
+ You now have a running pod with your Azure disk mounted in the `/mnt/azure` directory.
+
+3. See your configuration details using the `kubectl describe pod` command and specify the name of your pod.
+
+ ```console
+ kubectl describe pod nginx-ultra
+ ```
+
+ Your output should resemble the following example output:
+
+ ```console
+ [...]
+ Volumes:
+ volume:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: azure-managed-disk
+ ReadOnly: false
+ default-token-smm2n:
+ Type: Secret (a volume populated by a Secret)
+ SecretName: default-token-smm2n
+ Optional: false
+ [...]
+ Events:
+ Type Reason Age From Message
+  ----  ------  ----  ----  -------
+ Normal Scheduled 2m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
+ Normal SuccessfulMountVolume 2m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-smm2n"
+ Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
+ [...]
+ ```
## Using Azure tags
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+For more details on using Azure tags, see [Use Azure tags in AKS][use-tags].
## Next steps

- For more about ultra disks, see [Using Azure ultra disks](../virtual-machines/disks-enable-ultra-ssd.md).
-- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service (AKS)][operator-best-practices-storage]
+- For more about storage best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
<!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
<!-- LINKS - internal -->
[azure-disk-volume]: azure-disk-csi.md
-[azure-files-pvc]: azure-files-csi.md
-[premium-storage]: ../virtual-machines/disks-types.md
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
-[install-azure-cli]: /cli/azure/install-azure-cli
[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[storage-class-concepts]: concepts-storage.md#storage-classes
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
[use-tags]: use-tags.md
+[az-group-create]: /cli/azure/group#az_group_create
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
api-management Api Management Page Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-page-templates.md
Last updated 11/04/2019
# Page templates in Azure API Management
-Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](http://dotliquidmarkup.org/) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
+Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](https://github.com/dotliquid) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
The templates in this section allow you to customize the content of the sign in, sign up, and page not found pages in the developer portal.
api-management Api Management Product Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-product-templates.md
# Product templates in Azure API Management
-Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](http://dotliquidmarkup.org/) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
+Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](https://github.com/dotliquid) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
The templates in this section allow you to customize the content of the product pages in the developer portal.
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
Depending on the DDoS Protection plan you use, enable DDoS protection on the vir
### Enable DDoS protection on the API Management public IP address
-If your plan uses the IP DDoS Protection SKU, see [Enable DDoS IP Protection Preview for a public IP address](../ddos-protection/manage-ddos-protection-powershell-ip.md#disable-ddos-ip-protection-preview-for-an-existing-public-ip-address).
+If your plan uses the IP DDoS Protection SKU, see [Enable DDoS IP Protection for a public IP address](../ddos-protection/manage-ddos-protection-powershell-ip.md#disable-ddos-ip-protection-for-an-existing-public-ip-address).
## Next steps
app-service App Service Configuration References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configuration-references.md
This topic shows you how to work with configuration data in your App Service or
To get started with using App Configuration references in App Service, you'll first need an App Configuration store, and provide your app permission to access the configuration key-values in the store.
-1. Create an App Configuration store by following the [App Configuration quickstart](../azure-app-configuration/quickstart-dotnet-core-app.md#create-an-app-configuration-store).
+1. Create an App Configuration store by following the [App Configuration quickstart](../azure-app-configuration/quickstart-azure-app-configuration-create.md).
> [!NOTE]
> App Configuration references do not yet support network-restricted configuration stores.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
*Azure App Service* is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and [Linux](#app-service-on-linux)-based environments.
-App Service not only adds the power of Microsoft Azure to your application, such as security, load balancing, autoscaling, and automated management. You can also take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, package management, staging environments, custom domain, and TLS/SSL certificates.
+App Service adds the power of Microsoft Azure to your application, including security, load balancing, autoscaling, and automated management. Additionally, you can take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, package management, staging environments, custom domains, and TLS/SSL certificates.
With App Service, you pay for the Azure compute resources you use. The compute resources you use are determined by the *App Service plan* that you run your apps on. For more information, see [Azure App Service plans overview](overview-hosting-plans.md).
app-service Quickstart Html Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html-uiex.md
The page is running as an Azure App Service web app.
## 4. Update and redeploy the app
-In the Cloud Shell, **type** `nano https://docsupdatetracker.net/index.html` to open the nano text editor.
+In the Cloud Shell, use `sed` to change "Azure App Service - Sample Static HTML Site" to "Azure App Service".
-In the `<h1>` heading tag, change "Azure App Service - Sample Static HTML Site" to "Azure App Service".
-
-![Nano https://docsupdatetracker.net/index.html](media/quickstart-html/nano-index-html.png)
-
-**Save** your changes by using command `^O`.
-
-**Exit** nano by using command `^X`.
+```bash
+sed -i 's/Azure App Service - Sample Static HTML Site/Azure App Service/' https://docsupdatetracker.net/index.html
+```
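If you'd like to preview the effect of the substitution before editing the deployed file, you can try it on a scratch copy first (the `/tmp` path below is illustrative only):

```bash
# Preview the substitution on a scratch copy; the /tmp path is illustrative.
printf '<h1>Azure App Service - Sample Static HTML Site</h1>\n' > /tmp/index-demo.html
sed -i 's/Azure App Service - Sample Static HTML Site/Azure App Service/' /tmp/index-demo.html
cat /tmp/index-demo.html   # <h1>Azure App Service</h1>
```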
Redeploy the app with the `az webapp up` command.
app-service Quickstart Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html.md
The page is running as an Azure App Service web app.
## Update and redeploy the app
-In the Cloud Shell, type `nano https://docsupdatetracker.net/index.html` to open the nano text editor. In the `<h1>` heading tag, change "Azure App Service - Sample Static HTML Site" to "Azure App Service", as shown below.
+In the Cloud Shell, use `sed` to change "Azure App Service - Sample Static HTML Site" to "Azure App Service".
-![Nano https://docsupdatetracker.net/index.html](media/quickstart-html/nano-index-html.png)
-
-Save your changes and exit nano. Use the command `^O` to save and `^X` to exit.
+```bash
+sed -i 's/Azure App Service - Sample Static HTML Site/Azure App Service/' https://docsupdatetracker.net/index.html
+```
You'll now redeploy the app with the same `az webapp up` command.
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-ruby.md
The Ruby sample code is running in an Azure App Service Linux web app.
### [Azure CLI](#tab/cli)
-1. From Azure Cloud Shell, launch a text editor - such as `nano` or `vim` - to edit the file in `app/controllers/application_controller.rb`.
-
- ```bash
- nano app/controllers/application_controller.rb
- ```
-
-1. Edit the *ApplicationController* class so that it shows "Hello world from Azure App Service on Linux!" instead of "Hello from Azure App Service on Linux!".
+1. From Azure Cloud Shell, launch a text editor and open the file `app/controllers/application_controller.rb`. Edit the *ApplicationController* class so that it shows "Hello world from Azure App Service on Linux!" instead of "Hello from Azure App Service on Linux!".
```ruby
class ApplicationController < ActionController::Base
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
In this section, connectivity to the Azure database in your code follows the `De
1. Instantiate a `DefaultAzureCredential` from the Azure Identity client library. If you're using a user-assigned identity, specify the client ID of the identity.
1. Get an access token for the resource URI respective to the database type.
 - For Azure SQL Database: `https://database.windows.net/.default`
- - For Azure Database for MySQL: `https://ossrdbms-aad.database.windows.net`
- - For Azure Database for PostgreSQL: `https://ossrdbms-aad.database.windows.net`
+ - For Azure Database for MySQL: `https://ossrdbms-aad.database.windows.net/.default`
+ - For Azure Database for PostgreSQL: `https://ossrdbms-aad.database.windows.net/.default`
1. Add the token to your connection string.
1. Open the connection.
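As a rough sketch of the last two steps: the acquired token is typically spliced into the connection string as the password. Everything below is a placeholder for illustration, not a real token, server, or user:

```bash
# Hypothetical sketch: pass an access token as the password in a
# PostgreSQL-style connection string. All values are placeholders.
ACCESS_TOKEN="<access-token-from-previous-step>"
CONN="host=<server>.postgres.database.azure.com dbname=<database> user=<aad-user> password=${ACCESS_TOKEN} sslmode=require"
echo "${CONN}"
```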
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
//var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity

// Get token for Azure Database for MySQL
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net" }));
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net/.default" }));
// Set MySQL user depending on the environment
string user;
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
//var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity

// Get token for Azure Database for PostgreSQL
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net" }));
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net/.default" }));
// Check if in Azure and set user accordingly
string postgresqlUser;
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
//var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity

// Get token for Azure Database for MySQL
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net" }));
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net/.default" }));
// Set MySQL user depending on the environment
string user;
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
//var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity

// Get token for Azure Database for PostgreSQL
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net" }));
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net/.default" }));
// Check if in Azure and set user accordingly
string postgresqlUser;
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
//const credential = new DefaultAzureCredential({ managedIdentityClientId: '<client-id-of-user-assigned-identity>' }); // user-assigned identity

// Get token for Azure Database for MySQL
- const accessToken = await credential.getToken("https://ossrdbms-aad.database.windows.net");
+ const accessToken = await credential.getToken("https://ossrdbms-aad.database.windows.net/.default");
// Set MySQL user depending on the environment
if(process.env.IDENTITY_ENDPOINT) {
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
//const credential = new DefaultAzureCredential({ managedIdentityClientId: '<client-id-of-user-assigned-identity>' }); // user-assigned identity

// Get token for Azure Database for PostgreSQL
- const accessToken = await credential.getToken("https://ossrdbms-aad.database.windows.net");
+ const accessToken = await credential.getToken("https://ossrdbms-aad.database.windows.net/.default");
// Set PostgreSQL user depending on the environment
if(process.env.IDENTITY_ENDPOINT) {
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
#credential = DefaultAzureCredential(managed_identity_client_id='<client-id-of-user-assigned-identity>') # user-assigned identity

# Get token for Azure Database for MySQL
- token = credential.get_token("https://ossrdbms-aad.database.windows.net")
+ token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")
# Set MySQL user depending on the environment
if 'IDENTITY_ENDPOINT' in os.environ:
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
#credential = DefaultAzureCredential(managed_identity_client_id='<client-id-of-user-assigned-identity>') # user-assigned identity

# Get token for Azure Database for PostgreSQL
- token = credential.get_token("https://ossrdbms-aad.database.windows.net")
+ token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")
# Set PostgreSQL user depending on the environment
if 'IDENTITY_ENDPOINT' in os.environ:
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
// Get the token
TokenRequestContext request = new TokenRequestContext();
- request.addScopes("https://ossrdbms-aad.database.windows.net");
+ request.addScopes("https://ossrdbms-aad.database.windows.net/.default");
AccessToken token=creds.getToken(request).block();

// Set MySQL user depending on the environment
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
// Get the token
TokenRequestContext request = new TokenRequestContext();
- request.addScopes("https://ossrdbms-aad.database.windows.net");
+ request.addScopes("https://ossrdbms-aad.database.windows.net/.default");
AccessToken token=creds.getToken(request).block();

// Set PostgreSQL user depending on the environment
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
The following changes have been made for Redis (to be used in a later section):
* [Adds Redis Object Cache 1.3.8 WordPress plugin.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L74)
* [Uses App Setting for Redis host name in WordPress wp-config.php.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L162)
-To use the custom image, you'll update your docker-compose-wordpress.yml file. In Cloud Shell, type `nano docker-compose-wordpress.yml` to open the nano text editor. Change the `image: wordpress` to use `image: mcr.microsoft.com/azuredocs/multicontainerwordpress`. You no longer need the database container. Remove the `db`, `environment`, `depends_on`, and `volumes` section from the configuration file. Your file should look like the following code:
+To use the custom image, you'll update your docker-compose-wordpress.yml file. In Cloud Shell, open a text editor and change the `image: wordpress` to use `image: mcr.microsoft.com/azuredocs/multicontainerwordpress`. You no longer need the database container. Remove the `db`, `environment`, `depends_on`, and `volumes` sections from the configuration file. Your file should look like the following code:
```yaml version: '3.3'
    restart: always
```
-Save your changes and exit nano. Use the command `^O` to save and `^X` to exit.
-
### Update app with new configuration

In Cloud Shell, reconfigure your multi-container [web app](overview.md) with the [az webapp config container set](/cli/azure/webapp/config/container#az-webapp-config-container-set) command. Don't forget to replace _\<app-name>_ with the name of the web app you created earlier.
When the app setting has been created, Cloud Shell shows information similar to
### Modify configuration file
-In the Cloud Shell, type `nano docker-compose-wordpress.yml` to open the nano text editor.
+In the Cloud Shell, open the file `docker-compose-wordpress.yml` in a text editor.
The `volumes` option maps the file system to a directory within the container. `${WEBAPP_STORAGE_HOME}` is an environment variable in App Service that is mapped to persistent storage for your app. You'll use this environment variable in the volumes option so that the WordPress files are installed into persistent storage instead of the container. Make the following modifications to the file:
application-gateway End To End Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/end-to-end-ssl-portal.md
Previously updated : 11/14/2019 Last updated : 03/28/2023
To configure end-to-end TLS with an application gateway, you need a certificate
For end-to-end TLS encryption, the right backend servers must be allowed in the application gateway. To allow this access, upload the public certificate of the backend servers, also known as Authentication Certificates (v1) or Trusted Root Certificates (v2), to the application gateway. Adding the certificate ensures that the application gateway communicates only with known backend instances. This configuration further secures end-to-end communication.
+> [!IMPORTANT]
+> If you receive an error message for the backend server certificate, verify that the frontend certificate Common Name (CN) matches the backend certificate CN. For more information, see [Trusted root certificate mismatch](./application-gateway-backend-health-troubleshooting.md#trusted-root-certificate-mismatch).
+
To learn more, see [Overview of TLS termination and end to end TLS with Application Gateway](./ssl-overview.md).

## Create a new application gateway with end-to-end TLS
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
```

1. Edit helm-config.yaml and fill in the values for `appgw` and `armAuth`.
- ```bash
- nano helm-config.yaml
- ```
-
+
> [!NOTE]
> The `<identity-resource-id>` and `<identity-client-id>` are the properties of the Azure AD Identity you set up in the previous section. You can retrieve this information by running the following command: `az identity show -g <resourcegroup> -n <identity-name>`, where `<resourcegroup>` is the resource group in which the top level AKS cluster object, Application Gateway, and Managed Identity are deployed.
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md
Previously updated : 09/13/2022 Last updated : 11/15/2022 #Customer intent: As an IT administrator, I want to learn about Azure Application Gateways and what I can use them for.
application-gateway Ssl Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-certificate-management.md
There are two primary scenarios when deleting a certificate from portal:
| Frontend IP | The frontend IP of the gateway gets updated to reflect the new state. |

### Bulk update
-The bulk operation feature is helpful for large gateways having multiple SSL certificates for separate listeners. Similar to individual certificate management, this option allows you to change the type from "Uploaded" to "Key Vault" or vice-versa. This utility is also helpful in recovering a gateway when facing misconfigurations for multiple certificate objects simultaneously.
+The bulk operation feature is helpful for large gateways that have multiple SSL certificates for separate listeners. As with individual certificate management, this option also lets you change the type from "Uploaded" to "Key Vault" or vice versa, if required. This utility is also helpful in recovering a gateway when multiple certificate objects are misconfigured simultaneously.
To use the Bulk update option:

1. Choose the certificates to be updated using the checkboxes and select the "Bulk update" menu option.
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md
Previously updated : 09/13/2022 Last updated : 03/27/2023
End-to-end TLS allows you to encrypt and securely transmit sensitive data to the
When configured with end-to-end TLS communication mode, Application Gateway terminates the TLS sessions at the gateway and decrypts user traffic. It then applies the configured rules to select an appropriate backend pool instance to route traffic to. Application Gateway then initiates a new TLS connection to the backend server and re-encrypts data using the backend server's public key certificate before transmitting the request to the backend. Any response from the web server goes through the same process back to the end user. End-to-end TLS is enabled by setting protocol setting in [Backend HTTP Setting](./configuration-overview.md#http-settings) to HTTPS, which is then applied to a backend pool.
-The [TLS policy](./application-gateway-ssl-policy-overview.md) applies only to the frontend traffic for both V1 and V2 SKU gateways. The backend TLS connection supports TLS 1.0 to TLS 1.2 versions.
+In Application Gateway v1 SKU gateways, the [TLS policy](./application-gateway-ssl-policy-overview.md) applies the TLS version only to frontend traffic, but applies the defined ciphers to both frontend and backend targets. In Application Gateway v2 SKU gateways, the TLS policy applies only to frontend traffic; backend TLS connections are always negotiated using TLS versions 1.0 through 1.2.
Application Gateway only communicates with those backend servers that have either allow-listed their certificate with the Application Gateway or whose certificates are signed by well-known CA authorities and the certificate's CN matches the host name in the HTTP backend settings. These include the trusted Azure services such as Azure App Service/Web Apps and Azure API Management.
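As a quick sketch of the CN requirement above, you can read a certificate's Common Name with `openssl` and compare it against the host name configured in the HTTP backend settings. The certificate below is a throwaway self-signed stand-in; the host name `app.contoso.com` and file paths are made up for illustration:

```shell
# Generate a throwaway self-signed cert standing in for the backend certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/backend.key \
  -out /tmp/backend.crt -days 1 -subj "/CN=app.contoso.com" 2>/dev/null

# Extract the CN from the certificate subject (handles both
# "CN = name" and "/CN=name" subject formats).
cn=$(openssl x509 -noout -subject -in /tmp/backend.crt |
  sed -n 's|.*CN *= *\([^,/]*\).*|\1|p')
echo "$cn"   # compare this value against the backend host name
```

For a live backend, `openssl s_client -connect <host>:443` piped into `openssl x509 -noout -subject` retrieves the served certificate's subject the same way.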
application-gateway Url Route Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/url-route-overview.md
description: This article provides an overview of the Azure Application Gateway
Previously updated : 01/14/2022 Last updated : 03/28/2023
Path rules are case insensitive.
|`/Repos/*/Comments/*` |no| |`/CurrentUser/Comments/*` |yes|
+Path rules are processed in the order they're listed in the portal. Place the least specific path (with wildcards) at the end of the list so that it's evaluated last; if a wildcard rule appears at the top of the list, it takes priority and matches first, shadowing the more specific rules below it. See the following example scenarios.
+#### Examples
+
+Path-based rule processing when wildcard (*) is used:
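The first-match ordering can be sketched with shell glob matching, where the first listed pattern that matches a request path wins. This is an illustration of the ordering rule only, not how the gateway is implemented; the paths and patterns are made up:

```shell
# Return the first pattern (in listed order) that matches the request path,
# mirroring how path rules are evaluated top to bottom.
match_path() {
  path="$1"; shift
  for pattern in "$@"; do
    case "$path" in
      $pattern) echo "$pattern"; return 0 ;;
    esac
  done
  echo "default"   # no rule matched: the default backend pool handles it
}

# Specific rule listed before the wildcard: the specific rule wins.
match_path "/images/logo.png" "/images/*" "/*"   # -> /images/*

# Wildcard listed first: it shadows the more specific rule entirely.
match_path "/images/logo.png" "/*" "/images/*"   # -> /*
```

This is why the wildcard-only rule should sit at the bottom of the list.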
applied-ai-services Concept Insurance Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-insurance-card.md
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Azure Form Recognizer health insurance card model (preview)
+# Azure Form Recognizer health insurance card model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-> [!IMPORTANT]
->
-> * The Form Recognizer Studio health insurance card model is currently in gated preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback.
-> * Complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
- The Form Recognizer health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards. A health insurance card is a key document for care processing and can be digitally analyzed for patient onboarding, financial coverage information, cashless payments, and insurance claim processing. The health insurance card model analyzes health card images; extracts key information such as insurer, member, prescription, and group number; and returns a structured JSON representation. Health insurance cards can be presented in various formats and quality including phone-captured images, scanned documents, and digital PDFs. ***Sample health insurance card processed using Form Recognizer Studio***
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
The following customers and partners have adopted Form Recognizer across a wide
| Customer/Partner | Description | Link | ||-|-|
-| **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) |
+| **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | |
| **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Form Recognizer to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|
-|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) is operates under the umbrella of Arkas Holding, Türkiye's leading holding institution and operating in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
-|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA)software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | [Customer story](https://customers.microsoft.com/story/811346-automation-anywhere-partner-professional-services-azure-cognitive-services) |
-|**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
-|**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. | [Customer story](https://customers.microsoft.com/story/737482-blue-prism-partner-professional-services-azure) |
+|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Türkiye's leading holding institution, and operates in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. ||
+|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA) software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | |
+|**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. ||
+|**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. ||
|**Chevron**| [**Chevron**](https://www.chevron.com//) Canada Business Unit is now using Form Recognizer with UiPath's robotic process automation platform to automate the extraction of data and move it into back-end systems for analysis. Subject matter experts have more time to focus on higher-value activities and information flows more rapidly. Accelerated operational control enables the company to analyze its business with greater speed, accuracy, and depth. | [Customer story](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services)|
-|**Cross Masters**|[**Cross Masters**](https://crossmasters.com/), uses cutting-edge AI technologies not only as a passion, but as an essential part of a work culture requiring continuous innovation. One of the latest success stories is automation of manual paperwork required to process thousands of invoices. Cross Masters used Form Recognizer to develop a unique, customized solution, to provide clients with market insights from a large set of collected invoices. Most impressive is the extraction quality and continuous introduction of new features, such as model composing and table labeling. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
+|**Cross Masters**|[**Cross Masters**](https://crossmasters.com/) uses cutting-edge AI technologies not only as a passion, but as an essential part of a work culture requiring continuous innovation. One of the latest success stories is automation of manual paperwork required to process thousands of invoices. Cross Masters used Form Recognizer to develop a unique, customized solution to provide clients with market insights from a large set of collected invoices. Most impressive is the extraction quality and continuous introduction of new features, such as model composing and table labeling. ||
|**Element**| [**Element**](https://www.element.com/) is a global business that provides specialist testing, inspection, and certification services to a diverse range of businesses. Element is one of the fastest growing companies in the global testing, inspection and certification sector, having over 6,500 engaged experts working in more than 200 facilities across the globe. When the finance team for the Americas was forced to work from home during the COVID-19 pandemic, it needed to digitalize its paper processes fast. The creativity of the team and its use of Azure Form Recognizer delivered more than business as usual: it delivered significant efficiencies. The Element team used the tools in Azure so the next phase could be expedited. Rather than coding from scratch, they saw the opportunity to use the Azure Form Recognizer. This integration quickly gave them the functionality they needed, together with the agility and security of Azure. Azure Logic Apps is used to automate the process of extracting the documents from email, storing them, and updating the system with the extracted data. Computer Vision, part of Azure Cognitive Services, partners with Azure Form Recognizer to extract the right data points from the invoice documents, whether they're a PDF or scanned images. | [Customer story](https://customers.microsoft.com/story/1414941527887021413-element)|
|**Emaar Properties**| [**Emaar Properties**](https://www.emaar.com/en/) operates Dubai Mall, the world's most-visited retail and entertainment destination. Each year, the Dubai Mall draws more than 80 million visitors. To enrich the shopping experience, Emaar Properties offers a unique rewards program through a dedicated mobile app. Loyalty program points are earned via submitted receipts. Emaar Properties uses Azure Form Recognizer to process submitted receipts and has achieved 92 percent reading accuracy.| [Customer story](https://customers.microsoft.com/story/1459754150957690925-emaar-retailers-azure-en-united-arab-emirates)|
|**EY**| [**EY**](https://ey.com/) (Ernst & Young Global Limited) is a multinational professional services network that helps to create long-term value for clients and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries help clients grow, transform, and operate. EY teams work across assurance, consulting, law, strategy, tax, and transactions to find solutions for complex issues facing our world today. The EY Technology team collaborated with Microsoft to build a platform that hastens invoice extraction and contract comparison processes. Azure Form Recognizer and Custom Vision partnered to enable EY teams to automate and improve the OCR and document handling processes for its transactions services clients. | [Customer story](https://customers.microsoft.com/story/1404985164224935715-ey-professional-services-azure-form-recognizer)|
-|**Financial Fabric**| [**Financial Fabric**](https://www.financialfabric.com/), a Microsoft Cloud Solution Provider, delivers data architecture, science, and analytics services to investment managers at hedge funds, family offices, and corporate treasuries. Its daily processes involve extracting and normalizing data from thousands of complex financial documents, such as bank statements and legal agreements. The company then provides custom analytics to help its clients make better investment decisions. Extracting this data previously took days or weeks. By using Form Recognizer, Financial Fabric has reduced the time it takes to go from extraction to analysis to just minutes. | [Customer story](https://customers.microsoft.com/story/financial-fabric-banking-capital-markets-azure)|
+|**Financial Fabric**| [**Financial Fabric**](https://www.financialfabric.com/), a Microsoft Cloud Solution Provider, delivers data architecture, science, and analytics services to investment managers at hedge funds, family offices, and corporate treasuries. Its daily processes involve extracting and normalizing data from thousands of complex financial documents, such as bank statements and legal agreements. The company then provides custom analytics to help its clients make better investment decisions. Extracting this data previously took days or weeks. By using Form Recognizer, Financial Fabric has reduced the time it takes to go from extraction to analysis to just minutes. ||
|**Fujitsu**| [**Fujitsu**](https://scanners.us.fujitsu.com/about-us) is the world leader in document scanning technology, with more than 50 percent of global market share, but that doesn't stop the company from constantly innovating. To improve the performance and accuracy of its cloud scanning solution, Fujitsu incorporated Azure Form Recognizer. It took only a few months to deploy the new technologies, and they have boosted character recognition rates as high as 99.9 percent. This collaboration helps Fujitsu deliver market-leading innovation and give its customers powerful and flexible tools for end-to-end document management. | [Customer story](https://customers.microsoft.com/en-us/story/1504311236437869486-fujitsu-document-scanning-azure-form-recognizer)|
-|**GEP**| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. GEP combined their AI solution with Azure Form Recognizer to automate the processing of 4,000 invoices a day for a client saving them tens of thousands of hours of manual effort. This collaborative effort improved accuracy, controls, and compliance on a global scale." Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
+|**GEP**| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. GEP combined their AI solution with Azure Form Recognizer to automate the processing of 4,000 invoices a day for a client saving them tens of thousands of hours of manual effort. This collaborative effort improved accuracy, controls, and compliance on a global scale." Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. ||
|**HCA Healthcare**| [**HCA Healthcare**](https://hcahealthcare.com/) is one of the nation's leading providers of healthcare with over 180 hospitals and 2,000 sites-of-care located throughout the United States and serving approximately 35 million patients each year. Currently, they're using Azure Form Recognizer to simplify and improve the patient onboarding experience and reducing administrative time spent entering repetitive data into the care center's system. | [Customer story](https://customers.microsoft.com/story/1404891793134114534-hca-healthcare-healthcare-provider-azure)|
-|**Icertis**| [**Icertis**](https://www.icertis.com/), is a Software as a Service (SaaS) provider headquartered in Bellevue, Washington. Icertis digitally transforms the contract management process with a cloud-based, AI-powered, contract lifecycle management solution. Azure Form Recognizer enables Icertis Contract Intelligence to take key-value pairs embedded in contracts and create structured data understood and operated upon by machine algorithms. Through these and other powerful Azure Cognitive and AI services, Icertis empowers customers in every industry to improve business in multiple ways: optimized manufacturing operations, added agility to retail strategies, reduced risk in IT services, and faster delivery of life-saving pharmaceutical products. | [Blog](https://cloudblogs.microsoft.com/industry-blog/en-in/unicorn/2022/01/12/how-icertis-built-a-contract-management-solution-using-azure-form-recognizer/)|
+|**Icertis**| [**Icertis**](https://www.icertis.com/) is a Software as a Service (SaaS) provider headquartered in Bellevue, Washington. Icertis digitally transforms the contract management process with a cloud-based, AI-powered, contract lifecycle management solution. Azure Form Recognizer enables Icertis Contract Intelligence to take key-value pairs embedded in contracts and create structured data understood and operated upon by machine algorithms. Through these and other powerful Azure Cognitive and AI services, Icertis empowers customers in every industry to improve business in multiple ways: optimized manufacturing operations, added agility to retail strategies, reduced risk in IT services, and faster delivery of life-saving pharmaceutical products. ||
|**Instabase**| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. The application platform then brings this data into business workflows as organized information. This workflow provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. The applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
|**Northern Trust**| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth management, asset servicing, asset management, and banking to corporations, institutions, families, and individuals. As part of its initiative to digitize alternative asset servicing, Northern Trust has launched an AI-powered solution to extract unstructured investment data from alternative asset documents and make it accessible and actionable for asset-owner clients. Azure Applied AI services accelerate time-to-value for enterprises building AI solutions. This proprietary solution transforms crucial information from various unstructured formats into digital, actionable insights for investment teams. | [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
|**Old Mutual**| [**Old Mutual**](https://www.oldmutual.co.za/) is Africa's leading financial services group with a comprehensive range of investment capabilities. They're the industry leader in retirement fund solutions, investments, asset management, group risk benefits, insurance, and multi-fund management. The Old Mutual team used Microsoft Natural Language Processing and Optical Character Recognition to provide the basis for automating key customer transactions received via emails. It also offered an opportunity to identify incomplete customer requests in order to nudge customers to the correct digital channels. Old Mutual's extensible solution technology was further developed as a microservice to be consumed by any enterprise application through a secure API management layer. | [Customer story](https://customers.microsoft.com/en-us/story/1507561807660098567-old-mutual-banking-capital-markets-azure-en-south-africa)|
|**Standard Bank**| [**Standard Bank of South Africa**](https://www.standardbank.co.za/southafrica/personal/home) is Africa's largest bank by assets. Standard Bank is headquartered in Johannesburg, South Africa, and has more than 150 years of trade experience in Africa and beyond. When manual due diligence in cross-border transactions began absorbing too much staff time, the bank decided it needed a new way forward. Standard Bank uses Form Recognizer to significantly reduce its cross-border payments registration and processing time. | [Customer story](https://customers.microsoft.com/en-hk/story/1395059149522299983-standard-bank-of-south-africa-banking-capital-markets-azure-en-south-africa)|
-| **WEX**| [**WEX**](https://www.wexinc.com/) has developed a tool to process Explanation of Benefits documents using Form Recognizer. "The technology is truly amazing. I was initially worried that this type of solution wouldn't be feasible, but I soon realized that Form Recognizer can read virtually any document with accuracy." Matt Dallahan, Senior Vice President of Product Management and Strategy | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
-|**Wilson Allen** | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Azure Cognitive Services and created a powerful AI solution that help firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. | [Customer story](https://customers.microsoft.com/story/814361-wilson-allen-partner-professional-services-azure)|
+| **WEX**| [**WEX**](https://www.wexinc.com/) has developed a tool to process Explanation of Benefits documents using Form Recognizer. "The technology is truly amazing. I was initially worried that this type of solution wouldn't be feasible, but I soon realized that Form Recognizer can read virtually any document with accuracy." Matt Dallahan, Senior Vice President of Product Management and Strategy ||
+|**Wilson Allen** | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Azure Cognitive Services and created a powerful AI solution that helps firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. ||
|**Zelros**| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. When insurers use the platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Jobs for Hybrid Runbook Workers run under the local **System** account.
**PowerShell 7.2**
-To run PowerShell 7.2 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3).
+To run PowerShell 7.2 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows).
After PowerShell 7.2 installation is complete, create an environment variable with the **Variable name** *powershell_7_2_path* and the **Variable value** set to the location of the PowerShell executable. Restart the Hybrid Runbook Worker after the environment variable is created successfully.

**PowerShell 7.1**
-To run PowerShell 7.1 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3).
+To run PowerShell 7.1 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows).
Ensure that you add the *PowerShell* executable to the PATH environment variable, and restart the Hybrid Runbook Worker after the installation.

**Python 3.10**
If the *Python* executable file is at the default location *C:\Python27\python.e
**PowerShell 7.1**
-To run PowerShell 7.1 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3).
+To run PowerShell 7.1 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows).
Ensure that you add the *PowerShell* executable to the PATH environment variable, and restart the Hybrid Runbook Worker after the installation.

**Python 3.8**
If the *Python* executable file is at the default location *C:\Python27\python.e
**PowerShell 7.2**
-To run PowerShell 7.2 runbooks on a Linux Hybrid Worker, install *PowerShell* file on the Hybrid Worker. For more information, see [Installing PowerShell on Linux](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.3).
+To run PowerShell 7.2 runbooks on a Linux Hybrid Worker, install *PowerShell* file on the Hybrid Worker. For more information, see [Installing PowerShell on Linux](/powershell/scripting/install/installing-powershell-on-linux).
After PowerShell 7.2 installation is complete, create an environment variable with the **Variable name** *powershell_7_2_path* and the **Variable value** set to the location of the PowerShell executable. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
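As a minimal sketch of the step above for a Linux worker: export the variable pointing at the PowerShell executable. The path `/usr/bin/pwsh` is the usual location for a package install, but it's an assumption here; confirm it on your machine with `command -v pwsh`.

```shell
# Point the worker at the PowerShell 7.2 executable (path is an assumption;
# verify with `command -v pwsh` on your Hybrid Worker).
export powershell_7_2_path=/usr/bin/pwsh
echo "$powershell_7_2_path"
```

For the variable to survive reboots, set it in a persistent location such as the worker service's environment configuration rather than an interactive shell.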
To create the GPG keyring and keypair, use the Hybrid Runbook Worker [nxautomati
sudo su - nxautomation ```
-1. Once you are using **nxautomation**, generate the GPG keypair. GPG guides you through the steps. You must provide name, email address, expiration time, and passphrase. Then you wait until there is enough entropy on the machine for the key to be generated.
+1. Once you are using **nxautomation**, generate the GPG keypair as root. GPG guides you through the steps. You must provide name, email address, expiration time, and passphrase. Then you wait until there is enough entropy on the machine for the key to be generated.
```bash sudo gpg --generate-key ```
-1. Because the GPG directory was generated with sudo, you must change its owner to **nxautomation** using the following command.
+1. Because the GPG directory was generated with sudo, you must change its owner to **nxautomation** using the following command as root.
```bash
sudo chown -R nxautomation ~/.gnupg
```
gpg_public_keyring_path = /home/nxautomation/run/.gnupg/pubring.kbx
### Verify that signature validation is on
-If signature validation has been disabled on the machine, you must turn it on by running the following sudo command. Replace `<LogAnalyticsworkspaceId>` with your workspace ID.
+If signature validation has been disabled on the machine, you must turn it on by running the following command as root. Replace `<LogAnalyticsworkspaceId>` with your workspace ID.
```bash
sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/scripts/require_runbook_signature.py --true <LogAnalyticsworkspaceId>
```
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 12/28/2022 Last updated : 03/29/2023
The PowerShell version is determined by the **Runtime version** specified (that
The same Azure sandbox and Hybrid Runbook Worker can execute **PowerShell 5.1** and **PowerShell 7.1 (preview)** runbooks side by side.

> [!NOTE]
-> - Currently, PowerShell 7.2 (preview) runtime version is supported in five regions for Cloud jobs only: West Central US, East US, South Africa North, North Europe, Australia Southeast
+> - Currently, PowerShell 7.2 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central 2, Korea South, Sweden South, Jio India Central, Brazil Southeast, Central India, West India, UAE Central, and Gov clouds.
> - At the time of runbook execution, if you select **Runtime Version** as **7.1 (preview)**, PowerShell modules targeting 7.1 (preview) runtime version are used and if you select **Runtime Version** as **5.1**, PowerShell modules targeting 5.1 runtime version are used. This applies for PowerShell 7.2 (preview) modules and runbooks. Ensure that you select the right Runtime Version for modules.
-For example : if you are executing a runbook for a SharePoint automation scenario in **Runtime version** *7.1 (preview)*, then import the module in **Runtime version** **7.1 (preview)**; if you are executing a runbook for a SharePoint automation scenario in **Runtime version** **5.1**, then import the module in **Runtime version** *5.1*. In this case, you would see two entries for the module, one for **Runtime Version** **7.1(preview)** and other for **5.1**.
+For example: if you're executing a runbook for a SharePoint automation scenario in **Runtime version** *7.1 (preview)*, then import the module in **Runtime version** **7.1 (preview)**; if you're executing a runbook for a SharePoint automation scenario in **Runtime version** **5.1**, then import the module in **Runtime version** *5.1*. In this case, you would see two entries for the module, one for **Runtime Version** **7.1 (preview)** and the other for **5.1**.
:::image type="content" source="./media/automation-runbook-types/runbook-types.png" alt-text="Screenshot of runbook types.":::
The following are the current limitations and known issues with PowerShell runbo
- Runbooks can't use [parallel processing](automation-powershell-workflow.md#use-parallel-processing) to execute multiple actions in parallel.
- Runbooks can't use [checkpoints](automation-powershell-workflow.md#use-checkpoints-in-a-workflow) to resume runbook if there's an error.
- You can include only PowerShell, PowerShell Workflow runbooks, and graphical runbooks as child runbooks by using the [Start-AzAutomationRunbook](/powershell/module/az.automation/start-azautomationrunbook) cmdlet, which creates a new job.
-- Runbooks can't use the PowerShell [#Requires](/powershell/module/microsoft.powershell.core/about/about_requires) statement, it is not supported in Azure sandbox or on Hybrid Runbook Workers and might cause the job to fail.
+- Runbooks can't use the PowerShell [#Requires](/powershell/module/microsoft.powershell.core/about/about_requires) statement, it isn't supported in Azure sandbox or on Hybrid Runbook Workers and might cause the job to fail.
**Known issues**
The following are the current limitations and known issues with PowerShell runbo
**Limitations**

- You must be familiar with PowerShell scripting.
-- The Azure Automation internal PowerShell cmdlets are not supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your Python runbook to access the Automation account shared resources (assets) functions.
-- For the PowerShell 7 runtime version, the module activities are not extracted for the imported modules.
-- *PSCredential* runbook parameter type is not supported in PowerShell 7 runtime version.
-- PowerShell 7.x does not support workflows. See [this](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow) for more details.
-- PowerShell 7.x currently does not support signed runbooks.
+- The Azure Automation internal PowerShell cmdlets aren't supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your Python runbook to access the Automation account shared resources (assets) functions.
+- For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules.
+- *PSCredential* runbook parameter type isn't supported in PowerShell 7 runtime version.
+- PowerShell 7.x doesn't support workflows. See [this](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow) for more details.
+- PowerShell 7.x currently doesn't support signed runbooks.
- Source control integration doesn't support PowerShell 7.1 (preview). Also, PowerShell 7.1 (preview) runbooks in source control get created in Automation account as Runtime 5.1.
-- PowerShell 7.1 module management is not supported through `Get-AzAutomationModule` cmdlets.
+- PowerShell 7.1 module management isn't supported through `Get-AzAutomationModule` cmdlets.
- Runbook will fail with no log trace if the input value contains the character '.

**Known issues**

-- Executing child scripts using `.\child-runbook.ps1` is not supported in this preview.
+- Executing child scripts using `.\child-runbook.ps1` isn't supported in this preview.
**Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from the parent runbook.
- Runbook properties defining logging preference aren't supported in PowerShell 7 runtime.

  **Workaround**: Explicitly set the preference at the start of the runbook as below:
The following are the current limitations and known issues with PowerShell runbo
```
- Avoid importing the `Az.Accounts` module version 2.4.0 for PowerShell 7 runtime, as there can be unexpected behavior using this version in Azure Automation.
- You might encounter formatting problems with error output streams for the job running in PowerShell 7 runtime.
-- When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when PowerShell 7.1 version of the dependent module is installed. For example, Az.Compute version 4.20.0 has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, the 5.1 version of Az.Accounts was < 2.6.0.
+- When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when the PowerShell 7.1 version of the dependent module is installed. For example, Az.Compute version 4.20.0 has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, the 5.1 version of Az.Accounts was < 2.6.0.
- When you start a PowerShell 7 runbook using the webhook, it auto-converts the webhook input parameter to an invalid JSON.
The following are the current limitations and known issues with PowerShell runbo
**Limitations**

> [!NOTE]
-> Currently, PowerShell 7.2 (preview) runtime version is supported in five regions for Cloud jobs only: West Central US, East US, South Africa North, North Europe, and Australia Southeast.
+> Currently, PowerShell 7.2 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central 2, Korea South, Sweden South, Jio India Central, Brazil Southeast, Central India, West India, UAE Central, and Gov clouds.
- You must be familiar with PowerShell scripting.
-- For the PowerShell 7 runtime version, the module activities are not extracted for the imported modules.
-- *PSCredential* runbook parameter type is not supported in PowerShell 7 runtime version.
-- PowerShell 7.x does not support workflows. See [this](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow) for more details.
-- PowerShell 7.x currently does not support signed runbooks.
-- Source control integration doesn't support PowerShell 7.2 (preview). Also, PowerShell 7.2 (preview) runbooks in source control gets created in Automation account as Runtime 5.1.
-- Currently, only cloud jobs are supported for PowerShell 7.2 (preview) runtime versions.
+- For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules.
+- *PSCredential* runbook parameter type isn't supported in PowerShell 7 runtime version.
+- PowerShell 7.x doesn't support workflows. See [this](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow) for more details.
+- PowerShell 7.x currently doesn't support signed runbooks.
+- Source control integration doesn't support PowerShell 7.2 (preview). Also, PowerShell 7.2 (preview) runbooks in source control get created in Automation account as Runtime 5.1.
- Logging job operations to the Log Analytics workspace through linked workspace or diagnostics settings isn't supported.
-- Currently, PowerShell 7.2 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell is not supported.
-- Az module 8.3.0 is installed by default and cannot be managed at the automation account level. Use custom modules to override the Az module to the desired version.
+- Currently, PowerShell 7.2 (preview) runbooks are only supported from the Azure portal. REST API and PowerShell aren't supported.
+- Az module 8.3.0 is installed by default and can't be managed at the automation account level. Use custom modules to override the Az module to the desired version.
- The imported PowerShell 7.2 (preview) module would be validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution.
- PowerShell 7.2 module management is not supported through `Get-AzAutomationModule` cmdlets.
PowerShell Workflow runbooks are text runbooks based on [Windows PowerShell Work
Python runbooks compile under Python 2, Python 3.8 (preview) and Python 3.10 (preview). You can directly edit the code of the runbook using the text editor in the Azure portal. You can also use an offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation.
-* Python 3.10 (preview) runbooks are currently supported in five regions for cloud jobs only:
- - West Central US
- - East US
- - South Africa North
- - North Europe
- - Australia Southeast
+Currently, Python 3.10 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central 2, Korea South, Sweden South, Jio India Central, Brazil Southeast, Central India, West India, UAE Central, and Gov clouds.
### Advantages
Following are the limitations of Python runbooks
# [Python 2.7](#tab/py27)

- You must be familiar with Python scripting.
-- For Python 2.7.12 modules use wheel files cp27-amd6.
+- For Python 2.7.12 modules, use wheel files cp27-amd64.
- To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account.
- Azure Automation doesn't support **sys.stderr**.
- The Python **automationassets** package is not available on pypi.org, so it's not available for import onto a Windows machine.
Following are the limitations of Python runbooks
- To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account.
- Using the **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3.8 (preview) runbook doesn't work. You can use the **Start-AzAutomationRunbook** cmdlet from the Az.Automation module or the **Start-AzureRmAutomationRunbook** cmdlet from the AzureRm.Automation module to work around this limitation.
- Azure Automation doesn't support **sys.stderr**.
-- The Python **automationassets** package is not available on pypi.org, so it's not available for import onto a Windows machine.
+- The Python **automationassets** package isn't available on pypi.org, so it's not available for import onto a Windows machine.
# [Python 3.10 (preview)](#tab/py10)

**Limitations**

-- For Python 3.10 (preview) modules, currently, only the wheel files targeting cp310 Linux OS are supported. [Learn more](./python-3-packages.md)
-- Currently, only cloud jobs are supported for Python 3.10 (preview) runtime versions.
-- Custom packages for Python 3.10 (preview) are only validated during job runtime. Job is expected to fail if the package is not compatible in the runtime or if required dependencies of packages are not imported into automation account.
-- Currently, Python 3.10 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell is not supported.
+- For Python 3.10 (preview) modules, currently, only the wheel files targeting cp310 Linux OS are supported. [Learn more](./python-3-packages.md).
+- Custom packages for Python 3.10 (preview) are only validated during job runtime. The job is expected to fail if the package isn't compatible in the runtime or if necessary dependencies of packages aren't imported into the Automation account.
+- Currently, Python 3.10 (preview) runbooks are only supported from the Azure portal. REST API and PowerShell aren't supported.
### Multiple Python versions
-It is applicable for Windows Hybrid workers. For a Windows Runbook Worker, when running a Python 2 runbook it looks for the environment variable `PYTHON_2_PATH` first and validates whether it points to a valid executable file. For example, if the installation folder is `C:\Python2`, it would check if `C:\Python2\python.exe` is a valid path. If not found, then it looks for the `PATH` environment variable to do a similar check.
+This is applicable to Windows Hybrid Workers. When running a Python 2 runbook, a Windows Runbook Worker first looks for the `PYTHON_2_PATH` environment variable and validates whether it points to a valid executable file. For example, if the installation folder is `C:\Python2`, it checks whether `C:\Python2\python.exe` is a valid path. If not found, it then looks through the `PATH` environment variable to do a similar check.
For Python 3, it looks for the `PYTHON_3_PATH` env variable first and then falls back to the `PATH` environment variable.
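The lookup order described above can be sketched in plain Python. This is an illustrative sketch of the documented behavior, not the worker's actual code; the helper name `resolve_python_2` is hypothetical.

```python
import os
import shutil

def resolve_python_2():
    """Sketch of the documented lookup order: check the PYTHON_2_PATH
    environment variable for a valid executable, then fall back to PATH."""
    folder = os.environ.get("PYTHON_2_PATH")
    if folder:
        # The variable holds the installation folder, e.g. C:\Python2;
        # the worker then checks for python.exe inside it.
        exe = os.path.join(folder, "python.exe")
        if os.path.isfile(exe):
            return exe
    # Fall back to searching the directories listed in PATH.
    return shutil.which("python")
```

The Python 3 lookup follows the same shape with `PYTHON_3_PATH` in place of `PYTHON_2_PATH`.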
When using only one version of Python, you can add the installation path to the
### Known issues
-For cloud jobs, Python 3.8 jobs sometimes fail with an exception message `invalid interpreter executable path`. You might see this exception if the job is delayed, starting more than 10 minutes, or using **Start-AutomationRunbook** to start Python 3.8 runbooks. If the job is delayed, restarting the runbook should be sufficient. Hybrid jobs should work without any issue if using the following steps:
-
-1. Create a new environment variable called `PYTHON_3_PATH` and specify the installation folder. For example, if the installation folder is `C:\Python3`, then this path needs to be added to the variable.
-1. Restart the machine after setting the environment variable.
+For cloud jobs, Python 3.8 jobs sometimes fail with an exception message `invalid interpreter executable path`. You might see this exception if the job is delayed, starting more than 10 minutes, or using **Start-AutomationRunbook** to start Python 3.8 runbooks. If the job is delayed, restarting the runbook should be sufficient.
## Graphical runbooks
automation Python 3 Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-3-packages.md
Title: Manage Python 3 packages in Azure Automation
description: This article tells how to manage Python 3 packages (preview) in Azure Automation. Previously updated : 10/26/2022 Last updated : 03/29/2023
Perform the following steps using a 64-bit Linux machine with Python 3.10.x and
1. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** or **.tar.gz** file for Python 3.8 (preview) and a **.whl** file for Python 3.10 (preview).
1. Enter a name and select the **Runtime Version** as Python 3.8 (preview) or Python 3.10 (preview).

> [!NOTE]
- > Python 3.10 (preview) runtime version is currently supported in five regions for Cloud jobs only: West Central US, East US, South Africa North, North Europe, Australia Southeast.
-1. Select **Import**
+ > Currently, Python 3.10 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central 2, Korea South, Sweden South, Jio India Central, Brazil Southeast, Central India, West India, UAE Central, and Gov clouds.
+1. Select **Import**.
:::image type="content" source="media/python-3-packages/upload-package.png" alt-text="Screenshot shows the Add Python 3.8 (preview) Package page with an uploaded tar.gz file selected.":::
azure-app-configuration Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md
Last updated 07/09/2020
# Resiliency and disaster recovery

> [!IMPORTANT]
-> Azure App Configuration added [geo-replication](./concept-geo-replication.md) support recently. You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). The geo-replication feature is currently under preview. It will be the recommended solution for high availability when the feature is generally available.
+> Azure App Configuration supports [geo-replication](./concept-geo-replication.md). You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). Utilizing geo-replication is the recommended solution for high availability.
Currently, Azure App Configuration is a regional service. Each configuration store is created in a particular Azure region. A region-wide outage affects all stores in that region. App Configuration doesn't offer automatic failover to another region. This article provides general guidance on how you can use multiple configuration stores across Azure regions to increase the geo-resiliency of your application.
azure-app-configuration Concept Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md
Title: Geo-replication in Azure App Configuration (Preview)
+ Title: Geo-replication in Azure App Configuration
description: Details of the geo-replication feature in Azure App Configuration.
Last updated 08/01/2022
-# Geo-replication overview (Preview)
+# Geo-replication overview
For application developers and IT engineers, a common goal is to build and run resilient applications. Resiliency is defined as the ability of your application to react to failure and still remain functional. To achieve resilience in the face of regional failures in the cloud, the first step is to build in redundancy to avoid a single point of failure. This redundancy can be achieved with geo-replication.
-The App Configuration geo-replication feature allows you to replicate your configuration store at-will to the regions of your choice. Each new **replica** will be in a different region and creates a new endpoint for your applications to send requests to. The original endpoint of your configuration store is called the **Origin**. The origin can't be removed, but otherwise behaves like any replica.
+The App Configuration geo-replication feature allows you to replicate your configuration store at-will to the regions of your choice. Each new **replica** will be in a different region and creates a new endpoint for your applications to send requests to. The original endpoint of your configuration store is called the **Origin**. The origin can't be removed, but otherwise behaves like any replica.
Changing or updating your key-values can be done in any replica. These changes will be synchronized with all other replicas following an eventual consistency model.
This team would benefit from geo-replication. They can create a replica of their
- Geo-replication isn't available in the free tier.
- Each replica has limits, as outlined in the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/). These limits are isolated per replica.
- Azure App Configuration also supports Azure availability zones to create a resilient and highly available store within an Azure Region. Availability zone support is automatically included for a replica if the replica's region has availability zone support. The combination of availability zones for redundancy within a region, and geo-replication across multiple regions, enhances both the availability and performance of a configuration store.
-- Currently, you can only authenticate with replica endpoints with [Azure Active Directory (Azure AD)](../app-service/overview-managed-identity.md).
<!-- To add once these links become available:
- Request handling for replicas will vary by configuration provider, for further information reference [.NET Geo-replication Reference](https://azure.microsoft.com/pricing/details/app-configuration/) and [Java Geo-replication Reference](https://azure.microsoft.com/pricing/details/app-configuration/).
Each replica created will add extra charges. Reference the [App Configuration pr
> [!div class="nextstepaction"]
> [How to enable Geo replication](./howto-geo-replication.md)
-> [Resiliency and Disaster Recovery](./concept-disaster-recovery.md)
+> [Resiliency and Disaster Recovery](./concept-disaster-recovery.md)
azure-app-configuration Enable Dynamic Configuration Aspnet Netfx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-netfx.md
ms.devlang: csharp Previously updated : 10/12/2021 Last updated : 03/20/2023 #Customer intent: I want to dynamically update my ASP.NET web application (.NET Framework) to use the latest configuration data in App Configuration.
In this tutorial, you learn how to:
## Prerequisites

-- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- [Visual Studio](https://visualstudio.microsoft.com/vs)
- [.NET Framework 4.7.2 or later](https://dotnet.microsoft.com/download/dotnet-framework)
-## Create an App Configuration store
+## Add key-values
+Add the following key-values to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-7. Select **Operations** > **Configuration explorer** > **Create** > **Key-value** to add the following key-values:
-
- | Key | Value |
- ||-|
- | *TestApp:Settings:BackgroundColor* | *White* |
- | *TestApp:Settings:FontColor* | *Black* |
- | *TestApp:Settings:FontSize* | *40* |
- | *TestApp:Settings:Message* | *Data from Azure App Configuration* |
- | *TestApp:Settings:Sentinel* | *v1* |
-
- Leave **Label** and **Content type** empty.
+| Key | Value |
+|---|---|
+| *TestApp:Settings:BackgroundColor* | *White* |
+| *TestApp:Settings:FontColor* | *Black* |
+| *TestApp:Settings:FontSize* | *40* |
+| *TestApp:Settings:Message* | *Data from Azure App Configuration* |
+| *TestApp:Settings:Sentinel* | *v1* |
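The `:` separator in these key names denotes nested configuration sections — for example, `TestApp:Settings:Message` binds to `Message` under the `TestApp:Settings` section. A minimal sketch of that expansion in plain Python (illustrative only, not the App Configuration SDK; the helper name `expand_keys` is hypothetical):

```python
def expand_keys(flat):
    """Expand 'Section:Sub:Name' style keys into nested dicts, mirroring how
    configuration providers treat ':' as a section separator."""
    root = {}
    for key, value in flat.items():
        node = root
        *sections, name = key.split(":")
        for section in sections:
            node = node.setdefault(section, {})
        node[name] = value
    return root

settings = expand_keys({
    "TestApp:Settings:Message": "Data from Azure App Configuration",
    "TestApp:Settings:Sentinel": "v1",
})
```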
## Create an ASP.NET Web Application
azure-app-configuration Enable Dynamic Configuration Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet.md
ms.devlang: csharp Previously updated : 07/24/2020 Last updated : 03/20/2023 #Customer intent: I want to dynamically update my .NET Framework app to use the latest configuration data in App Configuration.
In this tutorial, you learn how to:
## Prerequisites

-- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- [Visual Studio](https://visualstudio.microsoft.com/vs)
- [.NET Framework 4.7.2 or later](https://dotnet.microsoft.com/download/dotnet-framework)
-## Create an App Configuration store
+## Add a key-value
+Add the following key-value to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-7. Select **Configuration explorer** > **+ Create** > **Key-value** to add the following key-value:
-
- | Key | Value |
- |-|-|
- | *TestApp:Settings:Message* | *Data from Azure App Configuration* |
-
- Leave **Label** and **Content Type** empty.
+| Key | Value |
+|-|-|
+| *TestApp:Settings:Message* | *Data from Azure App Configuration* |
## Create a .NET Framework console app
azure-app-configuration Howto Backup Config Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-backup-config-store.md
# Back up App Configuration stores automatically
+> [!IMPORTANT]
+> Azure App Configuration supports [geo-replication](./concept-geo-replication.md). You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). Utilizing geo-replication is the recommended solution for high availability.
+ In this article, you'll learn how to set up an automatic backup of key-values from a primary Azure App Configuration store to a secondary store. The automatic backup uses the integration of Azure Event Grid with App Configuration. After you set up the automatic backup, App Configuration will publish events to Azure Event Grid for any changes made to key-values in a configuration store. Event Grid supports various Azure services from which users can subscribe to the events emitted whenever key-values are created, updated, or deleted.
-> [!IMPORTANT]
-> Azure App Configuration added [geo-replication](./concept-geo-replication.md) support recently. You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). The geo-replication feature is currently under preview. It will be the recommended solution for high availability when the feature is generally available.
-
## Overview

In this article, you'll use Azure Queue storage to receive events from Event Grid and use a timer-trigger of Azure Functions to process events in the queue in batches.
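The processing pattern — a timer trigger draining queued events in batches — can be sketched generically in plain Python, with an in-memory `deque` standing in for Azure Queue storage. Names such as `drain_in_batches` are illustrative and not part of any Azure SDK.

```python
from collections import deque

def drain_in_batches(queue, handle_batch, batch_size=32):
    """Pop up to batch_size messages at a time and hand each batch to
    handle_batch, mimicking a timer-triggered batch processor."""
    processed = 0
    while queue:
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        handle_batch(batch)
        processed += len(batch)
    return processed
```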
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
Title: Enable geo-replication (preview)
+ Title: Enable geo-replication
description: Learn how to use Azure App Configuration geo replication to create, delete, and manage replicas of your configuration store. ms.devlang: csharp, java Previously updated : 10/10/2022 Last updated : 03/20/2023 #Customer intent: I want to be able to list, create, and delete the replicas of my configuration store.
-# Enable geo-replication (Preview)
+# Enable geo-replication
This article covers replication of Azure App Configuration stores. You'll learn how to create, use, and delete a replica in your configuration store.
Each replica you create has its dedicated endpoint. If your application resides
When geo-replication is enabled, and if one replica isn't accessible, you can let your application failover to another replica for improved resiliency. App Configuration provider libraries have built-in failover support by accepting multiple replica endpoints. You can provide a list of your replica endpoints in the order of the most preferred to the least preferred endpoint. When the current endpoint isn't accessible, the provider library will fail over to a less preferred endpoint, but it will try to connect to the more preferred endpoints from time to time. When a more preferred endpoint becomes available, it will switch to it for future requests.
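The preference-ordered failover described above can be sketched generically in plain Python. This is an illustrative simplification — the real logic lives inside the provider libraries, and this sketch omits the periodic retry of more-preferred endpoints. The names `request_with_failover` and `send` are hypothetical.

```python
def request_with_failover(endpoints, send):
    """Try endpoints from most preferred to least preferred and return the
    first successful response; `send` raises an exception on failure."""
    last_error = None
    for endpoint in endpoints:  # ordered: most preferred first
        try:
            return send(endpoint)
        except Exception as err:
            last_error = err  # endpoint not accessible; try the next one
    raise last_error
```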
-Assuming you have an application using Azure App Configuration, you can update it as the following sample code to take advantage of the failover feature.
-
-> [!NOTE]
-> You can only use Azure AD authentication to connect to replicas. Authentication with access keys is not supported during the preview.
+Assuming you have an application using Azure App Configuration, you can update it as the following sample code to take advantage of the failover feature. You can either provide a list of endpoints for Azure Active Directory (Azure AD) authentication or a list of connection strings for access key-based authentication.
### [.NET](#tab/dotnet)

Edit the call to the `AddAzureAppConfiguration` method, which is often found in the `program.cs` file of your application.
+**Connect with Azure AD**
```csharp
configurationBuilder.AddAzureAppConfiguration(options =>
{
    // Provide an ordered list of replica endpoints
    var endpoints = new Uri[] {
- new Uri("https://<first-replica-endpoint>.azconfig.io"),
- new Uri("https://<second-replica-endpoint>.azconfig.io") };
+ new Uri("<first-replica-endpoint>"),
+ new Uri("<second-replica-endpoint>") };
- // Connect to replica endpoints using AAD authentication
+ // Connect to replica endpoints using Azure AD authentication
    options.Connect(endpoints, new DefaultAzureCredential());

    // Other changes to options
});
```
+**Connect with Connection String**
+
+```csharp
+configurationBuilder.AddAzureAppConfiguration(options =>
+{
+ // Provide an ordered list of replica connection strings
+ var connectionStrings = new string[] {
+ Environment.GetEnvironmentVariable("FIRST_REPLICA_CONNECTION_STRING"),
+ Environment.GetEnvironmentVariable("SECOND_REPLICA_CONNECTION_STRING") };
+
+ // Connect to replica endpoints using connection strings
+ options.Connect(connectionStrings);
+
+ // Other changes to options
+});
+```
+ > [!NOTE]
-> The failover support is available if you use version **5.3.0-preview** or later of any of the following packages.
+> The failover support is available if you use version **6.0.0** or later of any of the following packages.
> - `Microsoft.Extensions.Configuration.AzureAppConfiguration` > - `Microsoft.Azure.AppConfiguration.AspNetCore` > - `Microsoft.Azure.AppConfiguration.Functions.Worker` ### [Java Spring](#tab/spring)
-Edit the endpoint configuration in `bootstrap.properties`, to use endpoints which allows a list of endpoints.
+Edit the `endpoints` or `connection-strings` properties in the `bootstrap.properties` file of your application.
+
+**Connect with Azure AD**
```properties
-spring.cloud.azure.appconfiguration.stores[0].endpoints[0]="https://<first-replica-endpoint>.azconfig.io"
-spring.cloud.azure.appconfiguration.stores[0].endpoints[1]="https://<second-replica-endpoint>.azconfig.io"
+spring.cloud.azure.appconfiguration.stores[0].endpoints[0]="<first-replica-endpoint>"
+spring.cloud.azure.appconfiguration.stores[0].endpoints[1]="<second-replica-endpoint>"
```++ > [!NOTE]
-> The failover support is available if you use version of **2.10.0-beta.1** or later of any of the following packages.
-> - `azure-spring-cloud-appconfiguration-config`
-> - `azure-spring-cloud-appconfiguration-config-web`
-> - `azure-spring-cloud-starter-appconfiguration-config`
+> The failover support is available if you use version **4.0.0-beta.1** or later of any of the following packages.
+> - `spring-cloud-azure-appconfiguration-config`
+> - `spring-cloud-azure-appconfiguration-config-web`
+> - `spring-cloud-azure-starter-appconfiguration-config`
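For access key-based authentication, the replica connection strings can be listed in the same preference order. The following fragment is a sketch based on the `connection-strings` property named above; the `${...}` placeholders are standard Spring property references to environment variables you'd set yourself:

```properties
spring.cloud.azure.appconfiguration.stores[0].connection-strings[0]="${FIRST_REPLICA_CONNECTION_STRING}"
spring.cloud.azure.appconfiguration.stores[0].connection-strings[1]="${SECOND_REPLICA_CONNECTION_STRING}"
```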
azure-app-configuration Howto Leverage Json Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-leverage-json-content-type.md
ms.assetid:
ms.devlang: azurecli Previously updated : 08/24/2022 Last updated : 03/27/2023
In this tutorial, you'll learn how to:
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- This tutorial requires version 2.10.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-## Create an App Configuration store
-- ## Create JSON key-values in App Configuration JSON key-values can be created using Azure portal, Azure CLI, or by importing from a JSON file. In this section, you'll find instructions on creating the same JSON key-values using all three methods. ### Create JSON key-values using Azure portal
-Browse to your App Configuration store, and select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
+Add the following key-values to the App Configuration store. Leave **Label** with its default value. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-| Key | Value | Content Type |
-||||
-| Settings:BackgroundColor | "Green" | application/json |
-| Settings:FontSize | 24 | application/json |
-| Settings:UseDefaultRouting | false | application/json |
-| Settings:BlockedUsers | null | application/json |
-| Settings:ReleaseDate | "2020-08-04T12:34:56.789Z" | application/json |
-| Settings:RolloutPercentage | [25,50,75,100] | application/json |
-| Settings:Logging | {"Test":{"Level":"Debug"},"Prod":{"Level":"Warning"}} | application/json |
+| Key | Value | Content Type |
+| - | - | |
+| *Settings:BackgroundColor* | *"Green"* | *application/json* |
+| *Settings:FontSize* | *24* | *application/json* |
+| *Settings:UseDefaultRouting* | *false* | *application/json* |
+| *Settings:BlockedUsers* | *null* | *application/json* |
+| *Settings:ReleaseDate* | *"2020-08-04T12:34:56.789Z"* | *application/json* |
+| *Settings:RolloutPercentage* | *[25,50,75,100]* | *application/json* |
+| *Settings:Logging* | *{"Test":{"Level":"Debug"},"Prod":{"Level":"Warning"}}* | *application/json* |
-Leave **Label** empty and select **Apply**.
+1. Select **Apply**.
### Create JSON key-values using Azure CLI
az appconfig kv import -s file --format json --path "~/Import.json" --content-ty
``` > [!NOTE]
-> The `--depth` argument is used for flattening hierarchical data from a file into key-value pairs. In this tutorial, depth is specified for demonstrating that you can also store JSON objects as values in App Configuration. If depth isn't specified, JSON objects will be flattened to the deepest level by default.
+> The `--depth` argument is used for flattening hierarchical data from a file into key-values. In this tutorial, depth is specified to demonstrate that you can also store JSON objects as values in App Configuration. If depth isn't specified, JSON objects will be flattened to the deepest level by default.
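To illustrate what depth-limited flattening produces, here's a minimal Python sketch. This is not the Azure CLI's actual implementation — just a model of how nested JSON maps to colon-separated keys up to a given depth:

```python
def flatten(obj, depth=None, sep=":"):
    """Flatten nested JSON into key-values, descending at most `depth`
    levels; anything deeper is kept as a JSON object value."""
    items = {}

    def walk(prefix, value, level):
        # Recurse into non-empty objects while we're above the depth limit;
        # otherwise emit the current value under the accumulated key.
        if isinstance(value, dict) and value and (depth is None or level < depth):
            for key, child in value.items():
                walk(f"{prefix}{sep}{key}" if prefix else key, child, level + 1)
        else:
            items[prefix] = value

    walk("", obj, 0)
    return items
```

With a depth of 2, `{"Settings": {"Logging": {"Test": {"Level": "Debug"}}}}` produces a single key `Settings:Logging` whose value is the remaining JSON object; with no depth limit, it flattens all the way to `Settings:Logging:Test:Level`.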
The JSON key-values you created should look like this in App Configuration:
azure-app-configuration Howto Move Resource Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-move-resource-between-regions.md
Title: Move an App Configuration store to another region description: Learn how to move an App Configuration store to a different region. - Previously updated : 8/23/2021
-#Customer intent: I want to move my App Configuration resource from one Azure region to another.
Last updated : 03/27/2023+
+#Customer intent: I want to move my App Configuration resource from one Azure region to another.
+
-# Move an App Configuration store to another region
+# Move an App Configuration store to another region
-App Configuration stores are region-specific and can't be moved across regions automatically. You must create a new App Configuration store in the target region, then move your content from the source store to the new target store. You might move your configuration to another region for a number of reasons. For example, to take advantage of a new Azure region with Availability Zone support, to deploy features or services available in specific regions only, or to meet internal policy and governance requirements.
+App Configuration stores are region-specific and can't be moved across regions automatically. You must create a new App Configuration store in the target region, then move your content from the source store to the new target store. You might move your configuration to another region for a number of reasons. For example, to take advantage of a new Azure region with availability zone support, to deploy features or services available in specific regions only, or to meet internal policy and governance requirements.
-The following steps walk you through the process of creating a new target store and exporting your current store to the new region.
+The following steps walk you through the process of creating a new target store and exporting your current store to the new region.
## Design considerations Before you begin, keep in mind the following concepts:
-* Configuration store names are globally unique.
+* Configuration store names are globally unique.
* You need to reconfigure your access policies and network configuration settings in the new configuration store.
-## Create the target configuration store
-
-### [Portal](#tab/portal)
-To create a new App Configuration store in the Portal, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com). In the upper-left corner of the home page, select **Create a resource**. In the **Search the Marketplace** box, enter *App Configuration* and select <kbd>Enter</kbd>.
-
- ![Search for App Configuration](../../includes/media/azure-app-configuration-create/azure-portal-search.png)
-1. Select **App Configuration** from the search results, and then select **Create**.
-
- ![Select Create](../../includes/media/azure-app-configuration-create/azure-portal-app-configuration-create.png)
-1. On the **Create App Configuration** pane, enter the following settings:
-
- | Setting | Suggested value | Description |
- ||||
- | **Subscription** | Your subscription | Select the Azure subscription of your original store |
- | **Resource group** | Your resource group | Select the Azure resource group of your original store |
- | **Resource name** | Globally unique name | Enter a unique resource name to use for the target App Configuration store. This can not be the same name as the previous configuration store. |
- | **Location** | Your target Location | Select the target region you want to move your configuration store to. |
- | **Pricing tier** | *Standard* | Select the desired pricing tier. For more information, see the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration). |
-1. Select **Review + create** to validate your settings.
-1. Select **Create**. The deployment might take a few minutes.
-1. Once the resource has been deployed, recreate the access policies and network configuration settings of our source store. These will not be transferred with the configuration. This can include using manage identities, virtual networks, and public network access.
-
-#### [Azure CLI](#tab/azcli)
-To create a new App Configuration store in the CLI, follow these steps:
-1. Log in to the Azure CLI with your credentials.
- ```azurecli
- az login
- ```
-1. Create a new configuration store with the `create` command,
- ```azurecli
- az appconfig create -g MyResourceGroup -n MyResourceName -l targetlocation --sku Standard
- ```
- and enter the following settings:
-
- | Setting | Suggested value | Description |
- ||||
- | **Resource group** | Your resource group | Select the Azure resource group of your original store |
- | **Resource name** | Globally unique name | Enter a unique resource name to use for the target App Configuration store. This can not be the same name as the previous configuration store. |
- | **Location** | Your target Location | Select the target region you want to move your configuration store to. |
- | **Sku** | *Standard* | Select the desired pricing tier. For more information, see the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration). |
-1. The deployment might take a few minutes. Once it is complete, recreate the access policies and network configuration settings of our source store. These will not be transferred with the configuration values. This can include using manage identities, virtual networks, and public network access. For more information, reference the [CLI documentation](./cli-samples.md).
-
+## Create a target configuration store
+
+1. Create a new App Configuration store by following the [App Configuration quickstart](../azure-app-configuration/quickstart-azure-app-configuration-create.md#create-an-app-configuration-store). For **Location** select the target region you want to move your configuration store to, and for **Pricing tier** select **Standard**.
+1. Once the resource has been deployed, recreate the access policies and network configuration settings of your source store. These settings aren't transferred with the configuration and can include managed identities, virtual networks, and public network access.
+ ## Transfer your configuration key-values ### [Portal](#tab/portal)
-Follow these steps to export your configuration to the target store using the Portal:
-1. Navigate to your source configuration store in the [Azure portal](https://portal.azure.com) and select **Import/Export** under **Operations** .
-1. Select **Export** and choose **App Configuration** in the **Target Service** dropdown.
+
+Follow these steps to export your configuration to the target store using the Azure portal:
+
+1. Navigate to your source configuration store in the [Azure portal](https://portal.azure.com) and select **Import/Export** under **Operations**.
+1. Select **Export** and choose **App Configuration** in the **Target Service** dropdown.
![Export to another configuration store](media/export-to-config-store.png)
-1. Click on **Select Resource** and enter your **Subscription** and **Resource group**. The **Resource** is the name of the target configuration store you created previously.
-1. Select **Apply** to verify your target configuration store.
-1. Leave the from label, time, and Label fields as their default values and select **Apply**.
-1. To verify that your configurations have been successfully transferred from your source to your target store, navigate to your target configuration store in the portal. Select **Configuration Explorer** under **Operations** and verify that this contains the same key value pairs as those in your original store.
- > [!NOTE]
- > This process only allows for configuration key-values to be exported by one label at a time. To export multiple, repeat steps 2-5 for each label.
+1. Click on **Select Resource** and enter your **Subscription** and **Resource group**. The **Resource** is the name of the target configuration store you created previously.
+1. Select **Apply** to verify your target configuration store.
+1. Leave the **From label**, **Time**, and **Label** fields with their default values and select **Apply**. For more information about labels, go to [Keys and values](concept-key-value.md).
+1. To verify that your configurations have been successfully transferred from your source to your target store, navigate to your target configuration store in the portal. Select **Configuration Explorer** under **Operations** and verify that this contains the same key-values as those in your original store.
### [Azure CLI](#tab/azcli)+ Follow these steps to export your configuration to the target store using the Azure CLI:
-1. In the Azure CLI, enter the following command that will export all of the values from your source configuration store to your target configuration store.
+
+1. In the Azure CLI, enter the following command that will export all of the values from your source configuration store to your target configuration store.
+ ```azurecli az appconfig kv export -n SourceConfigurationStore -d appconfig --dest-name TargetConfigurationStore --key * --label * --preserve-labels ```
-1. To verify that your configurations have been successfully transferred from your source to your target store, list all of the key values in your target store.
+
+1. To verify that your configurations have been successfully transferred from your source to your target store, list all of the key-values in your target store.
+ ```azurecli az appconfig kv list -n TargetConfigurationStore --all ```+
-## Delete your source configuration store
-If the configuration has been transferred to the target store, you can choose to delete your source configuration store.
+## Delete your source configuration store
+
+If the configuration has been transferred to the target store, you can choose to delete your source configuration store.
### [Portal](#tab/portal)+ Follow these steps to delete your source configuration store in the Azure portal:+ 1. Sign in to the [Azure portal](https://portal.azure.com), and select **Resource groups**.
-1. In the **Filter by name** box, enter the name of your resource group.
+1. In the **Filter by name** box, enter the name of your resource group.
1. In the result list, select the resource group name to see an overview.
-1. Select your source configuration store, and on the **Overview** blade, select **Delete**.
+1. Select your source configuration store, and on the **Overview** blade, select **Delete**.
1. When you're asked to confirm the deletion of the configuration store, select **Yes**. After a few moments, the source configuration store will be deleted. ### [Azure CLI](#tab/azcli)+ Follow these steps to delete your source configuration store in the Azure CLI:
-1. In the Azure CLI, run the following command:
+
+1. In the Azure CLI, run the following command:
+ ```azurecli az appconfig delete -g ResourceGroupName -n SourceConfigurationStore ```
- Note that the **Resource Group** is the one associated with your source Configuration store.
+
+ Note that the **Resource group** is the one associated with your source configuration store.
1. Deleting the source configuration store might take a few moments. You can verify that the operation was successful by listing all of the current configuration stores in your resource group. + ```azurecli az appconfig list -g ResourceGroupName ```+ After a few moments, the source configuration store will have been deleted. + ## Next steps > [!div class="nextstepaction"] > [Automatically back up key-values from Azure App Configuration stores](./howto-backup-config-store.md)
->[Azure App Configuration resiliency and disaster recovery](./concept-disaster-recovery.md)
+
+> [!div class="nextstepaction"]
+> [Azure App Configuration resiliency and disaster recovery](./concept-disaster-recovery.md)
+
+> [!div class="nextstepaction"]
+> [How to enable geo-replication](./howto-geo-replication.md)
azure-app-configuration Integrate Kubernetes Deployment Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-kubernetes-deployment-helm.md
Previously updated : 04/14/2020 Last updated : 03/27/2023 #Customer intent: I want to use Azure App Configuration data in Kubernetes deployment with Helm.
This tutorial assumes basic understanding of managing Kubernetes with Helm. Lear
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
- Install [Azure CLI](/cli/azure/install-azure-cli) (version 2.4.0 or later) - Install [Helm](https://helm.sh/docs/intro/install/) (version 2.14.0 or later)
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- A Kubernetes cluster.
-## Create an App Configuration store
+## Add key-values
+Add the following key-values to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-1. Select **Configuration Explorer** > **Create** to add the following key-value pairs:
-
- | Key | Value |
- |||
- | settings.color | White |
- | settings.message | Data from Azure App Configuration |
-
-2. Leave **Label** and **Content Type** empty for now.
+| Key | Value |
+|||
+| *settings.color* | *White* |
+| *settings.message* | *Data from Azure App Configuration* |
## Add a Key Vault reference to App Configuration
azure-app-configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview.md
Previously updated : 04/19/2022 Last updated : 03/20/2023 # What is Azure App Configuration?
The easiest way to add an App Configuration store to your application is through
> [!div class="nextstepaction"] > [Best practices](howto-best-practices.md)+
+> [!div class="nextstepaction"]
+> [FAQ](faq.yml)
+>
+> [!div class="nextstepaction"]
+> [Create an App Configuration store](quickstart-azure-app-configuration-create.md)
+
azure-app-configuration Quickstart Aspnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-aspnet-core-app.md
ms.devlang: csharp Previously updated : 01/04/2023 Last updated : 03/27/2023 #Customer intent: As an ASP.NET Core developer, I want to learn how to manage all my app settings in one place.
In this quickstart, you'll use Azure App Configuration to externalize storage an
## Prerequisites
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* [.NET Core SDK](https://dotnet.microsoft.com/download)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
+- [.NET Core SDK](https://dotnet.microsoft.com/download)
> [!TIP] > The Azure Cloud Shell is a free, interactive shell that you can use to run the command line instructions in this article. It has common Azure tools preinstalled, including the .NET Core SDK. If you're logged in to your Azure subscription, launch your [Azure Cloud Shell](https://shell.azure.com) from shell.azure.com. You can learn more about Azure Cloud Shell by [reading our documentation](../cloud-shell/overview.md)
-## Create an App Configuration store
+## Add key-values
+Add the following key-values to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-9. Select **Operations** > **Configuration explorer** > **Create** > **Key-value** to add the following key-value pairs:
-
- | Key | Value |
- ||-|
- | `TestApp:Settings:BackgroundColor` | *white* |
- | `TestApp:Settings:FontColor` | *black* |
- | `TestApp:Settings:FontSize` | *24* |
- | `TestApp:Settings:Message` | *Data from Azure App Configuration* |
-
- Leave **Label** and **Content type** empty for now. Select **Apply**.
+| Key | Value |
+||-|
+| *TestApp:Settings:BackgroundColor* | *white* |
+| *TestApp:Settings:FontColor* | *black* |
+| *TestApp:Settings:FontSize* | *24* |
+| *TestApp:Settings:Message* | *Data from Azure App Configuration* |
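The colon-delimited keys above form a hierarchy that ASP.NET Core's configuration system exposes as nested sections (for example, `TestApp:Settings` groups the four values). As a rough illustration of that grouping — a Python sketch, not the actual configuration binder:

```python
def to_tree(key_values, sep=":"):
    """Group flat keys like "TestApp:Settings:FontSize" into nested dicts,
    mirroring how colon-separated keys address configuration sections."""
    tree = {}
    for key, value in key_values.items():
        node = tree
        *parents, leaf = key.split(sep)
        for part in parents:
            # Descend into (or create) each intermediate section.
            node = node.setdefault(part, {})
        node[leaf] = value
    return tree
```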
## Create an ASP.NET Core web app
azure-app-configuration Quickstart Azure App Configuration Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-app-configuration-create.md
+
+ Title: "Quickstart: Create an Azure App Configuration store"
++
+description: "In this quickstart, learn how to create an App Configuration store."
+
+ms.devlang: csharp
++ Last updated : 03/14/2023+
+#Customer intent: As an Azure developer, I want to create an app configuration store to manage all my app settings in one place using Azure App Configuration.
+
+# Quickstart: Create an Azure App Configuration store
+
+Azure App Configuration is an Azure service designed to help you centrally manage your app settings and feature flags. In this quickstart, learn how to create an App Configuration store and add a few key-values and feature flags.
+
+## Prerequisites
+
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+## Create an App Configuration store
+
+### [Portal](#tab/azure-portal)
+
+1. On the Azure portal's homepage, enter *App Configuration* in the search box at the top and select **App Configuration** from the search results.
+
+ :::image type="content" source="media/azure-app-configuration-create/azure-portal-find-app-configuration.png" alt-text="Screenshot of the Azure portal that shows the App Configuration service in the search bar.":::
+
+1. Select **Create** or **Create app configuration**.
+
+ :::image type="content" source="media/azure-app-configuration-create/azure-portal-select-create-app-configuration.png" alt-text="Screenshot of the Azure portal that shows the button to launch the creation of an App Configuration store.":::
+
+1. In the **Basics** tab, enter the following settings:
+
+ | Setting | Suggested value | Description |
+ |-|-||
+ | **Subscription** | Your subscription | Select the Azure subscription that you want to use to create an App Configuration store. If your account has only one subscription, it's automatically selected and the **Subscription** list isn't displayed. |
+ | **Resource group** | *AppConfigTestResources* | Select or create a resource group for your App Configuration store resource. A resource group can be used to organize and manage multiple resources at the same time, such as deleting multiple resources in a single operation by deleting their resource group. For more information, see [Manage Azure resource groups by using the Azure portal](../azure-resource-manager/management/manage-resource-groups-portal.md). |
+ | **Location** | *Central US* | Use **Location** to specify the geographic location in which your App Configuration store is hosted. For the best performance, create the resource in the same region as other components of your application. |
+ | **Resource name** | Globally unique name | Enter a unique resource name to use for the App Configuration store resource. The name must be a string between 5 and 50 characters and contain only numbers, letters, and the `-` character. The name can't start or end with the `-` character. |
+ | **Pricing tier** | *Free* | Select **Free**. If you select the standard tier instead, you also get access to geo-replication and soft-delete features. For more information, see the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration). |
+
+ :::image type="content" source="media/azure-app-configuration-create/azure-portal-basic-tab.png" alt-text="Screenshot of the Azure portal that shows the Basics tab of the creation form with the free tier selected.":::
+
+1. Select **Review + create** to validate your settings.
+
+ :::image type="content" source="media/azure-app-configuration-create/azure-portal-review.png" alt-text="Screenshot of the Azure portal that shows the configuration settings in the Review + create tab.":::
+
+1. Select **Create**. The deployment might take a few minutes.
+1. After the deployment finishes, go to the App Configuration resource. Select **Settings** > **Access keys**. Make a note of the primary read-only key connection string. You'll use this connection string later to configure your application to communicate with the App Configuration store that you created.
+
+### [Azure CLI](#tab/azure-cli)
+
+To create an App Configuration store, start by creating a resource group for your new service.
+
+### Create a resource group
+
+Create a resource group named *AppConfigTestResources* in the Central US location with the [az group create](/cli/azure/group#az-group-create) command:
+
+```azurecli
+az group create --name AppConfigTestResources --location centralus
+```
+
+### Create an App Configuration store
+
+Create a new store with the [az appconfig create](/cli/azure/appconfig/#az-appconfig-create) command and replace the placeholder `<name>` with a unique resource name for your App Configuration store.
+
+```azurecli
+az appconfig create --location centralus --name <name> --resource-group AppConfigTestResources
+```
+++
+If you're following another tutorial that uses an App Configuration store, your store is now ready and you can return to that tutorial. To continue with this quickstart, follow the steps below.
+
+## Create a key-value
+
+### [Portal](#tab/azure-portal)
+
+ 1. Select **Operations** > **Configuration explorer** > **Create** > **Key-value** to add a key-value to a store. For example:
+
+ | Key | Value |
+ ||-|
+ |*TestApp:Settings:TextAlign* | *center* |
+
+1. Leave **Label** and **Content Type** with their default values, then select **Apply**. For more information about labels and content types, go to [Keys and values](concept-key-value.md).
+
+ :::image type="content" source="media/azure-app-configuration-create/azure-portal-create-key-value.png" alt-text="Screenshot of the Azure portal that shows the configuration settings to create a key-value.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Add a key-value to the App Configuration store using the [az appconfig kv set](/cli/azure/appconfig/#az-appconfig-kv-set) command. Replace the placeholder `<name>` with the name of the App Configuration store:
++
+```azurecli
+az appconfig kv set --name <name> --key TestApp:Settings:TextAlign --value center
+```
+++
+## Create a feature flag
+
+### [Portal](#tab/azure-portal)
+
+1. Select **Operations** > **Feature Manager** > **Create** and fill out the form with the following parameters:
+
+ | Setting | Suggested value | Description |
+ ||--|--|
+ | Enable feature flag | Box is checked. | Check this box to make the new feature flag active as soon as the flag has been created. |
+ | Feature flag name | *featureA* | The feature flag name is the unique ID of the flag, and the name that should be used when referencing the flag in code. |
+
+1. Leave all other fields with their default values and select **Apply**.
+
+ :::image type="content" source="media/azure-app-configuration-create/azure-portal-create-feature-flag.png" alt-text="Screenshot of the Azure portal that shows the configuration settings to create a feature flag.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Add a feature flag to the App Configuration store using the [az appconfig feature set](/cli/azure/appconfig/#az-appconfig-feature-set) command. Replace the placeholder `<name>` with the name of the App Configuration store:
+
+```azurecli
+az appconfig feature set --name <name> --feature featureA
+```
+++
+## Clean up resources
+
+When no longer needed, delete the resource group. Deleting a resource group also deletes the resources in it.
+
+> [!WARNING]
+> Deleting a resource group is irreversible.
+
+### [Portal](#tab/azure-portal)
+
+1. In the Azure portal, search for and select **Resource groups**.
+
+1. Select your resource group, for instance *AppConfigTestResources*, and then select **Delete resource group**.
+
+1. Type the resource group name to verify, and then select **Delete**.
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the [az group delete](/cli/azure/group/#az-group-delete) command. Replace the placeholder `<name>` with the name of your resource group:
++
+```azurecli
+az group delete --name <name>
+```
+++
+## Next steps
+
+Advance to the next article to learn how to create an ASP.NET Core app with Azure App Configuration to centralize storage and management of its application settings.
+> [!div class="nextstepaction"]
+> [Quickstart ASP.NET Core](quickstart-aspnet-core-app.md)
azure-app-configuration Quickstart Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-functions-csharp.md
ms.devlang: csharp Previously updated : 06/02/2021 Last updated : 03/20/2023 #Customer intent: As an Azure Functions developer, I want to manage all my app settings in one place using Azure App Configuration.
In this quickstart, you incorporate the Azure App Configuration service into an
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet).
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- [Visual Studio](https://visualstudio.microsoft.com/vs) with the **Azure development** workload. - [Azure Functions tools](../azure-functions/functions-develop-vs.md), if you don't have it installed with Visual Studio already.
-## Create an App Configuration store
+## Add a key-value
+Add the following key-value to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-7. Select **Configuration Explorer** > **+ Create** > **Key-value** to add the following key-value pairs:
-
- | Key | Value |
- |||
- | TestApp:Settings:Message | Data from Azure App Configuration |
-
- Leave **Label** and **Content Type** empty for now.
-
-8. Select **Apply**.
+| Key | Value |
+| -- | -- |
+| *TestApp:Settings:Message* | *Data from Azure App Configuration* |
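
Colon-delimited keys like `TestApp:Settings:Message` form a hierarchy that configuration providers map onto nested settings (a `TestApp:Settings` section with a `Message` property). As a rough, language-neutral illustration of that mapping — a sketch, not part of the quickstart itself — the following folds flat key-values into a nested structure:

```python
def to_nested(flat, delimiter=":"):
    """Fold flat, colon-delimited keys into a nested dict, roughly
    mirroring how a configuration provider binds hierarchical keys."""
    nested = {}
    for key, value in flat.items():
        node = nested
        *parents, leaf = key.split(delimiter)
        for part in parents:
            # Descend into (or create) each intermediate section.
            node = node.setdefault(part, {})
        node[leaf] = value
    return nested

settings = to_nested({"TestApp:Settings:Message": "Data from Azure App Configuration"})
# settings["TestApp"]["Settings"]["Message"] == "Data from Azure App Configuration"
```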
## Create a Functions app
azure-app-configuration Quickstart Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-app.md
In this quickstart, a .NET Framework console app is used as an example, but the
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- [Visual Studio](https://visualstudio.microsoft.com/vs) - [.NET Framework 4.7.2 or later](https://dotnet.microsoft.com/download/dotnet-framework)
-## Create an App Configuration store
+## Add a key-value
+Add the following key-value to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-7. Select **Configuration explorer** > **+ Create** > **Key-value** to add the following key-value:
-
- | Key | Value |
- |-|-|
- | *TestApp:Settings:Message* | *Data from Azure App Configuration* |
-
- Leave **Label** and **Content Type** empty. For more information about labels and content types, go to [Keys and values](concept-key-value.md#label-keys).
+| Key | Value |
+|-|-|
+| *TestApp:Settings:Message* | *Data from Azure App Configuration* |
## Create a .NET Framework console app
azure-app-configuration Quickstart Dotnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-core-app.md
ms.devlang: csharp Previously updated : 04/05/2022 Last updated : 03/20/2023 #Customer intent: As a .NET Core developer, I want to manage all my app settings in one place.
In this quickstart, you incorporate Azure App Configuration into a .NET Core con
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- [.NET Core SDK](https://dotnet.microsoft.com/download) - also available in the [Azure Cloud Shell](https://shell.azure.com).
-## Create an App Configuration store
+## Add a key-value
+Add the following key-value to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-7. Select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
-
- | Key | Value |
- |||
- | TestApp:Settings:Message | Data from Azure App Configuration |
-
- Leave **Label** and **Content Type** empty for now.
-
-8. Select **Apply**.
+| Key | Value |
+|-|-|
+| *TestApp:Settings:Message* | *Data from Azure App Configuration* |
## Create a .NET Core console app
azure-app-configuration Quickstart Feature Flag Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
ms.devlang: csharp Previously updated : 10/28/2022 Last updated : 03/28/2023 #Customer intent: As an ASP.NET Core developer, I want to use feature flags to control feature availability quickly and confidently. # Quickstart: Add feature flags to an ASP.NET Core app
-In this quickstart, you'll create a feature flag in Azure App Configuration and use it to dynamically control the availability of a new web page in an ASP.NET Core app without restarting or redeploying it.
+In this quickstart, you'll create a feature flag in Azure App Configuration and use it to dynamically control the availability of a new web page in an ASP.NET Core app without restarting or redeploying it.
The feature management support extends the dynamic configuration feature in App Configuration. The example in this quickstart builds on the ASP.NET Core app introduced in the dynamic configuration tutorial. Before you continue, finish the [quickstart](./quickstart-aspnet-core-app.md) and the [tutorial](./enable-dynamic-configuration-aspnet-core.md) to create an ASP.NET Core app with dynamic configuration. ## Prerequisites Follow the documents to create an ASP.NET Core app with dynamic configuration.
-* [Quickstart: Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md)
-* [Tutorial: Use dynamic configuration in an ASP.NET Core app](./enable-dynamic-configuration-aspnet-core.md)
+- [Quickstart: Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md)
+- [Tutorial: Use dynamic configuration in an ASP.NET Core app](./enable-dynamic-configuration-aspnet-core.md)
## Create a feature flag
-Navigate to the Azure App Configuration store you created previously in Azure portal. Under **Operations** section, select **Feature manager** > **Create** to add a feature flag called *Beta*.
+Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./quickstart-azure-app-configuration-create.md#create-a-feature-flag).
> [!div class="mx-imgBorder"] > ![Enable feature flag named Beta](./media/add-beta-feature-flag.png)
-Leave the rest of fields empty for now. Select **Apply** to save the new feature flag. To learn more, check out [Manage feature flags in Azure App Configuration](./manage-feature-flags.md).
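
Conceptually, a Boolean feature flag like *Beta* is just a named switch your code consults before exposing a feature; the `Microsoft.FeatureManagement` library used later in this quickstart evaluates the flag against App Configuration. The following standalone sketch (an in-memory stand-in, not the library's API) shows the gating idea:

```python
# Hypothetical in-memory flag store standing in for App Configuration.
feature_flags = {"Beta": True}

def is_enabled(flag_name):
    """Return whether a named feature flag is on; unknown flags are off."""
    return feature_flags.get(flag_name, False)

def render_navigation():
    """Only expose the Beta page when the flag is enabled."""
    pages = ["Home", "Privacy"]
    if is_enabled("Beta"):
        pages.append("Beta")
    return pages
```

Toggling the flag in the store changes what `render_navigation()` returns on the next evaluation, without redeploying the app — the same effect the quickstart demonstrates.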
- ## Use a feature flag 1. Navigate into the project's directory, and run the following command to add a reference to the [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) NuGet package.
azure-app-configuration Quickstart Feature Flag Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md
ms.devlang: csharp Previously updated : 8/26/2020 Last updated : 3/20/2023 # Quickstart: Add feature flags to an Azure Functions app
The .NET Feature Management libraries extend the framework with feature flag sup
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- [Visual Studio 2019](https://visualstudio.microsoft.com/vs) with the **Azure development** workload. - [Azure Functions tools](../azure-functions/functions-develop-vs.md#check-your-tools-version)
-## Create an App Configuration store
+## Add a feature flag
+Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./quickstart-azure-app-configuration-create.md#create-a-feature-flag).
-7. Select **Feature Manager** > **+Add** to add a feature flag called `Beta`.
-
- > [!div class="mx-imgBorder"]
- > ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
-
- Leave `label` and `Description` undefined for now.
-
-8. Select **Apply** to save the new feature flag.
+> [!div class="mx-imgBorder"]
+> ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
## Create a Functions app
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
.NET Previously updated : 10/19/2020 Last updated : 3/20/2023 #Customer intent: As a .NET Framework developer, I want to use feature flags to control feature availability quickly and confidently.
The .NET Feature Management libraries extend the framework with feature flag sup
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- [Visual Studio 2019](https://visualstudio.microsoft.com/vs) - [.NET Framework 4.8](https://dotnet.microsoft.com/download)
-## Create an App Configuration store
+## Add a feature flag
+Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./quickstart-azure-app-configuration-create.md#create-a-feature-flag).
-7. Select **Feature Manager** > **+Add** to add a feature flag called `Beta`.
-
- > [!div class="mx-imgBorder"]
- > ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
-
- Leave `label` undefined for now.
+> [!div class="mx-imgBorder"]
+> ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
## Create a .NET console app
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
ms.devlang: java Previously updated : 05/02/2022 Last updated : 03/20/2023 #Customer intent: As an Spring Boot developer, I want to use feature flags to control feature availability quickly and confidently.
The Spring Boot Feature Management libraries extend the framework with comprehen
## Prerequisites
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-* A supported [Java Development Kit SDK](/java/azure/jdk) with version 11.
-* [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
+- A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11.
+- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.
-## Create an App Configuration instance
+## Add a feature flag
+Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./quickstart-azure-app-configuration-create.md#create-a-feature-flag).
-7. Select **Feature Manager** > **+Add** to add a feature flag called `Beta`.
-
- > [!div class="mx-imgBorder"]
- > ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
-
- Leave `label` undefined for now.
+> [!div class="mx-imgBorder"]
+> ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
## Create a Spring Boot app
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md
In this quickstart, you incorporate Azure App Configuration into a Java Spring a
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11. - [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above. - A Spring Boot application. If you don't have one, create a Maven project with the [Spring Initializr](https://start.spring.io/). Be sure to select **Maven Project** and, under **Dependencies**, add the **Spring Web** dependency, and then select Java version 8 or higher.
-## Create an App Configuration store
+## Add a key-value
+Add the following key-value to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-9. Select **Configuration Explorer** > **+ Create** > **Key-value** to add the following key-value pairs:
-
- | Key | Value |
- |||
- | /application/config.message | Hello |
-
- Leave **Label** and **Content Type** empty for now.
-
-10. Select **Apply**.
+| Key | Value |
+|||
+| /application/config.message | Hello |
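
In this key, the `/application/` prefix is the default context the Spring Boot provider looks under, and the remainder (`config.message`) becomes the Spring property name. A small sketch of that split (illustrative only; the context prefix is configurable in the real provider):

```python
def spring_property_name(key, context="/application/"):
    """Strip the context prefix from an App Configuration key to
    recover the Spring property name it maps to."""
    if not key.startswith(context):
        raise ValueError(f"{key!r} is outside context {context!r}")
    return key[len(context):]

spring_property_name("/application/config.message")  # -> "config.message"
```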
## Connect to an App Configuration store
azure-app-configuration Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-javascript.md
ms.devlang: javascript Previously updated : 07/12/2021 Last updated : 03/20/2023 #Customer intent: As a JavaScript developer, I want to manage all my app settings in one place.
In this quickstart, you will use Azure App Configuration to centralize storage a
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- [LTS versions of Node.js](https://nodejs.org/en/about/releases/). For information about installing Node.js either directly on Windows or using the Windows Subsystem for Linux (WSL), see [Get started with Node.js](/windows/dev-environment/javascript/nodejs-overview)
-## Create an App Configuration store
+## Add a key-value
+Add the following key-value to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-7. Select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
-
- | Key | Value |
- |||
- | TestApp:Settings:Message | Data from Azure App Configuration |
-
- Leave **Label** and **Content Type** empty for now.
-
-8. Select **Apply**.
+| Key | Value |
+|||
+| TestApp:Settings:Message | Data from Azure App Configuration |
## Setting up the Node.js app
azure-app-configuration Quickstart Python Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python-provider.md
ms.devlang: python Previously updated : 03/10/2023 Last updated : 03/20/2023 #Customer intent: As a Python developer, I want to manage all my app settings in one place.
The Python App Configuration provider is a library running on top of the [Azure
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
- Python 3.6 or later - for information on setting up Python on Windows, see the [Python on Windows documentation](/windows/python/)
-## Create an App Configuration store
+## Add key-values
+Add the following key-values to the App Configuration store. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
-9. Select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
-
- | Key | Value | Label | Content type |
- |-|-|-|--|
- | *message* | *Hello* | Leave empty | Leave empty |
- | *test.message* | *Hello test* | Leave empty | Leave empty |
- | *my_json* | *{"key":"value"}* | Leave empty | *application/json* |
-
-10. Select **Apply**.
+| Key | Value | Label | Content type |
+|-|-|-|--|
+| *message* | *Hello* | Leave empty | Leave empty |
+| *test.message* | *Hello test* | Leave empty | Leave empty |
+| *my_json* | *{"key":"value"}* | Leave empty | *application/json* |
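
The *application/json* content type is what tells the provider to hand *my_json* back as structured data rather than a raw string. A minimal sketch of that distinction (not the provider's actual implementation):

```python
import json

def decode_value(value, content_type=None):
    """Values tagged application/json are deserialized; everything
    else is returned as the plain string it was stored as."""
    if content_type == "application/json":
        return json.loads(value)
    return value

decode_value('{"key":"value"}', "application/json")  # -> {'key': 'value'}
decode_value("Hello")                                # -> 'Hello'
```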
## Set up the Python app
azure-app-configuration Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python.md
This document shows examples of how to use the [Azure SDK for Python](https://gi
- Azure subscription - [create one for free](https://azure.microsoft.com/free/) - Python 3.6 or later - for information on setting up Python on Windows, see the [Python on Windows documentation](/windows/python/)-- An Azure App Configuration store
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
## Create a key-value
-1. In the Azure portal, open your App Configuration store and select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value:
+Add the following key-value to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
- | Key | Value |
- |-|-|
- | *TestApp:Settings:Message* | *Data from Azure App Configuration* |
+| Key | Value |
+|-|-|
+| *TestApp:Settings:Message* | *Data from Azure App Configuration* |
- Leave **Label** and **Content Type** empty for now.
-
-1. Select **Apply**.
## Set up the Python app
azure-arc Clean Up Past Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/clean-up-past-installation.md
kubectl delete crd dags.sql.arcdata.microsoft.com
kubectl delete crd exporttasks.tasks.arcdata.microsoft.com kubectl delete crd monitors.arcdata.microsoft.com kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com
+kubectl delete crd failovergroups.sql.arcdata.microsoft.com
+kubectl delete crd kafkas.arcdata.microsoft.com
+kubectl delete crd postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com
+kubectl delete crd sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com
+kubectl delete crd sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com
+kubectl delete crd telemetrycollectors.arcdata.microsoft.com
+kubectl delete crd telemetryrouters.arcdata.microsoft.com
# Substitute the name of the namespace the data controller was deployed in into {namespace}.
kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{names
[Start by creating a Data Controller](create-data-controller-indirect-cli.md)
-Already created a Data Controller? [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+Already created a Data Controller? [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
azure-arc Rotate Sql Managed Instance Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-sql-managed-instance-credentials.md
+ Title: Rotate SQL Managed Instance service-managed credentials (preview)
+description: Rotate SQL Managed Instance service-managed credentials (preview)
++++++ Last updated : 03/06/2023++
+# Rotate Azure Arc-enabled SQL Managed Instance service-managed credentials (preview)
+
+This article describes how to rotate service-managed credentials for Azure Arc-enabled SQL Managed Instance. Arc data services generates various service-managed credentials, such as certificates and SQL logins, that are used for monitoring, backup/restore, high availability, and so on. These credentials are considered custom resource credentials managed by Azure Arc data services.
+
+Service-managed credential rotation is a user-triggered operation that you initiate during a security issue or when periodic rotation is required for compliance.
+
+## Limitations
+
+Consider the following limitations when you rotate a managed instance's service-managed credentials:
+
+- SQL Server failover groups aren't supported.
+- Automatically pre-scheduled rotation isn't supported.
+- The service-managed DPAPI symmetric keys, keytab, active directory accounts, and service-managed TDE credentials aren't included in this credential rotation.
+- SQL Managed Instance Business Critical tier isn't supported.
+- This feature shouldn't be used in production currently. There's a known limitation where _rollback_ can't be triggered unless credential rotation completes successfully and the SQL managed instance is in the "Ready" state.
+
+## General Purpose tier
+
+During a SQL Managed Instance service-managed credential rotation, the managed instance Kubernetes pod is terminated and reprovisioned when new credentials are generated. This process causes a short amount of downtime as the new managed instance pod is created. To handle the interruption, build resiliency into your application such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on how to architect resiliency and [retry guidance for Azure Services](/azure/architecture/best-practices/retry-service-specific#sql-database-using-adonet).
+
+## Prerequisites
+
+Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created.
+
+- [An Azure Arc-enabled SQL Managed Instance created](./create-sql-managed-instance.md)
+
+## How to rotate service-managed credentials in a managed instance
+
+Service-managed credentials are associated with a generation within the managed instance. To rotate all service-managed credentials for a managed instance, the generation must be increased by 1.
+
+Run the following commands to get the current service-managed credential generation from the spec and generate the new generation of service-managed credentials. This action triggers a service-managed credential rotation.
+
+```console
+rotateCredentialGeneration=$(($(kubectl get sqlmi <sqlmi-name> -o jsonpath='{.spec.update.managedCredentialsGeneration}' -n <namespace>) + 1))
+```
++
+```console
+kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "update": { "managedCredentialsGeneration": '$rotateCredentialGeneration'} } }'
+```
++
+The `managedCredentialsGeneration` value identifies the target generation for the service-managed credentials. The rest of the features, like the configuration and the Kubernetes topology, remain the same.
+
+## How to roll back service-managed credentials in a managed instance
+
+> [!NOTE]
+> Rollback is required when credential rotation fails for any reason. Rolling back to the previous credential generation is supported only once, to generation n-1, where n is the current generation.
+
+Run the following two commands to get the current service-managed credential generation from the spec and roll back to the previous generation of service-managed credentials:
+
+```console
+rotateCredentialGeneration=$(($(kubectl get sqlmi <sqlmi-name> -o jsonpath='{.spec.update.managedCredentialsGeneration}' -n <namespace>) - 1))
+```
+
+```console
+kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "update": { "managedCredentialsGeneration": '$rotateCredentialGeneration'} } }'
+```
+
+Triggering a rollback is the same as triggering a rotation of service-managed credentials, except that the target generation is the previous generation and no new generation of credentials is created.
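+
The generation rules above — rotation increments the target generation, and rollback is a one-time step back to n-1 — can be modeled in a few lines. This is only a conceptual sketch of the behavior this article describes, not anything you run against the cluster:

```python
class CredentialGenerations:
    """Toy model of managedCredentialsGeneration: rotation bumps the
    target generation; rollback is supported only once, to n-1."""

    def __init__(self, current=1):
        self.current = current
        self.rolled_back = False

    def rotate(self):
        """Move to a new credential generation (n -> n+1)."""
        self.current += 1
        self.rolled_back = False
        return self.current

    def rollback(self):
        """Step back to the previous generation; a second consecutive
        rollback isn't supported."""
        if self.rolled_back:
            raise RuntimeError("rollback is supported only once, to n-1")
        self.current -= 1
        self.rolled_back = True
        return self.current
```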
+
+## Next steps
+
+- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)
+- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md)
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues" Previously updated : 03/13/2023 Last updated : 03/28/2023 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps."
For more information, see [Debugging DNS Resolution](https://kubernetes.io/docs/
Issues with outbound network connectivity from the cluster may arise for different reasons. First make sure all of the [network requirements](network-requirements.md) have been met.
-If you encounter this issue, and your cluster is behind an outbound proxy server, make sure you have passed proxy parameters during the onboarding of your cluster and that the proxy is configured correctly. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
+If you encounter this issue, and your cluster is behind an outbound proxy server, make sure you've passed proxy parameters during the onboarding of your cluster and that the proxy is configured correctly. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
### Unable to retrieve MSI certificate
Problems retrieving the MSI certificate are usually due to network issues. Check
### Insufficient cluster permissions
-If the provided kubeconfig file doesn't have sufficient permissions to install the Azure Arc agents, the Azure CLI command will return an error.
+If the provided kubeconfig file doesn't have sufficient permissions to install the Azure Arc agents, the Azure CLI command returns an error.
```azurecli az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
To resolve this issue, try the following steps.
config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s ```
-3. If the certificate below isn't present, the system assigned managed identity hasn't been installed.
+3. If the `azure-identity-certificate` isn't present, the system assigned managed identity hasn't been installed.
```console kubectl get secret -n azure-arc -o yaml | grep name:
To resolve this issue, try the following steps.
name: azure-identity-certificate ```
- To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue continues to happen, it could be an issue with your proxy settings. In that case, [try connecting your cluster to Azure Arc via a proxy](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) to connect your cluster to Arc via a proxy. Please also verify if all the [network prerequisites](network-requirements.md) have been met.
+ To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue continues to happen, it could be an issue with your proxy settings. In that case, [try connecting your cluster to Azure Arc via a proxy](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) to connect your cluster to Arc via a proxy. Also verify that all of the [network prerequisites](network-requirements.md) have been met.
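The delete-and-reinstall recovery described here can be sketched as follows; the resource group and cluster names are placeholders, and the proxy parameters apply only when the cluster sits behind an outbound proxy:

```azurecli
# Remove the existing Arc deployment
az connectedk8s delete --resource-group AzureArc --name AzureArcCluster

# Re-onboard; pass proxy parameters if the cluster is behind an outbound proxy
az connectedk8s connect --resource-group AzureArc --name AzureArcCluster \
  --proxy-https https://proxy.example.com:3128 \
  --proxy-http http://proxy.example.com:3128 \
  --proxy-skip-range 10.0.0.0/8,kubernetes.default.svc,.svc.cluster.local,.svc
```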
4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires the permission to mount host path.
To resolve this issue, try the following steps.
name: kube-aad-proxy-certificate ```
- If the certificate is missing, [delete the deployment](quickstart-connect-cluster.md#clean-up-resources) and re-onboard with a different name for the cluster. If the problem continues, please contact support.
+ If the certificate is missing, [delete the deployment](quickstart-connect-cluster.md#clean-up-resources) and re-onboard with a different name for the cluster. If the problem continues, contact support.
### Helm validation error
az connectedk8s connect -n AzureArcTest -g AzureArcTest
Ensure that you have the latest helm version installed before proceeding. This operation might take a while...
-Please check if the azure-arc namespace was deployed and run 'kubectl get pods -n azure-arc' to check if all the pods are in running state. A possible cause for pods stuck in pending state could be insufficientresources on the Kubernetes cluster to onboard to arc.
+Check if the azure-arc namespace was deployed, and run 'kubectl get pods -n azure-arc' to check if all the pods are in a running state. A possible cause for pods stuck in pending state could be insufficient resources on the Kubernetes cluster to onboard to Azure Arc.
ValidationError: Unable to install helm release: Error: customresourcedefinitions.apiextensions.k8s.io "connectedclusters.arc.azure.com" not found ```
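When pods are stuck in a pending state, inspecting their events usually shows whether the cluster is short on resources. A minimal check might look like this (the pod name is a placeholder):

```console
# List the Arc agent pods and their states
kubectl get pods -n azure-arc

# Inspect a pending pod's events for messages such as "Insufficient cpu" or "Insufficient memory"
kubectl describe pod <pending-pod-name> -n azure-arc
```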
az extension add --name k8s-configuration
## GitOps management
-### Flux v1 - General
-
-> [!NOTE]
-> Eventually Azure will stop supporting GitOps with Flux v1, so begin using [Flux v2](./tutorial-use-gitops-flux2.md) as soon as possible.
-
-To help troubleshoot issues with `sourceControlConfigurations` resource (Flux v1), run these Azure CLI commands with `--debug` parameter specified:
-
-```azurecli
-az provider show -n Microsoft.KubernetesConfiguration --debug
-az k8s-configuration create <parameters> --debug
-```
-
-### Flux v1 - Create configurations
-
-Write permissions on the Azure Arc-enabled Kubernetes resource (`Microsoft.Kubernetes/connectedClusters/Write`) are necessary and sufficient for creating configurations on that cluster.
-
-### `sourceControlConfigurations` remains `Pending` (Flux v1)
-
-```console
-kubectl -n azure-arc logs -l app.kubernetes.io/component=config-agent -c config-agent
-$ k -n pending get gitconfigs.clusterconfig.azure.com -o yaml
-apiVersion: v1
-items:
-- apiVersion: clusterconfig.azure.com/v1beta1
- kind: GitConfig
- metadata:
- creationTimestamp: "2020-04-13T20:37:25Z"
- generation: 1
- name: pending
- namespace: pending
- resourceVersion: "10088301"
- selfLink: /apis/clusterconfig.azure.com/v1beta1/namespaces/pending/gitconfigs/pending
- uid: d9452407-ff53-4c02-9b5a-51d55e62f704
- spec:
- correlationId: ""
- deleteOperator: false
- enableHelmOperator: false
- giturl: git@github.com:slack/cluster-config.git
- helmOperatorProperties: null
- operatorClientLocation: azurearcfork8s.azurecr.io/arc-preview/fluxctl:0.1.3
- operatorInstanceName: pending
- operatorParams: '"--disable-registry-scanning"'
- operatorScope: cluster
- operatorType: flux
- status:
- configAppliedTime: "2020-04-13T20:38:43.081Z"
- isSyncedWithAzure: true
- lastPolledStatusTime: ""
- message: 'Error: {exit status 1} occurred while doing the operation : {Installing
- the operator} on the config'
- operatorPropertiesHashed: ""
- publicKey: ""
- retryCountPublicKey: 0
- status: Installing the operator
-kind: List
-metadata:
- resourceVersion: ""
- selfLink: ""
-```
- ### Flux v2 - General To help troubleshoot issues with `fluxConfigurations` resource (Flux v2), run these Azure CLI commands with the `--debug` parameter specified:
For more information, see [How do I resolve `webhook does not support dry run` e
The `microsoft.flux` extension installs the Flux controllers and Azure GitOps agents into your Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. If the extension isn't already installed in a cluster and you create a GitOps configuration resource for that cluster, the extension will be installed automatically.
-If you experience an error during installation, or if the extension is in a failed state, run a script to investigate. The cluster-type parameter can be set to `connectedClusters` for an Arc-enabled cluster or `managedClusters` for an AKS cluster. The name of the `microsoft.flux` extension will be "flux" if the extension was installed automatically during creation of a GitOps configuration. Look in the "statuses" object for information.
+If you experience an error during installation, or if the extension is in a failed state, run a script to investigate. The cluster-type parameter can be set to `connectedClusters` for an Arc-enabled cluster or `managedClusters` for an AKS cluster. The name of the `microsoft.flux` extension is "flux" if the extension was installed automatically during creation of a GitOps configuration. Look in the "statuses" object for information.
One example:
kubectl delete namespaces flux-system
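The investigation of the extension state described above can be sketched with `az k8s-extension show`; the resource group and cluster names are placeholders:

```azurecli
# Show the flux extension and its statuses; use managedClusters instead for an AKS cluster
az k8s-extension show \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type connectedClusters \
  --name flux \
  --query "statuses"
```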
Some other aspects to consider:
-* For an AKS cluster, assure that the subscription has the `Microsoft.ContainerService/AKS-ExtensionManager` feature flag enabled.
+* For an AKS cluster, ensure that the subscription has the `Microsoft.ContainerService/AKS-ExtensionManager` feature flag enabled.
```azurecli az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager ```
-* Assure that the cluster doesn't have any policies that restrict creation of the `flux-system` namespace or resources in that namespace.
+* Ensure that the cluster doesn't have any policies that restrict creation of the `flux-system` namespace or resources in that namespace.
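For the feature flag mentioned in the first bullet, a sketch of verifying the registration state and then propagating it to the resource provider:

```azurecli
# Check the registration state of the feature flag
az feature show --namespace Microsoft.ContainerService --name AKS-ExtensionManager --query "properties.state"

# Once it reports "Registered", propagate the change to the resource provider
az provider register --namespace Microsoft.ContainerService
```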
-With these actions accomplished, you can either [recreate a flux configuration](./tutorial-use-gitops-flux2.md), which will install the flux extension automatically, or you can reinstall the flux extension manually.
+With these actions accomplished, you can either [recreate a flux configuration](./tutorial-use-gitops-flux2.md), which installs the flux extension automatically, or you can reinstall the flux extension manually.
### Flux v2 - Installing the `microsoft.flux` extension in a cluster with Azure AD Pod Identity enabled
The extension status also returns as "Failed".
The extension-agent pod is trying to get its token from IMDS on the cluster in order to talk to the extension service in Azure, but the token request is intercepted by the [pod identity](../../aks/use-azure-ad-pod-identity.md).
-You can fix this issue by upgrading to the latest version of the `microsoft.flux` extension. For version 1.6.1 or earlier, the workaround is to create an `AzurePodIdentityException` that will tell Azure AD Pod Identity to ignore the token requests from flux-extension pods.
+You can fix this issue by upgrading to the latest version of the `microsoft.flux` extension. For version 1.6.1 or earlier, the workaround is to create an `AzurePodIdentityException` that tells Azure AD Pod Identity to ignore the token requests from flux-extension pods.
```console apiVersion: aadpodidentity.k8s.io/v1
The controllers installed in your Kubernetes cluster with the Microsoft Flux ext
| Container Name | CPU limit | Memory limit | | -- | -- | -- |
-| fluxconfig-agent | 50m | 150Mi |
-| fluxconfig-controller | 100m | 150Mi |
-| fluent-bit | 20m | 150Mi |
-| helm-controller | 1000m | 1Gi |
-| source-controller | 1000m | 1Gi |
-| kustomize-controller | 1000m | 1Gi |
-| notification-controller | 1000m | 1Gi |
-| image-automation-controller | 1000m | 1Gi |
-| image-reflector-controller | 1000m | 1Gi |
-
-If you have enabled a custom or built-in Azure Gatekeeper Policy, such as `Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits`, that limits the resources for containers on Kubernetes clusters, you will need to either ensure that the resource limits on the policy are greater than the limits shown above or the `flux-system` namespace is part of the `excludedNamespaces` parameter in the policy assignment.
+| fluxconfig-agent | 50 m | 150 Mi |
+| fluxconfig-controller | 100 m | 150 Mi |
+| fluent-bit | 20 m | 150 Mi |
+| helm-controller | 1000 m | 1 Gi |
+| source-controller | 1000 m | 1 Gi |
+| kustomize-controller | 1000 m | 1 Gi |
+| notification-controller | 1000 m | 1 Gi |
+| image-automation-controller | 1000 m | 1 Gi |
+| image-reflector-controller | 1000 m | 1 Gi |
+
+If you've enabled a custom or built-in Azure Gatekeeper Policy that limits the resources for containers on Kubernetes clusters, such as `Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits`, ensure that either the resource limits on the policy are greater than the limits shown above or that the `flux-system` namespace is part of the `excludedNamespaces` parameter in the policy assignment.
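As an illustration only, excluding `flux-system` through the policy assignment's `excludedNamespaces` parameter might look like the following; the assignment name and policy definition ID are hypothetical:

```azurecli
# Hypothetical example: assign the policy with flux-system in excludedNamespaces
az policy assignment create \
  --name "container-limits" \
  --policy "<policy-definition-id>" \
  --params '{ "excludedNamespaces": { "value": ["kube-system", "gatekeeper-system", "flux-system"] } }'
```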
+
+### Flux v1
+
+> [!NOTE]
+> We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+
+To help troubleshoot issues with `sourceControlConfigurations` resource (Flux v1), run these Azure CLI commands with `--debug` parameter specified:
+
+```azurecli
+az provider show -n Microsoft.KubernetesConfiguration --debug
+az k8s-configuration create <parameters> --debug
+```
+
+#### Flux v1 - Create configurations
+
+Write permissions on the Azure Arc-enabled Kubernetes resource (`Microsoft.Kubernetes/connectedClusters/Write`) are necessary and sufficient for creating configurations on that cluster.
+
+#### `sourceControlConfigurations` remains `Pending` (Flux v1)
+
+```console
+kubectl -n azure-arc logs -l app.kubernetes.io/component=config-agent -c config-agent
+$ k -n pending get gitconfigs.clusterconfig.azure.com -o yaml
+apiVersion: v1
+items:
+- apiVersion: clusterconfig.azure.com/v1beta1
+ kind: GitConfig
+ metadata:
+ creationTimestamp: "2020-04-13T20:37:25Z"
+ generation: 1
+ name: pending
+ namespace: pending
+ resourceVersion: "10088301"
+ selfLink: /apis/clusterconfig.azure.com/v1beta1/namespaces/pending/gitconfigs/pending
+ uid: d9452407-ff53-4c02-9b5a-51d55e62f704
+ spec:
+ correlationId: ""
+ deleteOperator: false
+ enableHelmOperator: false
+ giturl: git@github.com:slack/cluster-config.git
+ helmOperatorProperties: null
+ operatorClientLocation: azurearcfork8s.azurecr.io/arc-preview/fluxctl:0.1.3
+ operatorInstanceName: pending
+ operatorParams: '"--disable-registry-scanning"'
+ operatorScope: cluster
+ operatorType: flux
+ status:
+ configAppliedTime: "2020-04-13T20:38:43.081Z"
+ isSyncedWithAzure: true
+ lastPolledStatusTime: ""
+ message: 'Error: {exit status 1} occurred while doing the operation : {Installing
+ the operator} on the config'
+ operatorPropertiesHashed: ""
+ publicKey: ""
+ retryCountPublicKey: 0
+ status: Installing the operator
+kind: List
+metadata:
+ resourceVersion: ""
+ selfLink: ""
+```
## Monitoring
This warning occurs when you use a service principal to log into Azure. The serv
az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv ```
-1. Sign in into Azure CLI using the service principal. Use the `<objectId>` value from above step to enable custom locations on the cluster:
+1. Sign in to the Azure CLI using the service principal. Use the `<objectId>` value from the previous step to enable custom locations on the cluster:
* To enable custom locations when connecting the cluster to Arc, run the following command:
This warning occurs when you use a service principal to log into Azure. The serv
## Azure Arc-enabled Open Service Mesh
-The steps below provide guidance on validating the deployment of all the Open Service Mesh (OSM) extension components on your cluster.
+This section shows how to validate the deployment of all the Open Service Mesh (OSM) extension components on your cluster.
### Check OSM Controller **Deployment**
Example output:
1845 ```
-The number in the output indicates the number of bytes, or the size of the CA Bundle. If this is empty, 0, or a number under 1000, the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the `ValidatingWebhook` will throw an error.
+The number in the output indicates the number of bytes, or the size of the CA Bundle. If the output is empty, 0, or a number under 1000, the CA Bundle isn't correctly provisioned. Without a correct CA Bundle, the `ValidatingWebhook` will throw an error.
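A check along these lines produces the byte count discussed above; the webhook configuration name is a placeholder, so list the configurations first to find the OSM one:

```console
# Find the OSM validating webhook configuration name
kubectl get ValidatingWebhookConfiguration

# Measure the CA Bundle size in bytes (name is a placeholder)
kubectl get ValidatingWebhookConfiguration <osm-validating-webhook-name> -o json | jq '.webhooks[0].clientConfig.caBundle' | wc -c
```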
### Check the `osm-mesh-config` resource
metadata:
### Check namespaces >[!Note]
->The arc-osm-system namespace will never participate in a service mesh and will never be labeled or annotated with the key/values below.
+>The arc-osm-system namespace will never participate in a service mesh and will never be labeled or annotated with the key/values shown here.
We use the `osm namespace add` command to join namespaces to a given service mesh. When a Kubernetes namespace is part of the mesh, confirm the following:
The following label must be present:
} ```
-If you aren't using `osm` CLI, you could also manually add these annotations to your namespaces. If a namespace isn't annotated with `"openservicemesh.io/sidecar-injection": "enabled"`, or isn't labeled with `"openservicemesh.io/monitored-by": "osm"`, the OSM Injector will not add Envoy sidecars.
+If you aren't using `osm` CLI, you could also manually add these annotations to your namespaces. If a namespace isn't annotated with `"openservicemesh.io/sidecar-injection": "enabled"`, or isn't labeled with `"openservicemesh.io/monitored-by": "osm"`, the OSM Injector won't add Envoy sidecars.
>[!Note] >After `osm namespace add` is called, only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with `kubectl rollout restart deployment` command.
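If you're adding these markers manually rather than with the `osm` CLI, the equivalent `kubectl` commands are straightforward; the namespace name is a placeholder:

```console
# Label and annotate the namespace so the OSM Injector adds Envoy sidecars
kubectl label namespace <namespace> openservicemesh.io/monitored-by=osm
kubectl annotate namespace <namespace> openservicemesh.io/sidecar-injection=enabled

# Restart existing workloads so new pods are injected
kubectl rollout restart deployment -n <namespace>
```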
azure-arc Manage Vm Extensions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-cli.md
The following example enables the Microsoft Antimalware extension on an Azure Ar
az connectedmachine extension create --resource-group "resourceGroupName" --machine-name "myMachineName" --location "regionName" --publisher "Microsoft.Azure.Security" --type "IaaSAntimalware" --name "IaaSAntimalware" --settings '"{\"AntimalwareEnabled\": \"true\"}"' ```
+The following example enables the Datadog extension on an Azure Arc-enabled Windows server:
+
+```azurecli
+az connectedmachine extension create --resource-group "resourceGroupName" --machine-name "myMachineName" --location "regionName" --publisher "Datadog.Agent" --type "DatadogWindowsAgent" --settings '{"site": "us3.datadoghq.com"}' --protected-settings '{"api_key": "YourDatadogAPIKey" }'
+```
+ ## List extensions installed To get a list of the VM extensions on your Azure Arc-enabled server, use [az connectedmachine extension list](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-list) with the `--machine-name` and `--resource-group` parameters.
azure-arc Manage Vm Extensions Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-powershell.md
The following example enables the Key Vault VM extension on an Azure Arc-enabled
New-AzConnectedMachineExtension -ResourceGroupName $resourceGroup -Location $location -MachineName $machineName -Name "KeyVaultForWindows or KeyVaultforLinux" -Publisher "Microsoft.Azure.KeyVault" -ExtensionType "KeyVaultforWindows or KeyVaultforLinux" -Setting $settings ```
+### Datadog VM extension
+
+The following example enables the Datadog VM extension on an Azure Arc-enabled server:
+
+```azurepowershell
+$resourceGroup = "resourceGroupName"
+$machineName = "machineName"
+$location = "machineRegion"
+$osType = "Windows" # change to Linux if appropriate
+$settings = @{
+ # change to your preferred Datadog site
+ site = "us3.datadoghq.com"
+}
+$protectedSettings = @{
+ # change to your Datadog API key
+ api_key = "APIKEY"
+}
+
+New-AzConnectedMachineExtension -ResourceGroupName $resourceGroup -Location $location -MachineName $machineName -Name "Datadog$($osType)Agent" -Publisher "Datadog.Agent" -ExtensionType "Datadog$($osType)Agent" -Setting $settings -ProtectedSetting $protectedSettings
+```
+ ## List extensions installed To get a list of the VM extensions on your Azure Arc-enabled server, use [Get-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/get-azconnectedmachineextension) with the `-MachineName` and `-ResourceGroupName` parameters.
azure-cache-for-redis Cache How To Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-encryption.md
Title: Configure active encryption for Enterprise Azure Cache for Redis instances
-description: Learn about encryption for your Azure Cache for Redis Enterprise instances across Azure regions.
+ Title: Configure disk encryption in Azure Cache for Redis
+description: Learn about disk encryption when using Azure Cache for Redis.
Previously updated : 03/24/2023 Last updated : 03/28/2023
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
The [az redisenterprise create](/cli/azure/redisenterprise#az-redisenterprise-cr
az redisenterprise create --cluster-name "cache1" --resource-group "rg1" --location "East US" --sku "Enterprise_E10" --persistence rdb-enabled=true rdb-frequency="1h" ```
-Existing caches can be updated using the [az redisenterprise update](/cli/azure/redisenterprise#az-redisenterprise-update) command. This example adds RDB persistence with 12 hour frequency to an existing cache instance:
+Existing caches can be updated using the [az redisenterprise database update](/cli/azure/redisenterprise/database#az-redisenterprise-database-update) command. This example adds RDB persistence with 12 hour frequency to an existing cache instance:
```azurecli az redisenterprise database update --cluster-name "cache1" --resource-group "rg1" --persistence rdb-enabled=true rdb-frequency="12h"
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 02/06/2023 Last updated : 03/28/2023 # What's New in Azure Cache for Redis
+## March 2023
+
+### In-place scale up and scale out for the Enterprise tiers (preview)
+
+The Enterprise and Enterprise Flash tiers now support the ability to scale cache instances up and out without requiring downtime or data loss. Scale up and scale out actions can both occur in the same operation.
+
+For more information, see [Scale an Azure Cache for Redis instance](cache-how-to-scale.md).
+
+### Support for RedisJSON in active geo-replicated caches (preview)
+
+Cache instances using active geo-replication now support the RedisJSON module.
+
+For more information, see [Configure active geo-replication](cache-how-to-active-geo-replication.md).
+
+### Flush operation for active geo-replicated caches (preview)
+
+Caches using active geo-replication now include a built-in _flush_ operation that can be initiated at the control plane level. Use the _flush_ operation with your cache instead of the `FLUSHALL` and `FLUSHDB` operations, which are blocked by design for active geo-replicated caches.
+
+For more information, see [Flush operation](cache-how-to-active-geo-replication.md#flush-operation).
+
+### Customer managed key (CMK) disk encryption (preview)
+
+Redis data that is saved on disk can now be encrypted using customer managed keys (CMK) in the Enterprise and Enterprise Flash tiers. Using CMK adds another layer of control to the default disk encryption.
+
+For more information, see [Enable disk encryption](cache-how-to-encryption.md).
+
+### Connection event audit logs (preview)
+
+Enterprise and Enterprise Flash tier caches can now log all connection, disconnection, and authentication events through diagnostic settings. Logging this information helps in security audits. You can also monitor who has access to your cache resource.
+
+For more information, see [Enabling connection audit logs](cache-monitor-diagnostic-settings.md).
+ ## November 2022 ### Support for RedisJSON
Beginning January 20, 2023, all versions of Azure Cache for Redis REST API, Powe
> > The default Redis version that is used when creating a cache instance can vary because it is based on the latest stable version offered in Azure Cache for Redis.
-If you need a specific version of Redis for your application, we recommend using latest artifact versions as shown in the table below. Then, choose the Redis version explicitly when you create the cache.
+If you need a specific version of Redis for your application, we recommend using the latest artifact versions as shown in the table. Then, choose the Redis version explicitly when you create the cache.
| Artifact | Version that supports specifying Redis version | |||
The default version of Redis that is used when creating a cache can change over
As of May 2022, Azure Cache for Redis rolls over to TLS certificates issued by DigiCert Global G2 CA Root. The current Baltimore CyberTrust Root expires in May 2025, requiring this change.
-We expect that most Azure Cache for Redis customers won't be affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), known as _certificate pinning_.
+We expect that most Azure Cache for Redis customers aren't affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), known as _certificate pinning_.
For more information, read this blog that contains instructions on [how to check whether your client application is affected](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-cache-for-redis-tls-upcoming-migration-to-digicert-global/ba-p/3171086). We recommend taking the actions recommended in the blog to avoid cache connectivity loss.
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
public class Function {
} public static class TestInputData {
- public String getKey() { return this.RowKey; }
- private String RowKey;
+ public String getKey() { return this.rowKey; }
+ private String rowKey;
} public static class Person {
- public String PartitionKey;
- public String RowKey;
- public String Name;
+ public String partitionKey;
+ public String rowKey;
+ public String name;
public Person(String p, String r, String n) {
- this.PartitionKey = p;
- this.RowKey = r;
- this.Name = n;
+ this.partitionKey = p;
+ this.rowKey = r;
+ this.name = n;
} } }
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Quiet Professionals, LLC](https://quietprofessionalsllc.com)| |[R3, LLC](https://www.r3-it.com/)| |[Red River](https://www.redriver.com)|
+|[RSMUS, LLC](https://rsmus.com)|
|[SAIC](https://www.saic.com)| |[SentinelBlue LLC](https://www.sentinelblue.com/)| |[Smartronix](https://www.smartronix.com)|
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
You can see an example of the Walls layer in the [sample drawing package].
You can include a DWG layer that contains doors. Each door must overlap the edge of a unit from the Unit layer.
-Door openings in an Azure Maps dataset are represented as a single-line segment that overlaps multiple unit boundaries. The following images show how Azure Maps converts door layer geometry into opening features in a dataset..
+Door openings in an Azure Maps dataset are represented as a single-line segment that overlaps multiple unit boundaries. The following images show how Azure Maps converts door layer geometry into opening features in a dataset.
![Four graphics that show the steps to generate openings](./media/drawing-requirements/opening-steps.png)
The `unitProperties` object contains a JSON array of unit properties.
| Property | Type | Required | Description | |--||-|-| |`unitName`|string|true|Name of unit to associate with this `unitProperty` record. This record is only valid when a label matching `unitName` is found in the `unitLabel` layers. |
-|`categoryName`|string|false|Purpose of the unit. A list of values that the provided rendering styles can make use of is documented in [categories.json](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
+|`categoryName`|string|false|Purpose of the unit. A list of values that the provided rendering styles can make use of is documented in [categories.json](https://atlas.microsoft.com/sdk/javascript/indoor/0.2/categories.json).|
|`occupants`|array of directoryInfo objects |false |List of occupants for the unit. | |`nameAlt`|string|false|Alternate name of the unit. | |`nameSubtitle`|string|false|Subtitle of the unit. |
The `zoneProperties` object contains a JSON array of zone properties.
| Property | Type | Required | Description | |--||-|-| |zoneName |string |true |Name of zone to associate with `zoneProperty` record. This record is only valid when a label matching `zoneName` is found in the `zoneLabel` layer of the zone. |
-|categoryName| string| false |Purpose of the zone. A list of values that the provided rendering styles can make use of is documented in [categories.json](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
+|categoryName| string| false |Purpose of the zone. A list of values that the provided rendering styles can make use of is documented in [categories.json](https://atlas.microsoft.com/sdk/javascript/indoor/0.2/categories.json).|
|zoneNameAlt| string| false |Alternate name of the zone. | |zoneNameSubtitle| string | false |Subtitle of the zone. | |zoneSetId| string | false | Set ID to establish a relationship among multiple zones so that they can be queried or selected as a group. For example, zones that span multiple levels. |
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
Now when you select that unit in the map, the pop-up menu will have the new laye
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [manifest]: drawing-requirements.md#manifest-file-requirements [unitProperties]: drawing-requirements.md#unitproperties
-[categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json
+[categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.2/categories.json
[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager [map configuration]: creator-indoor-maps.md#map-configuration
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
You can install and embed the *Azure Maps Indoor* module in one of two ways.
To use the globally hosted Azure Content Delivery Network version of the *Azure Maps Indoor* module, reference the following JavaScript and Style Sheet references in the `<head>` element of the HTML file: ```html
-<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.1/atlas-indoor.min.css" type="text/css"/>
-<script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.1/atlas-indoor.min.js"></script>
+<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.css" type="text/css"/>
+<script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.js"></script>
``` Or, you can download the *Azure Maps Indoor* module. The *Azure Maps Indoor* module contains a client library for accessing Azure Maps services. Follow the steps below to install and load the *Indoor* module into your web application.
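Assuming you install the module from npm (the package name matches the release notes referenced elsewhere in this file), the install step might look like:

```shell
# Install the Azure Maps Indoor module into your web application
npm install azure-maps-indoor
```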
Your file should now look similar to the HTML below.
<title>Indoor Maps App</title> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.1/atlas-indoor.min.css" type="text/css"/>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.css" type="text/css"/>
<script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
- <script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.1/atlas-indoor.min.js"></script>
+ <script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.js"></script>
<style> html,
azure-maps Release Notes Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-indoor-module.md
+
+ Title: Release notes - Indoor Module
+
+description: Release notes for the Azure Maps Indoor Module.
+ Last updated : 3/24/2023
+# Indoor Module release notes
+
+This document contains information about new features and other changes to the Azure Maps Indoor Module.
+
+## [0.2.0]
+
+### New features (0.2.0)
+
+- Support for new [drawing package 2.0] derived tilesets.
+
+- Support selecting a facility by clicking a feature that doesn't contain a facilityId but does have a levelId, so that the facility can be inferred from the levelId.
+
+### Changes (0.2.0)
+
+- Performance improvements for level picker and indoor manager.
+
+- Revamp of how level filters are applied to indoor style layers.
+
+### Bug fixes (0.2.0)
+
+- Fix the slider not updating when changing the level in the level picker when it's used inside the shadow DOM of a custom element.
+
+- Fix an exception when disabling dynamic styling.
+
+## Next steps
+
+Explore samples showcasing Azure Maps:
+
+> [!div class="nextstepaction"]
+> [Azure Maps Creator Samples]
+
+Stay up to date on Azure Maps:
+
+> [!div class="nextstepaction"]
+> [Azure Maps Blog]
+
+[drawing package 2.0]: ./drawing-package-guide.md
+[0.2.0]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.0
+[Azure Maps Creator Samples]: https://samples.azuremaps.com/?search=creator
+[Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (preview)
-### [3.0.0-preview.5](https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.5) (March 15, 2023)
+### [3.0.0-preview.5] (March 15, 2023)
#### Installation (3.0.0-preview.5)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2 (latest)
-### [2.2.5](https://www.npmjs.com/package/azure-maps-control/v/2.2.5)
+### [2.2.5]
#### New features (2.2.5)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"]
> [Azure Maps Blog]
+[3.0.0-preview.5]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.5
[3.0.0-preview.4]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.4
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3
[3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2
[3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.2.5]: https://www.npmjs.com/package/azure-maps-control/v/2.2.5
[2.2.4]: https://www.npmjs.com/package/azure-maps-control/v/2.2.4
[2.2.3]: https://www.npmjs.com/package/azure-maps-control/v/2.2.3
[2.2.2]: https://www.npmjs.com/package/azure-maps-control/v/2.2.2
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
Title: Define Azure Monitor Agent network settings description: Define network settings and enable network isolation for Azure Monitor Agent. -- Last updated 12/19/2022
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
Title: Azure Monitor agent extension versions description: This article describes the version details for the Azure Monitor agent virtual machine extension. -- Last updated 2/22/2023
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
Title: Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent description: Guidance for troubleshooting rsyslog issues on Linux virtual machines, scale sets with Azure Monitor agent and Data Collection Rules. -- Last updated 5/3/2022
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
Title: Troubleshoot the Azure Monitor agent on Linux virtual machines and scale sets description: Guidance for troubleshooting issues on Linux virtual machines, scale sets with Azure Monitor agent and Data Collection Rules. -- Last updated 5/3/2022
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
Title: Troubleshoot the Azure Monitor agent on Windows Arc-enabled server description: Guidance for troubleshooting issues on Windows Arc-enabled server with Azure Monitor agent and Data Collection Rules. -- Last updated 7/19/2022
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
Title: Troubleshoot the Azure Monitor agent on Windows virtual machines and scale sets description: Guidance for troubleshooting issues on Windows virtual machines, scale sets with Azure Monitor agent and Data Collection Rules. -- Last updated 6/9/2022
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Title: Set up the Azure Monitor agent on Windows client devices description: This article describes the instructions to install the agent on Windows 10, 11 client OS devices, configure data collection, manage and troubleshoot the agent. -- Last updated 1/9/2023
azure-monitor Data Sources Event Tracing Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-event-tracing-windows.md
Title: Collecting Event Tracing for Windows (ETW) Events for analysis Azure Mon
description: Learn how to collect Event Tracing for Windows (ETW) for analysis in Azure Monitor Logs. -- Last updated 02/07/2022 ms. reviewer: shseth
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Manage action groups in the Azure portal description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure Functions.- Last updated 09/07/2022-
azure-monitor Alerts Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic-portal.md
Title: Create and manage classic metric alerts using Azure Monitor description: Learn how to use Azure portal or PowerShell to create, view and manage classic metric alert rules.-- Last updated 2/23/2022
> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**. >
-Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Classic metric alerts is an older functionality that allows for alerting only on non-dimensional metrics. There is an existing newer functionality called Metric alerts, which has improved functionality over classic metric alerts. You can learn more about the new metric alerts functionality in [metric alerts overview](./alerts-metric-overview.md). In this article, we will describe how to create, view and manage classic metric alert rules through Azure portal and PowerShell.
+Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Classic metric alerts are older functionality that allows alerting only on non-dimensional metrics. There's a newer functionality called metric alerts, which has improved functionality over classic metric alerts. You can learn more about the new metric alerts functionality in [metric alerts overview](./alerts-metric-overview.md). In this article, we'll describe how to create, view, and manage classic metric alert rules through the Azure portal and PowerShell.
## With Azure portal
After you create an alert, you can select it and do one of the following tasks:
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-This sections shows how to use PowerShell commands create, view and manage classic metric alerts.The examples in the article illustrate how you can use Azure Monitor cmdlets for classic metric alerts.
+This section shows how to use PowerShell commands to create, view, and manage classic metric alerts. The examples in the article illustrate how you can use Azure Monitor cmdlets for classic metric alerts.
1. If you haven't already, set up PowerShell to run on your computer. For more information, see [How to Install and Configure PowerShell](/powershell/azure/). You can also review the entire list of Azure Monitor PowerShell cmdlets at [Azure Monitor (Insights) Cmdlets](/powershell/module/az.applicationinsights).
azure-monitor Alerts Enable Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-enable-template.md
Title: Resource Manager template - create metric alert description: Learn how to use a Resource Manager template to create a classic metric alert to receive notifications by email or webhook.-- Last updated 03/30/2022
azure-monitor Alerts Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-query.md
Title: Optimize log alert queries | Microsoft Docs description: This article gives recommendations for writing efficient alert queries.-- Last updated 2/23/2022+ # Optimize log alert queries
azure-monitor Alerts Log Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-webhook.md
Title: Webhook actions for log alerts in Azure alerts description: Describes how to configure a log alert pushes with webhook action and available customizations-- Last updated 2/23/2022+ # Webhook actions for log alert rules
azure-monitor Alerts Metric Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md
Title: Creating Metric Alerts for Logs in Azure Monitor description: Tutorial on creating near-real time metric alerts on popular log analytics data.-- Last updated 7/24/2022
azure-monitor Alerts Metric Multiple Time Series Single Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-multiple-time-series-single-rule.md
Title: Monitor multiple time series in a single metric alert rule description: Alert at scale by using a single alert rule for multiple time series.-- Last updated 2/23/2022+ # Monitor multiple time series in a single metric alert rule
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Title: Supported resources for metric alerts in Azure Monitor description: Reference on support metrics and logs for metric alerts in Azure Monitor-- Last updated 3/8/2023
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
Title: Update logic apps & runbooks for alerts migration description: Learn how to modify your webhooks, logic apps, and runbooks to prepare for voluntary migration.-- Last updated 2/23/2022+ # Prepare your logic apps and runbooks for migration of classic alert rules
azure-monitor Alerts Rate Limiting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-rate-limiting.md
Title: Rate limiting for SMS, emails, push notifications description: Understand how Azure limits the number of possible SMS, email, Azure App Service push, or webhook notifications from an action group.-- Last updated 2/23/2022
azure-monitor Alerts Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-resource-move.md
Title: How to update alert rules or alert processing rules when their target resource moves to a different Azure region description: Background and instructions for how to update alert rules or alert processing rules when their target resource moves to a different Azure region. -- Last updated 2/23/2022
azure-monitor Alerts Sms Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-sms-behavior.md
Title: SMS alert behavior in action groups description: SMS message format and responding to SMS messages to unsubscribe, resubscribe, or request help.-- Last updated 2/23/2022
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
Title: Troubleshoot log alerts in Azure Monitor | Microsoft Docs description: Common issues, errors, and resolutions for log alert rules in Azure.-- Last updated 2/23/2022+ # Troubleshoot log alerts in Azure Monitor
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
Title: Frequently asked questions about Azure Monitor metric alerts description: Common issues with Azure Monitor metric alerts and possible solutions. -- Last updated 8/31/2022 ms:reviwer: harelbr
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
Title: Troubleshooting Azure Monitor alerts and notifications description: Common issues with Azure Monitor alerts and possible solutions. -- Last updated 2/23/2022
azure-monitor Alerts Understand Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-understand-migration.md
description: Understand how the alerts migration works and troubleshoot problems
Last updated 2/23/2022--+ # Understand migration options to newer alerts
azure-monitor Alerts Using Migration Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-using-migration-tool.md
Title: Migrate Azure Monitor alert rules description: Learn how to use the voluntary migration tool to migrate your classic alert rules.-- Last updated 2/23/2022+ # Use the voluntary migration tool to migrate your classic alert rules
azure-monitor Alerts Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-webhooks.md
Title: Call a webhook with a classic metric alert in Azure Monitor description: Learn how to reroute Azure metric alerts to other, non-Azure systems.-- Last updated 2/23/2022
azure-monitor Itsmc Dashboard Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard-errors.md
Title: Connector status errors in the ITSMC dashboard description: Learn about common errors that exist in the IT Service Management Connector dashboard. -- Last updated 2/23/2022
azure-monitor Itsmc Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard.md
Title: Investigate errors by using the ITSMC dashboard description: Learn how to use the IT Service Management Connector dashboard to investigate errors. -- Last updated 2/23/2022
azure-monitor Itsmc Resync Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-resync-servicenow.md
Title: How to manually fix ServiceNow sync problems description: Reset the connection to ServiceNow so alerts in Microsoft Azure can again call ServiceNow -- Last updated 03/30/2022
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
Title: Troubleshoot problems in ITSMC description: Learn how to resolve common problems in IT Service Management Connector. -- Last updated 2/23/2022
The following sections identify common symptoms, possible causes, and resolution
**Cause**: There can be several reasons for this:
-* Templates are not shown as a part of the action definition dropdown and an error message is shown: "Can't retrieve the template configuration, see the connector logs for more information."
-* Values are not shown in the dropdowns of the default fields as a part of the action definition and an error message is shown: "No values found for the following fields: \<field names\>."
-* Incidents/Events are not created in ServiceNow.
+* Templates aren't shown as a part of the action definition dropdown and an error message is shown: "Can't retrieve the template configuration, see the connector logs for more information."
+* Values aren't shown in the dropdowns of the default fields as a part of the action definition and an error message is shown: "No values found for the following fields: \<field names\>."
+* Incidents/Events aren't created in ServiceNow.
**Resolution**: * [Sync the connector](itsmc-resync-servicenow.md).
The following sections identify common symptoms, possible causes, and resolution
### In the incidents received from ServiceNow, the configuration item is blank **Cause**: There can be several reasons for this:
-* The alert is not a log alert. Configuration items are only supported by log alerts.
+* The alert isn't a log alert. Configuration items are only supported by log alerts.
* The search results do not include the **Computer** or **Resource** column.
* The values in the configuration item field do not match an entry in the CMDB.
azure-monitor Automate With Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-with-logic-apps.md
- Title: Automate Application Insights processes by using Logic Apps
-description: Learn how you can quickly automate repeatable processes by adding the Application Insights connector to your logic app.
- Previously updated : 07/31/2022-----
-# Automate Application Insights processes by using Logic Apps
-
-Do you find yourself repeatedly running the same queries on your telemetry data to check whether your service is functioning properly? Are you looking to automate these queries for finding trends and anomalies and then build your own workflows around them? The Application Insights connector for Azure Logic Apps is the right tool for this purpose.
-
-> [!NOTE]
-> The Application Insights connector has been replaced by the [Azure Monitor connector](../logs/logicapp-flow-connector.md). It's integrated with Azure Active Directory instead of requiring an API key. You can also use it to retrieve data from a Log Analytics workspace.
-
-With this integration, you can automate numerous processes without writing a single line of code. You can create a logic app with the Application Insights connector to quickly automate any Application Insights process.
-
-You can also add other actions. The Logic Apps feature of Azure App Service makes hundreds of actions available. For example, by using a logic app, you can automatically send an email notification or create a bug in Azure DevOps. You can also use one of the many available [templates](../../logic-apps/logic-apps-create-logic-apps-from-templates.md) to help speed up the process of creating your logic app.
-
-## Create a logic app for Application Insights
-
-In this tutorial, you learn how to create a logic app that uses the Log Analytics autocluster algorithm to group attributes in the data for a web application. The flow automatically sends the results by email. This example shows how you can use Application Insights analytics and Logic Apps together.
-
-### Create a logic app
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Create a resource** > **Web** > **Logic App**.
-
- ![Screenshot that shows the New logic app window.](./media/automate-with-logic-apps/1createlogicapp.png)
-
-### Create a trigger for your logic app
-1. In the **Logic Apps Designer** window, under **Start with a common trigger**, select **Recurrence**.
-
- ![Screenshot that shows the Logic Apps Designer window.](./media/automate-with-logic-apps/2logicappdesigner.png)
-
-1. In the **Interval** box, enter **1**. In the **Frequency** box, select **Day**.
-
- ![Screenshot that shows the Logic Apps Designer Recurrence window.](./media/automate-with-logic-apps/3recurrence.png)
-
-### Add an Application Insights action
-
-1. Select **New step**.
-
-1. In the **Choose an action** search box, enter **Application Insights**.
-
-1. Under **Actions**, select **Visualize Analytics query - Azure Application Insights**.
-
- ![Screenshot that shows the Logic App Designer Choose an action window.](./media/automate-with-logic-apps/4visualize.png)
-
-### Connect to an Application Insights resource
-
-For this step, you need an application ID and an API key for your resource.
-
-1. Select **API Access** > **Create API key**.
-
- ![Screenshot that shows the API Access page in the Azure portal with the Create API key button selected.](./media/automate-with-logic-apps/5apiaccess.png)
-
- ![Screenshot that shows the Application ID in the Azure portal.](./media/automate-with-logic-apps/6apikey.png)
-
-1. Provide a name for your connection, the application ID, and the API key.
-
- ![Screenshot that shows the Logic App Designer flow connection window.](./media/automate-with-logic-apps/7connection.png)
-
-### Specify the Log Analytics query and chart type
-In the following example, the query selects the failed requests within the last day and correlates them with exceptions that occurred as part of the operation. Log Analytics correlates the failed requests based on the `operation_Id` identifier. The query then segments the results by using the autocluster algorithm.
-
-When you create your own queries, verify that they're working properly in Log Analytics before you add them to your flow.
-
-1. In the **Query** box, add the following Log Analytics query:
-
- ```
- requests
- | where timestamp > ago(1d)
- | where success == "False"
- | project name, operation_Id
- | join ( exceptions
- | project problemId, outerMessage, operation_Id
- ) on operation_Id
- | evaluate autocluster()
- ```
-
-1. In the **Chart Type** box, select **Html Table**.
-
- ![Screenshot that shows the Log Analytics query configuration window.](./media/automate-with-logic-apps/8query.png)
-
-### Configure the logic app to send email
-
-1. Select **New step**.
-
-1. In the search box, enter **Office 365 Outlook**.
-
-1. Select **Send an email - Office 365 Outlook**.
-
- ![Screenshot that shows the Send an email button on the Office 365 Outlook screen.](./media/automate-with-logic-apps/9sendemail.png)
-
-1. In the **Send an email** window:
-
- 1. Enter the email address of the recipient.
- 1. Enter a subject for the email.
- 1. Select anywhere in the **Body** box. On the **Dynamic content** menu that opens at the right, select **Body**.
- 1. Select the **Add new parameter** dropdown list and select **Attachments** and **Is HTML**.
-
- ![Screenshot that shows the Send an email window with the Body box highlighted and the Dynamic content menu with Body highlighted on the right side.](./media/automate-with-logic-apps/10emailbody.png)
-
- ![Screenshot that shows the Add new parameter dropdown list in the Send an email window with the Attachments and Is HTML checkboxes selected.](./media/automate-with-logic-apps/11emailparameter.png)
-
-1. On the **Dynamic content** menu:
-
- 1. Select **Attachment Name**.
- 1. Select **Attachment Content**.
- 1. In the **Is HTML** box, select **Yes**.
-
- ![Screenshot that shows the Office 365 email configuration screen.](./media/automate-with-logic-apps/12emailattachment.png)
-
-### Save and test your logic app
-
-1. Select **Save** to save your changes.
-
- You can wait for the trigger to run the logic app, or you can run the logic app immediately by selecting **Run**.
-
- ![Screenshot that shows the Save button on the Logic Apps Designer screen.](./media/automate-with-logic-apps/13save.png)
-
- When your logic app runs, the recipients you specified in the email list will receive an email that looks like this example:
-
- ![Image that shows an email message generated by a logic app with a query result set.](./media/automate-with-logic-apps/email-generated-by-logic-app-generated-email.png)
-
- > [!NOTE]
- > The log app generates an email with a JPG file that depicts the query result set. If your query doesn't return results, the logic app won't create a JPG file.
-
-## Next steps
--- Learn more about creating [Log Analytics queries](../logs/get-started-queries.md).-- Learn more about [Logic Apps](../../logic-apps/logic-apps-overview.md).
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
description: Recommendations for reducing costs in Azure Monitor.
Previously updated : 10/17/2022 Last updated : 03/29/2023
Azure Monitor includes the following design considerations related to cost:
> [!div class="checklist"]
> - Use diagnostic settings and transformations to collect only critical resource log data from Azure resources.
> - Configure VM agents to collect only critical events.
-> - Use transformations to filter resource logs.
+> - Use transformations to filter resource logs for [supported tables](logs/tables-feature-support.md).
> - Ensure that VMs aren't sending data to multiple workspaces.

**Monitor usage**
You may be able to significantly reduce your costs by optimizing the configurati
| Recommendation | Description |
|:|:|
| Configure pricing tier or dedicated cluster for your Log Analytics workspaces. | By default, Log Analytics workspaces use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers) or [dedicated cluster](logs/logs-dedicated-clusters.md), which allows you to commit to a daily minimum of data collected in exchange for a lower rate.<br><br>See [Azure Monitor Logs cost calculations and options](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. |
-| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.<br><br>See [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more information about Basic Logs and [Query Basic Logs in Azure Monitor (preview)](.//logs/basic-logs-query.md) for details on query limitations. |
+| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.<br><br>See [Configure Basic Logs in Azure Monitor](logs/basic-logs-configure.md) for more information about Basic Logs and [Query Basic Logs in Azure Monitor](.//logs/basic-logs-query.md) for details on query limitations. |
| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost.<br><br>See [Configure data retention and archive policies in Azure Monitor Logs](logs/data-retention-archive.md) for details on how to configure your workspace and how to work with archived data. |
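The commitment-tier trade-off can be illustrated with a back-of-the-envelope comparison. The prices below are hypothetical placeholders, not real Azure rates; consult the Azure pricing page for actual figures.

```javascript
// Illustrative only: compare pay-as-you-go against a 100 GB/day
// commitment tier. PAYG_PER_GB and TIER_RATE are made-up numbers,
// not real Azure prices.
const PAYG_PER_GB = 2.3; // hypothetical $/GB ingested
const TIER_GB = 100;     // daily commitment, in GB
const TIER_RATE = 196;   // hypothetical flat $/day for the tier

function cheaperPlan(gbPerDay) {
  const payg = gbPerDay * PAYG_PER_GB;
  // Data above the commitment is billed at the tier's effective per-GB rate.
  const overage = Math.max(0, gbPerDay - TIER_GB) * (TIER_RATE / TIER_GB);
  const tier = TIER_RATE + overage;
  return tier < payg ? 'commitment tier' : 'pay-as-you-go';
}
```

At low volumes pay-as-you-go wins; once daily ingestion approaches the commitment, the flat tier rate becomes the cheaper option.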
Since Azure Monitor charges for the collection of data, your goal should be to c
| Recommendation | Description |
|:|:|
-| Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data. See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
+| Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, you can use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data for those resources that use a [supported table](logs/tables-feature-support.md). See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
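A workspace transformation is expressed as a KQL statement inside a data collection rule. The fragment below is a minimal sketch, not a complete DCR; the stream name, filter, and destination name are illustrative placeholders.

```json
{
  "properties": {
    "dataFlows": [
      {
        "streams": [ "Microsoft-Table-AzureDiagnostics" ],
        "transformKql": "source | where Level != 'Informational'",
        "destinations": [ "myWorkspace" ]
      }
    ]
  }
}
```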
#### Virtual machines
Since Azure Monitor charges for the collection of data, your goal should be to c
| Recommendation | Description |
|:|:|
-| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are Workspace-based so that they can leveage new cost saving tools such as Basic Logs, Commitment Tiers, Retention by data type and Data Archive. |
+| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are [Workspace-based](app/create-workspace-resource.md) so that they can leverage new cost savings tools such as [Basic Logs](logs/basic-logs-configure.md), [Commitment Tiers](logs/cost-logs.md#commitment-tiers), and [Retention by data type and Data Archive](logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). |
| Use sampling to tune the amount of data collected. | [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. | | Limit the number of Ajax calls. | [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too. | | Disable unneeded modules. | [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
Since Azure Monitor charges for the collection of data, your goal should be to c
| Recommendation | Description |
|:|:|
-| Remove unnecssary data during data ingestion | After following all of the preveious recommendations, consider using Azure Monitor data collection transformations to reduce the size of your data during ingestion. |
+| Remove unnecessary data during data ingestion | After following all of the previous recommendations, consider using Azure Monitor [data collection transformations](essentials/data-collection-transformations.md) to reduce the size of your data during ingestion. |
## Monitor workspace and analyze usage
azure-monitor Best Practices Multicloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-multicloud.md
If you use Defender for Cloud for security management and threat detection, then
## Kubernetes [Container insights](containers/container-insights-overview.md) in Azure Monitor uses [Azure Arc-enabled Kubernetes](../azure-arc/servers/overview.md) to provide a consistent experience between both [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) and Kubernetes clusters in your AWS EKS or GCP GKE instances. You can view your hybrid clusters right alongside your Azure machines and onboard them using identical methods. This includes using standard Azure constructs such as Azure Policy and applying tags.
+Use Prometheus [remote write](./essentials/prometheus-remote-write.md) from your on-premises, AWS, or GCP clusters to send data to Azure managed service for Prometheus.
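A minimal sketch of the corresponding `remote_write` section in a self-managed Prometheus configuration, assuming Azure AD managed-identity authentication; the endpoint URL, API version, and client ID below are placeholders, not real values.

```yaml
remote_write:
  - url: "https://<metrics-ingestion-endpoint>/dataCollectionRules/<dcr-id>/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview"
    azuread:
      cloud: AzurePublic
      managed_identity:
        client_id: "<identity-client-id>"
```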
+ The [Azure Monitor agent](agents/agents-overview.md) installed by Container insights collects telemetry from the client operating system of clusters regardless of their location. Use the same analysis tools on Container insights to monitor clusters across your different cloud environments. - [Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md)
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Title: Monitor Azure Arc-enabled Kubernetes clusters Last updated 05/24/2022 -- description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.
azure-monitor Container Insights Enable Provisioned Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-provisioned-clusters.md
Title: Monitor AKS hybrid clusters Last updated 01/10/2023 -- description: Collect metrics and logs of AKS hybrid clusters using Azure Monitor.
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
Title: Transition from the Container Monitoring Solution to using Container Insights Last updated 8/29/2022 -- description: "Learn how to migrate from using the legacy OMS solution to monitoring your containers using Container Insights"
azure-monitor App Insights Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/app-insights-metrics.md
Last updated 07/03/2019 -+ # Application Insights log-based metrics
azure-monitor Collect Custom Metrics Guestos Resource Manager Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md
Title: Collect Windows scale set metrics in Azure Monitor with template description: Send guest OS metrics to the Azure Monitor metric store by using a Resource Manager template for a Windows virtual machine scale set- Last updated 09/09/2019- # Send guest OS metrics to the Azure Monitor metric store by using an Azure Resource Manager template for a Windows virtual machine scale set
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
Title: Send classic Windows VM metrics to Azure Monitor metrics database description: Send Guest OS metrics to the Azure Monitor data store for a Windows virtual machine (classic)- Last updated 09/09/2019- # Send Guest OS metrics to the Azure Monitor metrics database for a Windows virtual machine (classic)
azure-monitor Collect Custom Metrics Guestos Vm Cloud Service Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md
Title: Send classic Cloud Services metrics to Azure Monitor metrics database description: Describes the process for sending Guest OS performance metrics for Azure classic Cloud Services to the Azure Monitor metric store. -- Last updated 09/09/2019- # Send Guest OS metrics to the Azure Monitor metric store classic Cloud Services
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Title: Collect custom metrics for Linux VM with the InfluxData Telegraf agent description: Instructions on how to deploy the InfluxData Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor. - Last updated 06/16/2022- # Collect custom metrics for a Linux VM with the InfluxData Telegraf agent
azure-monitor Data Collection Rule Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-best-practices.md
Title: Best practices for data collection rule creation and management in Azure Monitor description: Details on the best practices to be followed to correctly create and maintain data collection rule in Azure Monitor. -- Last updated 12/14/2022-+
azure-monitor Diagnostics Settings Policies Deployifnotexists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostics-settings-policies-deployifnotexists.md
The following steps show how to apply the policy to send audit logs to for key v
1. Select **Review + create**, then select **Create** . :::image type="content" source="./media/diagnostics-settings-policies-deployifnotexists/assign-policy-remediation.png" alt-text="A screenshot of the assign policy page, remediation tab.":::
-The policy visible in the resources' diagnostic setting after approximately 30 minutes.
### [CLI](#tab/cli) To apply a policy using the CLI, use the following commands:
Find the role in the policy definition by searching for *roleDefinitionIds*
```azurecli az policy assignment identity assign --system-assigned --resource-group rg-001 --role 92aaf0da-9dab-42b6-94a3-d43ce8d16293 --identity-scope /subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab/resourceGroups/rg001 --name policy-assignment-1 ```
-
- When assigning policies that send logs to event hubs, you must manually add the *Azure Event Hubs Data Owner* role for the event hub to your policy assigned identity.
-
- ```azurecli
- az role assignment create --assignee <Principal ID> --role "Azure Event Hubs Data Owner" --scope /subscriptions/<subscription ID>/resourceGroups/<event hub's resource group>
- ```
+ 1. Trigger a scan to find existing resources using [`az policy state trigger-scan`](https://learn.microsoft.com/cli/azure/policy/state?view=azure-cli-latest#az-policy-state-trigger-scan). ```azurecli
To apply a policy using the PowerShell, use the following commands:
New-AzRoleAssignment -Scope $rg.ResourceId -ObjectId $policyAssignment.Identity.PrincipalId -RoleDefinitionId $roleDefId } ```
- When assigning policies that send logs to event hubs, you must manually add the *Azure Event Hubs Data Owner* role for the event hub to your system assigned Managed Identity.
- ```azurepowershell
- New-AzRoleAssignment -Scope /subscriptions/<subscription ID>/resourceGroups/<event hub's resource group> -ObjectId $policyAssignment.Identity.PrincipalId -RoleDefinitionId "Azure Event Hubs Data Owner"
- ```
1. Scan for compliance, then create a remediation task to force compliance for existing resources. ```azurepowershell
To apply a policy using the PowerShell, use the following commands:
```
-> [!Note]
-> When assigning policies that send logs to event hubs, you must manually add the *Azure Event Hubs Data Owner* role for the event hub to your policy assigned identity.
-> Use the `az role assignment create` Azure CLI command.
-> ```azurecli
-> az role assignment create --assignee <Principal ID> --role "Azure Event Hubs Data Owner" --scope /subscriptions/<subscription ID>/resourceGroups/<event hub's resource group>
->```
-> For example:
-> ```azurecli
-> az role assignment create --assignee xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --role "Azure Event Hubs Data Owner" --scope /subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/resourceGroups/myResourceGroup
->```
->
-> Find your principal ID on the **Policy Assignment** page, **Managed Identity** tab.
-> :::image type="content" source="./media/diagnostics-settings-policies-deployifnotexists/find-principal.png" alt-text="A screenshot showing the policy assignment page, managed identity tab.":::
-
+The policy is visible in the resources' diagnostic settings after approximately 30 minutes.
## Remediation tasks
azure-monitor Metric Chart Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metric-chart-samples.md
Title: Azure Monitor metric chart example description: Learn about visualizing your Azure Monitor data.- - Last updated 01/29/2019-+ # Metric chart examples
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
Title: Advanced features of Metrics Explorer description: Metrics are a series of measured values and counts that Azure collects. Learn to use Metrics Explorer to investigate the health and usage of resources.- Last updated 06/09/2022-+
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
Title: Get started with Azure Monitor metrics explorer description: Learn how to create your first metric chart with Azure Monitor metrics explorer.- Last updated 02/21/2022-
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
This section discusses collecting and monitoring data.
As soon as you create an Azure resource, Azure Monitor is enabled and starts collecting metrics and activity logs. With some configuration, you can gather more monitoring data and enable other features. The Azure Monitor data platform is made up of Metrics and Logs. Each feature collects different kinds of data and enables different Azure Monitor features. - [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources into a time-series database. The metric database is automatically created for each Azure subscription. Use [Metrics Explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Metrics.-- [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) collects and stores numeric data from Azure Kubernetes Service, in a Prometheus compatible time-series database. Onboard to managed Prometheus using remote write, or the Azure Kubernetes Service add-on. Analyze the data using a Prometheus explorer workbook in your [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md), or [Grafana](../visualize/grafana-plugin.md). - [Azure Monitor Logs](../logs/data-platform-logs.md) collects logs and performance data where they can be retrieved and analyzed in different ways by using log queries. You must create a Log Analytics workspace to collect log data. Use [Log Analytics](../logs/log-analytics-tutorial.md) to analyze data from Azure Monitor Logs. ### <a id="monitoring-data-from-azure-resources"></a> Monitor data from Azure resources
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
Last updated 01/24/2022
-# Collect Prometheus metrics from AKS cluster (preview)
-This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you configure your AKS cluster to send data to Azure Monitor managed service for Prometheus, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension. You just need to specify the Azure Monitor workspace that the data should be sent to.
+# Collect Prometheus metrics from an AKS cluster (preview)
+This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you configure your AKS cluster to send data to Azure Monitor managed service for Prometheus, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension. Then you specify the Azure Monitor workspace where the data should be sent.
> [!NOTE]
-> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster even though the Azure Monitor agent installed in this process is the same one used by Container insights. See [Enable Container insights](../containers/container-insights-onboard.md) for different methods to enable Container insights on your cluster. See [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md) for details on adding Prometheus collection to a cluster that already has Container insights enabled.
+> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster even though the Azure Monitor agent installed in this process is the same one used by Container insights.
+>
+>For different methods to enable Container insights on your cluster, see [Enable Container insights](../containers/container-insights-onboard.md). For details on adding Prometheus collection to a cluster that already has Container insights enabled, see [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md).
## Prerequisites - You must either have an [Azure Monitor workspace](azure-monitor-workspace-overview.md) or [create a new one](azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace). - The cluster must use [managed identity authentication](../../aks/use-managed-identity.md).-- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor Workspace.
+- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor workspace:
- Microsoft.ContainerService - Microsoft.Insights - Microsoft.AlertsManagement
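The providers listed above can be registered with the standard `az provider register` command. A minimal sketch that prints the commands to run for each provider on each subscription (the subscription placeholders are illustrative; remove the `echo` to actually invoke the Azure CLI):

```shell
# Print the registration commands for both subscriptions.
# Placeholders are illustrative; drop the echo to run the commands for real.
for sub in "<aks-subscription-id>" "<workspace-subscription-id>"; do
  for ns in Microsoft.ContainerService Microsoft.Insights Microsoft.AlertsManagement; do
    echo az provider register --namespace "$ns" --subscription "$sub"
  done
done
```

Provider registration is idempotent, so re-running the commands on an already-registered subscription is harmless.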
Use any of the following methods to install the Azure Monitor agent on your AKS
### [Azure portal](#tab/azure-portal) 1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your cluster.
-2. Select **Managed Prometheus** to display a list of AKS clusters.
-3. Select **Configure** next to the cluster you want to enable.
-
- :::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot of Azure Monitor workspace with Prometheus configuration.":::
+1. Select **Managed Prometheus** to display a list of AKS clusters.
+1. Select **Configure** next to the cluster you want to enable.
+ :::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot that shows an Azure Monitor workspace with a Prometheus configuration.":::
### [CLI](#tab/cli) #### Prerequisites -- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.-- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).-- Aks-preview version 0.5.122 or higher is required for this feature. You can check the aks-preview version using the `az version` command.-
-#### Install metrics addon
-
-Use `az aks update` with the `-enable-azuremonitormetrics` option to install the metrics addon. Following are multiple options depending on the Azure Monitor workspace and Grafana workspace you want to use.
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in the Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
+- The aks-preview extension must be installed by using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+- The aks-preview version 0.5.122 or higher is required for this feature. Check the aks-preview version by using the `az version` command.
+#### Install the metrics add-on
-**Create a new default Azure Monitor workspace.**<br>
-If no Azure Monitor Workspace is specified, a default Azure Monitor Workspace is created in the `DefaultRG-<cluster_region>` following the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
-This Azure Monitor Workspace is in the region specific in [Region mappings](#region-mappings).
+Use `az aks update` with the `--enable-azuremonitormetrics` option to install the metrics add-on. Depending on the Azure Monitor workspace and Grafana workspace you want to use, choose one of the following options:
-```azurecli
-az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
-```
+- **Create a new default Azure Monitor workspace.**<br>
+If no Azure Monitor workspace is specified, a default Azure Monitor workspace is created in the resource group `DefaultRG-<cluster_region>` with a name that follows the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
+This Azure Monitor workspace is in the region specified in [Region mappings](#region-mappings).
+
+ ```azurecli
+ az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
+ ```
-**Use an existing Azure Monitor workspace.**<br>
-If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data is available in Grafana.
-
-```azurecli
-az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
-```
+- **Use an existing Azure Monitor workspace.**<br>
+If the Azure Monitor workspace is linked to one or more Grafana workspaces, the data is available in Grafana.
-**Use an existing Azure Monitor workspace and link with an existing Grafana workspace.**<br>
-This creates a link between the Azure Monitor workspace and the Grafana workspace.
+ ```azurecli
+ az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
+ ```
-```azurecli
-az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
-```
+- **Use an existing Azure Monitor workspace and link with an existing Grafana workspace.**<br>
+This option creates a link between the Azure Monitor workspace and the Grafana workspace.
+
+ ```azurecli
+ az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
+ ```
-The output for each command looks similar to the following:
+The output for each command looks similar to the following example:
```json "azureMonitorProfile": {
The output for each command looks similar to the following:
``` #### Optional parameters
-Following are optional parameters that you can use with the previous commands.
+You can use the following optional parameters with the previous commands:
-- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotations keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include more annotations provide a list of resource names in their plural form and Kubernetes annotation keys, you would like to allow for them. A single `*` can be provided per resource instead to allow any annotations, but that has severe performance implications.-- `--ksm-metric-labels-allow-list` is a comma-separated list of more Kubernetes label keys that is used in the resource's labels metric. By default the metric contains only name and namespace labels. To include more labels provide a list of resource names in their plural form and Kubernetes label keys, you would like to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications.-- `--enable-windows-recording-rules` lets you enable the recording rule groups required for proper functioning of the windows dashboards.
+- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotations keys used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include more annotations, provide a list of resource names in their plural form and Kubernetes annotation keys that you want to allow for them. A single `*` can be provided per resource instead to allow any annotations, but it has severe performance implications.
+- `--ksm-metric-labels-allow-list` is a comma-separated list of additional Kubernetes label keys used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include more labels, provide a list of resource names in their plural form and Kubernetes label keys that you want to allow for them. A single asterisk (`*`) can be provided per resource instead to allow any labels, but it has severe performance implications.
+- `--enable-windows-recording-rules` lets you enable the recording rule groups required for proper functioning of the Windows dashboards.
**Use annotations and labels.**
Following are optional parameters that you can use with the previous commands.
az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]" ```
-The output is similar to the following:
+The output is similar to the following example:
```json "azureMonitorProfile": {
The output is similar to the following:
} ```
-## [Resource Manager](#tab/resource-manager)
+## [Azure Resource Manager](#tab/resource-manager)
### Prerequisites -- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.-- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider following this [documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in the Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
+- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor workspace's subscription, register the Azure Monitor workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.-- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.-- Users with 'User Access Administrator' role in the subscription of the AKS cluster can be able to enable 'Monitoring Data Reader' role directly by deploying the template.-
+- The template must be deployed in the same resource group as the Azure Managed Grafana workspace.
+- Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Data Reader role directly by deploying the template.
### Retrieve required values for Grafana resource
-From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
```json "properties": {
If you're using an existing Azure Managed Grafana instance that already has been
} ```
-### Download and edit template and parameter file
+### Download and edit the template and the parameter file
1. Download the template at [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template) and save it as **existingClusterOnboarding.json**.
-2. Download the parameter file at [https://aka.ms/azureprometheus-enable-arm-template-parameterss](https://aka.ms/azureprometheus-enable-arm-template-parameters) and save it as **existingClusterParam.json**.
-3. Edit the values in the parameter file.
+1. Download the parameter file at [https://aka.ms/azureprometheus-enable-arm-template-parameters](https://aka.ms/azureprometheus-enable-arm-template-parameters) and save it as **existingClusterParam.json**.
+1. Edit the values in the parameter file.
| Parameter | Value | |:|:|
If you're using an existing Azure Managed Grafana instance that already has been
| `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. | | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
- | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys that will be used in the resource's labels metric. |
- | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys that will be used in the resource's labels metric. |
+ | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys to be used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys to be used in the resource's labels metric. |
| `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. | -
-4. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This is similar to the following:
+1. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. The result should look similar to the following example:
```json {
If you're using an existing Azure Managed Grafana instance that already has been
} ```
-In this json, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON, and they're added here to the ARM template. If you have no existing Grafana integrations, then don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
+In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. They're added here to the Azure Resource Manager template (ARM template). If you have no existing Grafana integrations, don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
-The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor Workspace resource ID provided in the parameters file.
+The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
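For reference when filling in that parameter, an Azure Monitor workspace resource ID follows the standard Azure Resource Manager pattern for `microsoft.monitor/accounts` resources. A sketch with purely illustrative names:

```shell
# Illustrative values only; substitute your own subscription ID, resource
# group, and Azure Monitor workspace name.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="my-monitor-rg"
WORKSPACE_NAME="my-amw"
echo "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/microsoft.monitor/accounts/${WORKSPACE_NAME}"
```

The same value appears verbatim in the **JSON view** on the workspace's **Overview** page, which is the simplest place to copy it from.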
## [Bicep](#tab/bicep)
The final `azureMonitorWorkspaceResourceId` entry is already in the template and
- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`. - The Azure Monitor workspace and Azure Managed Grafana workspace must already be created. - The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.-- Users with 'User Access Administrator' role in the subscription of the AKS cluster can be able to enable 'Monitoring Data Reader' role directly by deploying the template.
+- Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Data Reader role directly by deploying the template.
-### Minor Limitation while deploying through bicep
-Currently in bicep, there's no way to explicitly "scope" the Monitoring Data Reader role assignment on a string parameter "resource ID" for Azure Monitor Workspace (like in ARM template). Bicep expects a value of type "resource | tenant" and currently there's no rest api [spec](https://github.com/Azure/azure-rest-api-specs) for Azure Monitor Workspace. So, as a workaround, the default scoping for Monitoring Data Reader role is on the resource group and thus the role is applied on the same Azure monitor workspace (by inheritance) which is the expected behavior. Thus, after deploying this bicep template, the Grafana resource will get read permissions in all the Azure Monitor Workspaces under the subscription.
+### Minor limitation with Bicep deployment
+Currently in Bicep, there's no way to explicitly "scope" the Monitoring Data Reader role assignment on a string parameter "resource ID" for an Azure Monitor workspace (like in an ARM template). Bicep expects a value of type `resource | tenant`. Currently, there's no REST API [spec](https://github.com/Azure/azure-rest-api-specs) for an Azure Monitor workspace.
+As a workaround, the default scoping for the Monitoring Data Reader role is on the resource group. The role is applied on the same Azure Monitor workspace (by inheritance), which is the expected behavior. After you deploy this Bicep template, the Grafana resource gets read permissions in all the Azure Monitor workspaces under the subscription.
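The workaround can be pictured as a resource-group-scoped role assignment. This is a minimal sketch, not the template's actual content: `grafanaPrincipalId` is a hypothetical parameter name, and the GUID is assumed to be the built-in Monitoring Data Reader role definition ID.

```bicep
// Hypothetical sketch of the workaround: assign Monitoring Data Reader at the
// resource group scope so the Azure Monitor workspace inherits it.
param grafanaPrincipalId string

// Assumed built-in role definition ID for Monitoring Data Reader.
var monitoringDataReaderRoleId = subscriptionResourceId(
  'Microsoft.Authorization/roleDefinitions',
  'b0d8363b-8ddd-447d-831f-62ca05bff136')

resource grafanaDataReader 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, grafanaPrincipalId, 'monitoring-data-reader')
  properties: {
    roleDefinitionId: monitoringDataReaderRoleId
    principalId: grafanaPrincipalId
    principalType: 'ServicePrincipal'
  }
}
```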
-### Retrieve required values for Grafana resource
+### Retrieve required values for a Grafana resource
-From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
```json "properties": {
If you're using an existing Azure Managed Grafana instance that already has been
} ```
-### Download and edit templates and parameter file
+### Download and edit templates and the parameter file
-1. Download the main bicep template from [here](https://aka.ms/azureprometheus-enable-bicep-template) and save it as **FullAzureMonitorMetricsProfile.bicep**.
-2. Download the parameter file from [here](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main bicep template.
-3. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files in the same directory as the main bicep template.
-4. Edit the values in the parameter file.
-5. The main bicep template creates all the required resources and uses two modules for creating the dcra and monitor metrics profile resources from the other two bicep files.
+1. Download the main Bicep template from [this GitHub file](https://aka.ms/azureprometheus-enable-bicep-template). Save it as **FullAzureMonitorMetricsProfile.bicep**.
+1. Download the parameter file from [this GitHub file](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main Bicep template.
+1. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files in the same directory as the main Bicep template.
+1. Edit the values in the parameter file.
+1. The main Bicep template creates all the required resources. It uses two modules for creating the Data Collection Rule Associations (DCRA) and Azure Monitor metrics profile resources from the other two Bicep files.
| Parameter | Value |
|:---|:---|
| `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
| `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
| `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
- | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys that will be used in the resource's labels metric. |
- | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys that will be used in the resource's labels metric. |
+ | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys used in the resource's labels metric. |
| `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
| `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
| `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
-6. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This is similar to the following:
+1. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. Your updated property should look similar to the following example:
```json
{
  "properties": {
    "grafanaIntegrations": {
      "azureMonitorWorkspaceIntegrations": [
        {
          "azureMonitorWorkspaceResourceId": "full_resource_id_1"
        },
        {
          "azureMonitorWorkspaceResourceId": "full_resource_id_2"
        },
        {
          "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
        }
      ]
    }
  }
}
```
-In this json, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON, and they're added here to the ARM template. If you have no existing Grafana integrations, then don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
+In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. They're added here to the ARM template. If you have no existing Grafana integrations, don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
-The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor Workspace resource ID provided in the parameters file.
+The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
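The module wiring described above can be sketched as follows. The parameter names are illustrative assumptions; the downloaded Bicep files define the actual module contracts.

```bicep
// Illustrative sketch only: how a main template can consume the two nested
// Bicep files as modules. The parameter names below are assumptions, not the
// actual contract of the downloaded files.
param clusterResourceId string
param clusterLocation string

module dcra 'nested_azuremonitormetrics_dcra_clusterResourceId.bicep' = {
  name: 'azuremonitormetrics-dcra'
  params: {
    clusterResourceId: clusterResourceId
    clusterLocation: clusterLocation
  }
}

module metricsProfile 'nested_azuremonitormetrics_profile_clusterResourceId.bicep' = {
  name: 'azuremonitormetrics-profile'
  params: {
    clusterResourceId: clusterResourceId
    clusterLocation: clusterLocation
  }
}
```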
## [Azure Policy](#tab/azurepolicy)

### Prerequisites

-- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in the Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
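Feature registration can take several minutes to complete. As a sketch, you can check the state and then refresh the provider with the standard `az feature` and `az provider` commands:

```azurecli
# Check the registration state; repeat until it reports "Registered".
az feature show --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview --query properties.state

# Propagate the change to the resource provider.
az provider register --namespace Microsoft.ContainerService
```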
-### Download Azure policy rules and parameters and deploy
+### Download Azure Policy rules and parameters and deploy
-1. Download the main Azure policy rules template from [here](https://aka.ms/AddonPolicyMetricsProfile) and save it as **AddonPolicyMetricsProfile.rules.json**.
-2. Download the parameter file from [here](https://aka.ms/AddonPolicyMetricsProfile.parameters) and save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template.
-3. Create the policy definition using a command like: `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules .\AddonPolicyMetricsProfile.rules.json --params .\AddonPolicyMetricsProfile.parameters.json`
-4. After creating the policy definition, go to Azure portal -> Policy -> Definitions and select the Policy definition you created.
-5. Select 'Assign' and then go to the 'Parameters' tab and fill in the details. Then select 'Review + Create'.
-6. Now that the policy is assigned to the subscription, whenever you create a new cluster, which does not have Prometheus enabled, the policy will run and deploy the resources. If you want to apply the policy to existing AKS cluster, create a 'Remediation task' for that AKS cluster resource after going to the 'Policy Assignment'.
-7. Now you should see metrics flowing in the existing linked Grafana resource, which is linked with the corresponding Azure Monitor Workspace.
+1. Download the main Azure Policy rules template from [this GitHub file](https://aka.ms/AddonPolicyMetricsProfile). Save it as **AddonPolicyMetricsProfile.rules.json**.
+1. Download the parameter file from [this GitHub file](https://aka.ms/AddonPolicyMetricsProfile.parameters). Save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template.
+1. Create the policy definition by using a command like: <br> `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules .\AddonPolicyMetricsProfile.rules.json --params .\AddonPolicyMetricsProfile.parameters.json`
+1. After you create the policy definition, in the Azure portal, select **Policy** > **Definitions**. Select the policy definition you created.
+1. Select **Assign**, go to the **Parameters** tab, and fill in the details. Select **Review + Create**.
+1. Now that the policy is assigned to the subscription, whenever you create a new cluster that doesn't have Prometheus enabled, the policy runs and deploys the resources. If you want to apply the policy to an existing AKS cluster, create a **Remediation task** for that AKS cluster resource from the **Policy Assignment** page.
+1. Now you should see metrics flowing in the existing linked Grafana resource, which is linked with the corresponding Azure Monitor workspace.
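If you prefer the CLI to the portal for assigning the policy, a sketch with `az policy assignment create` follows. The assignment name and scope are placeholders; supply the policy's parameters with `--params` by using the names defined in the downloaded parameters file.

```azurecli
az policy assignment create \
  --name "prometheus-metrics-addon" \
  --policy "(Preview) Prometheus Metrics addon" \
  --scope "/subscriptions/<subscription-id>"
```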
-In case you create a new Managed Grafana resource from Azure portal, please link it with the corresponding Azure Monitor Workspace from the 'Linked Grafana Workspaces' tab of the relevant Azure Monitor Workspace page. Assign the role 'Monitoring Data Reader' to the Grafana MSI on the Azure Monitor Workspace resource so that it can read data for displaying the charts, using the instructions below.
+If you create a new Managed Grafana resource from the Azure portal, link it with the corresponding Azure Monitor workspace from the **Linked Grafana Workspaces** tab of the relevant **Azure Monitor Workspace** page. Assign the Monitoring Data Reader role to the Grafana MSI on the Azure Monitor workspace resource so that it can read data for displaying the charts. Use the following instructions.
-1. From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+1. On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-2. Copy the value of the `principalId` field for the `SystemAssigned` identity.
+1. Copy the value of the `principalId` field for the `SystemAssigned` identity.
-```json
-"identity": {
- "principalId": "00000000-0000-0000-0000-000000000000",
- "tenantId": "00000000-0000-0000-0000-000000000000",
- "type": "SystemAssigned"
- },
-```
-3. From the **Access control (IAM)** page for the Azure Managed Grafana instance in the Azure portal, select **Add** and then **Add role assignment**.
-4. Select `Monitoring Data Reader`.
-5. Select **Managed identity** and then **Select members**.
-6. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource.
-7. Select **Select** and then **Review+assign**.
+ ```json
+ "identity": {
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "type": "SystemAssigned"
+ },
+ ```
+1. On the **Access control (IAM)** page for the Azure Managed Grafana instance in the Azure portal, select **Add** > **Add role assignment**.
+1. Select `Monitoring Data Reader`.
+1. Select **Managed identity** > **Select members**.
+1. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource.
+1. Choose **Select** > **Review+assign**.
-### Deploy template
+### Deploy the template
-Deploy the template with the parameter file using any valid method for deploying Resource Manager templates. See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for examples of different methods.
+Deploy the template with the parameter file by using any valid method for deploying ARM templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
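For example, with the Azure CLI the deployment might look like the following sketch. The resource group name is a placeholder, and the file names match the ones saved earlier.

```azurecli
az deployment group create \
  --resource-group <resource-group> \
  --template-file FullAzureMonitorMetricsProfile.bicep \
  --parameters FullAzureMonitorMetricsProfileParameters.json
```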
### Limitations

-- Ensure that you update the `kube-state metrics` Annotations and Labels list with proper formatting. There's a limitation in the Resource Manager template deployments that require exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, then the feature won't work as expected.
-- A data collection rule and data collection endpoint is created with the name `MSProm-\<short-cluster-region\>-\<cluster-name\>`. These names can't currently be modified.
-- You must get the existing Azure Monitor workspace integrations for a Grafana workspace and update the Resource Manager template with it, otherwise it will overwrite and remove the existing integrations from the grafana workspace.
+- Ensure that you update the `kube-state metrics` Annotations and Labels list with proper formatting. There's a limitation in the ARM template deployments that require exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature won't work as expected.
+- A data collection rule and data collection endpoint are created with the name `MSProm-\<short-cluster-region\>-\<cluster-name\>`. Currently, these names can't be modified.
+- You must get the existing Azure Monitor workspace integrations for a Grafana workspace and update the ARM template with it. Otherwise, it overwrites and removes the existing integrations from the Grafana workspace.
-## Enable windows metrics collection
+## Enable Windows metrics collection
-As of version 6.4.0-main-02-22-2023-3ee44b9e, windows metric collection has been enabled for the AKS clusters. Onboarding to the Azure Monitor Metrics Addon will enable the windows daemonset pods to start running on your nodepools. Both Windows Server 2019 and Windows Server 2022 are supported. Follow the steps below to enable the pods to collect metrics from your windows node pools.
+As of version 6.4.0-main-02-22-2023-3ee44b9e, Windows metric collection is enabled for AKS clusters. Onboarding to the Azure Monitor Metrics add-on enables the Windows DaemonSet pods to start running on your node pools. Both Windows Server 2019 and Windows Server 2022 are supported. Follow these steps to enable the pods to collect metrics from your Windows node pools.
-1. Manually install the windows exporter on AKS nodes to access windows metrics.
+1. Manually install windows-exporter on AKS nodes to access Windows metrics.
Enable the following collectors:
* `[defaults]`
* `process`
* `cpu_info`
- Deploy the [windows-exporter-daemonset YAML](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml) file
+ Deploy the [windows-exporter-daemonset YAML](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml) file:
+   ```
+   kubectl apply -f windows-exporter-daemonset.yaml
+   ```
-2. Apply the [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) to your cluster, setting the `windowsexporter` and `windowskubeproxy` booleans to rue`. For more information, see [Metrics addon settings configmap](./prometheus-metrics-scrape-configuration.md#metrics-addon-settings-configmap).
-3. While onboarding, enable the recording rules required for the default dashboards.
-
- * For CLI include the option `--enable-windows-recording-rules`.
- * For ARM template, Bicep, or Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
- If the cluster is already onboarded to Azure Monitor Metrics, to enable windows recording rule groups use this [ARM template](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [Parameters](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) file to create the rule groups.
-
-## Verify Deployment
-
-Run the following command to verify that the DaemonSet was deployed properly on the linux nodepools:
-
-```
-kubectl get ds ama-metrics-node --namespace=kube-system
-```
-
-The number of pods should be equal to the number of nodes on the cluster. The output should resemble the following:
-
-```
-User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-ama-metrics-node 1 1 1 1 1 <none> 10h
-```
--
-Run the following command to verify that the DaemonSet was deployed properly on the windows nodepools:
-
-```
-kubectl get ds ama-metrics-win-node --namespace=kube-system
-```
-
-The output should resemble the following:
-
-```
-User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-ama-metrics-win-node 3 3 3 3 3 <none> 10h
-```
-
-Run the following command to which verify that the ReplicaSets were deployed properly:
-
-```
-kubectl get rs --namespace=kube-system
-```
-
-The output should resemble the following:
-
-```
-User@aksuser:~$kubectl get rs --namespace=kube-system
-NAME DESIRED CURRENT READY AGE
-ama-metrics-5c974985b8 1 1 1 11h
-ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
-```
-## Feature Support
+1. Apply the [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) to your cluster. Set the `windowsexporter` and `windowskubeproxy` Booleans to `true`. For more information, see [Metrics add-on settings configmap](./prometheus-metrics-scrape-configuration.md#metrics-add-on-settings-configmap).
+1. Enable the recording rules required for the default dashboards:
+
+ * For the CLI, include the option `--enable-windows-recording-rules`.
+ * For an ARM template, Bicep, or Azure Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
+
+ If the cluster is already onboarded to Azure Monitor metrics, to enable Windows recording rule groups, use this [ARM template](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [parameters](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) file to create the rule groups.
+
+## Verify deployment
+
+1. Run the following command to verify that the DaemonSet was deployed properly on the Linux node pools:
+
+ ```
+ kubectl get ds ama-metrics-node --namespace=kube-system
+ ```
+
+ The number of pods should be equal to the number of nodes on the cluster. The output should resemble the following example:
+
+ ```
+ User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
+ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ ama-metrics-node 1 1 1 1 1 <none> 10h
+ ```
+
+1. Run the following command to verify that the DaemonSet was deployed properly on the Windows node pools:
+
+ ```
+ kubectl get ds ama-metrics-win-node --namespace=kube-system
+ ```
+
+ The output should resemble the following example:
+
+ ```
+ User@aksuser:~$ kubectl get ds ama-metrics-win-node --namespace=kube-system
+ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ ama-metrics-win-node 3 3 3 3 3 <none> 10h
+ ```
+
+1. Run the following command to verify that the ReplicaSets were deployed properly:
+
+ ```
+ kubectl get rs --namespace=kube-system
+ ```
+
+ The output should resemble the following example:
+
+ ```
+ User@aksuser:~$ kubectl get rs --namespace=kube-system
+ NAME DESIRED CURRENT READY AGE
+ ama-metrics-5c974985b8 1 1 1 11h
+ ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
+ ```
+
+## Feature support
- ARM64 and Mariner nodes are supported.
-- HTTP Proxy is supported and will use the same settings as the HTTP Proxy settings for the AKS cluster configured with [these instructions](../../../articles/aks/http-proxy.md).
+- HTTP Proxy is supported and uses the same settings as the HTTP Proxy settings for the AKS cluster configured with [these instructions](../../../articles/aks/http-proxy.md).
## Limitations

-- CPU and Memory requests and limits can't be changed for Container insights metrics addon. If changed, they'll be reconciled and replaced by original values in a few seconds.
-- Azure Monitor Private Link (AMPLS) isn't currently supported.
+- CPU and Memory requests and limits can't be changed for the Container insights metrics add-on. If changed, they're reconciled and replaced by original values in a few seconds.
+
+- Azure Monitor Private Link isn't currently supported.
- Only public clouds are currently supported.
+## Uninstall the metrics add-on
+Currently, the Azure CLI is the only option to remove the metrics add-on and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
-## Uninstall metrics addon
-Currently, Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
+1. Install the `aks-preview` extension by using the following command:
-Install the `aks-preview` extension using the following command:
+ ```
+ az extension add --name aks-preview
+ ```
+
+ For more information on installing a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+
+ > [!NOTE]
+ > Upgrade your Azure CLI to the latest version and ensure that the version of the `aks-preview` extension you're using is at least `0.5.132`. Find your current version by using `az version`.
+
-```
-az extension add --name aks-preview
-```
+1. Use the following command to remove the agent from the cluster nodes and delete the recording rules created for the data being collected from the cluster, along with the DCRA that links the data collection endpoint or data collection rule with your cluster. This action doesn't remove the data collection endpoint, data collection rule, or the data already collected and stored in your Azure Monitor workspace.
-For more information on installing a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
-
-> [!NOTE]
-> Upgrade your az cli version to the latest version and ensure that the aks-preview version you're using is at least '0.5.132'. Find your current version using the `az version`.
-```azurecli
-az extension add --name aks-preview
-```
-Use the following command to remove the agent from the cluster nodes and delete the recording rules created for the data being collected from the cluster along with the Data Collection Rule Associations (DCRA) that link the DCE or DCR with your cluster. This doesn't remove the DCE, DCR, or the data already collected and stored in your Azure Monitor workspace.
-
-```azurecli
-az aks update --disable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
-```
+ ```azurecli
+ az aks update --disable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
+ ```
## Region mappings
-When you allow a default Azure Monitor workspace to be created when you install the metrics addon, it's created in the region listed in the following table.
+When you allow a default Azure Monitor workspace to be created when you install the metrics add-on, it's created in the region listed in the following table.
-| AKS Cluster region | Azure Monitor workspace region |
+| AKS cluster region | Azure Monitor workspace region |
|--|--|
|australiacentral |eastus|
|australiacentral2 |eastus|
## Next steps

-- [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md).
-- [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md).
-- [Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana](./prometheus-grafana.md)
+- [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md)
+- [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md)
+- [Use Azure Monitor managed service for Prometheus (preview) as the data source for Grafana](./prometheus-grafana.md)
- [Configure self-hosted Grafana to use Azure Monitor managed service for Prometheus (preview)](./prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md
Title: Customize scraping of Prometheus metrics in Azure Monitor (preview)
-description: Customize metrics scraping for a Kubernetes cluster with the metrics addon in Azure Monitor.
+description: Customize metrics scraping for a Kubernetes cluster with the metrics add-on in Azure Monitor.
Last updated 09/28/2022
# Customize scraping of Prometheus metrics in Azure Monitor (preview)
-This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](prometheus-metrics-enable.md) in Azure Monitor.
+This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics add-on](prometheus-metrics-enable.md) in Azure Monitor.
## Configmaps
-Three different configmaps can be configured to change the default settings of the metrics addon:
+Three different configmaps can be configured to change the default settings of the metrics add-on:
-- ama-metrics-settings-configmap
-- ama-metrics-prometheus-config
-- ama-metrics-prometheus-config-node
+- `ama-metrics-settings-configmap`
+- `ama-metrics-prometheus-config`
+- `ama-metrics-prometheus-config-node`
-## Metrics addon settings configmap
+## Metrics add-on settings configmap
-The [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) can be downloaded, edited, and applied to the cluster to customize the out-of-the-box features of the metrics addon.
+The [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) can be downloaded, edited, and applied to the cluster to customize the out-of-the-box features of the metrics add-on.
-### Enabling and disabling default targets
-The following table has a list of all the default targets that the Azure Monitor metrics addon can scrape by default and whether it's initially enabled. Default targets are scraped every 30 seconds.
+### Enable and disable default targets
+The following table has a list of all the default targets that the Azure Monitor metrics add-on can scrape by default and whether it's initially enabled. Default targets are scraped every 30 seconds.
| Key | Type | Enabled | Description |
|--|--|--|--|
| kubelet | bool | `true` | Scrape kubelet in every node in the K8s cluster without any extra scrape config. |
| cadvisor | bool | `true` | Scrape cadvisor in every node in the K8s cluster without any extra scrape config.<br>Linux only. |
-| kubestate | bool | `true` | Scrape kube-state-metrics in the K8s cluster (installed as a part of the addon) without any extra scrape config. |
+| kubestate | bool | `true` | Scrape kube-state-metrics in the K8s cluster (installed as a part of the add-on) without any extra scrape config. |
| nodeexporter | bool | `true` | Scrape node metrics without any extra scrape config.<br>Linux only. |
| coredns | bool | `false` | Scrape coredns service in the K8s cluster without any extra scrape config. |
-| kubeproxy | bool | `false` | Scrape kube-proxy in every linux node discovered in the K8s cluster without any extra scrape config.<br>Linux only. |
-| apiserver | bool | `false` | Scrape the kubernetes api server in the K8s cluster without any extra scrape config. |
-| windowsexporter | bool | `false` | Scrape the windows exporter in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
-| windowskubeproxy | bool | `false` | Scrape the windows kubeproxy in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
-| prometheuscollectorhealth | bool | `false` | Scrape info about the prometheus-collector container such as the amount and size of time series scraped. |
+| kubeproxy | bool | `false` | Scrape kube-proxy in every Linux node discovered in the K8s cluster without any extra scrape config.<br>Linux only. |
+| apiserver | bool | `false` | Scrape the Kubernetes API server in the K8s cluster without any extra scrape config. |
+| windowsexporter | bool | `false` | Scrape windows-exporter in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
+| windowskubeproxy | bool | `false` | Scrape windows-kube-proxy in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
+| prometheuscollectorhealth | bool | `false` | Scrape information about the prometheus-collector container, such as the amount and size of time series scraped. |
-If you want to turn on the scraping of the default targets that aren't enabled by default, edit the configmap `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) to update the targets listed under `default-scrape-settings-enabled` to `true`, and apply the configmap to your cluster.
+If you want to turn on the scraping of the default targets that aren't enabled by default, edit the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap` to update the targets listed under `default-scrape-settings-enabled` to `true`. Apply the configmap to your cluster.
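As an abbreviated sketch of what such an edit looks like (key names follow the linked configmap file; the values shown are illustrative, and the full configmap contains more settings):

```yaml
# Illustrative fragment of ama-metrics-settings-configmap. See the linked file
# for the complete configmap; only the targets being changed are shown here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ama-metrics-settings-configmap
  namespace: kube-system
data:
  default-scrape-settings-enabled: |-
    coredns = true
    windowsexporter = true
```

Apply the edited file to the cluster with `kubectl apply -f ama-metrics-settings-configmap.yaml`.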
-### Customizing metrics collected by default targets
+### Customize metrics collected by default targets
By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, in the configmap under `default-targets-metrics-keep-list`, set `minimalingestionprofile` to `false`.
-To filter in more metrics for any default targets, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you'd like to change.
+To filter in more metrics for any default targets, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you want to change.
-For example, `kubelet` is the metric filtering setting for the default target kubelet. Use the following to filter IN metrics collected for the default targets using regex based filtering.
+For example, `kubelet` is the metric filtering setting for the default target kubelet. Use the following settings to filter *in* metrics collected for the default targets by using regex-based filtering.
```
kubelet = "metricX|metricY"
apiserver = "mymetric.*"
```

> [!NOTE]
-> If you use quotes or backslashes in the regex, you'll need to escape them using a backslash. For example `"test\'smetric\"s\""` and `testbackslash\\*`.
+> If you use quotation marks or backslashes in the regex, you need to escape them by using a backslash like the examples `"test\'smetric\"s\""` and `testbackslash\\*`.
-To further customize the default jobs to change properties such as collection frequency or labels, disable the corresponding default target by setting the configmap value for the target to `false`, and then apply the job using custom configmap. For details on custom configuration, see [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md#configure-custom-prometheus-scrape-jobs).
+To further customize the default jobs to change properties like collection frequency or labels, disable the corresponding default target by setting the configmap value for the target to `false`. Then apply the job by using a custom configmap. For details on custom configuration, see [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md#configure-custom-prometheus-scrape-jobs).
### Cluster alias
-The cluster label appended to every time series scraped will use the last part of the full AKS cluster's ARM resourceID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/clustername`, the cluster label is `clustername`.
+The cluster label appended to every time series scraped uses the last part of the full AKS cluster's Azure Resource Manager resource ID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/clustername`, the cluster label is `clustername`.
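For instance, deriving that label value from a resource ID is plain string handling. This is a sketch using the example ID above, not part of the add-on itself:

```shell
# The 'cluster' label value is the last '/'-separated segment of the
# AKS cluster's Azure Resource Manager resource ID.
resourceId="/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/clustername"
clusterLabel="${resourceId##*/}"   # strip everything up to the final '/'
echo "$clusterLabel"   # prints: clustername
```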
-To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). You can either create this configmap or edit an existing one.
+To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can either create this configmap or edit an existing one.
-The new label will also show up in the cluster parameter dropdown in the Grafana dashboards instead of the default one.
+The new label also shows up in the cluster parameter dropdown in the Grafana dashboards instead of the default one.
> [!NOTE]
-> Only alphanumeric characters are allowed. Any other characters else will be replaced with `_`. This is to ensure that different components that consume this label will adhere to the basic alphanumeric convention.
+> Only alphanumeric characters are allowed. Any other characters are replaced with `_`. This change is to ensure that different components that consume this label adhere to the basic alphanumeric convention.
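As an illustration, a `cluster_alias` entry under `prometheus-collector-settings` might look like this sketch (layout assumed; see the linked configmap for the exact schema):

```yaml
# Illustrative only: an alphanumeric alias replaces the default cluster label.
prometheus-collector-settings: |-
  cluster_alias = "myclusteralias"
```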
### Debug mode
-To view every metric that is being scraped for debugging purposes, the metrics addon agent can be configured to run in debug mode by updating the setting `enabled` to `true` under the `debug-mode` setting in `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). You can either create this configmap or edit an existing one. See [the Debug Mode section in Troubleshoot collection of Prometheus metrics](prometheus-metrics-troubleshoot.md#debug-mode) for more details.
+To view every metric that's being scraped for debugging purposes, the metrics add-on agent can be configured to run in debug mode by updating the setting `enabled` to `true` under the `debug-mode` setting in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can either create this configmap or edit an existing one. For more information, see the [Debug mode section in Troubleshoot collection of Prometheus metrics](prometheus-metrics-troubleshoot.md#debug-mode).
### Scrape interval settings
-To update the scrape interval settings for any target, the customer can update the duration in default-targets-scrape-interval-settings setting for that target in `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). The scrape intervals have to be set by customer in the correct format specified [here](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file), else the default value of 30 seconds will be applied to the corresponding targets.
+To update the scrape interval settings for any target, you can update the duration in the setting `default-targets-scrape-interval-settings` for that target in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You have to set the scrape intervals in the format specified in the [Prometheus configuration documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file). Otherwise, the default value of 30 seconds is applied to the corresponding targets.
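For illustration, interval overrides might look like the following sketch, using Prometheus duration strings such as `30s`, `1m`, or `1h` (layout assumed; check the configmap for the exact schema):

```yaml
# Illustrative only: per-target scrape interval overrides.
default-targets-scrape-interval-settings: |-
  kubelet = "60s"
  coredns = "2m"
```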
## Configure custom Prometheus scrape jobs
-You can configure the metrics addon to scrape targets other than the default ones, using the same configuration format as the [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
+You can configure the metrics add-on to scrape targets other than the default ones by using the same configuration format as the [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
Follow the instructions to [create, validate, and apply the configmap](prometheus-metrics-scrape-validate.md) for your cluster.
-### Advanced Setup: Configure custom Prometheus scrape jobs for the daemonset
+### Advanced setup: Configure custom Prometheus scrape jobs for the DaemonSet
-The `ama-metrics` replicaset pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` replicaset pod to the `ama-metrics` daemonset pod. The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise each node tries to scrape all targets and will make many calls to the Kubernetes API server. The `node-exporter` config below is one of the default targets for the daemonset pods. It uses the `$NODE_IP` environment variable, which is already set for every ama-metrics addon container to target a specific port on the node:
+The `ama-metrics` ReplicaSet pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` ReplicaSet pod to the `ama-metrics` DaemonSet pod.
+
+The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server.
+
+The following `node-exporter` config is one of the default targets for the DaemonSet pods. It uses the `$NODE_IP` environment variable, which is already set for every `ama-metrics` add-on container to target a specific port on the node.
```yaml
- job_name: node
  static_configs:
  - targets: ['$NODE_IP:9100']
```
-Custom scrape targets can follow the same format using `static_configs` with targets using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the daemonset takes the config, scrapes the metrics, and sends them for that node.
+Custom scrape targets can follow the same format by using `static_configs` with targets that use the `$NODE_IP` environment variable and specify the port to scrape. Each pod of the DaemonSet takes the config, scrapes the metrics, and sends them for that node.
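For example, a hypothetical custom DaemonSet job that scrapes an agent listening on port 9600 of every node could look like this sketch (the job name and port are assumptions):

```yaml
# Hypothetical custom job for the ama-metrics-prometheus-config-node configmap:
# scrape a single port on the local node only; no service discovery.
- job_name: my-node-agent
  scheme: http
  static_configs:
  - targets: ['$NODE_IP:9600']
```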
## Prometheus configuration tips and examples
-### Configuration File for custom scrape config
+Learn some tips from examples in this section.
-The configuration format is the same as the [Prometheus configuration file](https://aka.ms/azureprometheus-promioconfig). Currently supported are the following sections:
+### Configuration file for custom scrape config
+
+The configuration format is the same as the [Prometheus configuration file](https://aka.ms/azureprometheus-promioconfig). Currently, the following sections are supported:
```yaml
global:
scrape_configs:
- <job-x>
- <job-y>
```
-Any other unsupported sections need to be removed from the config before applying as a configmap. Otherwise the custom configuration fails validation and won't be applied.
+Any other unsupported sections must be removed from the config before they're applied as a configmap. Otherwise, the custom configuration fails validation and isn't applied.
-Refer to [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the prometheus config.
+See the [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the Prometheus config.
> [!NOTE]
-> When custom scrape configuration fails to apply due to validation errors, default scrape configuration will continue to be used.
+> When custom scrape configuration fails to apply because of validation errors, default scrape configuration continues to be used.
-## Scrape Configs
-The currently supported methods of target discovery for a [scrape config](https://aka.ms/azureprometheus-promioconfig-scrape) are either [`static_configs`](https://aka.ms/azureprometheus-promioconfig-static) or [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) for specifying or discovering targets.
+## Scrape configs
+Currently, the supported methods of target discovery for a [scrape config](https://aka.ms/azureprometheus-promioconfig-scrape) are either [`static_configs`](https://aka.ms/azureprometheus-promioconfig-static) or [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) for specifying or discovering targets.
#### Static config
scrape_configs:
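A minimal `static_configs` sketch, with a hypothetical job name and target address:

```yaml
# Hypothetical static scrape job; replace the target with a real endpoint.
scrape_configs:
- job_name: example-static
  static_configs:
  - targets: ['10.0.0.1:9090']
```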
#### Kubernetes Service Discovery config
-Targets discovered using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) will each have different `__meta_*` labels depending on what role is specified. The labels can be used in the `relabel_configs` section to filter targets or replace labels for the targets.
+Targets discovered using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) each have different `__meta_*` labels depending on what role is specified. You can use the labels in the `relabel_configs` section to filter targets or replace labels for the targets.
See the [Prometheus examples](https://aka.ms/azureprometheus-promsampleossconfig) of scrape configs for a Kubernetes cluster.

### Relabel configs
-The `relabel_configs` section is applied at the time of target discovery and applies to each target for the job. Below are examples showing ways to use `relabel_configs`.
+The `relabel_configs` section is applied at the time of target discovery and applies to each target for the job. The following examples show ways to use `relabel_configs`.
-#### Adding a label
-Add a new label called `example_label` with value `example_value` to every metric of the job. Use `__address__` as the source label only because that label will always exist and will add the label for every target of the job.
+#### Add a label
+Add a new label called `example_label` with the value `example_value` to every metric of the job. Use `__address__` as the source label only because that label always exists and adds the label for every target of the job.
```yaml
relabel_configs:
- source_labels: [__address__]
  target_label: example_label
  replacement: 'example_value'
```
#### Use Kubernetes Service Discovery labels
-If a job is using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) to discover targets, each role has associated `__meta_*` labels for metrics. The `__*` labels are dropped after discovering the targets. To filter by them at the metrics level, first keep them using `relabel_configs` by assigning a label name and then use `metric_relabel_configs` to filter.
+If a job is using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) to discover targets, each role has associated `__meta_*` labels for metrics. The `__*` labels are dropped after discovering the targets. To filter by using them at the metrics level, first keep them using `relabel_configs` by assigning a label name. Then use `metric_relabel_configs` to filter.
```yaml
# Use the kubernetes namespace as a label called 'kubernetes_namespace'
relabel_configs:
- source_labels: [__meta_kubernetes_namespace]
  target_label: kubernetes_namespace
metric_relabel_configs:
- source_labels: [kubernetes_namespace]
  action: keep
  regex: 'example-namespace'
```
#### Job and instance relabeling
-The `job` and `instance` label values can be changed based on the source label, just like any other label.
+You can change the `job` and `instance` label values based on the source label, just like any other label.
```yaml
# Replace the job name with the pod label 'k8s app'
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_k8s_app]
  target_label: job
```
### Metric relabel configs
-Metric relabel configs are applied after scraping and before ingestion. Use the `metric_relabel_configs` section to filter metrics after scraping. Below are examples of how to do so.
+Metric relabel configs are applied after scraping and before ingestion. Use the `metric_relabel_configs` section to filter metrics after scraping. The following examples show how to do so.
#### Drop metrics by name
```yaml
metric_relabel_configs:
- source_labels: [__name__]
  action: drop
  regex: '(example_.*)'
```
-#### Rename Metrics
+#### Rename metrics
Metric renaming isn't supported.
-#### Filter Metrics by Labels
+#### Filter metrics by labels
```yaml
-# Keep only metrics with where example_label = 'example'
+# Keep metrics only where example_label = 'example'
metric_relabel_configs:
- source_labels: [example_label]
  action: keep
  regex: 'example'
```

```yaml
-# Keep metric only if `example_label_1 = value_1` and `example_label_2 = value_2`
+# Keep metrics only if `example_label_1 = value_1` and `example_label_2 = value_2`
metric_relabel_configs:
- source_labels: [example_label_1, example_label_2]
  separator: ';'
  action: keep
  regex: 'value_1;value_2'
```

```yaml
-# Keep metric only if `example_label` exists as a label
+# Keep metrics only if `example_label` exists as a label
metric_relabel_configs:
- source_labels: [example_label_1]
  action: keep
  regex: '.+'
```
-### Pod Annotation Based Scraping
+### Pod annotation-based scraping
+
+If you're currently using Azure Monitor Container insights Prometheus scraping with the setting `monitor_kubernetes_pods = true`, adding this job to your custom config allows you to scrape the same pods and metrics.
-If you're currently using Azure Monitor Container Insights Prometheus scraping with the setting `monitor_kubernetes_pods = true`, adding this job to your custom config will allow you to scrape the same pods and metrics.
+The following scrape config uses the `__meta_*` labels added from the `kubernetes_sd_configs` for the `pod` role to filter for pods with certain annotations.
-The scrape config below uses the `__meta_*` labels added from the `kubernetes_sd_configs` for the `pod` role to filter for pods with certain annotations.
+To scrape certain pods, specify the port, path, and scheme through annotations for the pod and the following job scrapes only the address specified by the annotation:
-To scrape certain pods, specify the port, path, and scheme through annotations for the pod and the below job will scrape only the address specified by the annotation:
-- `prometheus.io/scrape`: Enable scraping for this pod
-- `prometheus.io/scheme`: If the metrics endpoint is secured, then you'll need to set scheme to `https` & most likely set the TLS config.
+- `prometheus.io/scrape`: Enable scraping for this pod.
+- `prometheus.io/scheme`: If the metrics endpoint is secured, you need to set scheme to `https` and most likely set the TLS config.
- `prometheus.io/path`: If the metrics path isn't /metrics, define it with this annotation.
-- `prometheus.io/port`: Specify a single, desired port to scrape
+- `prometheus.io/port`: Specify a single port that you want to scrape.
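For example, a pod that should be scraped over HTTP on port 8080 at a custom path might carry annotations like this sketch (the values are examples only):

```yaml
# Hypothetical pod metadata; annotation values are examples.
metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/scheme: 'http'
    prometheus.io/path: '/custom/metrics'
    prometheus.io/port: '8080'
```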
```yaml
scrape_configs:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: replace
    regex: (https?)
    target_label: __scheme__
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    regex: (.+)
    target_label: __metrics_path__
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
```
-Refer to [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the prometheus config.
+See the [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the Prometheus config.
## Next steps

-- [Learn more about collecting Prometheus metrics](prometheus-metrics-overview.md).
+[Learn more about collecting Prometheus metrics](prometheus-metrics-overview.md)
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-default.md
Title: Default Prometheus metrics configuration in Azure Monitor (preview)
-description: Lists the default targets, dashboards, and recording rules for Prometheus metrics in Azure Monitor.
+description: This article lists the default targets, dashboards, and recording rules for Prometheus metrics in Azure Monitor.
Last updated 09/28/2022
# Default Prometheus metrics configuration in Azure Monitor (preview)
-This article lists the default targets, dashboards, and recording rules when you [configure Prometheus metrics to be scraped from an AKS cluster](prometheus-metrics-enable.md) for any AKS cluster.
+This article lists the default targets, dashboards, and recording rules when you [configure Prometheus metrics to be scraped from an Azure Kubernetes Service (AKS) cluster](prometheus-metrics-enable.md) for any AKS cluster.
## Scrape frequency
- The default scrape frequency for all default targets and scrapes is **30 seconds**.
+ The default scrape frequency for all default targets and scrapes is 30 seconds.
## Targets scraped
The following metrics are collected by default from each default target. All oth
- `kube_node_status_condition`
- `kube_node_spec_taint`
-## Targets scraped for windows
+## Targets scraped for Windows
-There are two default jobs that can be run for windows which scrape metrics required for the dashboards specific to windows.
+Two default jobs can be run for Windows that scrape metrics required for the dashboards specific to Windows.
> [!NOTE]
-> This requires an update in the ama-metrics-settings-configmap and installing windows exporter on all windows nodepools. Please refer to the [enablement document](./prometheus-metrics-enable.md#enable-prometheus-metric-collection) for more information
+> This requires an update in the ama-metrics-settings-configmap and installing windows-exporter on all Windows node pools. For more information, see the [enablement document](./prometheus-metrics-enable.md#enable-prometheus-metric-collection).
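As a sketch, enabling the two Windows jobs in `ama-metrics-settings-configmap` might look like the following (the setting names are assumptions; check the configmap schema):

```yaml
# Illustrative only: turn on the Windows scrape jobs.
default-scrape-settings-enabled: |-
  windowsexporter = true
  windowskubeproxy = true
```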
- `windows-exporter` (`job=windows-exporter`)
- `kube-proxy-windows` (`job=kube-proxy-windows`)
-## Metrics scraped for windows
+## Metrics scraped for Windows
-The following metrics are collected when windows exporter and windows kube proxy are enabled.
+The following metrics are collected when windows-exporter and kube-proxy-windows are enabled.
**windows-exporter (job=windows-exporter)**<br>
- `windows_system_system_up_time`
The following metrics are collected when windows exporter and windows kube proxy
## Dashboards
-Following are the default dashboards that are automatically provisioned and configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these dashboards can be found in [GitHub](https://aka.ms/azureprometheus-mixins)
+The following default dashboards are automatically provisioned and configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these dashboards can be found in [this GitHub folder](https://aka.ms/azureprometheus-mixins).
- Kubernetes / Compute Resources / Cluster
- Kubernetes / Compute Resources / Namespace (Pods)
Following are the default dashboards that are automatically provisioned and conf
## Recording rules
-Following are the default recording rules that are automatically configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these recording rules can be found in [GitHub](https://aka.ms/azureprometheus-mixins)
-
+The following default recording rules are automatically configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these recording rules can be found in [this GitHub folder](https://aka.ms/azureprometheus-mixins).
- `cluster:node_cpu:ratio_rate5m`
- `namespace_cpu:kube_pod_container_resource_requests:sum`
Following are the default recording rules that are automatically configured by A
## Next steps

-- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
+[Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md)
azure-monitor Resource Group Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/resource-group-insights.md
Title: Azure Monitor Resource Group insights | Microsoft Docs description: Understand the health and performance of your distributed applications and services at the Resource Group level with Resource Group insights feature of Azure Monitor. -- Last updated 09/19/2018--+ # Monitor Azure Monitor Resource Group insights
azure-monitor Azure Data Explorer Query Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-query-storage.md
The process flow is to:
Azure Monitor logs can be exported to a storage account by using any of the following options:
- Export all data from your Log Analytics workspace to a storage account or event hub. Use the Log Analytics workspace data export feature of Azure Monitor Logs. For more information, see [Log Analytics workspace data export in Azure Monitor](./logs-data-export.md).
-- Scheduled export from a log query by using a logic app. This method is similar to the data export feature but allows you to send filtered or aggregated data to Azure Storage. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces). For more information, see [Archive data from a Log Analytics workspace to Azure Storage by using Logic Apps](./logs-export-logic-app.md).
-- One-time export by using a logic app. For more information, see [Azure Monitor Logs connector for Logic Apps and Power Automate](./logicapp-flow-connector.md).
+- Scheduled export from a log query by using a logic app workflow. This method is similar to the data export feature but allows you to send filtered or aggregated data to Azure Storage. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces). For more information, see [Archive data from a Log Analytics workspace to Azure Storage by using Azure Logic Apps](./logs-export-logic-app.md).
+- One-time export by using a logic app workflow. For more information, see [Azure Monitor Logs connector for Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md).
- One-time export to a local machine by using a PowerShell script. For more information, see [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).

> [!TIP]
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Title: Azure Monitor customer-managed key description: Information and steps to configure Customer-managed key to encrypt data in your Log Analytics workspaces using an Azure Key Vault key. --+ Last updated 05/01/2022
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
The following table describes some of the ways that you can use Azure Monitor Lo
| Alert | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
| Visualize | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.|
| Get insights | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. |
-| Retrieve | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](/rest/api/loganalytics/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
-| Import | Upload logs from a custom app via the [REST API](./logs-ingestion-api-overview.md) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme). |
-| Export | Configure [automated export of log data](./logs-data-export.md) to an Azure Storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). |
+| Retrieve | Access log query results from:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](/rest/api/loganalytics/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
+| Import | Upload logs from a custom app via the [REST API](/azure/azure-monitor/logs/logs-ingestion-api-overview) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme). |
+| Export | Configure [automated export of log data](./logs-data-export.md) to an Azure Storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md). |
![Diagram that shows an overview of Azure Monitor Logs.](media/data-platform-logs/logs-overview.png)
The experience of using Log Analytics to work with Azure Monitor queries in the
- Learn about [log queries](./log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace.
- Learn about [metrics in Azure Monitor](../essentials/data-platform-metrics.md).
-- Learn about the [monitoring data available](../data-sources.md) for various resources in Azure.
+- Learn about the [monitoring data available](../data-sources.md) for various resources in Azure.
azure-monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/functions.md
You can add parameters to a function so that you can provide values for certain
Parameters are ordered as they're created. Parameters that have no default value are positioned in front of parameters that have a default value.
+> [!NOTE]
+> Classic Application Insights resources don't support parameterized functions. If you have a [workspace-based Application Insights resource](../app/create-workspace-resource.md), you can create parameterized functions from your Log Analytics workspace. For information on migrating your Classic Application Insights resource to a workspace-based resource, see [Migrate to workspace-based Application Insights resources](../app/convert-classic-resource.md).
+ ## Work with function code You can view the code of a function either to gain insight into how it works or to modify the code for a workspace function. Select **Load the function code** to add the function code to the current query in the editor.
azure-monitor Log Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-query-overview.md
Areas in Azure Monitor where you'll use queries include:
- [Log alert rules](../alerts/alerts-overview.md): Proactively identify issues from data in your workspace. Each alert rule is based on a log query that's automatically run at regular intervals. The results are inspected to determine if an alert should be created.
- [Workbooks](../visualize/workbooks-overview.md): Include the results of log queries by using different visualizations in interactive visual reports in the Azure portal.
- [Azure dashboards](../visualize/tutorial-logs-dashboards.md): Pin the results of any query into an Azure dashboard, which allows you to visualize log and metric data together and optionally share with other Azure users.
-- [Azure Logic Apps](../logs/logicapp-flow-connector.md): Use the results of a log query in an automated workflow by using Logic Apps.
+- [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md): Use the results of a log query in an automated workflow by using a logic app workflow.
- [PowerShell](/powershell/module/az.operationalinsights/invoke-azoperationalinsightsquery): Use the results of a log query in a PowerShell script from a command line or an Azure Automation runbook that uses `Invoke-AzOperationalInsightsQuery`. - [Azure Monitor Logs API](/rest/api/loganalytics/): Retrieve log data from the workspace from any REST API client. The API request includes a query that's run against Azure Monitor to determine the data to retrieve. - **Azure Monitor Query client libraries**: Retrieve log data from the workspace via an idiomatic client library for the following ecosystems:
azure-monitor Logicapp Flow Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logicapp-flow-connector.md
- Title: Use Azure Monitor Logs with Azure Logic Apps and Power Automate
-description: Learn how you can use Azure Logic Apps and Power Automate to quickly automate repeatable processes by using the Azure Monitor connector.
----- Previously updated : 03/22/2022---
-# Azure Monitor Logs connector for Logic Apps and Power Automate
-[Azure Logic Apps](../../logic-apps/index.yml) and [Power Automate](https://make.powerautomate.com) allow you to create automated workflows using hundreds of actions for various services. The Azure Monitor Logs connector allows you to build workflows that retrieve data from a Log Analytics workspace or an Application Insights application in Azure Monitor. This article describes the actions included with the connector and provides a walkthrough to build a workflow using this data.
-
-For example, you can create a logic app workflow to use Azure Monitor log data in an email notification from Office 365, create a bug in Azure DevOps, or post a Slack message. You can trigger a workflow by a simple schedule or from some action in a connected service such as when a mail or a tweet is received.
-
-## Connector limits
-The Azure Monitor Logs connector has these limits:
-* Max query response size: ~16.7 MB (16 MiB). The connector infrastructure dictates that size limit is set lower than query API limit.
-* Max number of records: 500,000.
-* Max connector timeout: 110 seconds.
-* Max query timeout: 100 seconds.
-* Visualizations in the Logs page and the connector use different charting libraries and some functionality isn't available in the connector currently.
-
-The connector may reach these limits depending on the query you use and the size of the results. You can often avoid such cases by adjusting the flow recurrence to run more frequently over a smaller time range, or by aggregating data to reduce the result size. Frequent queries with intervals lower than 120 seconds aren't recommended due to caching.
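A workflow can sanity-check a result set against these limits before handing it to downstream actions. A minimal sketch, assuming the two documented limits above (the helper name and the idea of a pre-check are mine, not part of the connector):

```python
# Connector limits stated in the docs: ~16.7 MB response, 500,000 records.
MAX_RESPONSE_BYTES = 16_700_000
MAX_RECORDS = 500_000

def exceeds_connector_limits(record_count: int, response_bytes: int) -> bool:
    """Return True when a result set would hit an Azure Monitor Logs
    connector limit -- a signal to aggregate the query further or run
    the flow more often over a smaller time range."""
    return record_count > MAX_RECORDS or response_bytes > MAX_RESPONSE_BYTES

# Example: 600,000 rows exceeds the record limit even if the payload is small.
print(exceeds_connector_limits(600_000, 1_000_000))  # → True
```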
-
-## Actions
-The following table describes the actions included with the Azure Monitor Logs connector. Both actions allow you to run a log query against a Log Analytics workspace or Application Insights application; the difference is in the way the data is returned.
-
-> [!NOTE]
-> The Azure Monitor Logs connector replaces the [Azure Log Analytics connector](/connectors/azureloganalytics/) and the [Azure Application Insights connector](/connectors/applicationinsights/). This connector provides the same functionality as the others and is the preferred method for running a query against a Log Analytics workspace or an Application Insights application.
--
-| Action | Description |
-|:|:|
-| [Run query and list results](/connectors/azuremonitorlogs/#run-query-and-list-results) | Returns each row as its own object. Use this action when you want to work with each row separately in the rest of the workflow. The action is typically followed by a [For each activity](../../logic-apps/logic-apps-control-flow-loops.md#foreach-loop). |
-| [Run query and visualize results](/connectors/azuremonitorlogs/#run-query-and-visualize-results) | Returns a JPG file that depicts the query result set. This action lets you use the result set in the rest of the workflow by sending the results in an email, for example. The action only returns a JPG file if the query returns results.|
-
-## Walkthroughs
-The following tutorial illustrates the use of the Azure Monitor Logs connector in Azure Logic Apps. You can perform the same tutorial with Power Automate, the only difference being how you create the initial workflow and run it when complete. You configure the workflow and actions in the same way for both Logic Apps and Power Automate. See [Create a flow from a template in Power Automate](/power-automate/get-started-logic-template) to get started.
--
-### Create a Logic App
-
-1. Go to **Logic Apps** in the Azure portal and select **Add**.
-1. Select a **Subscription**, **Resource group**, and **Region** to store the new logic app and then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-workflows-collect-diagnostic-data.md). This setting isn't required for using the Azure Monitor Logs connector.
-
- ![Screenshot that shows the Basics tab on the logic app creation screen.](media/logicapp-flow-connector/create-logic-app.png)
-
-1. Select **Review + create** > **Create**.
-1. When the deployment is complete, select **Go to resource** to open the **Logic Apps Designer**.
-
-### Create a trigger for the logic app workflow
-1. Under **Start with a common trigger**, select **Recurrence**.
-
- This creates a logic app workflow that automatically runs at a regular interval.
-
-1. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.
-
- ![Screenshot that shows the Logic Apps Designer "Recurrence" window on which you can set the interval and frequency at which the logic app runs.](media/logicapp-flow-connector/recurrence-action.png)
-
-## Walkthrough: Mail visualized results
-This tutorial shows how to create a logic app workflow that sends the results of an Azure Monitor log query by email.
-
-### Add Azure Monitor Logs action
-1. Select **+ New step** to add an action that runs after the recurrence action.
-1. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.
-
- ![Screenshot that shows the Logic App Designer "Choose an action" window.](media/logicapp-flow-connector/select-azure-monitor-connector.png)
-
-1. Select **Azure Log Analytics - Run query and visualize results**.
-
- ![Screenshot of a new action being added to a step in the Logic Apps Designer. Azure Monitor Logs is highlighted under Choose an action.](media/logicapp-flow-connector/select-query-action-visualize.png)
-
-### Configure the Azure Monitor Logs action
-
-1. Select the **Subscription** and **Resource Group** for your Log Analytics workspace.
-1. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
-1. Add the following log query to the **Query** window.
-
- ```Kusto
- Event
- | where EventLevelName == "Error"
- | where TimeGenerated > ago(1day)
- | summarize TotalErrors=count() by Computer
- | sort by Computer asc
- ```
-
-1. Select *Set in query* for the **Time Range** and **HTML Table** for the **Chart Type**.
-
- ![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logicapp-flow-connector/run-query-visualize-action.png)
-
- The account associated with the current connection sends the email. To specify another account, select **Change connection**.
-
-### Add email action
-
-1. Select **+ New step** > **+ Add an action**.
-1. Under **Choose an action**, type **outlook** and then select **Office 365 Outlook**.
-
- ![Screenshot that shows the Logic App Designer "Choose an action" window with the Office 365 Outlook button highlighted.](media/logicapp-flow-connector/select-outlook-connector.png)
-
-1. Select **Send an email (V2)**.
-
- ![Screenshot of a new action being added to a step in the Logic Apps Designer. Send an email (V2) is highlighted under Choose an action.](media/logicapp-flow-connector/select-mail-action.png)
-
-1. Click anywhere in the **Body** box to open a **Dynamic content** window with values from the previous actions in the logic app.
-1. Select **See more** and then **Body**, which is the result of the query in the Log Analytics action.
-
- ![Screenshot of the settings for the new Send an email (V2) action, showing the body of the email being defined.](media/logicapp-flow-connector/select-body.png)
-
-1. Specify the email address of a recipient in the **To** window and a subject for the email in **Subject**.
-
- ![Screenshot of the settings for the new Send an email (V2) action, showing the subject line and email recipients being defined.](media/logicapp-flow-connector/mail-action.png)
-
-### Save and test your workflow
-1. Select **Save** and then **Run** to perform a test run of the workflow.
-
- ![Save and run](media/logicapp-flow-connector/save-run.png)
--
 When the workflow completes, check the mailbox of the recipient you specified. You should receive an email with a body similar to the following:
-
- ![An image of a sample email.](media/logicapp-flow-connector/sample-mail.png)
-
- > [!NOTE]
- > The workflow generates an email with a JPG file that depicts the query result set. If your query doesn't return results, the workflow won't create a JPG file.
-
-## Next steps
-- Learn more about [log queries in Azure Monitor](./log-query-overview.md).
-- Learn more about [Azure Logic Apps](../../logic-apps/index.yml).
-- Learn more about [Power Automate](https://make.powerautomate.com).
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Title: Log Analytics workspace data export in Azure Monitor
description: Log Analytics workspace data export in Azure Monitor lets you continuously export data per selected tables in your workspace. You can export to an Azure Storage account or Azure Event Hubs as it's collected. --+ Last updated 02/09/2022
The [number of supported event hubs in Basic and Standard namespace tiers is 10]
## Query exported data Exporting data from workspaces to Storage Accounts helps satisfy various scenarios mentioned in the [overview](#overview), and the exported data can be consumed by tools that read blobs from Storage Accounts. The following methods let you query the data by using the Log Analytics query language, which is the same language used by Azure Data Explorer.
-1. Use Azure Data Explorer to [query data in Azure Data Lake](/azure/data-explorer/data-lake-query-data.md).
-2. Use Azure Data Explorer to [ingest data from a Storage Account](/azure/data-explorer/ingest-from-container.md).
+1. Use Azure Data Explorer to [query data in Azure Data Lake](/azure/data-explorer/data-lake-query-data).
+2. Use Azure Data Explorer to [ingest data from a Storage Account](/azure/data-explorer/ingest-from-container).
3. Use Log Analytics workspace to query [ingested data using Logs Ingestion API](./logs-ingestion-api-overview.md). Ingested data goes to a custom log table, not to the original table.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Title: Azure Monitor Logs Dedicated Clusters description: Customers meeting the minimum commitment tier could use dedicated clusters --+ Last updated 01/01/2023
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
Title: Export data from a Log Analytics workspace to a storage account by using
description: This article describes a method to use Azure Logic Apps to query data from a Log Analytics workspace and send it to Azure Storage. --+ Last updated 03/01/2022
This article describes a method to use [Azure Logic Apps](../../logic-apps/index
The method discussed in this article describes a scheduled export from a log query by using a logic app. Other options to export data for particular scenarios include: - To export data from your Log Analytics workspace to a storage account or Azure Event Hubs, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor](logs-data-export.md).-- One-time export by using a logic app. See [Azure Monitor Logs connector for Logic Apps and Power Automate](logicapp-flow-connector.md).
+- One-time export by using a logic app. See [Azure Monitor Logs connector for Logic Apps](../../connectors/connectors-azure-monitor-logs.md).
- One-time export to a local machine by using a PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport). ## Overview
azure-monitor Manage Logs Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-logs-tables.md
Last updated 11/09/2022
# Manage tables in a Log Analytics workspace
-A Log Analytics workspace lets you collect logs from Azure and non-Azure resources into one space for data analysis, use by other services, such as [Sentinel](../../../articles/sentinel/overview.md), and to trigger alerts and actions, for example, using [Logic Apps](../logs/logicapp-flow-connector.md). The Log Analytics workspace consists of tables, which you can configure to manage your data model and log-related costs. This article explains the table configuration options in Azure Monitor Logs and how to set table properties based on your data analysis and cost management needs.
+A Log Analytics workspace lets you collect logs from Azure and non-Azure resources into one space for data analysis, use by other services, such as [Sentinel](../../../articles/sentinel/overview.md), and to trigger alerts and actions, for example, using [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md). The Log Analytics workspace consists of tables, which you can configure to manage your data model and log-related costs. This article explains the table configuration options in Azure Monitor Logs and how to set table properties based on your data analysis and cost management needs.
## Table properties
azure-monitor Move Workspace Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace-region.md
Title: Move a Log Analytics workspace to another Azure region by using the Azure portal description: Use an Azure Resource Manager template to move a Log Analytics workspace from one Azure region to another by using the Azure portal.- Last updated 08/17/2021-+ # Move a Log Analytics workspace to another region by using the Azure portal
azure-monitor Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace.md
Title: Move a Log Analytics workspace in Azure Monitor | Microsoft Docs
description: Learn how to move your Log Analytics workspace to another subscription or resource group. --+ Last updated 09/01/2022
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md
Title: Configure your private link description: This article shows the steps to configure a private link.--+ Last updated 1/5/2022
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
Title: Design your Azure Private Link setup description: This article shows how to design your Azure Private Link setup.--+ Last updated 12/14/2022
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-security.md
Title: Use Azure Private Link to connect networks to Azure Monitor description: Set up an Azure Monitor Private Link Scope to securely connect networks to Azure Monitor.--+ Last updated 1/5/2022
azure-monitor Resource Manager Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/resource-manager-cluster.md
Title: Resource Manager template samples for Log Analytics clusters description: Sample Azure Resource Manager templates to deploy Log Analytics clusters. --+ Last updated 06/13/2022
azure-monitor Workbook Templates Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbook-templates-move-region.md
Title: Move an Azure Workbook template to another region description: How to move a workbook template to a different region - - ibiza Last updated 07/05/2022-+ #Customer intent: As an Azure service administrator, I want to move my resources to another Azure region
azure-monitor Workbooks Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-move-region.md
Title: Azure Monitor Workbooks - Move Regions description: How to move a workbook to a different region --- ibiza Last updated 07/05/2022-+ #Customer intent: As an Azure service administrator, I want to move my resources to another Azure region
azure-netapp-files Azure Netapp Files Configure Nfsv41 Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-configure-nfsv41-domain.md
NFSv4 introduces the concept of an authentication domain. Azure NetApp Files cur
## Default behavior of user/group mapping
-Root mapping defaults to the `nobody` user because the NFSv4 domain is set to `localdomain` by default. When you mount an Azure NetApp Files NFSv4.1 volume as root, you will see file permissions as follows:
+Root mapping defaults to the `nobody` user because the NFSv4 domain is set to `localdomain` by default. When you mount an Azure NetApp Files NFSv4.1 volume as root, you'll see file permissions as follows:
![Default behavior of user/group mapping for NFSv4.1](../media/azure-netapp-files/azure-netapp-files-nfsv41-default-behavior-user-group-mapping.png)
As the above example shows, the user for `file1` should be `root`, but it maps t
* If the volume is [enabled for LDAP](configure-ldap-extended-groups.md), set `Domain` to the domain that is configured in the Active Directory Connection on your NetApp account. For instance, if `contoso.com` is the configured domain in the NetApp account, then set `Domain = contoso.com`.
- The following examples shows the initial configuration of `/etc/idmapd.conf` before changes:
+ The following examples show the initial configuration of `/etc/idmapd.conf` before changes:
``` [General]
As the above example shows, the user for `file1` should be `root`, but it maps t
2. Unmount any currently mounted NFS volumes. 3. Update the `/etc/idmapd.conf` file.
-4. Restart the `rpcbind` service on your host (`service rpcbind restart`), or simply reboot the host.
+4. Clear the keyring of the NFS `idmapper` (`nfsidmap -c`).
5. Mount the NFS volumes as required. See [Mount a volume for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
The following example shows the resulting user/group change:
As the example shows, the user/group has now changed from `nobody` to `root`.
-## Behavior of other (non-root) users and groups
+## Behavior of other (nonroot) users and groups
-Azure NetApp Files supports local users (users created locally on a host) who have permissions associated with files or folders in NFSv4.1 volumes. However, the service does not currently support mapping the users/groups across multiple nodes. Therefore, users created on one host do not map by default to users created on another host.
+Azure NetApp Files supports local users (users created locally on a host) who have permissions associated with files or folders in NFSv4.1 volumes. However, the service doesn't currently support mapping the users/groups across multiple nodes. Therefore, users created on one host don't map by default to users created on another host.
In the following example, `Host1` has three existing test user accounts (`testuser01`, `testuser02`, `testuser03`): ![Screenshot that shows that Host1 has three existing test user accounts.](../media/azure-netapp-files/azure-netapp-files-nfsv41-host1-users.png)
-On `Host2`, note that the test user accounts have not been created, but the same volume is mounted on both hosts:
+On `Host2`, the test user accounts haven't been created, but the same volume is mounted on both hosts:
![Resulting configuration for NFSv4.1](../media/azure-netapp-files/azure-netapp-files-nfsv41-host2-users.png) ## Next step * [Mount a volume for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md)
-* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
+* [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
azure-netapp-files Manage Default Individual User Group Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md
Previously updated : 02/23/2023 Last updated : 03/28/2023 # Manage default and individual user and group quotas for a volume This article explains the considerations and steps for managing user and group quotas on Azure NetApp Files volumes. To understand the use cases for this feature, see [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md).
-## Quotas in cross-region replication relationships
+## Quotas in cross-region or cross-zone replication relationships
-Quota rules are synced from cross-region replication (CRR) source to destination volumes. Quota rules that you create, delete, or update on a CRR source volume automatically applies to the CRR destination volume.
+Quota rules are synced from cross-region replication (CRR) or cross-zone replication (CZR) source volumes to destination volumes. Quota rules that you create, delete, or update on a CRR or CZR source volume automatically apply to the destination volume.
-Quota rules only come into effect on the CRR destination volume after the replication relationship is deleted because the destination volume is read-only. To learn how to break the replication relationship, see [Delete volume replications](cross-region-replication-delete.md#delete-volume-replications). If source volumes have quota rules and you create the CRR destination volume at the same time as the source volume, all the quota rules are created on destination volume.
+Quota rules only come into effect on the CRR/CZR destination volume after the replication relationship is deleted because the destination volume is read-only. To learn how to break the replication relationship, see [Delete volume replications](cross-region-replication-delete.md#delete-volume-replications). If source volumes have quota rules and you create a replication relationship later, all the quota rules are synced to the destination volume.
## Considerations
Quota rules only come into effect on the CRR destination volume after the replic
* Azure NetApp Files doesn't support individual group quota and default group quota for SMB and dual protocol volumes. * Group quotas track the consumption of disk space for files owned by a particular group. A file can only be owned by exactly one group. * Auxiliary groups only help in permission checks. You can't use auxiliary groups to restrict the quota (disk space) for a file.
-* In a cross-region replication setting:
- * Currently, Azure NetApp Files doesn't support syncing quota rules to the destination (data protection) volume.
- * You can't create quota rules on the destination volume until you [delete the replication](cross-region-replication-delete.md).
- * You need to manually create quota rules on the destination volume if you want them for the volume, and you can do so only after you delete the replication.
+* In a CRR/CZR setting:
+ * You can't create, update, or delete quota rules on the destination volume until you [delete the replication](cross-region-replication-delete.md).
* If a quota rule is in the error state after you delete the replication relationship, you need to delete and re-create the quota rule on the destination volume.
- * During sync or reverse resync operations:
- * If you create, update, or delete a rule on a source volume, you must perform the same operation on the destination volume.
- * If you create, update, or delete a rule on a destination volume after the deletion of the replication relationship, the rule will be reverted to keep the source and destination volumes in sync.
* If you're using [large volumes](large-volumes-requirements-considerations.md) (volumes larger than 100 TiB):
    * The space and file usage in a large volume might exceed the configured hard limit by as much as five percent before the quota limit is enforced and traffic is rejected.
    * To provide optimal performance, the space consumption may exceed the configured hard limit before the quota is enforced. The additional space consumption won't exceed the lower of 1 GB or five percent of the configured hard limit.
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
Refer to the table below to find details about resolution dates or possible work
|Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- | | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) |
-| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. | February 2023 |
-| The cloudadmin user sees a message about the "Distributed Switch not being associated with the host" if they look at Host > Configure > Virtual switches. There *is no* actual problem. Cloudadmin simply can't see it because of permissions. | March 2023 | We will look into adding read-only permissions for the Virtual Distributed Switch (VDS) to the cloudadmin account, which should make that message disappear. | |
+| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
+ In this article, you learned about the current known issues with the Azure VMware Solution. For more information, see [About Azure VMware Solution](introduction.md).++
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
Title: Deploy Zerto disaster recovery on Azure VMware Solution
description: Learn how to implement Zerto disaster recovery for on-premises VMware or Azure VMware Solution virtual machines. Previously updated : 10/26/2022 Last updated : 3/28/2023 # Deploy Zerto disaster recovery on Azure VMware Solution
-In this article, you'll learn how to implement disaster recovery for on-premises VMware or Azure VMware Solution-based virtual machines (VMs). The solution in this article uses [Zerto disaster recovery](https://www.zerto.com/solutions/use-cases/disaster-recovery/). Instances of Zerto are deployed at both the protected and the recovery sites.
+In this article, learn how to implement disaster recovery for on-premises VMware or Azure VMware Solution-based virtual machines (VMs). The solution in this article uses [Zerto disaster recovery](https://www.zerto.com/solutions/use-cases/disaster-recovery/). Instances of Zerto are deployed at both the protected and the recovery sites.
Zerto is a disaster recovery solution designed to minimize downtime of VMs should a disaster occur. Zerto's platform is built on the foundation of Continuous Data Protection (CDP) that enables minimal or close to no data loss. The platform provides the level of protection wanted for many business-critical and mission-critical enterprise applications. Zerto also automates and orchestrates failover and failback to ensure minimal downtime in a disaster. Overall, Zerto simplifies management through automation and ensures fast and highly predictable recovery times.
To learn more about Zerto platform architecture, see the [Zerto Platform Archite
You can use Zerto with Azure VMware Solution for the following three scenarios.
+> [!NOTE]
+> For Azure NetApp Files (ANFs), [Azure VMware Solution](/azure/azure-vmware/introduction) supports Network File System (NFS) datastores as a persistent storage option. You can create NFS datastores with Azure NetApp Files volumes and attach them to clusters of your choice. You can also create virtual machines (VMs) for optimal cost and performance. To leverage ANF datastores, select them as a Recovery Datastore in the Zerto VPG wizard when creating or editing a VPG.
+
+>[!TIP]
+> Explore more about ANF datastores and how to [Attach Azure NetApp datastores to Azure VMware Solution hosts](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts?tabs=azure-portal).
+ ### Scenario 1: On-premises VMware vSphere to Azure VMware Solution disaster recovery In this scenario, the primary site is an on-premises vSphere-based environment. The disaster recovery site is an Azure VMware Solution private cloud. ### Scenario 2: Azure VMware Solution to Azure VMware Solution cloud disaster recovery In this scenario, the primary site is an Azure VMware Solution private cloud in one Azure Region. The disaster recovery site is an Azure VMware Solution private cloud in a different Azure Region. ### Scenario 3: Azure VMware Solution to IaaS VMs cloud disaster recovery In this scenario, the primary site is an Azure VMware Solution private cloud in one Azure Region. Azure Blobs and Azure IaaS (Hyper-V based) VMs are used in times of Disaster. ## Prerequisites
In this scenario, the primary site is an Azure VMware Solution private cloud in
Currently, Zerto disaster recovery on Azure VMware Solution is in an Initial Availability (IA) phase. In the IA phase, you must contact Microsoft to request and qualify for IA support.
-To request IA support for Zerto on Azure VMware Solution, submit this [Install Zerto on AVS form](https://aka.ms/ZertoAVSinstall) with the required information. In the IA phase, Azure VMware Solution only supports manual installation and onboarding of Zerto. However, Microsoft will work with you to ensure that you can manually install Zerto on your private cloud.
+To request IA support for Zerto on Azure VMware Solution, submit this [Install Zerto on AVS form](https://aka.ms/ZertoAVSinstall) with the required information. In the IA phase, Azure VMware Solution only supports manual installation and onboarding of Zerto. However, Microsoft works with you to ensure that you can manually install Zerto on your private cloud.
> [!NOTE] > As part of the manual installation, Microsoft creates a new vCenter user account for Zerto. This user account is only for Zerto Virtual Manager (ZVM) to perform operations on the Azure VMware Solution vCenter. When installing ZVM on Azure VMware Solution, don't select the "Select to enforce roles and permissions using Zerto vCenter privileges" option.
-After the ZVM installation, select the options below from the Zerto Virtual Manager **Site Settings**.
+After the ZVM installation, select the options from the Zerto Virtual Manager **Site Settings**.
:::image type="content" source="media/zerto-disaster-recovery/zerto-disaster-recovery-install-5.png" alt-text="Screenshot of the Workload Automation section that shows to select all of the options listed for the blue checkboxes.":::
For more information, see the [Zerto technical documentation](https://www.zerto.
## Ongoing management of Zerto -- As you scale your Azure VMware Solution private cloud operations, you might need to add new Azure VMware Solution hosts for Zerto protection or configure Zerto disaster recovery to new Azure VMware Solution vSphere Clusters. In both these scenarios, you'll be required to open a Support Request with the Azure VMware Solution team in the Initial Availability phase. You can open the [support ticket](https://rc.portal.azure.com/#create/Microsoft.Support) from the Azure portal for these Day 2 configurations.
+- As you scale your Azure VMware Solution private cloud operations, you might need to add new Azure VMware Solution hosts for Zerto protection or configure Zerto disaster recovery to new Azure VMware Solution vSphere Clusters. In both these scenarios, you're required to open a Support Request with the Azure VMware Solution team in the Initial Availability phase. Open the [support ticket](https://rc.portal.azure.com/#create/Microsoft.Support) from the Azure portal for these Day 2 configurations.
:::image type="content" source="media/zerto-disaster-recovery/support-request-zerto-disaster-recovery.png" alt-text="Screenshot that shows the support request for Day 2 Zerto disaster recovery configurations."::: -- In the GA phase, all the above operations will be enabled in an automated self-service fashion.
+- In the GA phase, all the above operations are enabled in an automated self-service fashion.
## FAQs
You can reuse pre-existing Zerto product licenses for Azure VMware Solution envi
### How is Zerto supported?
-Zerto disaster recovery is a solution that is sold and supported by Zerto. For any support issue with Zerto disaster recovery, always contact [Zerto support](https://www.zerto.com/support-and-services/).
+Zerto disaster recovery is a solution sold and supported by Zerto. For any support issue with Zerto disaster recovery, always contact [Zerto support](https://www.zerto.com/support-and-services/).
-Zerto and Microsoft support teams will engage each other as needed to troubleshoot Zerto disaster recovery issues on Azure VMware Solution.
+Zerto and Microsoft support teams engage each other as needed to troubleshoot Zerto disaster recovery issues on Azure VMware Solution.
azure-web-pubsub Reference Cloud Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-cloud-events.md
ce-connectionState: eyJrZXkiOiJhIn0=
* `subprotocols`
- The `connect` event forwards the subprotocol and authentication information to Upstream from the client. The Azure SignalR Service uses the status code to determine if the request will be upgraded to WebSocket protocol.
+ The `connect` event forwards the subprotocol and authentication information to Upstream from the client. The Web PubSub service uses the status code to determine whether the request will be upgraded to the WebSocket protocol.
If the request contains the `subprotocols` property, the server should return one subprotocol it supports. If the server doesn't want to use any subprotocols, it should **not** send the `subprotocol` property in response. [Sending a blank header is invalid](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers#Subprotocols).
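The selection rule described above can be sketched as a small helper. This is an illustrative sketch only (the function and variable names are hypothetical, not part of any Web PubSub SDK): pick the first client-offered subprotocol the server supports, and return `None` to signal that the `subprotocol` property should be omitted from the response entirely, since a blank header is invalid.

```python
def select_subprotocol(offered, supported):
    """Return the first client-offered subprotocol the server supports.

    Returns None when there is no match; in that case the connect event
    response should omit the `subprotocol` property altogether, because
    sending a blank subprotocol header is invalid.
    """
    for candidate in offered:
        if candidate in supported:
            return candidate
    return None

# Example: the client offers two subprotocols, the server supports one.
chosen = select_subprotocol(
    ["json.webpubsub.azure.v1", "graphql-ws"],
    {"json.webpubsub.azure.v1"},
)
```

A `None` result means the upstream handler should respond without any `subprotocol` field rather than with an empty string.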
backup Backup Azure Database Postgresql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-support-matrix.md
Title: Azure Database for PostgreSQL server support matrix description: Provides a summary of support settings and limitations of Azure Database for PostgreSQL server backup. Previously updated : 01/24/2022 Last updated : 03/23/2023
East US, East US 2, Central US, South Central US, West US, West US 2, West Centr
- Recommended limit for the maximum database size is 400 GB. - Cross-region backup isn't supported. Therefore, you can't back up an Azure PostgreSQL server to a vault in another region. Similarly, you can only restore a backup to a server within the same region as the vault. However, we support cross-subscription backup and restore. -- Backup of Azure PostgreSQL servers with Private endpoint enabled is currently not supported.
+- Private endpoint-enabled Azure PostgreSQL servers can be backed up by allowing trusted Microsoft services in the network settings.
- Only the data is recovered during restore; _roles_ aren't restored. ## Next steps
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql.md
Title: Back up Azure Database for PostgreSQL description: Learn about Azure Database for PostgreSQL backup with long-term retention Previously updated : 06/07/2022 Last updated : 03/23/2023
You can configure backup on multiple databases across multiple Azure PostgreSQL
>[!Note] >- You don't need to back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database already backed-up to a Backup vault.
- >- Backup of Azure PostgreSQL servers with Private endpoint enabled is currently not supported.
+ >- Private endpoint-enabled Azure PostgreSQL servers can be backed up by allowing trusted Microsoft services in the network settings.
:::image type="content" source="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-inline.png" alt-text="Screenshot showing the option to select an Azure PostgreSQL database." lightbox="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-expanded.png":::
backup Backup Sql Server Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-azure-troubleshoot.md
Title: Troubleshoot SQL Server database backup description: Troubleshooting information for backing up SQL Server databases running on Azure VMs with Azure Backup. Previously updated : 12/28/2022 Last updated : 03/29/2023
AzureBackup workload extension operation failed. | The VM is shut down, or the V
|||| The VM is not able to contact Azure Backup service due to internet connectivity issues. | The VM needs outbound connectivity to Azure Backup Service, Azure Storage, or Azure Active Directory services.| <li> If you use NSG to restrict connectivity, then you should use the *AzureBackup* service tag to allow outbound access to Azure Backup Service, and similarly for the Azure AD (*AzureActiveDirectory*) and Azure Storage (*Storage*) services. Follow these [steps](./backup-sql-server-database-azure-vms.md#nsg-tags) to grant access. <li> Ensure DNS is resolving Azure endpoints. <li> Check whether the VM is behind a load balancer that blocks internet access. Assigning a public IP to the VMs enables discovery. <li> Verify there's no firewall, antivirus, or proxy blocking calls to the three target services above.
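One of the checks above, confirming that DNS resolves the required Azure endpoints from inside the VM, is easy to script. This is a minimal sketch; the endpoint hostnames below are examples only, not an exhaustive or authoritative list:

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True when DNS resolution succeeds for the given host."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

# Illustrative endpoints to probe from the VM (examples, not exhaustive).
endpoints = ["login.microsoftonline.com"]
results = {host: can_resolve(host) for host in endpoints}
```

If any endpoint fails to resolve, investigate the VM's DNS configuration before looking at NSG or firewall rules.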
+### UserErrorOperationNotAllowedDatabaseMirroringEnabled
+
+| Error message | Possible cause | Recommended action |
+| | | |
+| Backup of databases participating in a database mirroring session is not supported by AzureWorkloadBackup. | This error appears when database mirroring is enabled on a SQL database. Currently, Azure Backup doesn't support databases with this feature enabled. | Remove the database mirroring session so that the operation can complete successfully. Alternatively, if the database is already protected, perform a *Stop backup* operation on the database. |
+ ## Re-registration failures Check for one or more of the following symptoms before you trigger the re-register operation:
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
You can use your own certificate to enable the HTTPS feature. This process is do
### Register Azure CDN
-Register Azure CDN as an app in your Azure Active Directory via PowerShell.
+Register Azure CDN as an app in your Azure Active Directory.
+
+> [!NOTE]
+> * `205478c0-bd83-4e1b-a9d6-db63a3e1e1c8` is the service principal for `Microsoft.AzureFrontDoor-Cdn`.
+> * You need to have the **Global Administrator** role to run this command.
+> * The service principal name was changed from `Microsoft.Azure.Cdn` to `Microsoft.AzureFrontDoor-Cdn`.
+
+# [Azure PowerShell](#tab/powershell)
1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) on your local machine. 2. In PowerShell, run the following command:
- `New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" -Role Contributor`
-
- > [!NOTE]
- > * `205478c0-bd83-4e1b-a9d6-db63a3e1e1c8` is the service principal for `Microsoft.AzureFrontDoor-Cdn`.
- > * You need to have the **Global Administrator** role to run this command.
- > * The service principal name was changed from `Microsoft.Azure.Cdn` to `Microsoft.AzureFrontDoor-Cdn`.
+ `New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8"`
```bash
- New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" -Role Contributor
+ New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8"
Secret : ServicePrincipalNames : {205478c0-bd83-4e1b-a9d6-db63a3e1e1c8,
Register Azure CDN as an app in your Azure Active Directory via PowerShell.
ApplicationId : 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 ObjectType : ServicePrincipal DisplayName : Microsoft.AzureFrontDoor-Cdn
- Id : c87be08f-686a-4d9f-8ef8-64707dbd413e
+ Id : abcdef12-3456-7890-abcd-ef1234567890
Type : ```+
+# [Azure CLI](#tab/cli)
+
+1. If needed, install [Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+
+1. Use the Azure CLI to run the following command:
+
+ ```azurecli-interactive
+ az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8
+ ```
+ ### Grant Azure CDN access to your key vault Grant Azure CDN permission to access the certificates (secrets) in your Azure Key Vault account.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 3/14/2023 Last updated : 3/28/2023
The following tables show the Microsoft Security Response Center (MSRC) updates
## March 2023 Guest OS
->[!NOTE]
-
->The March Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the March Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 23-03 | [5023697] | Latest Cumulative Update(LCU) | 5.79 | Mar 14, 2023 |
-| Rel 23-03 | [5022835] | IE Cumulative Updates | 2.135, 3.122, 4.115 | Feb 14, 2023 |
-| Rel 23-03 | [5023705] | Latest Cumulative Update(LCU) | 7.23 | Mar 14, 2023 |
-| Rel 23-03 | [5023702] | Latest Cumulative Update(LCU) | 6.55 | Mar 14, 2023 |
-| Rel 23-03 | [5022523] | .NET Framework 3.5 Security and Quality Rollup  | 2.135 | Feb 14, 2023 |
-| Rel 23-03 | [5022515] | .NET Framework 4.6.2 Security and Quality Rollup  | 2.135 | Feb 14, 2023 |
-| Rel 23-03 | [5022574] | .NET Framework 3.5 Security and Quality Rollup  | 4.115 | Feb 14, 2023 |
-| Rel 23-03 | [5022513] | .NET Framework 4.6.2 Security and Quality Rollup  | 4.115 | Feb 14, 2023 |
-| Rel 23-03 | [5022574] | .NET Framework 3.5 Security and Quality Rollup  | 3.122 | Feb 14, 2023 |
-| Rel 23-03 | [5022512] | .NET Framework 4.6.2 Security and Quality Rollup  | 3.122 | Feb 14, 2023 |
-| Rel 23-03 | [5022511] | . NET Framework 4.7.2 Cumulative Update  | 6.55 | Feb 14, 2023 |
-| Rel 23-03 | [5022507] | .NET Framework 4.8 Security and Quality Rollup  | 7.23 | Feb 14, 2023 |
-| Rel 23-03 | [5023769] | Monthly Rollup  | 2.135 | Mar 14, 2023 |
-| Rel 23-03 | [5023756] | Monthly Rollup  | 3.122 | Mar 14, 2023 |
-| Rel 23-03 | [5023765] | Monthly Rollup  | 4.115 | Mar 14, 2023 |
-| Rel 23-03 | [5023791] | Servicing Stack Update  | 3.122 | Mar 14, 2023 |
-| Rel 23-03 | [5023790] | Servicing Stack update | 4.115 | Mar 14, 2022 |
-| Rel 23-03 | [4578013] | OOB Standalone Security Update  | 4.115 | Aug 19, 2020 |
-| Rel 23-03 | [5023788] | Servicing Stack Update | 5.79 | Mar 14, 2023 |
-| Rel 23-03 | [5017397] | Servicing Stack Update LKG  | 2.135 | Sep 13, 2022 |
-| Rel 23-03 | [4494175] | Microcode  | 5.79 | Sep 1, 2020 |
-| Rel 23-03 | [4494174] | Microcode  | 6.55 | Sep 1, 2020 |
-| Rel 23-03 | [5023793] | Servicing Stack Update  | 7.23 | |
+| Rel 23-03 | [5023697] | Latest Cumulative Update(LCU) | [5.79] | Mar 14, 2023 |
+| Rel 23-03 | [5022835] | IE Cumulative Updates | [2.135], [3.122], [4.115] | Feb 14, 2023 |
+| Rel 23-03 | [5023705] | Latest Cumulative Update(LCU) | [7.23] | Mar 14, 2023 |
+| Rel 23-03 | [5023702] | Latest Cumulative Update(LCU) | [6.55] | Mar 14, 2023 |
+| Rel 23-03 | [5022523] | .NET Framework 3.5 Security and Quality Rollup  | [2.135] | Feb 14, 2023 |
+| Rel 23-03 | [5022515] | .NET Framework 4.6.2 Security and Quality Rollup  | [2.135] | Feb 14, 2023 |
+| Rel 23-03 | [5022574] | .NET Framework 3.5 Security and Quality Rollup  | [4.115] | Feb 14, 2023 |
+| Rel 23-03 | [5022513] | .NET Framework 4.6.2 Security and Quality Rollup  | [4.115] | Feb 14, 2023 |
+| Rel 23-03 | [5022574] | .NET Framework 3.5 Security and Quality Rollup  | [3.122] | Feb 14, 2023 |
+| Rel 23-03 | [5022512] | .NET Framework 4.6.2 Security and Quality Rollup  | [3.122] | Feb 14, 2023 |
+| Rel 23-03 | [5022511] | .NET Framework 4.7.2 Cumulative Update  | [6.55] | Feb 14, 2023 |
+| Rel 23-03 | [5022507] | .NET Framework 4.8 Security and Quality Rollup  | [7.23] | Feb 14, 2023 |
+| Rel 23-03 | [5023769] | Monthly Rollup  | [2.135] | Mar 14, 2023 |
+| Rel 23-03 | [5023756] | Monthly Rollup  | [3.122] | Mar 14, 2023 |
+| Rel 23-03 | [5023765] | Monthly Rollup  | [4.115] | Mar 14, 2023 |
+| Rel 23-03 | [5023791] | Servicing Stack Update  | [3.122] | Mar 14, 2023 |
+| Rel 23-03 | [5023790] | Servicing Stack update | [4.115] | Mar 14, 2023 |
+| Rel 23-03 | [4578013] | OOB Standalone Security Update  | [4.115] | Aug 19, 2020 |
+| Rel 23-03 | [5023788] | Servicing Stack Update | [5.79] | Mar 14, 2023 |
+| Rel 23-03 | [5017397] | Servicing Stack Update LKG  | [2.135] | Sep 13, 2022 |
+| Rel 23-03 | [4494175] | Microcode  | [5.79] | Sep 1, 2020 |
+| Rel 23-03 | [4494174] | Microcode  | [6.55] | Sep 1, 2020 |
+| Rel 23-03 | [5023793] | Servicing Stack Update  | [7.23] | |
[5023697]: https://support.microsoft.com/kb/5023697 [5022835]: https://support.microsoft.com/kb/5022835
The following tables show the Microsoft Security Response Center (MSRC) updates
[5017397]: https://support.microsoft.com/kb/5017397 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174
+[2.135]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.122]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.115]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.79]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.55]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.23]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## February 2023 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 03/1/2023 Last updated : 03/28/2023
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **March 28, 2023**
+The March Guest OS has released.
+ ###### **March 1, 2023** The February Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.23_202303-01 | March 28, 2023 | Post 7.25 |
| WA-GUEST-OS-7.22_202302-01 | March 1, 2023 | Post 7.24 |
-| WA-GUEST-OS-7.21_202301-011 | January 31, 2023 | Post 7.23 |
+|~~WA-GUEST-OS-7.21_202301-01~~| January 31, 2023 | March 28, 2023 |
|~~WA-GUEST-OS-7.20_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-7.19_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-7.18_202210-02~~| November 4, 2022 | January 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.55_202303-01 | March 28, 2023 | Post 6.57 |
| WA-GUEST-OS-6.54_202302-01 | March 1, 2023 | Post 6.56 |
-| WA-GUEST-OS-6.53_202301-01 | January 31, 2023 | Post 6.55 |
+|~~WA-GUEST-OS-6.53_202301-01~~| January 31, 2023 | March 28, 2023 |
|~~WA-GUEST-OS-6.52_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-6.51_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-6.50_202210-02~~| November 4, 2022 | January 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.79_202303-01 | March 28, 2023 | Post 5.81 |
| WA-GUEST-OS-5.78_202302-01 | March 1, 2023 | Post 5.80 |
-| WA-GUEST-OS-5.77_202301-01 | January 31, 2023 | Post 5.79 |
+|~~WA-GUEST-OS-5.77_202301-01~~| January 31, 2023 | March 28, 2023 |
|~~WA-GUEST-OS-5.76_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-5.75_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-5.74_202210-02~~| November 4, 2022 | January 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.115_202303-01 | March 28, 2023 | Post 4.117 |
| WA-GUEST-OS-4.114_202302-01 | March 1, 2023 | Post 4.116 |
-| WA-GUEST-OS-4.113_202301-01 | January 31, 2023 | Post 4.115 |
+|~~WA-GUEST-OS-4.113_202301-01~~| January 31, 2023 | March 28, 2023 |
|~~WA-GUEST-OS-4.112_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-4.111_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-4.110_202210-02~~| November 4, 2022 | January 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.122_202303-01 | March 28, 2023 | Post 3.124 |
| WA-GUEST-OS-3.121_202302-01 | March 1, 2023 | Post 3.123 |
-| WA-GUEST-OS-3.120_202301-01 | January 31, 2023 | Post 3.122 |
+|~~WA-GUEST-OS-3.120_202301-01~~| January 31, 2023 | March 28, 2023 |
|~~WA-GUEST-OS-3.119_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-3.118_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-3.117_202210-02~~| November 4, 2022 | January 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.135_202303-01 | March 28, 2023 | Post 2.137 |
| WA-GUEST-OS-2.134_202302-01 | March 1, 2023 | Post 2.136 |
-| WA-GUEST-OS-2.133_202301-01 | January 31, 2023 | Post 2.135 |
+|~~WA-GUEST-OS-2.133_202301-01~~| January 31, 2023 | March 28, 2023 |
|~~WA-GUEST-OS-2.132_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-2.131_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-2.130_202210-02~~| November 4, 2022 | January 19, 2023 |
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
At this point, check that your resource group (**SpeechEchoBotTutorial-ResourceG
| Name | Type | Location | ||-|-|
-| SpeechEchoBotTutorial-Speech | Cognitive Services | West US |
+| SpeechEchoBotTutorial-Speech | Speech | West US |
### Create an Azure App Service plan
cognitive-services Create Use Glossaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-glossaries.md
Previously updated : 03/16/2023 Last updated : 03/28/2023 # Use glossaries with Document Translation
A glossary is a list of terms with definitions that you create for the Document
1. **Specify your glossary in the translation request.** Include the **`glossary URL`**, **`format`**, and **`version`** in your **`POST`** request: :::code language="json" source="../../../../../cognitive-services-rest-samples/curl/Translator/translate-with-glossary.json" range="1-23" highlight="13-14":::
-
+ > [!NOTE]
- > The example used an enabled [**system-assigned managed identity**](create-use-managed-identities.md#enable-a-system-assigned-managed-identity) with a [**Storage Blob Data Contributor**](create-use-managed-identities.md#grant-access-to-your-storage-account) role assignment for authorization. For more information, *see* [**Managed identities for Document Translation**](./create-use-managed-identities.md).
+ > The example used an enabled [**system-assigned managed identity**](create-use-managed-identities.md#enable-a-system-assigned-managed-identity) with a [**Storage Blob Data Contributor**](create-use-managed-identities.md#grant-storage-account-access-for-your-translator-resource) role assignment for authorization. For more information, *see* [**Managed identities for Document Translation**](./create-use-managed-identities.md).
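As a rough sketch of how the **`glossary URL`**, **`format`**, and **`version`** fields fit into a target entry of the request body (the storage URL and version values below are placeholders, not real resources), the JSON can be assembled like this:

```python
import json

# Placeholder values; the glossary URL, format, and version shown here
# are illustrative assumptions, not a real storage account or glossary.
glossary_entry = {
    "glossaryUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>/glossary.tsv",
    "format": "TSV",
    "version": "1.0",
}

target = {
    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
    "language": "es",
    "glossaries": [glossary_entry],
}

print(json.dumps(target, indent=2))
```

The glossary is applied per target, so each target language can reference its own glossary entry.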
### Case sensitivity
cognitive-services Create Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-managed-identities.md
Previously updated : 03/24/2023 Last updated : 03/28/2023 # Managed identities for Document Translation
- Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to data without the need to include SAS tokens with your HTTP requests.
+Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to data compared to SAS URLs.
:::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
-* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications.
+* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your [source and target URLs](#post-request-body).
* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](../../../../role-based-access-control/overview.md).
To get started, you need:
* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Translator resource. You also need to create containers to store and organize your blob data within your storage account.
-* **If your storage account is behind a firewall, you must enable the following configuration**: </br>
+* **If your storage account is behind a firewall, you must enable the following configuration**:
+ 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+ 1. Select the Storage account.
+ 1. In the **Security + networking** group in the left pane, select **Networking**.
+ 1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
- * On your storage account page, select **Security + networking** → **Networking** from the left menu.
- :::image type="content" source="../../media/managed-identities/security-and-networking-node.png" alt-text="Screenshot: security + networking tab.":::
+ :::image type="content" source="../../media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
- * In the main window, select **Allow access from Selected networks**.
- :::image type="content" source="../../media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
+ 1. Deselect all check boxes.
+ 1. Make sure **Microsoft network routing** is selected.
+ 1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Translator resource as the instance name.
+ 1. Make certain that the **Allow Azure services on the trusted services list to access this storage account** box is checked. For more information about managing exceptions, _see_ [Configure Azure Storage firewalls and virtual networks](../../../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions).
- * On the selected networks page, navigate to the **Exceptions** category and make certain that the [**Allow Azure services on the trusted services list to access this storage account**](../../../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) checkbox is enabled.
+ :::image type="content" source="../../media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view.":::
- :::image type="content" source="../../media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view":::
+ 1. Select **Save**.
+
+ > [!NOTE]
+ > It may take up to five minutes for the network changes to propagate.
+
+ Although network access is now permitted, the Translator resource can't yet access the data in the storage account. You need to [assign a specific access role](#grant-storage-account-access-for-your-translator-resource) to the Translator resource's managed identity.
## Managed identity assignments
In the following steps, we enable a system-assigned managed identity and grant y
## Enable a system-assigned managed identity
->[!IMPORTANT]
->
-> To enable a system-assigned managed identity, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](../../../../role-based-access-control/built-in-roles.md#owner) or [**User Access Administrator**](../../../../role-based-access-control/built-in-roles.md#user-access-administrator). You can specify a scope at four levels: management group, subscription, resource group, or resource.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with your Azure subscription.
-
-1. Navigate to your **Translator** resource page in the Azure portal.
+You must grant the Translator resource access to your storage account before it can create, read, or delete blobs. Once you've enabled a system-assigned managed identity for the Translator resource, you can use Azure role-based access control (`Azure RBAC`) to give Translator access to your Azure storage containers.
-1. In the left rail, select **Identity** from the **Resource Management** list:
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Translator resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. Within the **System assigned** tab, turn on the **Status** toggle.
:::image type="content" source="../../media/managed-identities/resource-management-identity-tab.png" alt-text="Screenshot: resource management identity tab in the Azure portal.":::
-1. In the main window, toggle the **System assigned Status** tab to **On**.
+ > [!IMPORTANT]
+ > A user-assigned managed identity doesn't meet the requirements for this storage account scenario. Be sure to enable a system-assigned managed identity.
-## Grant access to your storage account
+1. Select **Save**.
-You need to grant Translator access to your storage account before it can create, read, or delete blobs. Once you've enabled Translator with a system-assigned managed identity, you can use Azure role-based access control (`Azure RBAC`), to give Translator access to your Azure storage containers.
+## Grant storage account access for your Translator resource
-The **Storage Blob Data Contributor** role gives Translator (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data.
+> [!IMPORTANT]
+> To assign a system-assigned managed identity role, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](../../../../role-based-access-control/built-in-roles.md#owner) or [**User Access Administrator**](../../../../role-based-access-control/built-in-roles.md#user-access-administrator) at the storage scope for the storage resource.
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Translator resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
1. Under **Permissions** select **Azure role assignments**: :::image type="content" source="../../media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
The **Storage Blob Data Contributor** role gives Translator (represented by the
:::image type="content" source="../../media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
- >[!NOTE]
- >
- > If you are unable to assign a role in the Azure portal because the Add > Add role assignment option is disabled or get the permissions error, "you do not have permissions to add role assignment at this scope", check that you are currently signed in as a user with an assigned a role that has Microsoft.Authorization/roleAssignments/write permissions such as [**Owner**](../../../../role-based-access-control/built-in-roles.md#owner) or [**User Access Administrator**](../../../../role-based-access-control/built-in-roles.md#user-access-administrator) at the storage scope for the storage resource.
-
-1. Next, you're going to assign a **Storage Blob Data Contributor** role to your Translator service resource. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
+1. Next, you assign a **Storage Blob Data Contributor** role to your Translator service resource. The **Storage Blob Data Contributor** role gives Translator (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
| Field | Value| ||--|
The **Storage Blob Data Contributor** role gives Translator (represented by the
:::image type="content" source="../../media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot: add role assignments page in the Azure portal.":::
-1. After you've received the _Added Role assignment_ confirmation message, refresh the page to see the added role assignment.
+1. After the _Added Role assignment_ confirmation message appears, refresh the page to see the added role assignment.
:::image type="content" source="../../media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot: Added role assignment confirmation pop-up message.":::
-1. If you don't see the change right away, wait and try refreshing the page once more. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
+1. If you don't see the new role assignment right away, wait and refresh the page again. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
:::image type="content" source="../../media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
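Because role-assignment propagation can take up to 30 minutes, automation that depends on the new role often polls with a timeout instead of failing immediately. A generic, hypothetical sketch (not tied to any Azure SDK; `check` stands in for whatever verification call you use):

```python
import time

def wait_until(check, timeout_s=1800, interval_s=30):
    """Poll check() until it returns True or the timeout elapses.

    Returns True on success, False on timeout. The 30-minute default
    timeout mirrors the worst-case propagation delay for role changes.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# Example with a stand-in check that succeeds immediately.
ready = wait_until(lambda: True, timeout_s=5, interval_s=1)
```

In practice, `check` would query whether the role assignment is visible, for example by attempting a harmless read against the container.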
The following headers are included with each Document Translation API request:
### POST request body
-* The request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches`
-
+* The request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches`.
* The request body is a JSON object named `inputs`.
-* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs
+* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs. With a system-assigned managed identity, you use a plain storage account URL (no SAS token or other additions). The format is `https://<storage_account_name>.blob.core.windows.net/<container_name>`.
* The `prefix` and `suffix` fields (optional) are used to filter documents in the container including folders. * A value for the `glossaries` field (optional) is applied when the document is being translated. * The `targetUrl` for each target language must be unique.
->[!NOTE]
-> If a file with the same name already exists in the destination, the job will fail. When using managed identities, don't include a SAS token URL with your HTTP requests. Otherwise your requests will fail.
+> [!IMPORTANT]
+> If a file with the same name already exists in the destination, the job will fail. When using managed identities, don't include a SAS token URL with your HTTP requests. If you do so, your requests will fail.
<!-- markdownlint-disable MD024 -->

### Translate all documents in a container
+This sample request body references a source container for all documents to be translated to a target language.
+
+For more information, _see_ [request parameters](#post-request-body).
+
```json
{
    "inputs": [
        {
            "source": {
-                "sourceUrl": "https://my.blob.core.windows.net/source-en"
+                "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>"
            },
            "targets": [
                {
-                    "targetUrl": "https://my.blob.core.windows.net/target-fr"
+                    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
                    "language": "fr"
                }
            ]
        }
    ]
}
```
### Translate a specific document in a container
-* **Required**: "storageType": "File"
-* This sample request returns a single document translated into two target languages:
+This sample request body references a single source document to be translated into two target languages.
+
+> [!IMPORTANT]
+> In addition to the request parameters [noted previously](#post-request-body), you must include `"storageType": "File"`. Otherwise the source URL is assumed to be at the container level.
```json
{
    "inputs": [
        {
            "storageType": "File",
            "source": {
-                "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx"
+                "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>/source-english.docx"
            },
            "targets": [
                {
-                    "targetUrl": "https://my.blob.core.windows.net/target-es/Target-Spanish.docx"
+                    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>/Target-Spanish.docx",
                    "language": "es"
                },
                {
-                    "targetUrl": "https://my.blob.core.windows.net/target-de/Target-German.docx",
+                    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>/Target-German.docx",
                    "language": "de"
                }
            ]
        }
    ]
}
```
-### Translate documents using a custom glossary
+### Translate all documents in a container using a custom glossary
+
+This sample request body references a source container for all documents to be translated to a target language using a glossary.
+
+For more information, _see_ [request parameters](#post-request-body).
```json
{
    "inputs": [
        {
            "source": {
-                "sourceUrl": "https://myblob.blob.core.windows.net/source",
+                "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>",
                "filter": {
                    "prefix": "myfolder/"
                }
            },
            "targets": [
                {
-                    "targetUrl": "https://myblob.blob.core.windows.net/target",
+                    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
                    "language": "es",
                    "glossaries": [
                        {
-                            "glossaryUrl": "https:// myblob.blob.core.windows.net/glossary/en-es.xlf",
+                            "glossaryUrl": "https://<storage_account_name>.blob.core.windows.net/<glossary_container_name>/en-es.xlf",
                            "format": "xliff"
                        }
                    ]
                }
            ]
        }
    ]
}
```
- Great! You've learned how to enable and use a system-assigned managed identity. With managed identity for Azure Resources and `Azure RBAC`, you granted Translator specific access rights to your storage resource without including SAS tokens with your HTTP requests.
+Great! You've learned how to enable and use a system-assigned managed identity. With managed identity for Azure Resources and `Azure RBAC`, you granted Translator specific access rights to your storage resource without including SAS tokens with your HTTP requests.
## Next steps
-**Quickstart**
- > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/get-started-with-rest-api.md)
-
-**Tutorial**
+> [Quickstart: Get started with Document Translation](../quickstarts/get-started-with-rest-api.md)
> [!div class="nextstepaction"]
-> [Access Azure Storage from a web app using managed identities](../../../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fcognitive-services%2ftranslator%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcognitive-services%2ftranslator%2ftoc.json)
+> [Tutorial: Access Azure Storage from a web app using managed identities](../../../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fcognitive-services%2ftranslator%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcognitive-services%2ftranslator%2ftoc.json)
cognitive-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/language-studio.md
Document Translation in Language Studio requires the following resources:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource) with [**system-assigned managed identity**](how-to-guides/create-use-managed-identities.md#enable-a-system-assigned-managed-identity) enabled and a [**Storage Blob Data Contributor**](how-to-guides/create-use-managed-identities.md#grant-access-to-your-storage-account) role assigned. For more information, *see* [**Managed identities for Document Translation**](how-to-guides/create-use-managed-identities.md). Also, make sure the region and pricing sections are completed as follows:
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource) with [**system-assigned managed identity**](how-to-guides/create-use-managed-identities.md#enable-a-system-assigned-managed-identity) enabled and a [**Storage Blob Data Contributor**](how-to-guides/create-use-managed-identities.md#grant-storage-account-access-for-your-translator-resource) role assigned. For more information, *see* [**Managed identities for Document Translation**](how-to-guides/create-use-managed-identities.md). Also, make sure the region and pricing sections are completed as follows:
* **Resource Region**. For this project, choose a geographic region such as **East US**. For Document Translation, [system-assigned managed identity](how-to-guides/create-use-managed-identities.md) isn't supported for the **Global** region.
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Containers enable you to run Cognitive Services APIs in your own environment, an
* [Speech-to-Text](../speech-service/speech-container-howto.md?tabs=stt) * [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt) * [Neural Text-to-Speech](../speech-service/speech-container-howto.md?tabs=ntts)
-* [Text Translation (Standard)](../translator/containers/translator-how-to-install-container.md#host-computer)
+* [Text Translation (Standard)](../translator/containers/translator-disconnected-containers.md)
* [Language Understanding (LUIS)](../LUIS/luis-container-howto.md) * Azure Cognitive Service for Language * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Title: Send a Named Entity Recognition (NER) request to your custom model
-description: Learn how to send a request for custom NER.
+description: Learn how to send requests for custom NER.
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/create-project.md
Title: Using Azure resources in custom NER
+ Title: Create custom NER projects and use Azure resources
-description: Learn about the steps for using Azure resources with custom NER.
+description: Learn how to create and manage projects and Azure resources for custom NER.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/deploy-model.md
Title: Submit a Custom Named Entity Recognition (NER) task
+ Title: How to deploy a custom NER model
-description: Learn about sending a request for Custom Named Entity Recognition (NER).
+description: Learn how to deploy a model for custom NER.
Previously updated : 10/12/2022 Last updated : 03/23/2023
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/call-api.md
Title: Send a text classification request to your model
-description: Learn how to send a request for custom text classification.
+ Title: Send a text classification request to your custom model
+description: Learn how to send requests for custom text classification.
Previously updated : 06/03/2022 Last updated : 03/23/2023 ms.devlang: csharp, python
-# Query deployment to classify text
+# Send text classification requests to your model
-After the deployment is added successfully, you can query the deployment to classify text based on the model you assigned to the deployment.
+After you've successfully deployed a model, you can query the deployment to classify text based on the model you assigned to the deployment.
You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api) or through the client libraries (Azure SDK).

## Test deployed model
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/deploy-model.md
Title: How to submit custom text classification tasks
+ Title: How to deploy a custom text classification model
description: Learn how to deploy a model for custom text classification.
Previously updated : 10/12/2022 Last updated : 03/23/2023
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/call-api.md
Title: How to send a Conversational Language Understanding job
+ Title: How to send requests to orchestration workflow
-description: Learn about sending a request for Conversational Language Understanding.
+description: Learn about sending requests for orchestration workflow.
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/create-project.md
Title: How to create projects and build schema in orchestration workflow
+ Title: Create orchestration workflow projects and use Azure resources
description: Use this article to learn how to create projects in orchestration workflow
Previously updated : 05/20/2022 Last updated : 03/23/2023
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
When using our embeddings models, keep in mind their limitations and risks.
### GPT-3 Models
-| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-| | -- | - | | - | -- | - |
-| ada | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019|
-| text-ada-001 | Yes | No | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019|
-| babbage | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019 |
-| text-babbage-001 | Yes | No | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
-| curie | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019 |
-| text-curie-001 | Yes | No | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
-| davinci<sup>1</sup> | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019|
-| text-davinci-001 | Yes | No | South Central US, West Europe | N/A | | |
-| text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 |
-| text-davinci-003 | Yes | No | East US, West Europe | N/A | 4,097 | Jun 2021 |
-| text-davinci-fine-tune-002<sup>1</sup> | Yes | No | N/A | East US, West Europe<sup>2</sup> | | |
-| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | Yes | No | East US, South Central US | N/A | 4,096 | Sep 2021
+These models can be used with Completion API requests. `gpt-35-turbo` is the only model that can be used with both Completion API requests and the Chat Completion API.
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | - | -- | - |
+| ada | N/A | South Central US <sup>2</sup> | 2,049 | Oct 2019|
+| text-ada-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019|
+| babbage | N/A | South Central US<sup>2</sup> | 2,049 | Oct 2019 |
+| text-babbage-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
+| curie | N/A | South Central US<sup>2</sup> | 2,049 | Oct 2019 |
+| text-curie-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
+| davinci<sup>1</sup> | N/A | Currently unavailable | 2,049 | Oct 2019|
+| text-davinci-001 | South Central US, West Europe | N/A | | |
+| text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 |
+| text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 |
+| text-davinci-fine-tune-002<sup>1</sup> | N/A | Currently unavailable | | |
+| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US | N/A | 4,096 | Sep 2021 |
<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model.
-<br><sup>2</sup> East US and West Europe are currently unavailable for new customers to fine-tune due to high demand. Please use US South Central region for fine-tuning.
+<br><sup>2</sup> East US and West Europe were previously available, but due to high demand they're currently unavailable for new customers to use for fine-tuning. Please use the South Central US region for fine-tuning.
<br><sup>3</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of a newer version of the gpt-35-turbo model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details.
+
### GPT-4 Models
-| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-| -- | -- | - | - | - | -- | - |
-| `gpt-4` <sup>1,</sup><sup>2</sup> (preview) | Yes | No | East US, South Central US | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (preview) | Yes | No | East US, South Central US | N/A | 32,768 | September 2021 |
+These models can only be used with the Chat Completion API.
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | | | |
+| `gpt-4` <sup>1,</sup><sup>2</sup> (preview) | East US, South Central US | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (preview) | East US, South Central US | N/A | 32,768 | September 2021 |
-<sup>1</sup> The model is in preview and only available by request.<br>
+<sup>1</sup> The model is in preview and [only available by request](https://aka.ms/oai/get-gpt4).<br>
<sup>2</sup> Currently, only version `0314` of this model is available.

### Codex Models
-| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-| | | | | | | |
-| code-cushman-001<sup>1</sup> | Yes | No | South Central US, West Europe | East US<sup>2</sup> , South Central US, West Europe<sup>2</sup> | 2,048 | |
-| code-davinci-002 | Yes | No | East US, West Europe | N/A | 8,001 | Jun 2021 |
-| code-davinci-fine-tune-002<sup>1</sup> | Yes | No | N/A | East US<sup>2</sup> , West Europe<sup>2</sup> | | |
+
+These models can only be used with Completions API requests.
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | | | |
+| code-cushman-001<sup>1</sup> | South Central US, West Europe | Currently unavailable | 2,048 | |
+| code-davinci-002 | East US, West Europe | N/A | 8,001 | Jun 2021 |
<sup>1</sup> The model is available for fine-tuning by request only. Currently we aren't accepting new requests to fine-tune the model.
-<br><sup>2</sup> East US is currently unavailable for new customers to fine-tune due to high demand. Please use US South Central region for US based training.
+
### Embeddings Models
-| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-| | | | | | | |
-| text-embedding-ada-002 | No | Yes | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 |
-| text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-similarity-babbage-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-similarity-curie-001 | No | Yes | East US, South Central US, West Europe | N/A | 2046 | Aug 2020 |
-| text-similarity-davinci-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-search-ada-doc-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-search-ada-query-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-search-babbage-doc-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-search-babbage-query-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-search-curie-doc-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-search-curie-query-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-search-davinci-doc-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| text-search-davinci-query-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| code-search-ada-code-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| code-search-ada-text-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| code-search-babbage-code-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
-| code-search-babbage-text-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+
+These models can only be used with Embedding API requests.
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | | | |
+| text-embedding-ada-002 | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 |
+| text-similarity-ada-001| East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-similarity-babbage-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-similarity-curie-001 | East US, South Central US, West Europe | N/A | 2046 | Aug 2020 |
+| text-similarity-davinci-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-ada-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-ada-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-babbage-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-babbage-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-curie-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-curie-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-davinci-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-davinci-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| code-search-ada-code-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| code-search-ada-text-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| code-search-babbage-code-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| code-search-babbage-text-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
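As a rough, illustrative summary of the API split noted in the model tables above: the Completion, Chat Completion, and Embeddings APIs take differently shaped payloads. The payloads below are sketches, and the endpoint paths in the comments use placeholder deployment names.

```python
# Completion API -- used by the GPT-3 and Codex series models (gpt-35-turbo
# also accepts Completion requests):
#   POST {endpoint}/openai/deployments/{deployment-name}/completions?api-version=...
completion_payload = {
    "prompt": "Translate 'hello' to French.",
    "max_tokens": 16,
}

# Chat Completion API -- used by gpt-35-turbo and the GPT-4 models only:
#   POST {endpoint}/openai/deployments/{deployment-name}/chat/completions?api-version=...
chat_payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Translate 'hello' to French."},
    ],
}

# Embeddings API -- used by the embeddings models only:
#   POST {endpoint}/openai/deployments/{deployment-name}/embeddings?api-version=...
embeddings_payload = {
    "input": "Sample text to embed",
}
```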
## Next steps
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
Previously updated : 03/21/2023 Last updated : 03/27/2023 recommendations: false keywords:
## March 2023
+### Fine-tuned model change
+
+Deployed customized models (fine-tuned models) that are inactive for more than 90 days will now automatically have their deployments deleted. **The underlying fine-tuned model is retained and can be redeployed at any time**. Once a fine-tuned model is deployed, it continues to incur an hourly hosting cost regardless of whether you're actively using the model. To learn more about planning and managing costs with Azure OpenAI, refer to our [cost management guide](/azure/cognitive-services/openai/how-to/manage-costs#base-series-and-codex-series-fine-tuned-models).
+
+### New Features
- **GPT-4 series models are now available in preview on Azure OpenAI**. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4). These models are currently available in the East US and South Central US regions.
- **New Chat Completion API for ChatGPT and GPT-4 models released in preview on 3/21**. To learn more, check out the [updated quickstarts](./quickstart.md) and [how-to article](./how-to/chatgpt.md).
communication-services Call Automation Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation-teams-interop.md
+
+ Title: Call Automation Teams Interop overview
+
+description: Learn about Teams interoperability with Azure Communication Services Call Automation.
++++ Last updated : 02/22/2023++++
+# Deliver expedient customer service by adding Microsoft Teams users in Call Automation workflows
++
+Businesses are looking for innovative ways to increase the efficiency of their customer service operations. Azure Communication Services Call Automation provides developers the ability to build programmable customer interactions using real-time event triggers to perform actions based on custom business logic. For example, with support for interoperability with Microsoft Teams, developers can use Call Automation APIs to add subject matter experts (SMEs) to a call. These SMEs, who use Microsoft Teams, can be added to an existing customer service call to help resolve a customer issue.
+
+This interoperability with Microsoft Teams over VoIP makes it easy for developers to implement per-region multi-tenant trunks that maximize value and reduce telephony infrastructure overhead. Each new tenant can use this setup within a few minutes after the Microsoft Teams admin has granted the necessary permissions to the Azure Communication Services resource.
+
+## Use-cases
+
+1. Expert Consultation: Businesses can invite subject matter experts into their customer service workflows for expedient issue resolution, and to improve their first call resolution rate.
+1. Extend customer service workforce with knowledge workers: Businesses can extend their customer service operation with more capacity during peak influx of customer service calls.
+
+## Scenario Showcase ΓÇô Expert Consultation
+A customer service agent, who is using a Contact Center Agent experience, now wants to add a subject matter expert, a knowledge worker (regular employee) at Contoso who uses Microsoft Teams, into a support call with a customer to provide expert advice to resolve a customer issue.
+
+The dataflow diagram depicts a canonical scenario where a Teams user is added to an ongoing ACS call for expert consultation.
+
+[ ![Diagram of calling flow for a customer service with Microsoft Teams and Call Automation.](./media/call-automation-teams-interop.png)](./media/call-automation-teams-interop.png#lightbox)
+
+1. Customer is on an ongoing call with a Contact Center customer service agent.
+1. During the call, the customer service agent needs expert help from one of the domain experts who is part of an engineering team. The agent identifies a knowledge worker who is available on Teams (presence via Graph APIs) and tries to add them to the call.
+1. Contoso Contact Center's SBC is already configured with ACS Direct Routing, where this add participant request is processed.
+1. The Contoso Contact Center provider has implemented a web service, using ACS Call Automation, that receives the "add participant" request.
+1. With Teams interop built into ACS Call Automation, ACS then uses the Teams user's ObjectId to add them to the call. The Teams user receives the incoming call notification. They accept and join the call.
+1. Once the Teams user has provided their expertise, they leave the call. The customer service agent and customer continue to wrap up their conversation.
+
+## Capabilities
+
+The following list presents the set of features that are currently available in the Azure Communication Services Call Automation SDKs for calls with Microsoft Teams users.
+
+| Feature Area | Capability | .NET | Java |
+| -| -- | | -- |
+| Pre-call scenarios | Place new outbound call to a Microsoft Teams user | ✔️ | ✔️ |
+| | Redirect (forward) a call to a Microsoft Teams user | ✔️ | ✔️ |
+| | Set custom display name for the callee when making a call offer to a Microsoft Teams user | Only on Microsoft Teams desktop client | Only on Microsoft Teams desktop client |
+| Mid-call scenarios | Add one or more endpoints to an existing call with a Microsoft Teams user | ✔️ | ✔️ |
+| | Play Audio from an audio file | ✔️ | ✔️ |
+| | Recognize user input through DTMF | ✔️ | ✔️ |
+| | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
+| | Blind Transfer a 1:1 call to another endpoint | ✔️ | ✔️ |
+| | Hang up a call (remove the call leg) | ✔️ | ✔️ |
+| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ |
+| Query scenarios | Get the call state | ✔️ | ✔️ |
+| | Get a participant in a call | ✔️ | ✔️ |
+| | List all participants in a call | ✔️ | ✔️ |
+| Call Recording* | Start/pause/resume/stop recording | ✔️ | ✔️ |
+
+> [!IMPORTANT]
+> Azure Communication Services call recording notifications in Teams clients are not supported. You must obtain consent from, and notify, the parties of recorded communications in a manner that complies with the laws applicable to each participant, for example, by using the Play API available in Call Automation.
+
+## Supported clients
+| Clients | Support |
+| --| -- |
+| Microsoft Teams Desktop | ✔️ |
+| Microsoft Teams Web | ❌ |
+| Microsoft Teams iOS | ❌ |
+| Microsoft Teams Android | ❌ |
+| Azure Communication Services signed in with Microsoft 365 Identity | ❌ |
+
+## Roadmap
+
+1. Support for Microsoft Teams Web is coming soon.
+1. Support for Azure Communication Services signed in with Microsoft 365 Identity is coming soon.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with Adding a Microsoft Teams user to an ongoing call using Call Automation](./../../quickstarts/call-automation/Callflows-for-customer-interactions.md)
+
+Here are some articles of interest to you:
+- Understand how your resource is [charged for various calling use cases](../pricing.md) with examples.
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-recording/bring-your-own-storage.md
-# Bring your own storage (BYOS) overview
+# Bring your own Azure storage overview
[!INCLUDE [Private Preview Disclaimer](../../../../communication-services/includes/private-preview-include-section.md)]
-Bring Your Own Storage (BYOS) for Call Recording allows you to specify an Azure blob storage account for storing call recording files. BYOS enables businesses to store their data in a way that meets their compliance requirements and business needs. For example, end-users could customize their own rules and access to the data, enabling them to store or delete content whenever they need it. BYOS provides a simple and straightforward solution that eliminates the need for developers to invest time and resources in downloading and exporting files.
+Bring Your Own Azure Storage for Call Recording allows you to specify an Azure blob storage account for storing call recording files. Bring your own Azure storage enables businesses to store their data in a way that meets their compliance requirements and business needs. For example, end-users could customize their own rules and access to the data, enabling them to store or delete content whenever they need it. Bring your own Azure Storage provides a simple and straightforward solution that eliminates the need for developers to invest time and resources in downloading and exporting files.
The same Azure Communication Services Call Recording APIs are used to export recordings to your Azure Blob Storage Container. While starting recording for a call, specify the container path where the recording needs to be exported. Upon recording completion, Azure Communication Services automatically fetches and uploads your recording to your storage.
The same Azure Communication Services Call Recording APIs are used to export rec
## Azure Managed Identities
-BYOS uses [Azure Managed Identities](../../../../active-directory/managed-identities-azure-resources/overview.md) to access user-owned resources securely. Azure Managed Identities provides an identity for the application to use when it needs to access Azure resources, eliminating the need for developers to manage credentials.
+Bring your own Azure storage uses [Azure Managed Identities](../../../../active-directory/managed-identities-azure-resources/overview.md) to access user-owned resources securely. Azure Managed Identities provides an identity for the application to use when it needs to access Azure resources, eliminating the need for developers to manage credentials.
## Known issues
-- Azure Communication Services will also store your files in a built-in storage for 48 hours even if the exporting with BYOS is successful.
-- Randomly, recording files are duplicated during the exporting process when using BYOS. Make sure you delete the duplicated file to avoid extra storage costs in your storage account.
+- Azure Communication Services will also store your files in a built-in storage for 48 hours even if the exporting is successful.
+- Randomly, recording files are duplicated during the exporting process. Make sure you delete the duplicated file to avoid extra storage costs in your storage account.
## Next steps
For more information, see the following articles:
-- Learn more about BYOS, check out the [BYOS Quickstart](../../../quickstarts/call-automation/call-recording/bring-your-own-storage.md).
+- Learn more about Bring your own Azure storage, check out the [BYO Azure Storage Quickstart](../../../quickstarts/call-automation/call-recording/bring-your-own-storage.md).
- Learn more about Call recording, check out the [Call Recording Quickstart](../../../quickstarts/voice-video-calling/get-started-call-recording.md).
- Learn more about [Call Automation](../../../quickstarts/call-automation/callflows-for-customer-interactions.md).
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/concepts.md
Title: SMS concepts in Azure Communication Services description: Learn about Communication Services SMS concepts.--++ - Previously updated : 06/30/2021+ Last updated : 03/20/2023
The following documents may be interesting to you:
- Get an SMS capable [phone number](../../quickstarts/telephony/get-phone-number.md) - Get a [short code](../../quickstarts/sms/apply-for-short-code.md) - [Phone number types in Azure Communication Services](../telephony/plan-solution.md)
+- Apply for [Toll-free verification](./sms-faq.md#toll-free-verification)
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Title: SMS FAQ
description: SMS FAQ -+ Previously updated : 08/19/2022 Last updated : 3/22/2023
Azure Communication Services customers can use Azure Event Grid to receive incom
### How are messages sent to landline numbers treated?
-In the United States, Azure Communication Services does not check for landline numbers and will attempt to send it to carriers for delivery. Customers will be charged for messages sent to landline numbers.
+In the United States, Azure Communication Services does not check for landline numbers and attempts to send messages to carriers for delivery. Customers are charged for messages sent to landline numbers.
### Can I send messages to multiple recipients?
Below is a list with examples of common URL shorteners you should avoid to maxim
### How does Azure Communication Services handle opt-outs for toll-free numbers? Opt-outs for US toll-free numbers are mandated and enforced by US carriers and cannot be overridden. -- **STOP** - If a text message recipient wishes to opt out, they can send ΓÇÿSTOPΓÇÖ to the toll-free number. The carrier sends the following default response for STOP: *"NETWORK MSG: You replied with the word "stop" which blocks all texts sent from this number. Text back "unstop" to receive messages again."*
+- **STOP** - If a text message recipient wishes to opt out, they can send 'STOP' to the toll-free number. The carrier sends the following default response for STOP: *"NETWORK MSG: You replied with the word "stop", which blocks all texts sent from this number. Text back "unstop" to receive messages again."*
- **START/UNSTOP** - If the recipient wishes to resubscribe to text messages from a toll-free number, they can send 'START' or 'UNSTOP' to the toll-free number. The carrier sends the following default response for START/UNSTOP: *"NETWORK MSG: You have replied "unstop" and will begin receiving messages again from this number."*-- Azure Communication Services will detect the STOP message and block all further messages to the recipient. The delivery report will indicate a failed delivery with status message as "Sender blocked for given recipient."
+- Azure Communication Services detects STOP messages and blocks all further messages to the recipient. The delivery report indicates a failed delivery with the status message "Sender blocked for given recipient."
- The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications. ### How does Azure Communication Services handle opt-outs for short codes?
-Azure communication service offers an opt-out management service for short codes that allows customers to configure responses to mandatory keywords STOP/START/HELP. Prior to provisioning your short code, you will be asked for your preference to manage opt-outs. If you opt-in to use it, the opt-out management service will automatically use your responses in the program brief for Opt-in/ Opt-out/ Help keywords in response to STOP/START/HELP keyword.
+Azure Communication Services offers an opt-out management service for short codes that allows customers to configure responses to the mandatory keywords STOP/START/HELP. Prior to provisioning your short code, you are asked for your preference to manage opt-outs. If you opt in to use it, the opt-out management service automatically uses the responses in your program brief for the opt-in/opt-out/help keywords in response to the STOP/START/HELP keywords.
*Example:* - **STOP** - If a text message recipient wishes to opt out, they can send 'STOP' to the short code. Azure Communication Services sends your configured response for STOP: *"Contoso Alerts: You're opted out and will receive no further messages."* - **START** - If the recipient wishes to resubscribe to text messages from a short code, they can send 'START' to the short code. Azure Communication Services sends your configured response for START: *"Contoso Promo Alerts: 3 msgs/week. Msg&Data Rates May Apply. Reply HELP for help. Reply STOP to opt-out."* - **HELP** - If the recipient wishes to get help with your service, they can send 'HELP' to the short code. Azure Communication Services sends the response you configured in the program brief for HELP: *"Thanks for texting Contoso! Call 1-800-800-8000 for support."*
-Azure Communication Services will detect the STOP message and block all further messages to the recipient. The delivery report will indicate a failed delivery with status message as ΓÇ£Sender blocked for given recipient.ΓÇ¥ The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
+Azure Communication Services detects STOP messages and blocks all further messages to the recipient. The delivery report indicates a failed delivery with the status message "Sender blocked for given recipient." The STOP, UNSTOP and START messages are relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
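The opt-out behavior described above (STOP blocks a recipient, START/UNSTOP unblocks them) can be mirrored in your own application so you never attempt sends to opted-out numbers. A minimal sketch, not part of the Azure Communication Services SDK; the function and variable names are illustrative:

```python
# Sketch: honor carrier opt-out keywords on inbound SMS so no further
# sends are attempted to recipients who texted STOP.
opted_out: set[str] = set()

def handle_inbound(sender: str, body: str) -> None:
    keyword = body.strip().upper()
    if keyword == "STOP":
        opted_out.add(sender)       # block all further sends to this number
    elif keyword in ("START", "UNSTOP"):
        opted_out.discard(sender)   # recipient resubscribed

def can_send(recipient: str) -> bool:
    return recipient not in opted_out
```

In production you would persist the opt-out list durably (the service still blocks sends on its side, but tracking opt-outs yourself avoids failed delivery attempts).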
## Short codes ### What is the eligibility to apply for a short code?
No. Texting to a toll-free number from a short code is not supported. You also w
Short codes do not fall under E.164 formatting guidelines and do not have a country code, or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes blade without any prefix. ### How long does it take to get a short code? What happens after a short code program brief application is submitted?
-Once you have submitted the short code program brief application in the Azure portal, the service desk works with the aggregators to get your application approved by each wireless carrier. This process generally takes 8-12 weeks. We will let you know any updates and the status of your applications via the email you provide in the application. For more questions about your submitted application, please email acstnrequest@microsoft.com.
+Once you have submitted the short code program brief application in the Azure portal, the service desk works with the aggregators to get your application approved by each wireless carrier. This process generally takes 8-12 weeks. All updates and status changes for your application are communicated via the email address you provide in the application. For more questions about your submitted application, email acstnrequest@microsoft.com.
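The short code formatting rule above (pass the bare 5-6 digit number, with no "+" sign or country code) can be checked with a small helper before calling the SMS API. A sketch under that assumption; the function name is illustrative:

```python
# Sketch: validate that a sender ID is a bare 5-6 digit short code
# (no "+" prefix, no country code), per the formatting rule above.
import re

def is_valid_short_code(sender: str) -> bool:
    # fullmatch requires the entire string to be 5 or 6 digits
    return re.fullmatch(r"\d{5,6}", sender) is not None
```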
## Toll-Free Verification ### What is toll free verification?
The toll-free verification process ensures that your services running on toll-fr
This verification is **required** for best SMS delivery experience. ### What happens if I don't verify my toll-free numbers?
-What happens to the unverified toll-free number depends on the destination of SMS traffic.
+ #### SMS to US phone numbers Effective **October 1, 2022**, unverified toll-free numbers sending messages to US phone numbers are subject to the following:
-1. **Stricter filtering** - SMS messages are more likely to get blocked due to strict filtering, preventing messages to be delivered (i.e., SMS messages with URLs might be blocked).
+1. **Stricter filtering** - SMS messages are more likely to get blocked due to strict filtering, preventing messages to be delivered (that is, SMS messages with URLs might be blocked).
2. **SMS volume thresholds**:-- **Daily Limit:** 2,000 messages-- **Weekly limit:** 12,000 messages-- **Monthly limit:** 25,000 messages
+Effective April 1, 2023, the industry's toll-free aggregator is implementing new limits on messaging traffic for restricted and pending toll-free numbers. Messaging that exceeds a limit returns Error Code 795/4795: tfn-not-verified.
+
+New limits are as follows:
+
+|Limit type |Verification Status|Current limit| Limit effective April 1, 2023 |
+|--|--|--|--|
+|Daily limit |Unverified | 2,000 |500|
+|Weekly limit| Unverified| 12,000| 1,000|
+|Monthly Limit| Unverified| 25,000| 2,000|
+|Daily limit| Pending Verification| No Limit| 2,000|
+|Weekly limit| Pending Verification| No Limit| 6,000|
+|Monthly Limit| Pending Verification| 500,000| 10,000|
+|Daily limit| Verified | No Limit| No Limit|
+|Weekly limit| Verified| No Limit| No Limit|
+|Monthly Limit| Verified| No Limit| No Limit|
+
-This would not apply to TFNs in a [pending or verified status](#what-do-the-different-application-statuses-verified-pending-and-unverified-mean).
> [!IMPORTANT] > Unverified SMS traffic that exceeds the daily limit or is filtered for spam will have a [4010 error code](../troubleshooting-info.md#sms-error-codes) returned for both scenarios. > > The unverified volume daily cap is a daily maximum limit (not a guaranteed daily minimum), so unverified traffic can still experience message filtering even when it's well below the daily limits.
+> [!IMPORTANT]
+> In the near future, the verification process will need to be completed before sending any traffic on a toll-free number. The official date will be shared in the coming weeks. In the meantime, please start to prepare for this change in your onboarding processes.
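The April 1, 2023 volume limits in the table above can be expressed as a lookup that your sending pipeline pre-checks before the service returns error 795/4795. A sketch only; the limits come from the table, while the function and structure are illustrative:

```python
# Sketch: April 1, 2023 volume limits by verification status.
# math.inf stands in for "No Limit".
import math

LIMITS = {  # status -> (daily, weekly, monthly)
    "unverified": (500, 1_000, 2_000),
    "pending": (2_000, 6_000, 10_000),
    "verified": (math.inf, math.inf, math.inf),
}

def within_limits(status: str, sent_today: int, sent_this_week: int,
                  sent_this_month: int) -> bool:
    daily, weekly, monthly = LIMITS[status]
    # True only if one more message would stay under every window's cap
    return (sent_today < daily and sent_this_week < weekly
            and sent_this_month < monthly)
```

Note that, as the important note above says, the daily cap is a maximum, not a guaranteed minimum; staying under these limits does not exempt unverified traffic from filtering.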
+ #### SMS to Canadian phone numbers Effective **October 1, 2022**, unverified toll-free numbers sending messages to Canadian destinations will have its traffic **blocked**. To unblock the traffic, a verification application needs to be submitted and be in [pending or verified status](#what-do-the-different-application-statuses-verified-pending-and-unverified-mean). ### What do the different application statuses (verified, pending and unverified) mean? -- **Verified:** Verified numbers have gone through the toll-free verification process and have been approved. Their traffic is subjected to limited filters. If traffic does trigger any filters, that specific content will be blocked but the number will not be automatically blocked.-- **Pending**: Numbers in pending state have an associated toll-free verification form being reviewed by the toll-free messaging aggregator. They can send at a lower throughput than verified numbers, but higher than unverified numbers. Blocking can be applied to individual content or there can be an automatic block of all traffic from the number. These numbers will remain in this pending state until a decision has been made on verification status.-- **Unverified:** Unverified numbers have either 1) not submitted a verification application or 2) have had their application denied. These numbers are subject to the highest amount of filtering, and numbers in this state will automatically get shut off if any spam or unwanted traffic is detected.
+- **Verified:** Verified numbers have gone through the toll-free verification process and have been approved. Their traffic is subjected to limited filters. If traffic does trigger any filters, that specific content is blocked but the number is not automatically blocked.
+- **Pending**: Numbers in pending state have an associated toll-free verification form being reviewed by the toll-free messaging aggregator. They can send at a lower throughput than verified numbers, but higher than unverified numbers. Blocking can be applied to individual content or there can be an automatic block of all traffic from the number. These numbers remain in this pending state until a decision has been made on verification status.
+- **Unverified:** Unverified numbers have either 1) not submitted a verification application or 2) have had their application denied. These numbers are subject to the highest amount of filtering, and numbers in this state automatically get shut off if any spam or unwanted traffic is detected.
### What happens after I submit the toll-free verification form?++ After submission of the form, we will coordinate with our downstream peer to get the application verified by the toll-free messaging aggregator. While we are reviewing your application, we may reach out to you for more information. - From Application Submitted to Pending = **1-5 business days**
Updates for changes and the status of your applications will be communicated via
To submit a toll-free verification application, navigate to Azure Communication Service resource that your toll-free number is associated with in Azure portal and navigate to the Phone numbers blade. Click on the Toll-Free verification application link displayed as "Submit Application" in the infobox at the top of the phone numbers blade. Complete the form. ### What is considered a high quality toll-free verification application?
-The higher the quality of the application the higher chances your application will enter [pending state](#what-do-the-different-application-statuses-verified-pending-and-unverified-mean) faster.
+The higher the quality of your application, the faster it enters [pending state](#what-do-the-different-application-statuses-verified-pending-and-unverified-mean).
Pointers to ensure you are submitting a high quality application: - Phone number(s) listed is/are Toll-free number(s)
Toll-free verification (TFV) involves an integration between Microsoft and the T
### What is the SMS character limit? The size of a single SMS message is 140 bytes. The character limit per single message being sent depends on the message content and encoding used. Azure Communication Services supports both GSM-7 and UCS-2 encoding. -- **GSM-7** - A message containing text characters only will be encoded using GSM-7-- **UCS-2** - A message containing unicode (emojis, international languages) will be encoded using UCS-2
+- **GSM-7** - A message containing text characters only is encoded using GSM-7
+- **UCS-2** - A message containing unicode (emojis, international languages) is encoded using UCS-2
This table shows the maximum number of characters that can be sent per SMS segment to carriers:
Azure Communication Services supports sending and receiving of long messages ove
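The encoding rules above determine how many segments a message occupies: a 140-byte segment holds 160 GSM-7 characters (153 each when concatenated) or 70 UCS-2 characters (67 each when concatenated). A simplified sketch of that calculation, assuming any character outside the GSM-7 basic alphabet forces UCS-2 for the whole message and ignoring GSM-7 extension characters that count as two:

```python
# Sketch: estimate SMS segment count from message content.
# Simplification: GSM-7 extension characters (e.g. the euro sign) that
# occupy two septets are not handled here.
import math

GSM7_BASIC = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def segment_count(message: str) -> int:
    if all(ch in GSM7_BASIC for ch in message):
        single, multi = 160, 153   # GSM-7 capacities
    else:
        single, multi = 70, 67     # any non-GSM-7 character forces UCS-2
    if len(message) <= single:
        return 1
    return math.ceil(len(message) / multi)
```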
### Are there any limits on sending messages?
-To ensure that we continue offering the high quality of service consistent with our SLAs, Azure Communication Services applies rate limits (different for each primitive). Developers who call our APIs beyond the limit will receive a 429 HTTP Status Code Response. If your company has requirements that exceed the rate-limits, please email us at phone@microsoft.com.
+To ensure that we continue offering the high quality of service consistent with our SLAs, Azure Communication Services applies rate limits (different for each primitive). Developers who call our APIs beyond the limit receive a 429 HTTP status code response. If your company has requirements that exceed the rate limits, email us at phone@microsoft.com.
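A common way to handle the 429 response described above is retrying with exponential backoff. A minimal sketch; `send_sms` here is a hypothetical stand-in for your SDK call, not an Azure Communication Services API:

```python
# Sketch: retry an SMS send when the service returns HTTP 429 (rate limited).
import random
import time

def send_with_backoff(send_sms, message, max_attempts=5, base_delay=1.0):
    for attempt in range(max_attempts):
        status = send_sms(message)
        if status != 429:
            return status
        # Exponential backoff with jitter before retrying a throttled request
        time.sleep(min(base_delay * 2 ** attempt, 30) + random.random() * base_delay)
    return 429  # give up; surface the throttle to the caller
```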
Rate Limits for SMS:
Rate Limits for SMS:
## Carrier Fees ### What are the carrier fees for SMS?
-US and CA carriers charge an added fee for SMS messages sent and/or received from toll-free numbers and short codes. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Azure Communication Services charges a standard carrier fee per message segment. Carrier fees are subject to change by mobile carriers. Please refer to [SMS pricing](../sms-pricing.md) for more details.
+US and CA carriers charge an added fee for SMS messages sent and/or received from toll-free numbers and short codes. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Azure Communication Services charges a standard carrier fee per message segment. Carrier fees are subject to change by mobile carriers. Refer to [SMS pricing](../sms-pricing.md) for more details.
-### When will we come to know of changes to these surcharges?
-As with similar Azure services, customers will be notified at least 30 days prior to the implementation of any price changes. These charges will be reflected on our SMS pricing page along with the effective dates.
+### How will we learn of changes to these surcharges?
+As with similar Azure services, customers are notified at least 30 days prior to the implementation of any price changes. These charges are reflected on our SMS pricing page along with the effective dates.
## Emergency support ### Can a customer use Azure Communication Services for emergency purposes?
-Azure Communication Services does not support text-to-911 functionality in the United States, but itΓÇÖs possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCCΓÇÖs text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you'll be responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the userΓÇÖs mobile device to deliver 911 texts through the underlying mobile carrier.
+Azure Communication Services does not support text-to-911 functionality in the United States, but it's possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC's text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you are responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user's mobile device to deliver 911 texts through the underlying mobile carrier.
communication-services Toll Free Verification Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/toll-free-verification-guidelines.md
+
+ Title: Toll-free verification guidelines
+
+description: Learn about how to fill the toll-free verification form
+++++ Last updated : 03/15/2023+++++
+# Toll-free verification guidelines
+
+In this document, we review the guidelines for filling out an application to verify your toll-free number. For details on the toll-free verification process and timelines, check the [toll-free verification FAQ](./sms-faq.md#toll-free-verification). The toll-free verification application consists of five sections:
+
+- Application Type
+- Company Details
+- Program Details
+- Volume
+- Templates
+
+## Application type
+### Country or region
+The primary location that your toll-free number sends messages to. Toll-free numbers are domestic within the North America region. Your newly acquired toll-free number can only send messages to the US, PR, and CA (regardless of where it's acquired).
+
+### Associated phone number(s)
+
+This drop down displays all the toll-free numbers you have in the Azure Communication Services resource. You're required to select the toll-free numbers that you would like to get verified. If you don't have a toll-free number, navigate to the phone numbers blade to acquire a toll-free number first.
+
+### Are you using more than one sending phone number?
+
+If you're using multiple sending numbers for the same use case, justify how you're using the multiple numbers. If you need multiple numbers for multiple environments (development, QA, production), state that here.
+
+## Company details
+You need to provide information about your company and point of contact. Status updates for your toll-free verification application are sent to the point of contact email address.
+
+## Program content
+Message senders are required to provide detailed information on the content of their SMS campaign and to ensure that the customer consents to receive text messages and understands the nature of the program.
+
+### Program description
+You need to describe the program for which the toll-free number is used to send SMS. Include who receives the messages and how frequently messages are sent.
+
+### Opt-in
+
+The general rules of thumb for opt-in are:
+- Make sure the opt-in flow is thoroughly detailed.
+- Consumer consent must be collected by the direct (first) party sending the messages, even if a third party helps the direct party send messages.
+- Ensure there's explicitly stated consent disclaimer language at the time of collection (that is, when the phone number is collected there must be a disclosure about opting in to messaging).
+- If your message has marketing or promotional content, it must be optional for customers to opt in.
+
+ Here are some tips on how to show the proof of your opt-in workflow:
+
+### Opt-in URL
+
+|Type of Opt-In| Tips|
+|--|--|
+|Website | Screenshots of the web form where the end customer adds a phone number and agrees to receive SMS messages. This screenshot must explicitly state the consent disclaimer language at the time of collection|
+|Keyword or QR Code Opt-in| Image or screenshot of where the customer discovers the keyword/ QR Code in order to opt-in to these messages|
+|Verbal/IVR opt-in|Provide a screenshot record of opt-in via verbal in your database/ CRM to show how the opt-in data is stored. (that is, a check box on their CRM saying that the customer opted in and the date) OR an audio recording of the IVR flow.|
+|Point of Sale | For POS opt-ins on a screen/tablet, provide screenshot of the form. For verbal POS opt-ins of informational traffic, provide a screenshot of the database or a record of the entry. |
+|2FA/OTP| Provide a screenshot of the process to receive the initial text.|
+|Paper form | Upload the form and make sure it includes XXXX. |
+
+ ## Volume
+
+### Expected total messages sent
+In this field, you're required to provide an estimate of total messages sent per month.
+
+## Templates
+Message senders are required to disclose all the types/categories of messages with samples that are sent over the toll-free number.
+
+#### Examples
+- Contoso Promo Alerts: 3 msgs/week. Msg&Data Rates May Apply. Reply HELP for help. Reply STOP to opt-out.
+- Contoso: Your reservation has been confirmed for 30th February 2022. Txt R to reschedule. Txt HELP or STOP. Msg&Data rates may apply.
+
+ ## Next steps
+
+> [!div class="nextstepaction"]
+> [Acquire a toll-free number](../../quickstarts/telephony/get-phone-number.md)
+
+> [!div class="nextstepaction"]
+> [Apply for a short code](../../quickstarts/sms/apply-for-short-code.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [SMS FAQ](./sms-faq.md#toll-free-verification)
+- Familiarize yourself with the [SMS SDK](../sms/sdk-features.md)
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Communication Services connections require internet connectivity to specific por
| Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- | | Media traffic | Range of Azure public cloud IP addresses 20.202.0.0/16 The range provided above is the range of IP addresses on either Media processor or ACS TURN service. | UDP 3478 through 3481, TCP ports 443 |
-| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.office.com| TCP 443, 80 |
+| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azure.com, *.office.com| TCP 443, 80 |
The endpoints below should be reachable for U.S. Government GCC High customers only
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/call-recording/bring-your-own-storage.md
zone_pivot_groups: acs-csharp-java
-# Call recording: Bring your own storage quickstart
+# Call recording: Bring your own Azure storage quickstart
[!INCLUDE [Private Preview](../../../includes/private-preview-include-section.md)]
-This quickstart gets you started with BYOS (Bring your own storage) for Call Recording. To start using BYOS, make sure you're familiar with the [Call Recording APIs](../../voice-video-calling/get-started-call-recording.md).
+This quickstart gets you started with Bring your own Azure storage for Call Recording. To start using Bring your own Azure Storage functionality, make sure you're familiar with the [Call Recording APIs](../../voice-video-calling/get-started-call-recording.md).
## Pre-requisite: Setting up Managed Identity and RBAC role assignments
communication-services Teams Interop Call Automation Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/teams-interop-call-automation-quickstart.md
+
+ Title: Azure Communication Services Call Automation how-to for adding Microsoft Teams User into an existing call
+
+description: Provides a how-to for adding a Microsoft Teams user to a call with Call Automation.
++++ Last updated : 03/28/2023+++
+zone_pivot_groups: acs-csharp-java
++
+# Add a Microsoft Teams user to an existing call using Call Automation
++
+In this quickstart, we use the Azure Communication Services Call Automation APIs to add and remove a Teams user, and to transfer a call to a Teams user.
+
+You need to be part of the Azure Communication Services TAP program. It's likely that you're already part of this program, and if you aren't, sign up using https://aka.ms/acs-tap-invite. To access the specific Teams Interop functionality for Call Automation, submit your Teams Tenant IDs and Azure Communication Services Resource IDs by filling out this form: https://aka.ms/acs-ca-teams-tap. You need to fill out the form every time you need a new tenant ID and new resource ID allow-listed.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+- A Microsoft Teams tenant with administrative privileges.
+- A deployed [Communication Service resource](../../quickstarts/create-communication-resource.md) and valid connection string found by selecting Keys in left side menu on Azure portal.
+- [Acquire a PSTN phone number from the Communication Service resource](../../quickstarts/telephony/get-phone-number.md). Note the phone number you acquired to use in this quickstart.
+- An Azure Event Grid subscription to receive the `IncomingCall` event.
+- The latest [Azure Communication Service Call Automation API library](./callflows-for-customer-interactions.md) for your operating system.
+- A web service that implements the Call Automation API library, follow [this tutorial](./callflows-for-customer-interactions.md).
+
+## Step 1: Authorization for your Azure Communication Services Resource to enable calling to Microsoft Teams users
+
+To enable calling through Call Automation APIs, a [Microsoft Teams Administrator](/azure/active-directory/roles/permissions-reference#teams-administrator) or [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) must explicitly grant the ACS resource(s) access to their tenant to allow calling.
+
+[Set-CsTeamsAcsFederationConfiguration (MicrosoftTeamsPowerShell)](/powershell/module/teams/set-csteamsacsfederationconfiguration)
+Tenant level setting that enables/disables federation between their tenant and specific ACS resources.
+
+[Set-CsExternalAccessPolicy (SkypeForBusiness)](/powershell/module/skype/set-csexternalaccesspolicy)
+User policy that allows the admin to further control which users in their organization can participate in federated communications with ACS users.
+
+## Step 2: Use the Graph API to get Azure AD object ID for Teams users and optionally check their presence
+A Teams user's Azure Active Directory (Azure AD) object ID (OID) is required to add them to an ACS call or transfer a call to them. The OID can be retrieved through 1) the Office portal, 2) the Azure AD portal, 3) Azure AD Connect, or 4) the Graph API. The example below uses the Graph API.
+
+Consent must be granted by an Azure AD admin before Graph can be used to search for users; learn more in the [Microsoft Graph Security API overview](/graph/security-concept-overview) document. The OID can be retrieved using the list users API to search for users. The following shows a search by display name, but other properties can be searched as well:
+
+[List users using Microsoft Graph v1.0](/graph/api/user-list):
+```rest
+Request:
+ https://graph.microsoft.com/v1.0/users?$search="displayName:Art Anderson"
+Permissions:
+ Application and delegated. Refer to documentation.
+Response:
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users",
+ "value": [
+ {
+ "displayName": "Art Anderson",
+ "mail": "artanderson@contoso.com",
+ "id": "fc4ccb5f-8046-4812-803f-6c344a5d1560"
+ }
+```
+Optionally, Presence for a user can be retrieved using the get presence API and the user ObjectId. Learn more on the [Microsoft Graph v1.0 documentation](/graph/api/presence-get).
+```rest
+Request:
+https://graph.microsoft.com/v1.0/users/fc4ccb5f-8046-4812-803f-6c344a5d1560/presence
+Permissions:
+Delegated only. Application not supported. Refer to documentation.
+Response:
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('fc4ccb5f-8046-4812-803f-6c344a5d1560')/presence/$entity",
+ "id": "fc4ccb5f-8046-4812-803f-6c344a5d1560",
+ "availability": "Offline",
+ "activity": "Offline"
+
+```
+
+## Step 3: Add a Teams user to an existing ACS call controlled by Call Automation APIs
+You need to complete the prerequisite step and have a web service app to control an ACS call. Using the callConnection object, add a participant to the call.
+
+```csharp
+CallAutomationClient client = new CallAutomationClient("<Connection_String>");
+AnswerCallResult answer = await client.AnswerCallAsync(incomingCallContext, new Uri("<Callback_URI>"));
+await answer.Value.CallConnection.AddParticipantAsync(
+    new CallInvite(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>"))
+ {
+ SourceDisplayName = "Jack (Contoso Tech Support)"
+ });
+```
+On the Microsoft Teams desktop client, Jack's call will be sent to the Microsoft Teams user through an incoming call toast notification.
+![Screenshot of Microsoft Teams desktop client, Jack's call is sent to the Microsoft Teams user through an incoming call toast notification.](./media/incoming-call-toast-notification-teams-user.png)
+
+After the Microsoft Teams user accepts the call, the in-call experience for the Microsoft Teams user will have all the participants displayed on the Microsoft Teams roster.
+![Screenshot of Microsoft Teams user accepting the call and entering the in-call experience for the Microsoft Teams user.](./media/active-call-teams-user.png)
+
+## Step 4: Remove a Teams user from an existing ACS call controlled by Call Automation APIs
+```csharp
+await answer.Value.CallConnection.RemoveParticipantAsync(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>"));
+```
+
+### Optional feature: Transfer to a Teams user from an existing ACS call controlled by Call Automation APIs
+```csharp
+await answer.Value.CallConnection.TransferCallToParticipantAsync(new CallInvite(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>")));
+```
+### How to tell if your tenant isn't enabled for this preview
+If your tenant isn't enabled for this preview, an error like the following appears during Step 1:
+![Screenshot showing the error during Step 1.](./media/teams-federation-error.png)
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn about [Play action](../../concepts/call-automation/play-Action.md) to play audio in a call.
+- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/get-started.md
Last updated 06/30/2021
-zone_pivot_groups: acs-azcli-js-csharp-java-python-swift-android
+zone_pivot_groups: acs-azcli-js-csharp-java-python-swift-android-power-platform
# Quickstart: Add Chat to your App
Get started with Azure Communication Services by using the Communication Service
[!INCLUDE [Chat with iOS SDK](./includes/chat-swift.md)] ::: zone-end + ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/logic-app.md
- Title: Send a chat message in Power Automate-
-description: In this quickstart, learn how to send a chat message in Azure Logic Apps workflows by using the Azure Communication Services Chat connector.
---- Previously updated : 07/20/2022-----
-# Quickstart: Send a chat message in Power Automate
-
-You can create automated workflows that send chat messages by using the Azure Communication Services Chat connector. This quickstart shows you how to create a chat, add a participant, send a message, and list messages in an existing workflow.
-
-## Prerequisites
--- An Azure account with an active subscription, or [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- An active Azure Communication Services resource, or [create a Communication Services resource](../create-communication-resource.md).--- An active Azure Logic Apps resource, or [create a logic app workflow with the trigger that you want to use](../../../logic-apps/quickstart-create-example-consumption-workflow.md). Currently, the Communication Services Chat connector provides only actions, so your logic app requires a trigger, at minimum.-
-## Create user
-
-Complete these steps in Power Automate with your Power Automate flow open in edit mode.
-
-To add a new step in your workflow by using the Communication Services Identity connector:
-
-1. In the designer, under the step where you want to add the new action, select **New step**. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (+), and then select **Add an action**.
-
-1. In the **Choose an operation** search box, enter **Communication Services Identity**. In the list of actions list, select **Create a user**.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action.":::
-
-1. Enter the connection string. To get the connection string URL in the [Azure portal](https://portal.azure.com/), go to the Azure Communication Services resource. In the resource menu, select **Keys**, and then select **Connection string**. Select the copy icon to copy the connection string.
-
- :::image type="content" source="./media/logic-app/azure-portal-connection-string.png" alt-text="Screenshot that shows the Keys pane for an Azure Communication Services resource." lightbox="./media/logic-app/azure-portal-connection-string.png":::
-
-1. Enter a name for the connection.
-
-1. Select **Show advanced options**, and then select the token scope. The action generates an access token and its expiration time with the specified scope. This action also generates a user ID that's a Communication Services user identity.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action options.":::
-
-1. In **Token Scopes Item**, select **chat**.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action-advanced.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector advanced options.":::
-
-1. Select **Create**. The user ID and an access token are shown.
-
-## Create a chat thread
-
-1. Add a new action.
-
-1. In the **Choose an operation** search box, enter **Communication Services Chat**. In the list of actions, select **Create chat thread**.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-chat-thread.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Create a chat thread action.":::
-
-1. Enter the Communication Services endpoint URL. To get the endpoint URL in the [Azure portal](https://portal.azure.com/), go to the Azure Communication Services resource. In the resource menu, select **Keys**, and then select **Endpoint**.
-
-1. Enter a name for the connection.
-
-1. Select the access token that was generated in the preceding section, and then add a chat thread topic description. Add the created user and enter a name for the participant.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-chat-thread-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Create chat thread action dialog.":::
-
-## Send a message
-
-1. Add a new action.
-
-1. In the **Choose an operation** search box, enter **Communication Services Chat**. In the list of actions, select **Send message to chat thread**.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-chat-message.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action.":::
-
-1. Enter the access token, thread ID, content, and name.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-chat-message-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action dialog.":::
-
-## List chat thread messages
-
-To verify that you sent a message correctly:
-
-1. Add a new action.
-
-1. In the **Choose an operation** search box, enter **Communication Services Chat**. In the list of actions, select **List chat thread messages**.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-list-chat-messages.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector List chat messages action.":::
-
-1. Enter the access token and thread ID.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-list-chat-messages-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector List chat messages action dialog.":::
-
-## Test your logic app
-
-To manually start your workflow, on the designer toolbar, select **Run**. The workflow creates a user, issues an access token for that user, and then removes the token and deletes the user. For more information, review [How to run your workflow](../../../logic-apps/quickstart-create-example-consumption-workflow.md#run-workflow).
-
-Now, select **List chat thread messages**. In the action outputs, check for the message that was sent.
--
-## Clean up resources
-
-To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [How to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources).
-
-To clean up your logic app workflow and related resources, review [how to clean up Azure Logic Apps resources](../../../logic-apps/quickstart-create-example-consumption-workflow.md#clean-up-resources).
-
-## Next steps
-
-In this quickstart, you learned how to create a user, create a chat thread, and send a message by using the Communication Services Identity and Communication Services Chat connectors. To learn more, review [Communication Services Chat connector](/connectors/acschat/).
-
-Learn how to [create and manage Communication Services users and access tokens](../chat/logic-app.md).
-
-Learn how to [send an email message in Power Automate by using Communication Services](../email/logic-app.md).
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/logic-app.md
- Title: Quickstart -Send email message in Power Automate with Azure Communication Services in Microsoft Power Automate-
-description: In this quickstart, learn how to send an email in Azure Logic Apps workflows by using the Azure Communication Services Email connector.
---- Previously updated : 07/20/2022-----
-# Quickstart: Send email message in Power Automate with Azure Communication Services
-
-This quickstart will show how to send emails using the Azure Communication Services Email connector in your Power Automate workflows.
--
-## Prerequisites
--- An Azure account with an active subscription, or [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- An active Azure Communication Services resource, or [create a Communication Services resource](../create-communication-resource.md).--- An active Azure Logic Apps resource (logic app), or [create a Consumption logic app workflow with the trigger that you want to use](../../../logic-apps/quickstart-create-example-consumption-workflow.md). Currently, the Azure Communication Services Email connector provides only actions, so your logic app workflow requires a trigger, at minimum.--- An Azure Communication Services Email resource with a [configured domain](../email/create-email-communication-resource.md) or [custom domain](../email/add-custom-verified-domains.md).--- An Azure Communication Services resource [connected with an Azure Email domain](../email/connect-email-communication-resource.md).---
-## Send email
-
-Add a new step in your workflow by using the Azure Communication Services Email connector, follow these steps in Power Automate with your Power Automate flow open in edit mode.
-
-1. On the designer, under the step where you want to add the new action, select New step. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (+), and select Add an action.
-
-1. In the Choose an operation search box, enter Communication Services Email. From the actions list, select Send email.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-email.png" alt-text="Screenshot that shows the Azure Communication Services Email connector Send email action.":::
-
-1. Provide the Connection String. This can be found in the [Microsoft Azure](https://portal.azure.com/), within your Azure Communication Service Resource, on the Keys option from the left menu > Connection String
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connection-string.png" alt-text="Screenshot that shows the Azure Communication Services Connection String." lightbox="./media/logic-app/azure-communications-services-connection-string.png":::
-
-1. Provide a Connection Name
-
-1. Select Send email
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-email.png" alt-text="Screenshot that shows the Azure Communication Services Email connector Send email action.":::
-
-1. Fill the **From** input field using an email domain configured in the [pre-requisites](#prerequisites). Also fill the To, Subject and Body field as shown below
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-email-input.png" alt-text="Screenshot that shows the Azure Communication Services Email connector Send email action input.":::
---
-## Test your logic app
-
-To manually start your workflow, on the designer toolbar, select **Run**. The workflow should create a user, issue an access token for that user, then remove it and delete the user. For more information, review [how to run your workflow](../../../logic-apps/quickstart-create-example-consumption-workflow.md#run-workflow). You can check the outputs of these actions after the workflow runs successfully.
-
-You should have an email in the address specified. Additionally, you can use the Get email message status action to check the status of emails send through the Send email action. To learn more actions, check the [Azure Communication Services Email connector](/connectors/acsemail/) documentation.
-
-## Clean up resources
-
-To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [how to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources).
-
-To clean up your logic app workflow and related resources, review [how to clean up Logic Apps resources](../../../logic-apps/quickstart-create-example-consumption-workflow.md#clean-up-resources).
-
-## Next steps
-
-To learn more about [how to send a chat message](../chat/logic-app.md) from Power Automate using Azure Communication Services.
-
-To learn more about access tokens check [Create and Manage Azure Communication Services users and access tokens](../chat/logic-app.md).
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
Last updated 04/15/2022
-zone_pivot_groups: acs-azcli-js-csharp-java-python
+zone_pivot_groups: acs-azcli-js-csharp-java-python-power-platform
# Quickstart: How to send an email using Azure Communication Service
In this quickstart, you'll learn how to send email using our Email SDKs.
[!INCLUDE [Send Email with Python SDK](./includes/send-email-python.md)] ::: zone-end + ## Troubleshooting To troubleshoot issues related to email delivery, you can get status of the email delivery to capture delivery details.
-## Clean up resources
+## Clean up Azure Communication Service resources
If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
You may also want to:
- Learn about [Email concepts](../../concepts/email/email-overview.md) - Familiarize yourself with [email client library](../../concepts/email/sdk-features.md)
+ - Learn more about [how to send a chat message](../chat/logic-app.md) from Power Automate using Azure Communication Services.
+ - Learn more about access tokens check [Create and Manage Azure Communication Services users and access tokens](../chat/logic-app.md).
communication-services Apply For Toll Free Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/apply-for-toll-free-verification.md
+
+ Title: Apply for toll-free verification
+
+description: Learn about how to apply for toll-free verification
+++++ Last updated : 03/16/2023+++++
+# Quickstart: Apply for toll-free verification
+Get started with reliable SMS service on toll-free numbers by submitting a toll-free verification application. Toll-free verification maximizes the deliverability of your messages with low to no traffic filtering.
+
+## Prerequisites
+- [An active Communication Services resource.](../create-communication-resource.md)
+- [An SMS-enabled toll-free number](../telephony/get-phone-number.md)
+
+### What is toll-free verification?
+The toll-free verification process ensures that your services running on toll-free numbers (TFNs) comply with carrier policies and [industry best practices](../../concepts/sms/messaging-policy.md). It also provides relevant service information to the downstream carriers and reduces the likelihood of false-positive filtering and wrongful spam blocks. For details about the toll-free verification process and timelines, check the [toll-free verification FAQ](../../concepts/sms/sms-faq.md#toll-free-verification).
+
+Verification is **required** for the best SMS delivery experience.
+
+## Submit a toll-free verification
+To begin toll-free verification, go to your Communication Services resource on the [Azure portal](https://portal.azure.com).
++
+## Apply for a toll-free verification
+Navigate to the **Regulatory Documents** blade in the resource menu and select the **Add** button to launch the toll-free verification application wizard. For detailed guidance on how to fill out the application, check the [toll-free verification filling guidelines](../../concepts/sms/toll-free-verification-guidelines.md).
++
+A toll-free verification application consists of the following five sections:
+### Application type
+First, choose the country/region and the toll-free numbers that you want to verify. If you haven't acquired a toll-free number yet, acquire one first and then return to this application. If you select more than one toll-free number to verify, you need to justify how the multiple numbers are used in the campaign.
++
+### Contact details
+This section requires you to provide information about your company and a point of contact, in case we need additional information about this application.
++
+### Program content
+This section requires you to provide a description of the SMS campaign, the opt-in method (how you plan to get consent from the customer to receive SMS), and screenshots of the selected opt-in method.
++
+### Volume details
+This section requires you to provide an estimate of the number of messages you plan to send per month.
++
+### Template information
+This section captures sample messages related to your campaign. Provide samples of each type of message that you send to recipients.
++
+### Review
+Once completed, review the toll-free verification details and submit the completed application through the Azure portal.
+
+
+This application is automatically sent to the toll-free messaging aggregator for review. The aggregator reviews the details of the toll-free verification application, a process that typically takes 5 to 6 weeks. Once the aggregator approves the application, you're notified via an application status change in the Azure portal. You can then start sending and receiving messages with low filtering on this toll-free number for your messaging programs.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Check guidelines for filling a toll-free verification application](../../concepts/sms/toll-free-verification-guidelines.md)
+
+> [!div class="nextstepaction"]
+> [Send an SMS](../sms/send.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [SMS SDK](../../concepts/sms/sdk-features.md)
+- Familiarize yourself with the [SMS FAQ](../../concepts/sms/sms-faq.md)
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/logic-app.md
- Title: Quickstart - Send SMS messages in Azure Logic Apps using Azure Communication Services-
-description: In this quickstart, learn how to send SMS messages in Azure Logic Apps workflows by using the Azure Communication Services connector.
---- Previously updated : 06/30/2021-----
-# Quickstart: Send SMS messages in Azure Logic Apps with Azure Communication Services
-
-By using the [Azure Communication Services SMS](../../overview.md) connector and [Azure Logic Apps](../../../logic-apps/logic-apps-overview.md), you can create automated workflows that can send SMS messages. This quickstart shows how to automatically send text messages in response to a trigger event, which is the first step in a logic app workflow. A trigger event can be an incoming email message, a recurrence schedule, an [Azure Event Grid](../../../event-grid/overview.md) resource event, or any other [trigger that's supported by Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
--
-Although this quickstart focuses on using the connector to respond to a trigger, you can also use the connector to respond to other actions, which are the steps that follow the trigger in a workflow. If you're new to Logic Apps, review [What is Azure Logic Apps](../../../logic-apps/logic-apps-overview.md) before you get started.
-
-> [!NOTE]
-> Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
-
-## Prerequisites
--- An Azure account with an active subscription, or [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- An active Azure Communication Services resource, or [create a Communication Services resource](../create-communication-resource.md).--- An active Azure Logic Apps resource (logic app), or [create a Consumption logic app workflow with the trigger that you want to use](../../../logic-apps/quickstart-create-example-consumption-workflow.md). Currently, the Azure Communication Services SMS connector provides only actions, so your logic app workflow requires a trigger, at minimum.-
- This quickstart uses the **When a new email arrives** trigger, which is available with the [Office 365 Outlook connector](/connectors/office365/).
--- An SMS enabled phone number, or [get a phone number](./../telephony/get-phone-number.md).--
-## Add an SMS action
-
-To add the **Send SMS** action as a new step in your workflow by using the Azure Communication Services SMS connector, follow these steps in the [Azure portal](https://portal.azure.com) with your logic app workflow open in the Logic App Designer:
-
-1. On the designer, under the step where you want to add the new action, select **New step**. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (**+**), and select **Add an action**.
-
-1. In the **Choose an operation** search box, enter `Azure Communication Services`. From the actions list, select **Send SMS**.
-
- :::image type="content" source="./media/logic-app/select-send-sms-action.png" alt-text="Screenshot that shows the Logic App Designer and the Azure Communication Services connector with the Send SMS action selected.":::
-
-1. Now create a connection to your Communication Services resource.
- 1. Within the same subscription:
-
- 1. Provide a name for the connection.
-
- 1. Select your Azure Communication Services resource.
-
- 1. Select **Create**.
-
- :::image type="content" source="./media/logic-app/send-sms-configuration.png" alt-text="Screenshot that shows the Send SMS action configuration with sample information.":::
-
- 1. Using the connection string from your Communication Services resource:
-
- 1. Provide a name for the connection.
-
- 1. Select ConnectionString Authentication from the drop down options.
-
- 1. Enter the connection string of your Communication Services resource.
-
- 1. Select **Create**.
-
- :::image type="content" source="./media/logic-app/connection-string-auth.png" alt-text="Screenshot that shows the Connection String Authentication configuration.":::
-
- 1. Using Service Principal ([Refer Services Principal Creation](../identity/service-principal-from-cli.md)):
- 1. Provide a name for the connection.
-
- 1. Select Service principal (Azure AD application) Authentication from the drop down options.
-
- 1. Enter the Tenant ID, Client ID & Client Secret of your Service Principal.
-
- 1. Enter the Communication Services Endpoint URL value of your Communication Services resource.
-
- 1. Select **Create**.
-
- :::image type="content" source="./media/logic-app/service-principal-auth.png" alt-text="Screenshot that shows the Service Principal Authentication configuration.":::
-
-1. In the **Send SMS** action, provide the following information:
-
- * The source and destination phone numbers. For testing purposes, you can use your own phone number as the destination phone number.
-
- * The message content that you want to send, for example, "Hello from Logic Apps!".
-
- Here's a **Send SMS** action with example information:
-
- :::image type="content" source="./media/logic-app/send-sms-action.png" alt-text="Screenshot that shows the Send SMS action with sample information.":::
-
-1. When you're done, on the designer toolbar, select **Save**.
-
-Next, run your logic app workflow for testing.
-
-## Test your logic app
-
-To manually start your workflow, on the designer toolbar, select **Run**. Or, you can wait for the trigger to fire. In both cases, the workflow should send an SMS message to your specified destination phone number. For more information, review [how to run your workflow](../../../logic-apps/quickstart-create-example-consumption-workflow.md#run-workflow).
-
-## Clean up resources
-
-To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [how to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources).
-
-To clean up your logic app workflow and related resources, review [how to clean up Azure Logic Apps resources](../../../logic-apps/quickstart-create-example-consumption-workflow.md#clean-up-resources).
-
-## Next steps
-
-In this quickstart, you learned how to send SMS messages by using Azure Logic Apps and Azure Communication Services. To learn more, continue with subscribing to SMS events:
-
-> [!div class="nextstepaction"]
-> [Subscribe to SMS Events](./handle-sms-events.md)
-
-For more information about SMS in Azure Communication Services, see these articles:
--- [SMS concepts](../../concepts/sms/concepts.md)-- [Phone number types](../../concepts/telephony/plan-solution.md)-- [SMS SDK](../../concepts/sms/sdk-features.md)
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
- devx-track-js - mode-other - kr2b-contr-experiment
-zone_pivot_groups: acs-azcli-js-csharp-java-python
+zone_pivot_groups: acs-azcli-js-csharp-java-python-power-platform
# Quickstart: Send an SMS message
zone_pivot_groups: acs-azcli-js-csharp-java-python
[!INCLUDE [Send SMS with Java SDK](./includes/send-sms-java.md)] ::: zone-end + ## Troubleshooting To troubleshoot issues related to SMS delivery, you can [enable delivery reporting with Event Grid](./handle-sms-events.md) to capture delivery details.
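Once delivery reporting is enabled, Event Grid delivers `Microsoft.Communication.SMSDeliveryReportReceived` events to your handler. Here's a minimal sketch of reading the outcome from such an event — the sample payload and helper below are illustrative, so check the published event schema for the authoritative field names:

```python
# Sketch: extract the delivery outcome from an SMS delivery report event
# delivered by Event Grid. The field names mirror the published
# "Microsoft.Communication.SMSDeliveryReportReceived" schema, but treat
# the exact values here as illustrative placeholders.
sample_event = {
    "eventType": "Microsoft.Communication.SMSDeliveryReportReceived",
    "data": {
        "messageId": "00000000-0000-0000-0000-000000000000",
        "from": "+18005551234",
        "to": "+14255550123",
        "deliveryStatus": "Delivered",
        "deliveryStatusDetails": "No error.",
    },
}

def delivery_outcome(event: dict) -> str:
    """Return a short summary line for an SMS delivery report event."""
    if event.get("eventType") != "Microsoft.Communication.SMSDeliveryReportReceived":
        return "not a delivery report"
    data = event["data"]
    return f"{data['messageId']}: {data['deliveryStatus']}"

print(delivery_outcome(sample_event))
```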
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-phone-number.md
In this quickstart you learned how to:
> [!div class="nextstepaction"] > [Send an SMS](../sms/send.md)
+>
+> [!div class="nextstepaction"]
+> [Toll-free verification](../../concepts/sms/sms-faq.md#toll-free-verification)
+
+> [!div class="nextstepaction"]
> [Get started with calling](../voice-video-calling/getting-started-with-calling.md)
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
Title: Azure Confidential virtual machine options on AMD processors
description: Azure Confidential Computing offers multiple options for confidential virtual machines that run on AMD processors backed by SEV-SNP technology. + Previously updated : 11/15/2021 Last updated : 3/29/2023 # Azure Confidential VM options on AMD
Consider the following settings and choices before deploying confidential VMs.
### Azure subscription
-To deploy a confidential VM instance, consider a pay-as-you-go subscription or other purchase option. If you're using an [Azure free account](https://azure.microsoft.com/free/), the quota doesn't allow the appropriate number of Azure compute cores.
+To deploy a confidential VM instance, consider a [pay-as-you-go subscription](/azure/virtual-machines/linux/azure-hybrid-benefit-linux) or other purchase option. If you're using an [Azure free account](https://azure.microsoft.com/free/), the quota doesn't allow the appropriate number of Azure compute cores.
You might need to increase the cores quota in your Azure subscription from the default value. Default limits vary depending on your subscription category. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the confidential VM sizes.
Make sure to specify the following properties for your VM in the parameters sect
> [!div class="nextstepaction"] > [Deploy a confidential VM on AMD from the Azure portal](quick-create-confidential-vm-portal-amd.md)+
connectors Connectors Azure Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-application-insights.md
+
+ Title: Connect to Azure Application Insights
+description: Connect to Application Insights from a workflow in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 03/07/2023
+tags: connectors
+# As a developer, I want to get telemetry from an Application Insights resource to use with my workflow in Azure Logic Apps.
++
+# Connect to Azure Application Insights from workflows in Azure Logic Apps
+
+> [!NOTE]
+>
+> The [Azure Monitor Logs connector](/connectors/azuremonitorlogs/) replaces the [Azure Log Analytics connector](/connectors/azureloganalytics/)
+> and the [Azure Application Insights connector](/connectors/applicationinsights/). This combined connector provides the same functionality as
+> the other connectors and is the preferred method for running a query against a Log Analytics workspace or an Application Insights resource.
+>
+> For example, when you connect to your Application Insights resource, you don't have to create or provide an application ID and API key.
+> Authentication is integrated with Azure Active Directory. For the how-to guide to use the Azure Monitor Logs connector, see
+> [Connect to Log Analytics or Application Insights from workflows in Azure Logic Apps](connectors-azure-monitor-logs.md).
+
+For more information, see the following documentation:
+
+- [Azure Monitor Logs connector](/connectors/azuremonitorlogs/)
+- [Connect to Log Analytics or Application Insights from workflows in Azure Logic Apps](connectors-azure-monitor-logs.md)
+- [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
connectors Connectors Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-monitor-logs.md
+
+ Title: Connect to Log Analytics or Application Insights
+description: Get log data from a Log Analytics workspace or Application Insights resource to use with your workflow in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 03/06/2023
+tags: connectors
+# As a developer, I want to get log data from my Log Analytics workspace or telemetry from my Application Insights resource to use with my workflow in Azure Logic Apps.
++
+# Connect to Log Analytics or Application Insights from workflows in Azure Logic Apps
++
+> [!NOTE]
+>
+> The Azure Monitor Logs connector replaces the [Azure Log Analytics connector](/connectors/azureloganalytics/)
+> and the [Azure Application Insights connector](/connectors/applicationinsights/). This connector provides
+> the same functionality as the other connectors and is the preferred method for running a query against a
+> Log Analytics workspace or an Application Insights resource. For example, when you connect to your Application
+> Insights resource, you don't have to create or provide an application ID and API key. Authentication is
+> integrated with Azure Active Directory.
+
+To build workflows in Azure Logic Apps that retrieve data from a Log Analytics workspace or an Application Insights resource in Azure Monitor, you can use the Azure Monitor Logs connector.
+
+For example, you can create a logic app workflow that sends Azure Monitor log data in an email message from your Office 365 Outlook account, creates a bug in Azure DevOps, or posts a Slack message. This connector provides only actions, so to start a workflow, you can use a Recurrence trigger to specify a simple schedule or any trigger from another service.
+
+This how-to guide describes how to build a [Consumption logic app workflow](../logic-apps/logic-apps-overview.md#resource-environment-differences) that sends the results of an Azure Monitor log query by email.
+
+## Connector technical reference
+
+For technical information about this connector's operations, see the [connector's reference documentation](/connectors/azuremonitorlogs/).
+
+> [!NOTE]
+>
+> Both of the following actions can run a log query against a Log Analytics workspace or
+> Application Insights resource. The difference exists in the way that data is returned.
+>
+> | Action | Description |
+> |--|-|
+> | [Run query and list results](/connectors/azuremonitorlogs/#run-query-and-list-results) | Returns each row as its own object. Use this action when you want to work with each row separately in the rest of the workflow. The action is typically followed by a [For each action](../logic-apps/logic-apps-control-flow-loops.md). |
+> | [Run query and visualize results](/connectors/azuremonitorlogs/#run-query-and-visualize-results) | Returns a JPG file that depicts the query result set. This action lets you use the result set in the rest of the workflow by sending the results in an email, for example. The action only returns a JPG file if the query returns results. |
+
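+Either action accepts the same kind of Kusto query; only the shape of the output differs. As an illustration, the following sketch (which assumes your Log Analytics workspace collects the standard `Heartbeat` table) returns one row per computer. With **Run query and list results**, each of those rows becomes its own object that a subsequent **For each** action can iterate over:
+
+```kusto
+// Illustrative sketch: assumes the workspace collects the Heartbeat table.
+// With "Run query and list results", each output row becomes its own object.
+Heartbeat
+| summarize LastHeartbeat = max(TimeGenerated) by Computer
+| sort by LastHeartbeat desc
+```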
+## Limitations
+
+- The connector has the following limits, which your workflow might reach, based on the query that you use and the size of the results:
+
+ | Limit | Value | Notes |
+ |-|-|-|
+ | Max query response size | ~16.7 MB or 16 MiB | The connector infrastructure dictates that the size limit is set lower than the query API limit. |
+ | Max number of records | 500,000 records ||
+ | Max connector timeout | 110 seconds ||
+ | Max query timeout | 100 seconds ||
+
+  To avoid reaching these limits, try aggregating data to reduce the results size, or adjusting the workflow recurrence to run more frequently across a smaller time range. However, due to caching, recurrence intervals shorter than 120 seconds aren't recommended.
+
+- The Logs page in Azure Monitor and the connector use different charting libraries, so the connector currently doesn't include some of the visualization functionality that's available on the Logs page.
+
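+One way to stay under these limits is to aggregate data in the query itself rather than returning raw rows. The following sketch, based on the same `Event` table used in the example query later in this guide, returns one row per computer per 5-minute bin instead of every individual error event:
+
+```kusto
+// Illustrative sketch: reduce result size by aggregating before returning rows.
+// One row per computer per 5-minute bin, instead of one row per error event.
+Event
+| where EventLevelName == "Error"
+| where TimeGenerated > ago(1h)
+| summarize ErrorCount = count() by Computer, bin(TimeGenerated, 5m)
+```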
+## Prerequisites
+
+- An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- The [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) or [Application Insights resource](../azure-monitor/app/app-insights-overview.md) that you want to connect.
+
+- The [Consumption logic app workflow](../logic-apps/logic-apps-overview.md#resource-environment-differences) from where you want to access your Log Analytics workspace or Application Insights resource. To use an Azure Monitor Logs action, start your workflow with any trigger. This guide uses the [**Recurrence** trigger](connectors-native-recurrence.md).
+
+ > [!NOTE]
+ >
+ > Although you can turn on the Log Analytics setting in a logic app resource to collect information about runtime data
+ > and events as described in the how-to guide [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../logic-apps/monitor-workflows-collect-diagnostic-data.md), this setting isn't required
+ > for you to use the Azure Monitor Logs connector.
+
+- An Office 365 Outlook account to complete the example in this guide. Otherwise, you can use any email provider that has an available connector in Azure Logic Apps.
+
+## Add an Azure Monitor Logs action
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. In your workflow where you want to add the Azure Monitor Logs action, follow one of these steps:
+
+ - To add an action under the last step, select **New step**.
+
+   - To add an action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+
+ For more information about adding an action, see [Build a workflow by adding a trigger or action](../logic-apps/create-workflow-with-trigger-or-action.md).
+
+1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **Azure Monitor Logs**.
+
+1. From the actions list, select the action that you want.
+
+ This example continues with the action named **Run query and visualize results**.
+
+1. In the connection box, from the **Tenant** list, select your Azure Active Directory (Azure AD) tenant, and then select **Create**.
+
+ > [!NOTE]
+ >
+ > The account associated with the current connection is used later to send the email.
+ > To use a different account, select **Change connection**.
+
+1. In the **Run query and visualize results** action box, provide the following information:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+   | **Subscription** | Yes | <*Azure-subscription*> | The Azure subscription for your Log Analytics workspace or Application Insights resource. |
+   | **Resource Group** | Yes | <*Azure-resource-group*> | The Azure resource group for your Log Analytics workspace or Application Insights resource. |
+ | **Resource Type** | Yes | **Log Analytics Workspace** or **Application Insights** | The resource type to connect from your workflow. This example continues by selecting **Log Analytics Workspace**. |
+ | **Resource Name** | Yes | <*Azure-resource-name*> | The name for your Log Analytics workspace or Application Insights resource. |
+
+1. In the **Query** box, enter the Kusto query to run. The following examples show a query for each source type:
+
+ * Log Analytics workspace
+
+     The following example query selects the errors that occurred within the last day, counts their total number per computer, and sorts the results by computer name in ascending order.
+
+     ```kusto
+ Event
+ | where EventLevelName == "Error"
+ | where TimeGenerated > ago(1day)
+ | summarize TotalErrors=count() by Computer
+ | sort by Computer asc
+ ```
+
+ * Application Insights resource
+
+ The following example query selects the failed requests within the last day and correlates them with exceptions that occurred as part of the operation, based on the `operation_Id` identifier. The query then segments the results by using the `autocluster()` algorithm.
+
+ ```kusto
+ requests
+ | where timestamp > ago(1d)
+ | where success == "False"
+ | project name, operation_Id
+ | join ( exceptions
+ | project problemId, outerMessage, operation_Id
+ ) on operation_Id
+ | evaluate autocluster()
+ ```
+
+ > [!NOTE]
+ >
+ > When you create your own queries, make sure they work correctly in Log Analytics before you add them to your Azure Monitor Logs action.
+
+1. For **Time Range**, select **Set in query**.
+
+1. For **Chart Type**, select **Html Table**.
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+## Add an email action
+
+1. In your workflow where you want to add the Office 365 Outlook action, follow one of these steps:
+
+ - To add an action under the last step, select **New step**.
+
+   - To add an action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+
+1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **Office 365 send email**.
+
+1. From the actions list, select the action named **Send an email (V2)**.
+
+1. In the **To** box, enter the recipient's email address. For this example, use your own email address.
+
+1. In the **Subject** box, enter a subject for the email, for example, **Top daily errors or failures**.
+
+1. In the **Body** box, click anywhere inside to open the **Dynamic content** list, which shows the outputs from the previous steps in the workflow.
+
+ 1. In the **Dynamic content** list, next to the **Run query and visualize results** section name, select **See more**.
+
+ 1. From the outputs list, select **Body**, which represents the results of the query that you previously entered in the Log Analytics action.
+
+1. From the **Add new parameter** list, select **Attachments**.
+
+ The **Send an email** action now includes the **Attachments Name** and **Attachments Content** properties.
+
+1. For the added properties, follow these steps:
+
+ 1. In the **Attachment Name** box, from the dynamic content list that appears, under **Run query and visualize results**, select the **Attachment Name** output.
+
+ 1. In the **Attachment Content** box, from the dynamic content list that appears, under **Run query and visualize results**, select the **Attachment Content** output.
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+### Test your workflow
+
+1. On the designer toolbar, select **Run Trigger** > **Run**.
+
+1. When the workflow completes, check your email.
+
+ > [!NOTE]
+ >
+ > The workflow generates an email with a JPG file that shows the query result set.
+ > If your query doesn't return any results, the workflow won't create a JPG file.
+
+ For the Log Analytics workspace example, the email that you receive has a body that looks similar to the following example:
+
+ ![Screenshot that shows the data report from a Log Analytics workspace in an example email.](media/connectors-azure-monitor-logs/sample-mail-log-analytics-workspace.png)
+
+ For an Application Insights resource, the email that you receive has a body that looks similar to the following example:
+
+ ![Screenshot that shows the data report from an Application Insights resource in an example email.](media/connectors-azure-monitor-logs/sample-email-application-insights-resource.png)
+
+## Next steps
+
+- Learn more about [log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md)
+- Learn more about [queries for Log Analytics](../azure-monitor/logs/get-started-queries.md)
cosmos-db Analytical Store Change Data Capture