Updates from: 10/14/2022 01:15:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for Web Application Firewall (WAF).
| ![Screenshot of Azure WAF logo](./medi) provides centralized protection of your web applications from common exploits and vulnerabilities. | ![Screenshot of Cloudflare logo](./medi) is a WAF provider that helps organizations protect against malicious attacks that aim to exploit vulnerabilities such as SQLi and XSS. |
-## Identity verification tools
+## Developer tools
Microsoft partners with the following ISVs for tools that can help with implementation of your authentication solution.

| ISV partner | Description and integration walkthroughs |
|:-|:--|
-| ![Screenshot of a grit ief editor logo.](./medi) is a tool that saves time during authentication deployment. It supports multiple languages without the need to write code. It also has a no code debugger for user journeys.|
+| ![Screenshot of a grit ief editor logo.](./medi) provides a low code/no code experience for developers to create sophisticated authentication user journeys. The tool comes with an integrated debugger and templates for the most common scenarios.|
## Additional information
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
To map the pattern supported by certificateUserIds, administrators must use expr
You can use the following expression for mapping to SKI and SHA1-PUKEY:

```
-(Contains([alternativeSecurityId],"x509:\<SKI>")>0,[alternativeSecurityId],Error("No altSecurityIdentities SKI match found."))
-& IIF(Contains([alternativeSecurityId],"x509:\<SHA1-PUKEY>")>0,[alternativeSecurityId],Error("No altSecurityIdentities SHA1-PUKEY match found."))
+IF(IsPresent([alternativeSecurityId]),
+ Where($item,[alternativeSecurityId],BitOr(InStr($item, "x509:<SKI>"),InStr($item, "x509:<SHA1-PUKEY>"))>0),[alternativeSecurityId]
+)
```

## Look up certificateUserIds using Microsoft Graph queries
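As a rough sketch of such a lookup with the Microsoft Graph PowerShell SDK (the `authorizationInfo/certificateUserIds` filter path and the advanced-query headers are assumptions to verify against the article's own queries, not taken from this excerpt):

```powershell
# Sketch: find users whose certificateUserIds contain a given value.
# Assumes certificateUserIds is filterable under authorizationInfo and that
# advanced query support (ConsistencyLevel: eventual plus $count) is required.
Connect-MgGraph -Scopes 'User.Read.All'

$value = 'X509:<SKI>aB1cD2eF3gH4iJ5k'   # placeholder certificateUserIds value
$uri   = "https://graph.microsoft.com/v1.0/users?`$filter=authorizationInfo/certificateUserIds/any(x:x eq '$value')&`$count=true"

$result = Invoke-MgGraphRequest -Method GET -Uri $uri -Headers @{ ConsistencyLevel = 'eventual' }
$result.value | ForEach-Object { $_.userPrincipalName }
```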
active-directory Concept Mfa Authprovider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-authprovider.md
Title: Azure Multi-Factor Auth Providers - Azure Active Directory
+ Title: Azure AD Multi-Factor Auth Providers - Azure Active Directory
description: When should you use an Auth Provider with Azure MFA? Previously updated : 11/21/2019 Last updated : 10/10/2022
-# When to use an Azure Multi-Factor Authentication Provider
+# When to use an Azure AD Multi-Factor Authentication provider
> [!IMPORTANT]
> Effective September 1st, 2018, new auth providers may no longer be created. Existing auth providers may continue to be used and updated, but migration is no longer possible. Multi-factor authentication will continue to be available as a feature in Azure AD Premium licenses.
-Two-step verification is available by default for global administrators who have Azure Active Directory, and Microsoft 365 users. However, if you wish to take advantage of [advanced features](howto-mfa-mfasettings.md) then you should purchase the full version of Azure Multi-Factor Authentication (MFA).
+Two-step verification is available by default for Global Administrators who have Azure Active Directory, and Microsoft 365 users. However, if you wish to take advantage of [advanced features](howto-mfa-mfasettings.md) then you should purchase the full version of Azure AD Multi-Factor Authentication (MFA).
-An Azure Multi-Factor Auth Provider is used to take advantage of features provided by Azure Multi-Factor Authentication for users who **do not have licenses**.
+An Azure AD Multi-Factor Auth Provider is used to take advantage of features provided by Azure AD Multi-Factor Authentication for users who **do not have licenses**.
## Caveats related to the Azure MFA SDK

Note the SDK has been deprecated and will only continue to work until November 14, 2018. After that time, calls to the SDK will fail.
-## What is an MFA Provider?
+## What is an MFA provider?
-There are two types of Auth providers, and the distinction is around how your Azure subscription is charged. The per-authentication option calculates the number of authentications performed against your tenant in a month. This option is best if you have a number of users authenticating only occasionally. The per-user option calculates the number of individuals in your tenant who perform two-step verification in a month. This option is best if you have some users with licenses but need to extend MFA to more users beyond your licensing limits.
+There are two types of Auth providers, and the distinction is around how your Azure subscription is charged. The per-authentication option calculates the number of authentications performed against your tenant in a month. This option is best if some users authenticate only occasionally. The per-user option calculates the number of users who are eligible to perform MFA, which is all users in Azure AD, and all enabled users in MFA Server. This option is best if some users have licenses but you need to extend MFA to more users beyond your licensing limits.
-## Manage your MFA Provider
+## Manage your MFA provider
-You cannot change the usage model (per enabled user or per authentication) after an MFA provider is created.
+You can't change the usage model (per enabled user or per authentication) after an MFA provider is created.
If you purchased enough licenses to cover all users that are enabled for MFA, you can delete the MFA provider altogether.
-If your MFA provider is not linked to an Azure AD tenant, or you link the new MFA provider to a different Azure AD tenant, user settings and configuration options are not transferred. Also, existing Azure MFA Servers need to be reactivated using activation credentials generated through the MFA Provider.
+If your MFA provider isn't linked to an Azure AD tenant, or you link the new MFA provider to a different Azure AD tenant, user settings and configuration options aren't transferred. Also, existing Azure MFA Servers need to be reactivated using activation credentials generated through the MFA Provider.
### Removing an authentication provider
Azure MFA Servers linked to providers will need to be reactivated using credenti
![Delete an auth provider from the Azure portal](./media/concept-mfa-authprovider/authentication-provider-removal.png)
-When you have confirmed that all settings have been migrated, you can browse to the **Azure portal** > **Azure Active Directory** > **Security** > **MFA** > **Providers** and select the ellipses **...** and select **Delete**.
+After you confirm that all settings are migrated, you can browse to the **Azure portal** > **Azure Active Directory** > **Security** > **MFA** > **Providers** and select the ellipses **...** and select **Delete**.
> [!WARNING]
> Deleting an authentication provider will delete any reporting information associated with that provider. You may want to save activity reports before deleting your provider.
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Previously updated : 03/18/2022 Last updated : 10/13/2022
# Enable Azure Active Directory self-service password reset at the Windows sign-in screen
-Self-service password reset (SSPR) gives users in Azure Active Directory (Azure AD) the ability to change or reset their password, with no administrator or help desk involvement. Typically, users open a web browser on another device to access the [SSPR portal](https://aka.ms/sspr). To improve the experience on computers that run Windows 7, 8, 8.1, and 10, you can enable users to reset their password at the Windows sign-in screen.
+Self-service password reset (SSPR) gives users in Azure Active Directory (Azure AD) the ability to change or reset their password, with no administrator or help desk involvement. Typically, users open a web browser on another device to access the [SSPR portal](https://aka.ms/sspr). To improve the experience on computers that run Windows 7, 8, 8.1, 10, and 11, you can enable users to reset their password at the Windows sign-in screen.
-![Example Windows 7 and 10 login screens with SSPR link shown](./media/howto-sspr-windows/windows-reset-password.png)
+![Example Windows login screens with SSPR link shown](./media/howto-sspr-windows/windows-reset-password.png)
> [!IMPORTANT]
> This tutorial shows an administrator how to enable SSPR for Windows devices in an enterprise.
The following limitations apply to using SSPR from the Windows sign-in screen:
- Hybrid Azure AD joined machines must have network connectivity line of sight to a domain controller to use the new password and update cached credentials. This means that devices must either be on the organization's internal network or on a VPN with network access to an on-premises domain controller.
- If using an image, prior to running sysprep ensure that the web cache is cleared for the built-in Administrator prior to performing the CopyProfile step. More information about this step can be found in the support article [Performance poor when using custom default user profile](https://support.microsoft.com/help/4056823/performance-issue-with-custom-default-user-profile).
- The following settings are known to interfere with the ability to use and reset passwords on Windows 10 devices:
- - If Ctrl+Alt+Del is required by policy in Windows 10, **Reset password** won't work.
- If lock screen notifications are turned off, **Reset password** won't work. - *HideFastUserSwitching* is set to enabled or 1 - *DontDisplayLastUserName* is set to enabled or 1
The following limitations apply to using SSPR from the Windows sign-in screen:
> These limitations also apply to Windows Hello for Business PIN reset from the device lock screen. >
-## Windows 10 password reset
+## Windows 11 and 10 password reset
-To configure a Windows 10 device for SSPR at the sign-in screen, review the following prerequisites and configuration steps.
+To configure a Windows 11 or 10 device for SSPR at the sign-in screen, review the following prerequisites and configuration steps.
-### Windows 10 prerequisites
+### Windows 11 and 10 prerequisites
- An administrator [must enable Azure AD self-service password reset from the Azure portal](tutorial-enable-sspr.md).
- Users must register for SSPR before using this feature at [https://aka.ms/ssprsetup](https://aka.ms/ssprsetup)
To configure a Windows 10 device for SSPR at the sign-in screen, review the foll
- Azure AD joined
- Hybrid Azure AD joined
-### Enable for Windows 10 using Microsoft Endpoint Manager
+### Enable for Windows 11 and 10 using Microsoft Endpoint Manager
Deploying the configuration change to enable SSPR from the login screen using Microsoft Endpoint Manager is the most flexible method. Microsoft Endpoint Manager allows you to deploy the configuration change to a specific group of machines you define. This method requires Microsoft Endpoint Manager enrollment of the device.
Deploying the configuration change to enable SSPR from the login screen using Mi
1. Sign in to the [Azure portal](https://portal.azure.com) and select **Endpoint Manager**.
1. Create a new device configuration profile by going to **Device configuration** > **Profiles**, then select **+ Create Profile**
- - For **Platform** choose *Windows 10 and later*
+ - For **Platform** choose *Windows 11 and later*
- For **Profile type**, choose *Custom*
-1. Select **Create**, then provide a meaningful name for the profile, such as *Windows 10 sign-in screen SSPR*
+1. Select **Create**, then provide a meaningful name for the profile, such as *Windows 11 sign-in screen SSPR*
Optionally, provide a meaningful description of the profile, then select **Next**.
1. Under *Configuration settings*, select **Add** and provide the following OMA-URI setting to enable the reset password link:
Deploying the configuration change to enable SSPR from the login screen using Mi
1. Configure applicability rules as desired for your environment, such as to *Assign profile if OS edition is Windows 10 Enterprise*, then select **Next**.
1. Review your profile, then select **Create**.
-### Enable for Windows 10 using the Registry
+### Enable for Windows 11 and 10 using the Registry
To enable SSPR at the sign-in screen using a registry key, complete the following steps:
To enable SSPR at the sign-in screen using a registry key, complete the followin
"AllowPasswordReset"=dword:00000001 ```
-### Troubleshooting Windows 10 password reset
+### Troubleshooting Windows 11 and 10 password reset
If you have problems with using SSPR from the Windows sign-in screen, the Azure AD audit log includes information about the IP address and *ClientType* where the password reset occurred, as shown in the following example output:

![Example Windows 7 password reset in the Azure AD Audit log](media/howto-sspr-windows/windows-7-sspr-azure-ad-audit-log.png)
-When users reset their password from the sign-in screen of a Windows 10 device, a low-privilege temporary account called `defaultuser1` is created. This account is used to keep the password reset process secure.
+When users reset their password from the sign-in screen of a Windows 11 or 10 device, a low-privilege temporary account called `defaultuser1` is created. This account is used to keep the password reset process secure.
The account itself has a randomly generated password, which is validated against an organization's password policy, doesn't show up for device sign-in, and is automatically removed after the user resets their password. Multiple `defaultuser` profiles may exist but can be safely ignored.
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
The following device attributes can be used with the filter for devices conditio
| | | | |
| deviceId | Equals, NotEquals, In, NotIn | A valid deviceId that is a GUID | (device.deviceid -eq "498c4de7-1aee-4ded-8d5d-000000000000") |
| displayName | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith, Contains, NotContains, In, NotIn | Any string | (device.displayName -contains "ABC") |
-| deviceOwnership | Equals, NotEquals | Supported values are "Personal" for bring your own devices and "Company" for corprate owned devices | (device.deviceOwnership -eq "Company") |
+| deviceOwnership | Equals, NotEquals | Supported values are "Personal" for bring your own devices and "Company" for corporate owned devices | (device.deviceOwnership -eq "Company") |
| isCompliant | Equals, NotEquals | Supported values are "True" for compliant devices and "False" for non compliant devices | (device.isCompliant -eq "True") |
| manufacturer | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith, Contains, NotContains, In, NotIn | Any string | (device.manufacturer -startsWith "Microsoft") |
| mdmAppId | Equals, NotEquals, In, NotIn | A valid MDM application ID | (device.mdmAppId -in ["0000000a-0000-0000-c000-000000000000"]) |
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Previously updated : 08/30/2021 Last updated : 09/30/2022 -+ # Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow
-The OAuth 2.0 On-Behalf-Of flow (OBO) serves the use case where an application invokes a service/web API, which in turn needs to call another service/web API. The idea is to propagate the delegated user identity and permissions through the request chain. For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform, on behalf of the user.
+The on-behalf-of (OBO) flow describes the scenario of a web API using an identity other than its own to call another web API. Referred to as delegation in OAuth, the intent is to pass a user's identity and permissions through the request chain.
-The OBO flow only works for user principals at this time. A service principal cannot request an app-only token, send it to an API, and have that API exchange that for another token that represents that original service principal. Additionally, the OBO flow is focused on acting on another party's behalf, known as a delegated scenario - this means that it uses only delegated *scopes*, and not application *roles*, for reasoning about permissions. *Roles* remain attached to the principal (the user) in the flow, never the application operating on the users behalf.
+For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform. The flow uses only delegated *scopes* and not application *roles*. *Roles* remain attached to the principal (the user) and never to the application operating on the user's behalf, which prevents the user from gaining permission to resources they shouldn't have access to.
-This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
+This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also refer to the [sample apps that use MSAL](sample-v2-code.md) for examples.
[!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)] ## Client limitations
-As of May 2018, some implicit-flow derived `id_token` can't be used for OBO flow. Single-page apps (SPAs) should pass an **access** token to a middle-tier confidential client to perform OBO flows instead.
+If a service principal requests an app-only token and sends it to an API, that API can't use OBO to exchange the token for one that represents the original service principal, because the OBO flow only works for user principals. To act in its own name against a downstream API, the middle tier must instead use the [client credentials flow](v2-oauth2-client-creds-grant-flow.md) to get an app-only token. Single-page apps (SPAs) should pass an access token to a middle-tier confidential client to perform OBO flows instead.
-If a client uses the implicit flow to get an id_token, and that client also has wildcards in a reply URL, the id_token can't be used for an OBO flow. However, access tokens acquired through the implicit grant flow can still be redeemed by a confidential client even if the initiating client has a wildcard reply URL registered.
+If a client uses the implicit flow to get an id_token and also has wildcards in a reply URL, the id_token can't be used for an OBO flow. A wildcard is a URL that ends with a `*` character. For example, if `https://myapp.com/*` was the reply URL, the id_token can't be used because it isn't specific enough to identify the client. This would prevent the token from being issued. However, access tokens acquired through the implicit grant flow can be redeemed by a confidential client, even if the initiating client has a wildcard reply URL registered. This is because the confidential client can identify the client that acquired the access token. The confidential client can then use the access token to acquire a new access token for the downstream API.
-Additionally, applications with custom signing keys cannot be used as middle-tier API's in the OBO flow (this includes enterprise applications configured for single sign-on). This will result in an error because tokens signed with a key controlled by the client cannot be safely accepted.
+Additionally, applications with custom signing keys can't be used as middle-tier APIs in the OBO flow. This includes enterprise applications configured for single sign-on. If the middle-tier API uses a custom signing key, the downstream API won't be able to validate the signature of the access token that is passed to it. This will result in an error because tokens signed with a key controlled by the client can't be safely accepted.
## Protocol diagram
-Assume that the user has been authenticated on an application using the [OAuth 2.0 authorization code grant flow](v2-oauth2-auth-code-flow.md) or another login flow. At this point, the application has an access token *for API A* (token A) with the user's claims and consent to access the middle-tier web API (API A). Now, API A needs to make an authenticated request to the downstream web API (API B).
+Assume that the user has been authenticated on an application using the [OAuth 2.0 authorization code grant flow](v2-oauth2-auth-code-flow.md) or another login flow. At this point, the application has an access token for *API A* (token A) with the user's claims and consent to access the middle-tier web API (API A). Now, API A needs to make an authenticated request to the downstream web API (API B).
The steps that follow constitute the OBO flow and are explained with the help of the following diagram.
The steps that follow constitute the OBO flow and are explained with the help of
1. Token B is set by API A in the authorization header of the request to API B.
1. Data from the secured resource is returned by API B to API A, then to the client.
-In this scenario, the middle-tier service has no user interaction to get the user's consent to access the downstream API. Therefore, the option to grant access to the downstream API is presented upfront as a part of the consent step during authentication. To learn how to set this up for your app, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application).
+In this scenario, the middle-tier service has no user interaction to get the user's consent to access the downstream API. Therefore, the option to grant access to the downstream API is presented upfront as part of the consent step during authentication. To learn how to implement this in your app, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application).
## Middle-tier access token request
When using a shared secret, a service-to-service access token request contains t
| `grant_type` | Required | The type of token request. For a request using a JWT, the value must be `urn:ietf:params:oauth:grant-type:jwt-bearer`. |
| `client_id` | Required | The application (client) ID that [the Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page has assigned to your app. |
| `client_secret` | Required | The client secret that you generated for your app in the Azure portal - App registrations page. The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
-| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications cannot redeem a token for a different app (so e.g. if a client sends an API a token meant for MS Graph, the API cannot redeem it using OBO. It should instead reject the token). |
+| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications can't redeem a token for a different app (for example, if a client sends an API a token meant for Microsoft Graph, the API can't redeem it using OBO. It should instead reject the token). |
| `scope` | Required | A space separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). |
| `requested_token_use` | Required | Specifies how the request should be processed. In the OBO flow, the value must be set to `on_behalf_of`. |

#### Example
-The following HTTP POST requests an access token and refresh token with `user.read` scope for the https://graph.microsoft.com web API.
+The following HTTP POST requests an access token and refresh token with `user.read` scope for the https://graph.microsoft.com web API. The request includes the client secret and is made by a confidential client.
```HTTP
//line breaks for legibility only
client_id=535fb089-9ff3-47b6-9bfb-4f1264799865
### Second case: Access token request with a certificate
-A service-to-service access token request with a certificate contains the following parameters:
+A service-to-service access token request with a certificate contains the following parameters in addition to the parameters from the previous example:
| Parameter | Type | Description |
| | | |
A service-to-service access token request with a certificate contains the follow
| `client_id` | Required | The application (client) ID that [the Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page has assigned to your app. |
| `client_assertion_type` | Required | The value must be `urn:ietf:params:oauth:client-assertion-type:jwt-bearer`. |
| `client_assertion` | Required | An assertion (a JSON web token) that you need to create and sign with the certificate you registered as credentials for your application. To learn how to register your certificate and the format of the assertion, see [certificate credentials](active-directory-certificate-credentials.md). |
-| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications cannot redeem a token for a different app (so e.g. if a client sends an API a token meant for MS Graph, the API cannot redeem it using OBO. It should instead reject the token). |
+| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications can't redeem a token for a different app (for example, if a client sends an API a token meant for MS Graph, the API can't redeem it using OBO. It should instead reject the token). |
| `requested_token_use` | Required | Specifies how the request should be processed. In the OBO flow, the value must be set to `on_behalf_of`. |
| `scope` | Required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md).|
-Notice that the parameters are almost the same as in the case of the request by shared secret except that the `client_secret` parameter is replaced by two parameters: a `client_assertion_type` and `client_assertion`.
+Notice that the parameters are almost the same as in the case of the request by shared secret except that the `client_secret` parameter is replaced by two parameters: a `client_assertion_type` and `client_assertion`. The `client_assertion_type` parameter is set to `urn:ietf:params:oauth:client-assertion-type:jwt-bearer` and the `client_assertion` parameter is set to the JWT token that is signed with the private key of the certificate.
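Pulled together from the parameter tables above, a minimal PowerShell sketch of the shared-secret variant of this request might look like the following (tenant, client, secret, and incoming token values are placeholders; for the certificate case, `client_secret` would give way to `client_assertion_type` and `client_assertion` as just described):

```powershell
# Sketch: middle-tier API exchanges the incoming access token (token A) for a
# downstream token (token B) using the OBO grant. All IDs and secrets are placeholders.
$tenantId      = '00000000-0000-0000-0000-000000000000'
$clientId      = '11111111-1111-1111-1111-111111111111'
$clientSecret  = '<middle-tier-client-secret>'
$incomingToken = '<access-token-sent-to-the-middle-tier-api>'

$body = @{
    grant_type          = 'urn:ietf:params:oauth:grant-type:jwt-bearer'
    client_id           = $clientId
    client_secret       = $clientSecret
    assertion           = $incomingToken
    scope               = 'https://graph.microsoft.com/user.read offline_access'
    requested_token_use = 'on_behalf_of'
}

$response = Invoke-RestMethod -Method Post -Body $body `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$response.access_token
```

Because `Invoke-RestMethod` posts a hashtable body as form-urlencoded data, the request shape matches the raw HTTP examples in this article.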
#### Example
-The following HTTP POST requests an access token with `user.read` scope for the https://graph.microsoft.com web API with a certificate.
+The following HTTP POST requests an access token with `user.read` scope for the https://graph.microsoft.com web API with a certificate. The request uses a client assertion signed with the certificate's private key and is made by a confidential client.
```HTTP
// line breaks for legibility only
A success response is a JSON OAuth 2.0 response with the following parameters.
### Success response example
-The following example shows a success response to a request for an access token for the https://graph.microsoft.com web API.
+The following example shows a success response to a request for an access token for the https://graph.microsoft.com web API. The response contains an access token and a refresh token.
```json {
The following example shows a success response to a request for an access token
} ```
-The above access token is a v1.0-formatted token for Microsoft Graph. This is because the token format is based on the **resource** being accessed and unrelated to the endpoints used to request it. The Microsoft Graph is setup to accept v1.0 tokens, so the Microsoft identity platform produces v1.0 access tokens when a client requests tokens for Microsoft Graph. Other apps may indicate that they want v2.0-format tokens, v1.0-format tokens, or even proprietary or encrypted token formats. Both the v1.0 and v2.0 endpoints can emit either format of token - this way the resource can always get the right format of token regardless of how or where the token was requested by the client.
+This access token is a v1.0-formatted token for Microsoft Graph. This is because the token format is based on the **resource** being accessed and unrelated to the endpoints used to request it. The Microsoft Graph is set up to accept v1.0 tokens, so the Microsoft identity platform produces v1.0 access tokens when a client requests tokens for Microsoft Graph. Other apps may indicate that they want v2.0-format tokens, v1.0-format tokens, or even proprietary or encrypted token formats. Both the v1.0 and v2.0 endpoints can emit either format of token. This way, the resource can always get the right format of token regardless of how or where the token was requested by the client.
[!INCLUDE [remind-not-to-validate-access-tokens](includes/remind-not-to-validate-access-tokens.md)]
A service-to-service request for a SAML assertion contains the following paramet
| assertion |required | The value of the access token used in the request.|
| client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. |
| client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
-| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). SAML itself doesn't have a concept of scopes, but here it is used to identify the target SAML application for which you want to receive a token. For this OBO flow, the scope value must always be the SAML Entity ID with `/.default` appended. For example, in case the SAML application's Entity ID is `https://testapp.contoso.com`, then the requested scope should be `https://testapp.contoso.com/.default`. In case the Entity ID doesn't start with a URI scheme such as `https:`, Azure AD prefixes the Entity ID with `spn:`. In that case you must request the scope `spn:<EntityID>/.default`, for example `spn:testapp/.default` in case the Entity ID is `testapp`. Note that the scope value you request here determines the resulting `Audience` element in the SAML token, which may be important to the SAML application receiving the token. |
+| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). SAML itself doesn't have a concept of scopes, but is used to identify the target SAML application for which you want to receive a token. For this OBO flow, the scope value must always be the SAML Entity ID with `/.default` appended. For example, in case the SAML application's Entity ID is `https://testapp.contoso.com`, then the requested scope should be `https://testapp.contoso.com/.default`. In case the Entity ID doesn't start with a URI scheme such as `https:`, Azure AD prefixes the Entity ID with `spn:`. In that case you must request the scope `spn:<EntityID>/.default`, for example `spn:testapp/.default` in case the Entity ID is `testapp`. The scope value you request here determines the resulting `Audience` element in the SAML token, which may be important to the SAML application receiving the token. |
| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be `on_behalf_of`. |
| requested_token_type | required | Specifies the type of token requested. The value can be `urn:ietf:params:oauth:token-type:saml2` or `urn:ietf:params:oauth:token-type:saml1` depending on the requirements of the accessed resource. |

The response contains a SAML token encoded in UTF8 and Base64url.

-- **SubjectConfirmationData for a SAML assertion sourced from an OBO call**: If the target application requires a `Recipient` value in `SubjectConfirmationData`, then the value must be configured as the first non-wildcard Reply URL in the resource application configuration. Since the default Reply URL isn't used to determine the `Recipient` value, you might have to reorder the Reply URLs in the application configuration.
+- **SubjectConfirmationData for a SAML assertion sourced from an OBO call**: If the target application requires a `Recipient` value in `SubjectConfirmationData`, then the value must be configured as the first non-wildcard Reply URL in the resource application configuration. Since the default Reply URL isn't used to determine the `Recipient` value, you might have to reorder the Reply URLs in the application configuration to ensure that the first non-wildcard Reply URL is used. For more information, see [Reply URLs](reply-url.md).
- **The SubjectConfirmationData node**: The node can't contain an `InResponseTo` attribute since it's not part of a SAML response. The application receiving the SAML token must be able to accept the SAML assertion without an `InResponseTo` attribute.
- **API permissions**: You have to [add the necessary API permissions](quickstart-configure-app-access-web-apis.md) on the middle-tier application to allow access to the SAML application, so that it can request a token for the `/.default` scope of the SAML application.
- **Consent**: Consent must have been granted to receive a SAML token containing user data on an OAuth flow. For information, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application) below.
The response contains a SAML token encoded in UTF8 and Base64url.
## Gaining consent for the middle-tier application
-Depending on the architecture or usage of your application, you may consider different strategies for ensuring that the OBO flow is successful. In all cases, the ultimate goal is to ensure proper consent is given so that the client app can call the middle-tier app, and the middle tier app has permission to call the back-end resource.
-
-> [!NOTE]
-> Previously the Microsoft account system (personal accounts) did not support the "known client applications" field, nor could it show combined consent. This has been added and all apps in the Microsoft identity platform can use the known client application approach for getting consent for OBO calls.
+The goal of the OBO flow is to ensure proper consent is given so that the client app can call the middle-tier app and the middle-tier app has permission to call the back-end resource. Depending on the architecture or usage of your application, you may want to consider the following to ensure that OBO flow is successful.
### .default and combined consent
-The middle tier application adds the client to the [known client applications list](reference-app-manifest.md#knownclientapplications-attribute) (`knownClientApplications`) in its manifest. If a consent prompt is triggered by the client, the consent flow will be both for itself and the middle tier application. On the Microsoft identity platform, this is done using the [`.default` scope](v2-permissions-and-consent.md#the-default-scope). When triggering a consent screen using known client applications and `.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
+The middle tier application adds the client to the [known client applications list](reference-app-manifest.md#knownclientapplications-attribute) (`knownClientApplications`) in its manifest. If a consent prompt is triggered by the client, the consent flow will be both for itself and the middle tier application. On the Microsoft identity platform, this is done using the [`.default` scope](v2-permissions-and-consent.md#the-default-scope). The `.default` scope is a special scope that is used to request consent to access all the scopes that the application has permissions for. This is useful when the application needs to access multiple resources, but the user should only be prompted for consent once.
+
+When triggering a consent screen using known client applications and `.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
-The resource service (API) identified in the request should be the API for which the client application is requesting an access token as a result of the user's sign-in. For example, `scope=openid https://middle-tier-api.example.com/.default` (to request an access token for the middle tier API), or `scope=openid offline_access .default` (when a resource is not identified, it defaults to Microsoft Graph).
+The resource service (API) identified in the request should be the API for which the client application is requesting an access token as a result of the user's sign-in. For example, `scope=openid https://middle-tier-api.example.com/.default` (to request an access token for the middle tier API), or `scope=openid offline_access .default` (when a resource isn't identified, it defaults to Microsoft Graph).
-Regardless of which API is identified in the authorization request, the consent prompt will be a combined consent prompt including all required permissions configured for the client app, as well as all required permissions configured for each middle tier API listed in the client's required permissions list, and which have identified the client as a known client application.
+Regardless of which API is identified in the authorization request, the consent prompt is a combined prompt that covers all required permissions configured for the client app. It also covers all required permissions configured for each middle-tier API that is listed in the client's required permissions list and that has identified the client as a known client application.
### Pre-authorized applications
-Resources can indicate that a given application always has permission to receive certain scopes. This is primarily useful to make connections between a front-end client and a back-end resource more seamless. A resource can [declare multiple pre-authorized applications](reference-app-manifest.md#preauthorizedapplications-attribute) (`preAuthorizedApplications`) in its manifest - any such application can request these permissions in an OBO flow and receive them without the user providing consent.
+Resources can indicate that a given application always has permission to receive certain scopes. This is useful to make connections between a front-end client and a back-end resource more seamless. A resource can [declare multiple pre-authorized applications](reference-app-manifest.md#preauthorizedapplications-attribute) (`preAuthorizedApplications`) in its manifest. Any such application can request these permissions in an OBO flow and receive them without the user providing consent.
### Admin consent
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
Previously updated : 08/31/2022 Last updated : 10/12/2022
To add B2B collaboration users to the directory, follow these steps:
> Group email addresses aren't supported; enter the email address for an individual. Also, some email providers allow users to add a plus symbol (+) and additional text to their email addresses to help with things like inbox filtering. However, Azure AD doesn't currently support plus symbols in email addresses. To avoid delivery issues, omit the plus symbol and any characters following it up to the @ symbol.

6. Select **Invite** to automatically send the invitation to the guest user.
-After you send the invitation, the user account is automatically added to the directory as a guest.
+After you send the invitation, the user account is automatically added to the directory as a guest.
![Screenshot showing the user list including the new Guest user.](media/add-users-administrator//guest-user-type.png)
+The user is added to your directory with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*, for example, *john_contoso.com#EXT#\@fabrikam.onmicrosoft.com*, where fabrikam.onmicrosoft.com is the organization from which you sent the invitations. ([Learn more about B2B collaboration user properties](user-properties.md).)
## Add guest users to a group If you need to manually add B2B collaboration users to a group, follow these steps:
active-directory Add Users Information Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-information-worker.md
# How users in your organization can invite guest users to an app
-After a guest user has been added to the directory in Azure AD, an application owner can send the guest user a direct link to the app they want to share. Azure AD admins can also set up self-service management for gallery or SAML-based apps in their Azure AD tenant. This way, application owners can manage their own guest users, even if the guest users haven't been added to the directory yet. When an app is configured for self-service, the application owner uses their Access Panel to invite a guest user to an app or add a guest user to a group that has access to the app. Self-service app management for gallery and SAML-based apps requires some initial setup by an admin. The following is a summary of the setup steps (for more detailed instructions, see [Prerequisites](#prerequisites) later on this page):
+After a guest user has been added to the directory in Azure AD, an application owner can send the guest user a direct link to the app they want to share. Azure AD admins can also set up self-service management for gallery or SAML-based apps in their Azure AD tenant. This way, application owners can manage their own guest users, even if the guest users haven't been added to the directory yet. When an app is configured for self-service, the application owner uses their Access Panel to invite a guest user to an app or add a guest user to a group that has access to the app. Self-service app management for gallery and SAML-based apps requires some initial setup by an admin. Follow the summary of the setup steps (for more detailed instructions, see [Prerequisites](#prerequisites) later on this page):
- Enable self-service group management for your tenant - Create a group to assign to the app and make the user an owner
After a guest user has been added to the directory in Azure AD, an application o
> [!NOTE]
> * This article describes how to set up self-service management for gallery and SAML-based apps that you've added to your Azure AD tenant. You can also [set up self-service Microsoft 365 groups](../enterprise-users/groups-self-service-management.md) so your users can manage access to their own Microsoft 365 groups. For more ways users can share Office files and apps with guest users, see [Guest access in Microsoft 365 groups](https://support.office.com/article/guest-access-in-office-365-groups-bfc7a840-868f-4fd6-a390-f347bf51aff6) and [Share SharePoint files or folders](https://support.office.com/article/share-sharepoint-files-or-folders-1fe37332-0f9a-4719-970e-d2578da4941c).
-> * Users are only able to invite guests if they have the **Guest inviter** role.
+> * Users are only able to invite guests if they have the [**Guest inviter**](../roles/permissions-reference.md#guest-inviter) role.
## Invite a guest user to an app from the Access Panel

After an app is configured for self-service, application owners can use their own Access Panel to invite a guest user to the app they want to share. The guest user doesn't necessarily need to be added to Azure AD in advance.

1. Open your Access Panel by going to `https://myapps.microsoft.com`.
2. Point to the app, select the ellipses (**...**), and then select **Manage app**.
-
- ![Screenshot showing the Manage app sub-menu for the Salesforce app](media/add-users-iw/access-panel-manage-app.png)
-
-3. At the top of the users list, select **+** on the right-hand side.
+
+3. At the top of the users list, select **+** on the right-hand side.
4. In the **Add members** search box, type the email address for the guest user. Optionally, include a welcome message.
- ![Screenshot showing the Add members window for adding a guest](media/add-users-iw/access-panel-invitation.png)
+ 5. Select **Add** to send an invitation to the guest user. After you send the invitation, the user account is automatically added to the directory as a guest.
After an app is configured for self-service, application owners can invite guest
2. Open your Access Panel by going to `https://myapps.microsoft.com`.
3. Select the **Groups** app.
- ![Screenshot showing the Groups app in the Access Panel](media/add-users-iw/access-panel-groups.png)
4. Under **Groups I own**, select the group that has access to the app you want to share.
- ![Screenshot showing where to select a group under the Groups I own](media/add-users-iw/access-panel-groups-i-own.png)
5. At the top of the group members list, select **+**.
- ![Screenshot showing the plus symbol for adding members to the group](media/add-users-iw/access-panel-groups-add-member.png)
6. In the **Add members** search box, type the email address for the guest user. Optionally, include a welcome message.
- ![Screenshot showing the Add members window for adding a guest](media/add-users-iw/access-panel-invitation.png)
7. Select **Add** to automatically send the invitation to the guest user. After you send the invitation, the user account is automatically added to the directory as a guest.
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
Previously updated : 06/30/2022 Last updated : 10/12/2022
For example, say Contoso (the resource tenant) trusts MFA claims from Fabrikam.
For information about Conditional Access and Teams, see [Overview of security and compliance](/microsoftteams/security-compliance-overview) in the Microsoft Teams documentation.
+## Trust settings for device compliance
+
+In your cross-tenant access settings, you can use **Trust settings** to trust claims from an external user's home tenant about whether the user's device meets their device compliance policies or is hybrid Azure AD joined. When device trust settings are enabled, Azure AD checks a user's authentication session for a device claim. If the session contains a device claim indicating that the policies have already been met in the user's home tenant, the external user is granted seamless sign-on to your shared resource. You can enable device trust settings for all Azure AD organizations or individual organizations. ([Learn more](authentication-conditional-access.md#device-compliance-and-hybrid-azure-ad-joined-device-policies))
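If you want to script this trust setting rather than use the portal, a rough sketch with Microsoft Graph follows. The policy path and property names are assumptions to verify against the cross-tenant access settings documentation, and the partner tenant ID is a placeholder:

```powershell
# Sketch: accept device compliance and hybrid join claims from one partner tenant.
# URI and property names are assumptions; verify before using.
Connect-MgGraph -Scopes 'Policy.ReadWrite.CrossTenantAccess'

$partnerTenantId = '00000000-0000-0000-0000-000000000000'
$body = @{
    inboundTrust = @{
        isCompliantDeviceAccepted           = $true
        isHybridAzureADJoinedDeviceAccepted = $true
    }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method PATCH -Body $body -ContentType 'application/json' `
    -Uri "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners/$partnerTenantId"
```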
+
## B2B direct connect user experience

Currently, B2B direct connect enables the Teams Connect shared channels feature. B2B direct connect users can access an external organization's Teams shared channel without having to switch tenants or sign in with a different account. The B2B direct connect user's access is determined by the shared channel's policies.
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Previously updated : 08/30/2022 Last updated : 10/12/2022
Now, let's see what an Azure AD B2B collaboration user looks like in Azure AD.
### Before invitation redemption
-B2B collaboration user accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesn't have any credentials associated with it because authentication is performed by the guest user's identity provider. The **Issuer** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. In the portal, the **Invitation accepted** property in the invited user's Azure AD portal profile will be set to **No** and querying for `externalUserState` using the Microsoft Graph API will return `Pending Acceptance`.
+B2B collaboration user accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesn't have any credentials associated with it because authentication is performed by the guest user's identity provider. The **Identities** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. In the portal, the invited user's profile will show an **External user state** of **PendingAcceptance**. Querying for `externalUserState` using the Microsoft Graph API will return `Pending Acceptance`.
![Screenshot of user profile before redemption.](media/user-properties/before-redemption.png)

### After invitation redemption
-After the B2B collaboration user accepts the invitation, the **Issuer** property is updated based on the user's identity provider.
+After the B2B collaboration user accepts the invitation, the **Identities** property is updated based on the user's identity provider.
-If the B2B collaboration user is using credentials from another Azure AD organization, the **Issuer** is **External Azure AD**.
+- If the B2B collaboration user is using a Microsoft account or credentials from another external identity provider, **Identities** reflects the identity provider, for example **Microsoft Account**, **google.com**, or **facebook.com**.
-![Screenshot of user profile after redemption.](media/user-properties/after-redemption-state-1.png)
+ ![Screenshot of user profile after redemption.](media/user-properties/after-redemption-state-1.png)
-If the B2B collaboration user is using a Microsoft account or credentials from another external identity provider, the **Issuer** reflects the identity provider, for example **Microsoft Account**, **google.com**, or **facebook.com**.
+- If the B2B collaboration user is using credentials from another Azure AD organization, **Identities** is **External Azure AD**.
-![Screenshot of user profile showing an external identity provider.](media/user-properties/after-redemption-state-2.png)
-
-For external users who are using internal credentials, the **Issuer** property is set to the host's organization domain. The **Directory synced** property is **Yes** if the account is homed in the organization's on-premises Active Directory and synced with Azure AD, or **No** if the account is a cloud-only Azure AD account. The directory sync information is also available via the `onPremisesSyncEnabled` property in Microsoft Graph.
+- For external users who are using internal credentials, the **Identities** property is set to the host's organization domain. The **Directory synced** property is **Yes** if the account is homed in the organization's on-premises Active Directory and synced with Azure AD, or **No** if the account is a cloud-only Azure AD account. The directory sync information is also available via the `onPremisesSyncEnabled` property in Microsoft Graph.
## Key properties of the Azure AD B2B collaboration user
This property indicates the relationship of the user to the host tenancy. This p
> [!NOTE]
> The UserType has no relation to how the user signs in, the directory role of the user, and so on. This property simply indicates the user's relationship to the host organization and allows the organization to enforce policies that depend on this property.
-### Issuer
+### Identities
-This property indicates the user's primary identity provider. A user can have several identity providers, which can be viewed by selecting issuer in the user's profile or by querying the `onPremisesSyncEnabled` property via the Microsoft Graph API.
+This property indicates the user's primary identity provider. A user can have several identity providers, which can be viewed by selecting the link next to **Identities** in the user's profile or by querying the `onPremisesSyncEnabled` property via the Microsoft Graph API.
> [!NOTE]
-> Issuer and UserType are independent properties. A value of issuer does not imply a particular value for UserType
+> Identities and UserType are independent properties. A value of Identities does not imply a particular value for UserType.
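To read the same collection from a script, a minimal sketch with the Microsoft Graph PowerShell SDK follows (the property names are assumptions based on the Graph user resource, not taken from this excerpt; the user ID is a placeholder):

```powershell
# Sketch: inspect the identities collection for a user.
Connect-MgGraph -Scopes 'User.Read.All'

$user = Get-MgUser -UserId '00000000-0000-0000-0000-000000000000' -Property Id,UserPrincipalName,Identities
$user.Identities | Format-Table SignInType, Issuer, IssuerAssignedId
```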
-Issuer property value | Sign-in state
+Identities property value | Sign-in state
| -
External Azure AD | This user is homed in an external organization and authenticates by using an Azure AD account that belongs to the other organization.
Microsoft account | This user is homed in a Microsoft account and authenticates by using a Microsoft account.
google.com | This user has a Gmail account and has signed up by using self-servi
facebook.com | This user has a Facebook account and has signed up by using self-service to the other organization.
mail | This user has an email address that doesn't match with verified Azure AD or SAML/WS-Fed domains, and isn't a Gmail address or a Microsoft account.
phone | This user has an email address that doesn't match a verified Azure AD domain or a SAML/WS-Fed domain, and isn't a Gmail address or Microsoft account.
-{issuer URI} | This user is homed in an external organization that doesn't use Azure Active Directory as their identity provider, but instead uses a SAML/WS-Fed-based identity provider. The issuer URI is shown when the issuer field is clicked.
+{issuer URI} | This user is homed in an external organization that doesn't use Azure Active Directory as their identity provider, but instead uses a SAML/WS-Fed-based identity provider. The issuer URI is shown when the Identities field is clicked.
### Directory synced
Typically, an Azure AD B2B user and guest user are synonymous. Therefore, an Azu
## Filter for guest users in the directory
+In the **Users** list, you can use **Add filter** to display only the guest users in your directory.
+
+![Screenshot showing how to add a User type filter for guests.](media/user-properties/add-guest-filter.png)
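If you'd rather enumerate guests from a script than through the portal filter, a minimal sketch with the Microsoft Graph PowerShell SDK follows (cmdlet and filter usage are assumptions to verify, not taken from this excerpt):

```powershell
# Sketch: list guest users, roughly equivalent to the portal's "User type = Guest" filter.
# Assumes the Microsoft Graph PowerShell SDK is installed and consent has been granted.
Connect-MgGraph -Scopes 'User.Read.All'

Get-MgUser -Filter "userType eq 'Guest'" -All -ConsistencyLevel eventual -CountVariable guestCount |
    Select-Object DisplayName, UserPrincipalName, Id
```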
++ ![Screenshot showing the filter for guest users.](media/user-properties/filter-guest-users.png)

## Convert UserType
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
This page is updated monthly, so revisit it regularly.
-## September 2022
+## October 2022
### General Availability - Azure AD certificate-based authentication
For more information on how to use this feature, see: [Dynamic membership rule f
+## September 2022
+ ### General Availability - No more waiting, provision groups on demand into your SaaS applications.
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Group writeback allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can use this feature to manage groups in the cloud, while controlling access to on-premises applications and resources.
->[NOTE]
->The Group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
+> [!NOTE]
+> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities:
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
To configure directory settings to disable automatic writeback of newly created
Import-Module ADSync
$precedenceValue = Read-Host -Prompt "Enter a unique sync rule precedence value [0-99]"
- New-ADSyncRule `
- -Name 'In from AAD - Group SOAinAAD Delete WriteBackOutOfScope and SoftDelete' `
- -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947' `
- -Description 'Delete AD groups that fall out of scope of Group Writeback or get Soft Deleted in Azure AD' `
- -Direction 'Inbound' `
- -Precedence $precedenceValue `
- -PrecedenceAfter '00000000-0000-0000-0000-000000000000' `
- -PrecedenceBefore '00000000-0000-0000-0000-000000000000' `
- -SourceObjectType 'group' `
- -TargetObjectType 'group' `
- -Connector 'b891884f-051e-4a83-95af-2544101c9083' `
- -LinkType 'Join' `
- -SoftDeleteExpiryInterval 0 `
- -ImmutableTag '' `
- -OutVariable syncRule
-
- Add-ADSyncAttributeFlowMapping `
- -SynchronizationRule $syncRule[0] `
- -Destination 'reasonFiltered' `
- -FlowType 'Expression' `
- -ValueMergeType 'Update' `
- -Expression 'IIF((IsPresent([reasonFiltered]) = True) && (InStr([reasonFiltered], "WriteBackOutOfScope") > 0 || InStr([reasonFiltered], "SoftDelete") > 0), "DeleteThisGroupInAD", [reasonFiltered])' `
- -OutVariable syncRule
-
- New-Object `
- -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeCondition' `
- -ArgumentList 'cloudMastered','true','EQUAL' `
- -OutVariable condition0
-
- Add-ADSyncScopeConditionGroup `
- -SynchronizationRule $syncRule[0] `
- -ScopeConditions @($condition0[0]) `
- -OutVariable syncRule
+ New-ADSyncRule `
+ -Name 'In from AAD - Group SOAinAAD Delete WriteBackOutOfScope and SoftDelete' `
+ -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947' `
+ -Description 'Delete AD groups that fall out of scope of Group Writeback or get Soft Deleted in Azure AD' `
+ -Direction 'Inbound' `
+ -Precedence $precedenceValue `
+ -PrecedenceAfter '00000000-0000-0000-0000-000000000000' `
+ -PrecedenceBefore '00000000-0000-0000-0000-000000000000' `
+ -SourceObjectType 'group' `
+ -TargetObjectType 'group' `
+ -Connector 'b891884f-051e-4a83-95af-2544101c9083' `
+ -LinkType 'Join' `
+ -SoftDeleteExpiryInterval 0 `
+ -ImmutableTag '' `
+ -OutVariable syncRule
+
+ Add-ADSyncAttributeFlowMapping `
+ -SynchronizationRule $syncRule[0] `
+ -Destination 'reasonFiltered' `
+ -FlowType 'Expression' `
+ -ValueMergeType 'Update' `
+ -Expression 'IIF((IsPresent([reasonFiltered]) = True) && (InStr([reasonFiltered], "WriteBackOutOfScope") > 0 || InStr([reasonFiltered], "SoftDelete") > 0), "DeleteThisGroupInAD", [reasonFiltered])' `
+ -OutVariable syncRule
+
+ New-Object `
+ -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeCondition' `
+ -ArgumentList 'cloudMastered','true','EQUAL' `
+ -OutVariable condition0
+
+ Add-ADSyncScopeConditionGroup `
+ -SynchronizationRule $syncRule[0] `
+ -ScopeConditions @($condition0[0]) `
+ -OutVariable syncRule
- New-Object `
- -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.JoinCondition' `
- -ArgumentList 'cloudAnchor','cloudAnchor',$false `
- -OutVariable condition0
-
- Add-ADSyncJoinConditionGroup `
- -SynchronizationRule $syncRule[0] `
- -JoinConditions @($condition0[0]) `
- -OutVariable syncRule
-
- Add-ADSyncRule `
- -SynchronizationRule $syncRule[0]
-
- Get-ADSyncRule `
- -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947'
- ```
+ New-Object `
+ -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.JoinCondition' `
+ -ArgumentList 'cloudAnchor','cloudAnchor',$false `
+ -OutVariable condition0
+
+ Add-ADSyncJoinConditionGroup `
+ -SynchronizationRule $syncRule[0] `
+ -JoinConditions @($condition0[0]) `
+ -OutVariable syncRule
+
+ Add-ADSyncRule `
+ -SynchronizationRule $syncRule[0]
+
+ Get-ADSyncRule `
+ -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947'
+ ```
4. [Enable group writeback](how-to-connect-group-writeback-enable.md). 5. Enable the Azure AD Connect sync scheduler:
active-directory Cato Networks Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cato-networks-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Cato Networks to support provisioning with Azure AD 1. Log in to your account in the [Cato Management Application](https://cc2.catonetworks.com).
-1. From the navigation menu, select **Configuration > Global Settings**, and then expand the **VPN Settings** section.
- ![VPN Settings section](media/cato-networks-provisioning-tutorial/vpn-settings.png)
-1. Expand the **SCIM Provisioning** section and enable SCIM provisioning by clicking on **Enable SCIM Provisioning**.
- ![Enable SCIM Provisioning](media/cato-networks-provisioning-tutorial/scim-settings.png)
-1. Copy the Base URL and Bearer Token from the Cato Management Application to the SCIM app in the Azure portal:
- 1. In the Cato Management Application (from the **SCIM Provisioning** section), copy the Base URL.
- 1. In the Cato Networks SCIM app in the Azure portal, in the **Provisioning** tab, paste the base URL in the **Tenant URL** field.
- ![Copy tenant URL](media/cato-networks-provisioning-tutorial/tenant-url.png)
- 1. In the Cato Management Application (from the **SCIM Provisioning** section), click **Generate Token** and copy the bearer token.
- 1. In the Cato Networks SCIM app in the Azure portal, paste the bearer token in the **Secret Token** field.
- ![Copy secret token](media/cato-networks-provisioning-tutorial/secret-token.png)
-1. In the Cato Management Application (from the **SCIM Provisioning** section), click **Save**. SCIM Provisioning between your Cato account and Azure AD is configured.
- ![Save SCIM Configuration](media/cato-networks-provisioning-tutorial/save-CC.png)
-1. Test the connection between the Azure SCIM app and the Cato Cloud. In the Cato Networks SCIM apps in the Azure portal, in the **Provisioning** tab, click **Test Connection**.
+1. From the navigation menu select **Access > Directory Services** and click the **SCIM** section tab.
+ ![Screenshot of navigate to SCIM setting.](media/cato-networks-provisioning-tutorial/navigate.png)
+1. Select **Enable SCIM Provisioning** to set your account to connect to the SCIM app, and then click **Save**.
+ ![Screenshot of Enable SCIM Provisioning.](media/cato-networks-provisioning-tutorial/scim-setting.png)
+1. Copy the **Base URL**. Click **Generate Token** and copy the bearer token. The Base URL and token are entered in the **Tenant URL** and **Secret Token** fields in the **Provisioning** tab of your Cato Networks application in the Azure portal.
## Step 3. Add Cato Networks from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of enterprise applications blade.](common/enterprise-applications.png)
1. In the applications list, select **Cato Networks**.
- ![The Cato Networks link in the Applications list](common/all-applications.png)
+ ![Screenshot of the Cato Networks link in the Applications list.](common/all-applications.png)
1. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot of provisioning tab.](common/provisioning.png)
1. Set the **Provisioning Mode** to **Automatic**.
- ![Provisioning tab automatic](common/provisioning-automatic.png)
+ ![Screenshot of provisioning tab automatic.](common/provisioning-automatic.png)
1. Under the **Admin Credentials** section, input your Cato Networks Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Cato Networks. If the connection fails, ensure your Cato Networks account has Admin permissions and try again.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot of token.](common/provisioning-testconnection-tenanturltoken.png)
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of notification email.](common/provisioning-notification-email.png)
1. Select **Save**.
This section guides you through the steps to configure the Azure AD provisioning
1. To enable the Azure AD provisioning service for Cato Networks, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of provisioning status toggled on.](common/provisioning-toggle-on.png)
1. Define the users and/or groups that you would like to provision to Cato Networks by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of provisioning scope.](common/provisioning-scope.png)
1. When you're ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of saving provisioning configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than next cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
active-directory Code42 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/code42-provisioning-tutorial.md
# Tutorial: Configure Code42 for automatic user provisioning
-This tutorial describes the steps you need to perform in both Code42 and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Code42](https://www.crashplan.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Code42 and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Code42](https://www.code42.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
> To enable the default login form for admin login on the login page when the force azure login is enabled, add the query parameter in the browser URL. > `https://<DOMAIN:PORT>/login.action?force_azure_login=false`
- 1. **Enable Use of Application Proxy** checkbox, if you have configured your on-premise atlassian application in an App Proxy setup. For App proxy setup , follow the steps on the [Azure AD App Proxy Documentation](/articles/active-directory/app-proxy/what-is-application-proxy.md).
+ 1. Select the **Enable Use of Application Proxy** checkbox if you have configured your on-premises Atlassian application in an App Proxy setup. For App Proxy setup, follow the steps in the [Azure AD App Proxy documentation](../app-proxy/what-is-application-proxy.md).
1. Click **Save** button to save the settings.
active-directory Contentkalender Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contentkalender-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Contentkalender'
+description: Learn how to configure single sign-on between Azure Active Directory and Contentkalender.
+Last updated: 10/10/2022
+# Tutorial: Azure AD SSO integration with Contentkalender
+
+In this tutorial, you'll learn how to integrate Contentkalender with Azure Active Directory (Azure AD). When you integrate Contentkalender with Azure AD, you can:
+
+* Control in Azure AD who has access to Contentkalender.
+* Enable your users to be automatically signed-in to Contentkalender with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Contentkalender single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Contentkalender supports **SP** initiated SSO.
+* Contentkalender supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Contentkalender from the gallery
+
+To configure the integration of Contentkalender into Azure AD, you need to add Contentkalender from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Contentkalender** in the search box.
+1. Select **Contentkalender** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for Contentkalender
+
+Configure and test Azure AD SSO with Contentkalender using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Contentkalender.
+
+To configure and test Azure AD SSO with Contentkalender, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Contentkalender SSO](#configure-contentkalender-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Contentkalender test user](#create-contentkalender-test-user)** - to have a counterpart of B.Simon in Contentkalender that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Contentkalender** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type one of the following URLs:
+
+ | **Identifier** |
+ ||
+ | `https://login.contentkalender.nl` |
+ | `https://login.decontentkalender.be` |
+ | `https://contentkalender-acc.bettywebblocks.com/` |
+
+ b. In the **Reply URL** text box, type one of the following URLs:
+
+ | **Reply URL** |
+ |--|
+ | `https://login.contentkalender.nl/sso/saml/callback` |
+ | `https://login.decontentkalender.be/sso/saml/callback` |
+ | `https://contentkalender-acc.bettywebblocks.com/sso/saml/callback` |
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://contentkalender-acc.bettywebblocks.com/v2/login`
+
+1. Your Contentkalender application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Contentkalender expects this to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list, or use the appropriate attribute value based on your organization's configuration.
+
+ ![Screenshot shows the image of attribute mappings.](common/default-attributes.png "Attributes")
+
+1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Contentkalender.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Contentkalender**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Contentkalender SSO
+
+To configure single sign-on on **Contentkalender** side, you need to send the **App Federation Metadata Url** to [Contentkalender support team](mailto:info@contentkalender.nl). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Contentkalender test user
+
+In this section, a user called B.Simon is created in Contentkalender. Contentkalender supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Contentkalender, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Contentkalender Sign-on URL where you can initiate the login flow.
+
+* Go to Contentkalender Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Contentkalender tile in the My Apps, this will redirect to Contentkalender Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Contentkalender you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Github Enterprise Managed User Oidc Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
5. Under the **Admin Credentials** section, input your GitHub Enterprise Managed User (OIDC) Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to GitHub Enterprise Managed User (OIDC). If the connection fails, ensure the secret token for your GitHub Enterprise Managed User (OIDC) account was created by an enterprise owner and try again.
+ For "Tenant URL", type https://api.github.com/scim/v2/enterprises/YOUR_ENTERPRISE, replacing YOUR_ENTERPRISE with the name of your enterprise account.
+
+ For example, if your enterprise account's URL is https://github.com/enterprises/octo-corp, the name of the enterprise account is octo-corp.
+
+ For "Secret token", paste the personal access token with the admin:enterprise scope that you created earlier.
+
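Before you run **Test Connection**, you can optionally sanity-check the tenant URL and token with a direct SCIM call; the enterprise slug `octo-corp` and the `GITHUB_PAT` variable below are placeholders for your own values.

```bash
# Sketch: confirm the admin:enterprise token can reach the enterprise SCIM endpoint.
curl -H "Authorization: Bearer $GITHUB_PAT" \
     -H "Accept: application/scim+json" \
     "https://api.github.com/scim/v2/enterprises/octo-corp/Users?count=1"
```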
![Token](common/provisioning-testconnection-tenanturltoken.png) 6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
k. **Enable Use of Application Proxy** checkbox, if you have configured your on-premise atlassian application in an App Proxy setup.
- * For App proxy setup , follow the steps on the [Azure AD App Proxy Documentation](/articles/active-directory/app-proxy/what-is-application-proxy.md).
+ * For App Proxy setup, follow the steps in the [Azure AD App Proxy documentation](../app-proxy/what-is-application-proxy.md).
l. Click **Save** button to save the settings.
active-directory Lessonly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lessonly-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Lesson.ly | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Lesson.ly.
+ Title: 'Tutorial: Azure AD SSO integration with Lessonly'
+description: Learn how to configure single sign-on between Azure Active Directory and Lessonly.
Previously updated : 04/20/2021 Last updated : 10/13/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Lesson.ly
+# Tutorial: Azure AD SSO integration with Lessonly
-In this tutorial, you'll learn how to integrate Lesson.ly with Azure Active Directory (Azure AD). When you integrate Lesson.ly with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Lessonly with Azure Active Directory (Azure AD). When you integrate Lessonly with Azure AD, you can:
-* Control in Azure AD who has access to Lesson.ly.
-* Enable your users to be automatically signed-in to Lesson.ly with their Azure AD accounts.
+* Control in Azure AD who has access to Lessonly.
+* Enable your users to be automatically signed-in to Lessonly with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Lesson.ly with Azure Active Dire
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Lesson.ly single sign-on (SSO) enabled subscription.
+* Lessonly single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Lesson.ly supports **SP** initiated SSO.
-* Lesson.ly supports **Just In Time** user provisioning.
+* Lessonly supports **SP** initiated SSO.
+* Lessonly supports **Just In Time** user provisioning.
-## Add Lesson.ly from the gallery
+## Add Lessonly from the gallery
-To configure the integration of Lesson.ly into Azure AD, you need to add Lesson.ly from the gallery to your list of managed SaaS apps.
+To configure the integration of Lessonly into Azure AD, you need to add Lessonly from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Lesson.ly** in the search box.
-1. Select **Lesson.ly** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Lessonly** in the search box.
+1. Select **Lessonly** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-## Configure and test Azure AD SSO for Lesson.ly
+## Configure and test Azure AD SSO for Lessonly
-Configure and test Azure AD SSO with Lesson.ly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lesson.ly.
+Configure and test Azure AD SSO with Lessonly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lessonly.
-To configure and test Azure AD SSO with Lesson.ly, perform the following steps:
+To configure and test Azure AD SSO with Lessonly, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Lesson.ly SSO](#configure-lessonly-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Lesson.ly test user](#create-lessonly-test-user)** - to have a counterpart of B.Simon in Lesson.ly that is linked to the Azure AD representation of user.
+1. **[Configure Lessonly SSO](#configure-lessonly-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Lessonly test user](#create-lessonly-test-user)** - to have a counterpart of B.Simon in Lessonly that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Lesson.ly** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Lessonly** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE] > These values are not real. Update these values with the actual Sign on URL, Reply URL, and Identifier. Contact [Lessonly.com Client support team](mailto:support@lessonly.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. Lesson.ly application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. Lessonly application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/default-attributes.png)
-1. In addition to above, Lesson.ly application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. In addition to the above, the Lessonly application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
| Name | Source Attribute| | | -|
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Lesson.ly** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Lessonly** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lesson.ly.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lessonly.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Lesson.ly**.
+1. In the applications list, select **Lessonly**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Lesson.ly SSO
+## Configure Lessonly SSO
-To configure single sign-on on **Lesson.ly** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Lesson.ly support team](mailto:support@lessonly.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Lessonly** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Lessonly support team](mailto:support@lessonly.com). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create Lesson.ly test user
+### Create Lessonly test user
The objective of this section is to create a user called B.Simon in Lessonly.com. Lessonly.com supports just-in-time provisioning, which is by default enabled.
There is no action item for you in this section. A new user will be created duri
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Lesson.ly Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Lessonly Sign-on URL where you can initiate the login flow.
-* Go to Lesson.ly Sign-on URL directly and initiate the login flow from there.
+* Go to Lessonly Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Lesson.ly tile in the My Apps, this will redirect to Lesson.ly Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Lessonly tile in the My Apps, this will redirect to Lessonly Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Lesson.ly you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Lessonly you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Sap Successfactors Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
Once the SuccessFactors provisioning app configurations have been completed, you
> ![Select Writeback scope](./media/sap-successfactors-inbound-provisioning/select-writeback-scope.png) > [!NOTE]
- > The SuccessFactors Writeback provisioning app does not support "group assignment". Only "user assignment" is supported.
+ > SuccessFactors Writeback provisioning apps created after 12-Oct-2022 support the "group assignment" feature. If you created the app prior to 12-Oct-2022, it will only have "user assignment" support. To use the "group assignment" feature, create a new instance of the SuccessFactors Writeback application and move your existing mapping configurations to this app.
1. Click **Save**.
Refer to the [Writeback scenarios section](../app-provisioning/sap-successfactor
* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) * [Learn how to configure single sign-on between SuccessFactors and Azure Active Directory](successfactors-tutorial.md) * [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md)
-* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
+* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
active-directory Zendesk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-provisioning-tutorial.md
Title: 'Tutorial: Configure Zendesk for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure Azure Active Directory to automatically provision and deprovision user accounts to Zendesk.
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Zendesk.
-
-writer: zhchia
-
+documentationcenter: ''
+
+writer: Thwimmer
+
+ms.assetid: 620f0aa6-42af-4356-85f9-04aa329767f3
+ms.devlang: na
Last updated: 08/06/2019

# Tutorial: Configure Zendesk for automatic user provisioning
-This tutorial demonstrates the steps to perform in Zendesk and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and deprovision users and groups to Zendesk.
+This tutorial describes the steps you need to perform in both Zendesk and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Zendesk](http://www.zendesk.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-> [!NOTE]
-> This tutorial describes a connector that's built on top of the Azure AD user provisioning service. For information on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software-as-a-service (SaaS) applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Zendesk.
+> * Remove users in Zendesk when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Zendesk.
+> * Provision groups and group memberships in Zendesk.
+> * [Single sign-on](./zendesk-tutorial.md) to Zendesk (recommended)
## Prerequisites
-The scenario outlined in this tutorial assumes that you have:
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* An Azure AD tenant.
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Zendesk with Admin rights.
* A Zendesk tenant with the Professional plan or better enabled.
-* A user account in Zendesk with admin permissions.
-
-## Add Zendesk from the Azure Marketplace
-
-Before you configure Zendesk for automatic user provisioning with Azure AD, add Zendesk from the Azure Marketplace to your list of managed SaaS applications.
-
-To add Zendesk from the Marketplace, follow these steps.
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Zendesk](../app-provisioning/customize-application-attributes.md).
1. In the [Azure portal](https://portal.azure.com), in the navigation pane on the left, select **Azure Active Directory**.
- ![The Azure Active Directory icon](common/select-azuread.png)
+## Step 2. Configure Zendesk to support provisioning with Azure AD
+
+1. Log in to [Admin Center](https://support.zendesk.com/hc/en-us/articles/4408839227290#topic_hfg_dyz_1hb), click **Apps and integrations** in the sidebar, then select **APIs > Zendesk APIs**.
+1. Click the **Settings** tab, and make sure Token Access is **enabled**.
+1. Click the **Add API token** button to the right of **Active API Tokens**. The token is generated and displayed.
+1. Enter an **API token description**.
+1. **Copy** the token and paste it somewhere secure. Once you close this window, the full token will never be displayed again.
+1. Click **Save** to return to the API page. If you click the token to reopen it, a truncated version of the token is displayed.
-2. Go to **Enterprise applications**, and then select **All applications**.
+## Step 3. Add Zendesk from the Azure AD application gallery
- ![The Enterprise applications blade](common/enterprise-applications.png)
+Add Zendesk from the Azure AD application gallery to start managing provisioning to Zendesk. If you have previously set up Zendesk for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-3. To add a new application, select **New application** at the top of the dialog box.
+## Step 4. Define who will be in scope for provisioning
- ![The New application button](common/add-new-app.png)
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-4. In the search box, enter **Zendesk** and select **Zendesk** from the result panel. To add the application, select **Add**.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
- ![Zendesk in the results list](common/search-new-app.png)
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Assign users to Zendesk
-Azure Active Directory uses a concept called *assignments* to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users or groups that were assigned to an application in Azure AD are synchronized.
+## Step 5. Configure automatic user provisioning to Zendesk
-Before you configure and enable automatic user provisioning, decide which users or groups in Azure AD need access to Zendesk. To assign these users or groups to Zendesk, follow the instructions in [Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md).
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Zendesk based on user and/or group assignments in Azure AD.
### Important tips for assigning users to Zendesk
Before you configure and enable automatic user provisioning, decide which users
* When you assign a user to Zendesk, select any valid application-specific role, if available, in the assignment dialog box. Users with the **Default Access** role are excluded from provisioning.
-## Configure automatic user provisioning to Zendesk
-
-This section guides you through the steps to configure the Azure AD provisioning service. Use it to create, update, and disable users or groups in Zendesk based on user or group assignments in Azure AD.
-
-> [!TIP]
-> You also can enable SAML-based single sign-on for Zendesk. Follow the instructions in the [Zendesk single sign-on tutorial](zendesk-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
### Configure automatic user provisioning for Zendesk in Azure AD
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications** > **Zendesk**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
-2. In the applications list, select **Zendesk**.
+1. In the applications list, select **Zendesk**.
- ![The Zendesk link in the applications list](common/all-applications.png)
+ ![Screenshot of the Zendesk link in the Applications list.](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
- ![Zendesk Provisioning](./media/zendesk-provisioning-tutorial/ZenDesk16.png)
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
- ![Zendesk Provisioning Mode](./media/zendesk-provisioning-tutorial/ZenDesk1.png)
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the admin username, secret token, and domain of your Zendesk account. Examples of these values are:
+1. Under the **Admin Credentials** section, input the admin username, secret token, and domain of your Zendesk account. Examples of these values are:
* In the **Admin Username** box, fill in the username of the admin account on your Zendesk tenant. An example is admin@contoso.com.
This section guides you through the steps to configure the Azure AD provisioning
* In the **Domain** box, fill in the subdomain of your Zendesk tenant. For example, for an account with a tenant URL of `https://my-tenant.zendesk.com`, your subdomain is **my-tenant**.
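As an optional check before you select **Test Connection**, you can verify these values with a direct call to the Zendesk API; the address, subdomain, and token variable below are placeholders.

```bash
# Sketch: Zendesk API token authentication uses the "{email}/token:{api_token}" form.
curl -u "admin@contoso.com/token:$ZENDESK_API_TOKEN" \
     "https://my-tenant.zendesk.com/api/v2/users/me.json"
```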
-6. The secret token for your Zendesk account is located in **Admin** > **API** > **Settings**. Make sure that **Token Access** is set to **Enabled**.
-
- ![Zendesk admin settings](./media/zendesk-provisioning-tutorial/ZenDesk4.png)
+1. The secret token for your Zendesk account can be generated by following the steps in **Step 2** above.
- ![Zendesk secret token](./media/zendesk-provisioning-tutorial/ZenDesk2.png)
+1. After you fill in the boxes shown in Step 5, select **Test Connection** to make sure that Azure AD can connect to Zendesk. If the connection fails, make sure your Zendesk account has admin permissions and try again.
-7. After you fill in the boxes shown in Step 5, select **Test Connection** to make sure that Azure AD can connect to Zendesk. If the connection fails, make sure your Zendesk account has admin permissions and try again.
-
- ![Zendesk Test Connection](./media/zendesk-provisioning-tutorial/ZenDesk19.png)
+ ![Screenshot of Zendesk Test Connection](./media/zendesk-provisioning-tutorial/ZenDesk19.png)
8. In the **Notification Email** box, enter the email address of the person or group to receive the provisioning error notifications. Select the **Send an email notification when a failure occurs** check box.
- ![Zendesk Notification Email](./media/zendesk-provisioning-tutorial/ZenDesk9.png)
+ ![Screenshot of Zendesk Notification Email](./media/zendesk-provisioning-tutorial/ZenDesk9.png)
9. Select **Save**. 10. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Zendesk**.
- ![Zendesk user synchronization](./media/zendesk-provisioning-tutorial/ZenDesk10.png)
+ ![Screenshot of Zendesk user synchronization](./media/zendesk-provisioning-tutorial/ZenDesk10.png)
11. Review the user attributes that are synchronized from Azure AD to Zendesk in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Zendesk for update operations. To save any changes, select **Save**.
- ![Zendesk matching user attributes](./media/zendesk-provisioning-tutorial/ZenDesk11.png)
+ ![Screenshot of Zendesk matching user attributes](./media/zendesk-provisioning-tutorial/ZenDesk11.png)
12. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Zendesk**.
- ![Zendesk group synchronization](./media/zendesk-provisioning-tutorial/ZenDesk12.png)
+ ![Screenshot of Zendesk group synchronization](./media/zendesk-provisioning-tutorial/ZenDesk12.png)
13. Review the group attributes that are synchronized from Azure AD to Zendesk in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the groups in Zendesk for update operations. To save any changes, select **Save**.
- ![Zendesk matching group attributes](./media/zendesk-provisioning-tutorial/ZenDesk13.png)
+ ![Screenshot of Zendesk matching group attributes](./media/zendesk-provisioning-tutorial/ZenDesk13.png)
14. To configure scoping filters, follow the instructions in the [scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 15. To enable the Azure AD provisioning service for Zendesk, in the **Settings** section, change **Provisioning Status** to **On**.
- ![Zendesk Provisioning Status](./media/zendesk-provisioning-tutorial/ZenDesk14.png)
+ ![Screenshot of Zendesk Provisioning Status](./media/zendesk-provisioning-tutorial/ZenDesk14.png)
16. Define the users or groups that you want to provision to Zendesk. In the **Settings** section, select the values you want in **Scope**.
- ![Zendesk Scope](./media/zendesk-provisioning-tutorial/ZenDesk15.png)
+ ![Screenshot of Zendesk Scope](./media/zendesk-provisioning-tutorial/ZenDesk15.png)
17. When you're ready to provision, select **Save**.
- ![Zendesk Save](./media/zendesk-provisioning-tutorial/ZenDesk18.png)
+ ![Screenshot of Zendesk Save](./media/zendesk-provisioning-tutorial/ZenDesk18.png)
This operation starts the initial synchronization of all users or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than later syncs. They occur approximately every 40 minutes as long as the Azure AD provisioning service runs.
For information on how to read the Azure AD provisioning logs, see [Reporting on
* Import of all roles will fail if any of the custom roles has a display name similar to the built in roles of "agent" or "end-user". To avoid this, ensure that none of the custom roles being imported has the above display names.
-## Additional resources
+## More resources
-* [Manage user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps * [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)-
-<!--Image references-->
-[1]: ./media/zendesk-tutorial/tutorial_general_01.png
-[2]: ./media/zendesk-tutorial/tutorial_general_02.png
-[3]: ./media/zendesk-tutorial/tutorial_general_03.png
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
# Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver
-The Secrets Store CSI Driver on Azure Kubernetes Service (AKS) provides a variety of methods of identity-based access to your Azure key vault. This article outlines these methods and how to use them to access your key vault and its contents from your AKS cluster. For more information, see [Use the Secrets Store CSI Driver][csi-secrets-store-driver].
+The Secrets Store CSI Driver on Azure Kubernetes Service (AKS) provides a variety of methods of identity-based access to your Azure key vault. This article outlines these methods and how to use them to access your key vault and its contents from your AKS cluster. For more information, see [Use the Secrets Store CSI Driver][csi-secrets-store-driver].
## Use Azure AD workload identity (preview)
-An Azure AD workload identity (preview) is an identity used by an application running on a pod that can authenticate itself against other Azure services that support it, such as Storage or SQL. It integrates with the capabilities native to Kubernetes to federate with external identity providers. In this security model, the AKS cluster acts as token issuer, Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library using the Azure SDK or the Microsoft Authentication Library (MSAL).
+An [Azure AD workload identity][workload-identity] is an identity used by an application running on a pod that can authenticate itself against other Azure services that support it, such as Storage or SQL. It integrates with the capabilities native to Kubernetes to federate with external identity providers. In this security model, the AKS cluster acts as token issuer where Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library using the Azure SDK or the Microsoft Authentication Library (MSAL).
> [!NOTE] > This authentication method replaces pod-managed identity (preview).
An Azure AD workload identity (preview) is an identity used by an application ru
### Prerequisites

- Installed the latest version of the `aks-preview` extension, version 0.5.102 or later. To learn more, see [How to install extensions][how-to-install-extensions].
+- An existing Azure key vault
+- An existing Azure subscription with the `EnableWorkloadIdentityPreview` feature flag enabled
+- An existing AKS cluster with the OIDC issuer and workload identity features enabled (a hedged CLI sketch follows this list)
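The following Azure CLI sketch shows one way to satisfy the last two prerequisites. It's illustrative only; the feature flag and option names are taken from the list above, and the exact syntax can vary by `aks-preview` version.

```azurecli
# Register the workload identity preview feature flag (name taken from the prerequisites above)
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
az provider register --namespace Microsoft.ContainerService

# Enable the OIDC issuer and workload identity on an existing cluster (option names per the prerequisites above)
az aks update --resource-group <resource-group-name> --name <cluster-name> \
    --enable-oidc-issuer --enable-workload-identity
```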
Azure AD workload identity (preview) is supported on both Windows and Linux clusters.

### Configure workload identity
-1. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription.
+1. Use the Azure CLI `az account set` command to set a specific subscription to be the current active subscription. Then use the `az identity create` command to create a managed identity.
```azurecli
- az account set --subscription "subscriptionID"
+ export subscriptionID=<subscription id>
+ export resourceGroupName=<resource group name>
+ export UAMI=<name for user assigned identity>
+ export KEYVAULT_NAME=<existing keyvault name>
+ export clusterName=<aks cluster name>
+
+ az account set --subscription $subscriptionID
+ az identity create --name $UAMI --resource-group $resourceGroupName
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show -g $resourceGroupName --name $UAMI --query 'clientId' -o tsv)"
+ export IDENTITY_TENANT=$(az aks show --name $clusterName --resource-group $resourceGroupName --query aadProfile.tenantId -o tsv)
```
-2. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity.
+2. Set an access policy that grants the workload identity permission to access the key vault's secrets, keys, and certificates. Assign the permissions by using the `az keyvault set-policy` commands shown below.
```azurecli
- az account set --subscription "subscriptionID"
+ az keyvault set-policy -n $KEYVAULT_NAME --key-permissions get --spn $USER_ASSIGNED_CLIENT_ID
+ az keyvault set-policy -n $KEYVAULT_NAME --secret-permissions get --spn $USER_ASSIGNED_CLIENT_ID
+ az keyvault set-policy -n $KEYVAULT_NAME --certificate-permissions get --spn $USER_ASSIGNED_CLIENT_ID
```
- ```azurecli
- az identity create --name "userAssignedIdentityName" --resource-group "resourceGroupName" --location "location" --subscription "subscriptionID"
- ```
-
-3. You need to set an access policy that grants the workload identity permission to access the Key Vault secrets, access keys, and certificates. The rights are assigned using the [az keyvault set-policy][az-keyvault-set-policy] command as shown below.
-
- ```azurecli
- az keyvault set-policy -n $KEYVAULT_NAME --key-permissions get --spn $APPLICATION_CLIENT_ID
- az keyvault set-policy -n $KEYVAULT_NAME --secret-permissions get --spn $APPLICATION_CLIENT_ID
- az keyvault set-policy -n $KEYVAULT_NAME --certificate-permissions get --spn $APPLICATION_CLIENT_ID
- ```
+3. Run the [az aks show][az-aks-show] command to get the AKS cluster OIDC issuer URL.
-4. Run the [az aks show][az-aks-show] command to get the AKS cluster OIDC issuer URL, and replace the default value for the cluster name and the resource group name.
-
- ```azurecli
- az aks show --resource-group resourceGroupName --name clusterName --query "oidcIssuerProfile.issuerUrl" -otsv
+ ```bash
+ export AKS_OIDC_ISSUER="$(az aks show --resource-group $resourceGroupName --name $clusterName --query "oidcIssuerProfile.issuerUrl" -o tsv)"
+ echo $AKS_OIDC_ISSUER
``` > [!NOTE] > If the URL is empty, verify you have installed the latest version of the `aks-preview` extension, version 0.5.102 or later. Also verify you've [enabled the > OIDC issuer][enable-oidc-issuer] (preview).
-5. Establish a federated identity credential between the Azure AD application and the service account issuer and subject. Get the object ID of the Azure AD application. Update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
+4. Establish a federated identity credential between the Azure AD application and the service account issuer and subject. Get the object ID of the Azure AD application. Update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
```bash
- export APPLICATION_OBJECT_ID="$(az ad app show --id ${APPLICATION_CLIENT_ID} --query id -otsv)"
- export SERVICE_ACCOUNT_NAME=serviceAccountName
- export SERVICE_ACCOUNT_NAMESPACE=serviceAccountNamespace
+ export serviceAccountName="workload-identity-sa" # sample name; can be changed
+ export serviceAccountNamespace="default" # can be changed to namespace of your workload
+
+ cat <<EOF | kubectl apply -f -
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ annotations:
+ azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
+ labels:
+ azure.workload.identity/use: "true"
+ name: ${serviceAccountName}
+ namespace: ${serviceAccountNamespace}
+ EOF
```
- Then add the federated identity credential by first copying and pasting the following multi-line input in the Azure CLI.
+ Next, use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the Managed Identity, the service account issuer, and the subject.
- ```azurecli
- cat <<EOF > body.json
- {
- "name": "kubernetes-federated-credential",
- "issuer": "${SERVICE_ACCOUNT_ISSUER}",
- "subject": "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}",
- "description": "Kubernetes service account federated credential",
- "audiences": [
- "api://AzureADTokenExchange"
- ]
- }
- EOF
+ ```bash
+ export federatedIdentityName="aksfederatedidentity" # can be changed as needed
+ az identity federated-credential create --name $federatedIdentityName --identity-name $UAMI --resource-group $resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${serviceAccountNamespace}:${serviceAccountName}
```
+5. Deploy a `SecretProviderClass` by using the following YAML, noting that the shell variables are interpolated into the manifest:
- Next, use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the Managed Identity, the service account issuer, and the subject. Replace the values `resourceGroupName`, `userAssignedIdentityName`, and `federatedIdentityName`.
-
- ```azurecli
- az identity federated-credential create --name federatedIdentityName --identity-name userAssignedIdentityName --resource-group resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject ${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
+ ```bash
+ cat <<EOF | kubectl apply -f -
+ # This is a SecretProviderClass example using workload identity to access your key vault
+ apiVersion: secrets-store.csi.x-k8s.io/v1
+ kind: SecretProviderClass
+ metadata:
+ name: azure-kvname-workload-identity # needs to be unique per namespace
+ spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "false"
+ useVMManagedIdentity: "false"
+ clientID: "${USER_ASSIGNED_CLIENT_ID}" # Setting this to use workload identity
+ keyvaultName: ${KEYVAULT_NAME} # Set to the name of your key vault
+ cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
+ objects: |
+ array:
+ - |
+ objectName: secret1
+ objectType: secret # object types: secret, key, or cert
+ objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
+ - |
+ objectName: key1
+ objectType: key
+ objectVersion: ""
+ tenantId: "${IDENTITY_TENANT}" # The tenant ID of the key vault
+ EOF
```
-6. Deploy your secretproviderclass and application by setting the `clientID` in the `SecretProviderClass` to the client ID of the Azure AD application.
+6. Deploy a sample pod. Notice the service account reference in the pod definition:
```bash
- clientID: "${APPLICATION_CLIENT_ID}"
+ cat <<EOF | kubectl -n $serviceAccountNamespace apply -f -
+ # This is a sample pod definition for using SecretProviderClass and the user-assigned identity to access your key vault
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: busybox-secrets-store-inline-user-msi
+ spec:
+ serviceAccountName: ${serviceAccountName}
+ containers:
+ - name: busybox
+ image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
+ command:
+ - "/bin/sleep"
+ - "10000"
+ volumeMounts:
+ - name: secrets-store01-inline
+ mountPath: "/mnt/secrets-store"
+ readOnly: true
+ volumes:
+ - name: secrets-store01-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: "azure-kvname-workload-identity"
+ EOF
``` ## Use pod-managed identities
Azure Active Directory (Azure AD) pod-managed identities (preview) use AKS primi
### Usage
-1. Verify that your virtual machine scale set or availability set nodes have their own system-assigned identity:
+1. Verify that your Virtual Machine Scale Set or Availability Set nodes have their own system-assigned identity:
```azurecli-interactive
az vmss identity show -g <resource group> -n <vmss scale set name> -o yaml
```
To validate that the secrets are mounted at the volume path that's specified in
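For example, a quick check from the sample pod deployed earlier (the pod name and mount path match the preceding example) might look like the following:

```bash
# List the mounted secret files
kubectl exec busybox-secrets-store-inline-user-msi -- ls /mnt/secrets-store/

# Print the contents of one secret
kubectl exec busybox-secrets-store-inline-user-msi -- cat /mnt/secrets-store/secret1
```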
[az-rest]: /cli/azure/reference-index#az-rest
[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create
[enable-oidc-issuer]: cluster-configuration.md#oidc-issuer
+[workload-identity]: ./workload-identity-overview.md
<!-- LINKS EXTERNAL -->
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
- Title: Dapr extension for Azure Kubernetes Service (AKS) overview
-description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications.
--- Previously updated : 07/21/2022---
-# Dapr
-
-Distributed Application Runtime (Dapr) offers APIs that simplify microservice development and implementation. Running as a sidecar process in tandem with your applications, Dapr APIs abstract away common complexities developers regularly encounter when building distributed applications, such as service discovery, message broker integration, encryption, observability, and secret management. Whether your inter-application communication is direct service-to-service, or pub/sub messaging, Dapr helps you write simple, portable, resilient, and secured microservices.
-
-Dapr is incrementally adoptable; the API building blocks can be used as the need arises. Use one, several, or all to develop your application faster.
--
-## Capabilities and features
-
-Dapr provides the following set of capabilities to help with your microservice development on AKS:
-
-* Easy provisioning of Dapr on AKS through [cluster extensions][cluster-extensions].
-* Portability enabled through HTTP and gRPC APIs which abstract underlying technologies choices
-* Reliable, secure, and resilient service-to-service calls through HTTP and gRPC APIs
-* Publish and subscribe messaging made easy with support for CloudEvent filtering and "at-least-once" semantics for message delivery
-* Pluggable observability and monitoring through Open Telemetry API collector
-* Works independent of language, while also offering language specific SDKs
-* Integration with VS Code through the Dapr extension
-* [More APIs for solving distributed application challenges][dapr-blocks]
-
-## Frequently asked questions
-
-### How do Dapr and Service meshes compare?
-
-A: Where a service mesh is defined as a networking service mesh, Dapr is not a service mesh. While Dapr and service meshes do offer some overlapping capabilities, a service mesh is focused on networking concerns, whereas Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, while service meshes are infrastructure-centric.
-
-Some common capabilities that Dapr shares with service meshes include:
-
-* Secure service-to-service communication with mTLS encryption
-* Service-to-service metric collection
-* Service-to-service distributed tracing
-* Resiliency through retries
-
-In addition, Dapr provides other application-level building blocks for state management, pub/sub messaging, actors, and more. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. If your solution would benefit from the traffic splitting a service mesh provides, consider using [Open Service Mesh][osm-docs].
-
-For more information on Dapr and service meshes, and how they can be used together, visit the [Dapr documentation][dapr-docs].
-
-### How does the Dapr secrets API compare to the Secrets Store CSI driver?
-
-Both the Dapr secrets API and the managed Secrets Store CSI driver allow for the integration of secrets held in an external store, abstracting secret store technology from application code. The Secrets Store CSI driver mounts secrets held in Azure Key Vault as a CSI volume for consumption by an application. Dapr exposes secrets via a RESTful API that can be called by application code and can be configured with assorted secret stores. The following table lists the capabilities of each offering:
-
-| | Dapr secrets API | Secrets Store CSI driver |
-| | | |
-| **Supported secrets stores** | Local environment variables (for Development); Local file (for Development); Kubernetes Secrets; AWS Secrets Manager; Azure Key Vault secret store; Azure Key Vault with Managed Identities on Kubernetes; GCP Secret Manager; HashiCorp Vault | Azure Key Vault secret store|
-| **Accessing secrets in application code** | Call the Dapr secrets API | Access the mounted volume or sync mounted content as a Kubernetes secret and set an environment variable |
-| **Secret rotation** | New API calls obtain the updated secrets | Polls for secrets and updates the mount at a configurable interval |
-| **Logging and metrics** | The Dapr sidecar generates logs, which can be configured with collectors such as Azure Monitor, emits metrics via Prometheus, and exposes an HTTP endpoint for health checks | Emits driver and Azure Key Vault provider metrics via Prometheus |
-
-For more information on the secret management in Dapr, see the [secrets management building block overview][dapr-secrets-block].
-
-For more information on the Secrets Store CSI driver and Azure Key Vault provider, see the [Secrets Store CSI driver overview][csi-secrets-store].
-
-### How does the managed Dapr cluster extension compare to the open source Dapr offering?
-
-The managed Dapr cluster extension is the easiest method to provision Dapr on an AKS cluster. With the extension, you're able to offload management of the Dapr runtime version by opting into automatic upgrades. Additionally, the extension installs Dapr with smart defaults (for example, provisioning the Dapr control plane in high availability mode).
-
-When installing Dapr OSS via helm or the Dapr CLI, runtime versions and configuration options are the responsibility of developers and cluster maintainers.
-
-Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features.
-
-[Learn more about migrating from Dapr OSS to the Dapr extension for AKS][dapr-migration].
-
-### How can I switch to using the Dapr extension if I've already installed Dapr via a method, such as Helm?
-
-Recommended guidance is to completely uninstall Dapr from the AKS cluster and reinstall it via the cluster extension.
-
-If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
-
-## Next Steps
-
-After learning about Dapr and some of the challenges it solves, try [installing the dapr extension][dapr-extension].
-
-<!-- Links Internal -->
-[csi-secrets-store]: ./csi-secrets-store-driver.md
-[osm-docs]: ./open-service-mesh-about.md
-[cluster-extensions]: ./cluster-extensions.md
-[dapr-quickstart]: ./quickstart-dapr.md
-[dapr-migration]: ./dapr-migration.md
-[dapr-extension]: ./dapr.md
-
-<!-- Links External -->
-[dapr-docs]: https://docs.dapr.io/
-[dapr-blocks]: https://docs.dapr.io/concepts/building-blocks-concept/
-[dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
+
+ Title: Dapr extension for Azure Kubernetes Service (AKS) overview
+description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications.
+++ Last updated : 10/11/2022+++
+# Dapr
+
+Distributed Application Runtime (Dapr) offers APIs that simplify microservice development and implementation. Running as a sidecar process in tandem with your applications, Dapr APIs abstract away common complexities developers regularly encounter when building distributed applications, such as service discovery, message broker integration, encryption, observability, and secret management. Whether your inter-application communication is direct service-to-service, or pub/sub messaging, Dapr helps you write simple, portable, resilient, and secured microservices.
+
+Dapr is incrementally adoptable; the API building blocks can be used as the need arises. Use one, several, or all to develop your application faster.
++
+## Capabilities and features
+
+Dapr provides the following set of capabilities to help with your microservice development on AKS:
+
+* Easy provisioning of Dapr on AKS through [cluster extensions][cluster-extensions].
+* Portability enabled through HTTP and gRPC APIs which abstract underlying technologies choices
+* Reliable, secure, and resilient service-to-service calls through HTTP and gRPC APIs
+* Publish and subscribe messaging made easy with support for CloudEvent filtering and "at-least-once" semantics for message delivery
+* Pluggable observability and monitoring through Open Telemetry API collector
+* Works independent of language, while also offering language specific SDKs
+* Integration with VS Code through the Dapr extension
+* [More APIs for solving distributed application challenges][dapr-blocks]
+
+## Frequently asked questions
+
+### How do Dapr and Service meshes compare?
+
+A: Where a service mesh is defined as a networking service mesh, Dapr is not a service mesh. While Dapr and service meshes do offer some overlapping capabilities, a service mesh is focused on networking concerns, whereas Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, while service meshes are infrastructure-centric.
+
+Some common capabilities that Dapr shares with service meshes include:
+
+* Secure service-to-service communication with mTLS encryption
+* Service-to-service metric collection
+* Service-to-service distributed tracing
+* Resiliency through retries
+
+In addition, Dapr provides other application-level building blocks for state management, pub/sub messaging, actors, and more. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. If your solution would benefit from the traffic splitting a service mesh provides, consider using [Open Service Mesh][osm-docs].
+
+For more information on Dapr and service meshes, and how they can be used together, visit the [Dapr documentation][dapr-docs].
+
+### How does the Dapr secrets API compare to the Secrets Store CSI driver?
+
+Both the Dapr secrets API and the managed Secrets Store CSI driver allow for the integration of secrets held in an external store, abstracting secret store technology from application code. The Secrets Store CSI driver mounts secrets held in Azure Key Vault as a CSI volume for consumption by an application. Dapr exposes secrets via a RESTful API that can be called by application code and can be configured with assorted secret stores. The following table lists the capabilities of each offering:
+
+| | Dapr secrets API | Secrets Store CSI driver |
+| | | |
+| **Supported secrets stores** | Local environment variables (for Development); Local file (for Development); Kubernetes Secrets; AWS Secrets Manager; Azure Key Vault secret store; Azure Key Vault with Managed Identities on Kubernetes; GCP Secret Manager; HashiCorp Vault | Azure Key Vault secret store|
+| **Accessing secrets in application code** | Call the Dapr secrets API | Access the mounted volume or sync mounted content as a Kubernetes secret and set an environment variable |
+| **Secret rotation** | New API calls obtain the updated secrets | Polls for secrets and updates the mount at a configurable interval |
+| **Logging and metrics** | The Dapr sidecar generates logs, which can be configured with collectors such as Azure Monitor, emits metrics via Prometheus, and exposes an HTTP endpoint for health checks | Emits driver and Azure Key Vault provider metrics via Prometheus |
+
+For more information on the secret management in Dapr, see the [secrets management building block overview][dapr-secrets-block].
+
+For more information on the Secrets Store CSI driver and Azure Key Vault provider, see the [Secrets Store CSI driver overview][csi-secrets-store].
+
+### How does the managed Dapr cluster extension compare to the open source Dapr offering?
+
+The managed Dapr cluster extension is the easiest method to provision Dapr on an AKS cluster. With the extension, you're able to offload management of the Dapr runtime version by opting into automatic upgrades. Additionally, the extension installs Dapr with smart defaults (for example, provisioning the Dapr control plane in high availability mode).
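As a rough illustration, installing the extension with the Azure CLI looks like the following sketch; it assumes the `k8s-extension` CLI extension is installed and the cluster extensions prerequisites are met.

```azurecli
# Install the Dapr extension on an existing AKS cluster
az k8s-extension create --cluster-type managedClusters \
    --cluster-name <cluster-name> --resource-group <resource-group-name> \
    --name dapr --extension-type Microsoft.Dapr
```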
+
+When installing Dapr OSS via helm or the Dapr CLI, runtime versions and configuration options are the responsibility of developers and cluster maintainers.
+
+Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features.
+
+[Learn more about migrating from Dapr OSS to the Dapr extension for AKS][dapr-migration].
+
+### How can I authenticate Dapr components with Azure AD using managed identities?
+
+- Learn how [Dapr components authenticate with Azure AD][dapr-msi].
+- Learn about [using managed identities with AKS][aks-msi].
+
+### How can I switch to using the Dapr extension if I've already installed Dapr via a method, such as Helm?
+
+Recommended guidance is to completely uninstall Dapr from the AKS cluster and reinstall it via the cluster extension.
+
+If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
+
+## Next Steps
+
+After learning about Dapr and some of the challenges it solves, try [Deploying an application with the Dapr cluster extension][dapr-quickstart].
+
+<!-- Links Internal -->
+[csi-secrets-store]: ./csi-secrets-store-driver.md
+[osm-docs]: ./open-service-mesh-about.md
+[cluster-extensions]: ./cluster-extensions.md
+[dapr-quickstart]: ./quickstart-dapr.md
+[dapr-migration]: ./dapr-migration.md
+[aks-msi]: ./use-managed-identity.md
+
+<!-- Links External -->
+[dapr-docs]: https://docs.dapr.io/
+[dapr-blocks]: https://docs.dapr.io/concepts/building-blocks-concept/
+[dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
+[dapr-msi]: https://docs.dapr.io/developing-applications/integrations/azure/authenticating-azure
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| K8s version | Upstream release | AKS preview | AKS GA | End of life |
|--|--|--|--|--|
-| 1.20 | Dec-08-20 | Jan 2021 | Mar 2021 | 1.23 GA |
| 1.21 | Apr-08-21 | May 2021 | Jul 2021 | 1.24 GA |
| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | 1.25 GA |
| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | 1.26 GA |
| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | 1.27 GA |
| 1.25 | Aug 2022 | Oct 2022 | Nov 2022 | 1.28 GA |
+| 1.26 | Dec 2022 | Jan 2023 | Mar 2023 | 1.29 GA |
## FAQ
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Use the following instructions to migrate your Ubuntu nodes to Mariner nodes.
> [!NOTE]
> When adding a new Mariner node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool.

2. [Cordon the existing Ubuntu nodes][cordon-and-drain].
3. [Drain the existing Ubuntu nodes][drain-nodes].
4. Remove the existing Ubuntu node pool using the `az aks nodepool delete` command (see the example after these steps).
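The example below is a hedged sketch of these steps with the Azure CLI and kubectl. All names are placeholders, and the `--os-sku` value for Mariner can differ by CLI version:

```azurecli
# 1. Add a Mariner node pool (at least one pool must be --mode System)
az aks nodepool add --resource-group <resource-group-name> --cluster-name <cluster-name> \
    --name marinerpool --mode System --os-sku Mariner

# 2-3. Cordon and drain the existing Ubuntu nodes (repeat per node)
kubectl cordon <ubuntu-node-name>
kubectl drain <ubuntu-node-name> --ignore-daemonsets --delete-emptydir-data

# 4. Remove the Ubuntu node pool
az aks nodepool delete --resource-group <resource-group-name> --cluster-name <cluster-name> \
    --name <ubuntu-nodepool-name>
```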
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
This policy can be used in the following policy [sections](./api-management-howt
## <a name="ValidateJWT"></a> Validate JWT
-The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value.
+The `validate-jwt` policy enforces the existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value. The JSON Web Key Set (JWKS) is cached and isn't fetched on each request. Automatic metadata refresh occurs once per hour; if retrieval fails, the refresh is retried after five minutes.
> [!IMPORTANT] > The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless `require-expiration-time` attribute is specified and set to `false`.
api-management Api Management Howto Manage Protocols Ciphers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-manage-protocols-ciphers.md
Title: Manage protocols and ciphers in Azure API Management | Microsoft Docs
-description: Learn how to manage protocols (TLS) and ciphers (DES) in Azure API Management.
+ Title: Manage protocols and ciphers in Azure API Management | Microsoft Learn
+description: Learn how to manage transport layer security (TLS) protocols and cipher suites in Azure API Management.
- -- Previously updated : 09/07/2021+ Last updated : 09/22/2022 # Manage protocols and ciphers in Azure API Management
-Azure API Management supports multiple versions of Transport Layer Security (TLS) protocol for:
+Azure API Management supports multiple versions of Transport Layer Security (TLS) protocol to secure API traffic for:
* Client side * Backend side
-* The 3DES cipher
-This guide shows you how to manage protocols and ciphers configuration for an Azure API Management instance.
+API Management also supports multiple cipher suites used by the API gateway.
-![Manage protocols and ciphers in APIM](./media/api-management-howto-manage-protocols-ciphers/api-management-protocols-ciphers.png)
+By default, API Management enables TLS 1.2 for client and backend connectivity and several supported cipher suites. This guide shows you how to manage protocols and ciphers configuration for an Azure API Management instance.
+++
+> [!NOTE]
+> * If you're using the self-hosted gateway, see [self-hosted gateway security](self-hosted-gateway-overview.md#security) to manage TLS protocols and cipher suites.
+> * The Consumption tier doesn't support changes to the default cipher configuration.
## Prerequisites * An API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
-## How to manage TLS protocols and 3DES cipher
+
+## How to manage TLS protocols and cipher suites
-1. Navigate to your **API Management instance** in the Azure portal.
-1. Scroll to the **Security** section in the side menu.
-1. Under the Security section, select **Protocols + ciphers**.
+1. In the left navigation of your API Management instance, under **Security**, select **Protocols + ciphers**.
1. Enable or disable desired protocols or ciphers.
-1. Click **Save**. Changes will be applied within an hour.
+1. Select **Save**. Changes are applied within an hour.
> [!NOTE]
-> Some protocols or cipher suites (like backend-side TLS 1.2) can't be enabled or disabled from the Azure portal. Instead, you'll need to apply the REST call. Use the `properties.customProperties` structure in the [Create/Update API Management Service REST API](/rest/api/apimanagement/current-ga/api-management-service/create-or-update) article.
+> Some protocols or cipher suites (such as backend-side TLS 1.2) can't be enabled or disabled from the Azure portal. Instead, you'll need to apply the REST API call. Use the `properties.customProperties` structure in the [Create/Update API Management Service](/rest/api/apimanagement/current-ga/api-management-service/create-or-update) REST API.
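For illustration, a hedged `az rest` sketch for this REST call might look like the following. The custom property name shown for backend-side TLS 1.1 is an assumption; confirm the exact property names and latest API version against the REST API reference before use.

```azurecli
# Disable TLS 1.1 on the backend side (property name is an assumption; verify before use)
az rest --method PATCH \
    --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>?api-version=2021-08-01" \
    --body '{"properties":{"customProperties":{"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls11":"false"}}}'
```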
## Next steps
+* For recommendations on securing your API Management instance, see [Azure security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline).
+* Learn about security considerations in the API Management [landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/security).
* Learn more about [TLS](/dotnet/framework/network-programming/tls).
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTS
- To run post-build commands, set the `POST_BUILD_COMMAND` setting to contain either a command, such as `echo Post-build command`, or a path to a script file relative to your project root, such as `scripts/postbuild.sh`. All commands must use relative paths to the project root folder.
-For additional settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
+For other settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
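For example, one way to define these build settings is as App Service app settings with the Azure CLI; the command and script paths below are placeholders for your own values:

```azurecli
# Run a script before and after the Oryx build (paths are relative to the project root)
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
    --settings PRE_BUILD_COMMAND="scripts/prebuild.sh" POST_BUILD_COMMAND="scripts/postbuild.sh"
```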
To access the build and deployment logs, see [Access deployment logs](#access-deployment-logs).
Existing web applications can be redeployed to Azure as follows:
1. **Source repository**: Maintain your source code in a suitable repository like GitHub, which enables you to set up continuous deployment later in this process. 1. Your *requirements.txt* file must be at the root of your repository for App Service to automatically install the necessary packages.
-1. **Database**: If your app depends on a database, provision the necessary resources on Azure as well. See [Tutorial: Deploy a Django web app with PostgreSQL - create a database](tutorial-python-postgresql-app.md#3create-the-postgresql-database-in-azure) for an example.
+1. **Database**: If your app depends on a database, create the necessary resources on Azure as well.
-1. **App service resources**: Create a resource group, App Service Plan, and App Service web app to host your application. You can most easily do this by doing an initial deployment of your code through the Azure CLI command [`az webapp up`](/cli/azure/webapp?az-webapp-up). Or, you can create and deploy resources as shown in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md). Replace the names of the resource group, App Service Plan, and the web app to be more suitable for your application.
+1. **App service resources**: Create a resource group, App Service Plan, and App Service web app to host your application. You can do it easily by running the Azure CLI command [`az webapp up`](/cli/azure/webapp?az-webapp-up). Or, you can create and deploy resources as shown in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md). Replace the names of the resource group, App Service Plan, and the web app to be more suitable for your application.
1. **Environment variables**: If your application requires any environment variables, create equivalent [App Service application settings](configure-common.md#configure-app-settings). These App Service settings appear to your code as environment variables, as described on [Access environment variables](#access-app-settings-as-environment-variables).
- - Database connections, for example, are often managed through such settings, as shown in [Tutorial: Deploy a Django web app with PostgreSQL - configure variables to connect the database](tutorial-python-postgresql-app.md#5connect-the-web-app-to-the-database).
+ - Database connections, for example, are often managed through such settings, as shown in [Tutorial: Deploy a Django web app with PostgreSQL - verify connection settings](tutorial-python-postgresql-app.md#2-verify-connection-settings).
- See [Production settings for Django apps](#production-settings-for-django-apps) for specific settings for typical Django apps. 1. **App startup**: Review the section, [Container startup process](#container-startup-process) later in this article to understand how App Service attempts to run your app. App Service uses the Gunicorn web server by default, which must be able to find your app object or *wsgi.py* folder. If needed, you can [Customize the startup command](#customize-startup-command). 1. **Continuous deployment**: Set up continuous deployment, as described on [Continuous deployment to Azure App Service](deploy-continuous-deployment.md) if using Azure Pipelines or Kudu deployment, or [Deploy to App Service using GitHub Actions](./deploy-continuous-deployment.md) if using GitHub actions.
-1. **Custom actions**: To perform actions within the App Service container that hosts your app, such as Django database migrations, you can [connect to the container through SSH](configure-linux-open-ssh-session.md). For an example of running Django database migrations, see [Tutorial: Deploy a Django web app with PostgreSQL - run database migrations](tutorial-python-postgresql-app.md#7migrate-app-database).
+1. **Custom actions**: To perform actions within the App Service container that hosts your app, such as Django database migrations, you can [connect to the container through SSH](configure-linux-open-ssh-session.md). For an example of running Django database migrations, see [Tutorial: Deploy a Django web app with PostgreSQL - generate database schema](tutorial-python-postgresql-app.md#4-generate-database-schema).
- When using continuous deployment, you can perform those actions using post-build commands as described earlier under [Customize build automation](#customize-build-automation). With these steps completed, you should be able to commit changes to your source repository and have those updates automatically deployed to App Service.
For App Service, you then make the following modifications:
Here, `FRONTEND_DIR`, to build a path to where a build tool like yarn is run. You can again use an environment variable and App Setting as desired.
-1. Add `whitenoise` to your *requirements.txt* file. [Whitenoise](http://whitenoise.evans.io/en/stable/) (whitenoise.evans.io) is a Python package that makes it simple for a production Django app to serve it's own static files. Whitenoise specifically serves those files that are found in the folder specified by the Django `STATIC_ROOT` variable.
+1. Add `whitenoise` to your *requirements.txt* file. [Whitenoise](http://whitenoise.evans.io/en/stable/) (whitenoise.evans.io) is a Python package that makes it simple for a production Django app to serve its own static files. Whitenoise specifically serves those files that are found in the folder specified by the Django `STATIC_ROOT` variable.
1. In your *settings.py* file, add the following line for Whitenoise:
For App Service, you then make the following modifications:
## Serve static files for Flask apps
-If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.1.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [quickstart sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on Github.
+If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.1.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [quickstart sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on GitHub.
To serve static files directly from a route on your application, you can use the [`send_from_directory`](https://flask.palletsprojects.com/en/2.2.x/api/#flask.send_from_directory) method:
When deployed to App Service, Python apps run within a Linux Docker container th
This container has the following characteristics: -- Apps are run using the [Gunicorn WSGI HTTP Server](https://gunicorn.org/), using the additional arguments `--bind=0.0.0.0 --timeout 600`.
+- Apps are run using the [Gunicorn WSGI HTTP Server](https://gunicorn.org/), using the extra arguments `--bind=0.0.0.0 --timeout 600`.
- You can provide configuration settings for Gunicorn by [customizing the startup command](#customize-startup-command). - To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described on [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html) (docs.gunicorn.org). - By default, the base container image includes only the Flask web framework, but the container supports other frameworks that are WSGI-compliant and compatible with Python 3.6+, such as Django. -- To install additional packages, such as Django, create a [*requirements.txt*](https://pip.pypa.io/en/stable/user_guide/#requirements-files) file in the root of your project that specifies your direct dependencies. App Service then installs those dependencies automatically when you deploy your project.
+- To install other packages, such as Django, create a [*requirements.txt*](https://pip.pypa.io/en/stable/user_guide/#requirements-files) file in the root of your project that specifies your direct dependencies. App Service then installs those dependencies automatically when you deploy your project.
The *requirements.txt* file *must* be in the project root for dependencies to be installed. Otherwise, the build process reports the error: "Could not find setup.py or requirements.txt; Not running pip install." If you encounter this error, check the location of your requirements file.
During startup, the App Service on Linux container runs the following steps:
3. Check for the existence of a [Flask app](#flask-app), and launch Gunicorn for it if detected. 4. If no other app is found, start a default app that's built into the container.
-The following sections provide additional details for each option.
+The following sections provide extra details for each option.
### Django app
For Django apps, App Service looks for a file named `wsgi.py` within your app co
gunicorn --bind=0.0.0.0 --timeout 600 <module>.wsgi ```
-If you want more specific control over the startup command, use a [custom startup command](#customize-startup-command), replace `<module>` with the name of folder that contains *wsgi.py*, and add a `--chdir` argument if that module is not in the project root. For example, if your *wsgi.py* is located under *knboard/backend/config* from your project root, use the arguments `--chdir knboard/backend config.wsgi`.
+If you want more specific control over the startup command, use a [custom startup command](#customize-startup-command), replace `<module>` with the name of folder that contains *wsgi.py*, and add a `--chdir` argument if that module isn't in the project root. For example, if your *wsgi.py* is located under *knboard/backend/config* from your project root, use the arguments `--chdir knboard/backend config.wsgi`.
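For instance, a startup command for that layout could be set with the Azure CLI as follows (a sketch; adjust the module path to your own project):

```azurecli
az webapp config set --resource-group <resource-group-name> --name <app-name> \
    --startup-file "gunicorn --bind=0.0.0.0 --timeout 600 --chdir knboard/backend config.wsgi"
```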
To enable production logging, add the `--access-logfile` and `--error-logfile` parameters as shown in the examples for [custom startup commands](#customize-startup-command).
gunicorn --bind=0.0.0.0 --timeout 600 application:app
gunicorn --bind=0.0.0.0 --timeout 600 app:app ```
-If your main app module is contained in a different file, use a different name for the app object, or you want to provide additional arguments to Gunicorn, use a [custom startup command](#customize-startup-command).
+If your main app module is contained in a different file, use a different name for the app object, or you want to provide other arguments to Gunicorn, use a [custom startup command](#customize-startup-command).
### Default behavior
To specify a startup command or command file:
Replace `<custom-command>` with either the full text of your startup command or the name of your startup command file.
-App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for additional information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com).
+App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for more information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com).
### Example startup commands
App Service ignores any errors that occur when processing a custom startup comma
gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi ```
- For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). If you are using auto-scale rules to scale your web app up and down, you should also dynamically set the number of gunicorn workers using the `NUM_CORES` environment variable in your startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers)
+ For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). If you're using auto-scale rules to scale your web app up and down, you should also dynamically set the number of gunicorn workers using the `NUM_CORES` environment variable in your startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers)
- **Enable production logging for Django**: Add the `--access-logfile '-'` and `--error-logfile '-'` arguments to the command line:
Use the following steps to access the deployment logs:
1. On the **Logs** tab, select the **Commit ID** for the most recent commit. 1. On the **Log details** page that appears, select the **Show Logs...** link that appears next to "Running oryx build...".
-Build issues such as incorrect dependencies in *requirements.txt* and errors in pre- or post-build scripts will appear in these logs. Errors also appear if your requirements file is not exactly named *requirements.txt* or does not appear in the root folder of your project.
+Build issues such as incorrect dependencies in *requirements.txt* and errors in pre- or post-build scripts will appear in these logs. Errors also appear if your requirements file isn't exactly named *requirements.txt* or doesn't appear in the root folder of your project.
## Open SSH session in browser
In general, the first step in troubleshooting is to use App Service Diagnostics:
Next, examine both the [deployment logs](#access-deployment-logs) and the [app logs](#access-diagnostic-logs) for any error messages. These logs often identify specific issues that can prevent app deployment or app startup. For example, the build can fail if your *requirements.txt* file has the wrong filename or isn't present in your project root folder.
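If you prefer the command line, the Azure CLI can stream or download the same logs; substitute your own app and resource group names:

```azurecli
# Stream the app (container) log in real time
az webapp log tail --resource-group <resource-group-name> --name <app-name>

# Download the log files as a ZIP archive for offline inspection
az webapp log download --resource-group <resource-group-name> --name <app-name> --log-file logs.zip
```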
-The following sections provide additional guidance for specific issues.
+The following sections provide guidance for specific issues.
- [App doesn't appear - default app shows](#app-doesnt-appear) - [App doesn't appear - "service unavailable" message](#service-unavailable)
The following sections provide additional guidance for specific issues.
- If your files exist, then App Service wasn't able to identify your specific startup file. Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command). -- <a name="service-unavailable"></a>**You see the message "Service Unavailable" in the browser.** The browser has timed out waiting for a response from App Service, which indicates that App Service started the Gunicorn server, but the app itself did not start. This condition could indicate that the Gunicorn arguments are incorrect, or that there's an error in the app code.
+- <a name="service-unavailable"></a>**You see the message "Service Unavailable" in the browser.** The browser has timed out waiting for a response from App Service, which indicates that App Service started the Gunicorn server, but the app itself didn't start. This condition could indicate that the Gunicorn arguments are incorrect, or that there's an error in the app code.
- Refresh the browser, especially if you're using the lowest pricing tiers in your App Service Plan. The app may take longer to start up when using free tiers, for example, and becomes responsive after you refresh the browser.
The following sections provide additional guidance for specific issues.
#### ModuleNotFoundError when app starts
-If you see an error like `ModuleNotFoundError: No module named 'example'`, this means that Python could not find one or more of your modules when the application started. This most often occurs if you deploy your virtual environment with your code. Virtual environments are not portable, so a virtual environment should not be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This will force Oryx to install your packages whenever you deploy to App Service. For more information, please see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html).
+If you see an error like `ModuleNotFoundError: No module named 'example'`, this means that Python couldn't find one or more of your modules when the application started. This most often occurs if you deploy your virtual environment with your code. Virtual environments aren't portable, so a virtual environment shouldn't be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This will force Oryx to install your packages whenever you deploy to App Service. For more information, please see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html).
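For example, the setting can be created with the Azure CLI (the app and resource group names are placeholders):

```azurecli
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
    --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1
```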
### Database is locked
When attempting to run database migrations with a Django app, you may see "sqlit
Check the `DATABASES` variable in the app's *settings.py* file to ensure that your app is using a cloud database instead of SQLite.
-If you're encountering this error with the sample in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md), check that you completed the steps in [Configure environment variables to connect the database](tutorial-python-postgresql-app.md#5connect-the-web-app-to-the-database).
+If you're encountering this error with the sample in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md), check that you completed the steps in [Verify connection settings](tutorial-python-postgresql-app.md#2-verify-connection-settings).
#### Other issues -- **Passwords don't appear in the SSH session when typed**: For security reasons, the SSH session keeps your password hidden as you type. The characters are being recorded, however, so type your password as usual and press **Enter** when done.
+- **Passwords don't appear in the SSH session when typed**: For security reasons, the SSH session keeps your password hidden when you type. The characters are being recorded, however, so type your password as usual and press **Enter** when done.
- **Commands in the SSH session appear to be cut off**: The editor may not be word-wrapping commands, but they should still run correctly.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
Create your first web app.
> [Python (on Linux)](quickstart-python.md) > [!div class="nextstepaction"]
-> [HTML (on Windows or Linux)](quickstart-html.md)
+> [HTML](quickstart-html.md)
> [!div class="nextstepaction"] > [Custom container (Windows or Linux)](tutorial-custom-container.md)
app-service Quickstart Golang https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-golang.md
+
+ Title: 'Quickstart: Create a Go web app'
+description: Deploy your first Go (GoLang) Hello World to Azure App Service in minutes.
+ Last updated : 10/13/2022
+ms.devlang: go
++++
+# Deploy a Go web app to Azure App Service
+
+> [!IMPORTANT]
+> Go on App Service on Linux is _experimental_.
+>
+
+In this quickstart, you'll deploy a Go web app to Azure App Service. Azure App Service is a fully managed web hosting service that supports Go 1.18 and higher apps hosted in a Linux server environment.
+
+To complete this quickstart, you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs).
+- [Go 1.18](https://go.dev/dl/) or higher installed locally.
+
+## 1 - Sample application
+
+First, create a folder for your project.
+
+In a terminal window, change into the folder you created and run `go mod init <ModuleName>`. At this point, the module name can simply be the folder name.
+
+The `go mod init` command creates a go.mod file to track your code's dependencies. So far, the file includes only the name of your module and the Go version your code supports. But as you add dependencies, the go.mod file will list the versions your code depends on.
+
+Create a file called main.go. We'll be doing most of our coding here.
+
+```go
+package main
+import (
+ "fmt"
+ "net/http"
+)
+func main() {
+ http.HandleFunc("/", HelloServer)
+ http.ListenAndServe(":8080", nil)
+}
+func HelloServer(w http.ResponseWriter, r *http.Request) {
+ fmt.Fprintf(w, "Hello, %s!", r.URL.Path[1:])
+}
+```
+
+This program uses the `net/http` package to handle all requests to the web root with the `HelloServer` function. The call to `http.ListenAndServe` tells the server to listen on the TCP network address `:8080`.
+
+Using a terminal, go to your project's directory and run `go run main.go`. Now open a browser window and type the URL `http://localhost:8080/world`. You should see the message `Hello, world!`.
+
+## 2 - Create a web app in Azure
+
+To host your application in Azure, you need to create an Azure App Service web app. You can create a web app using the Azure CLI.
+
+Azure CLI commands can be run on a computer with the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+Azure CLI has a command `az webapp up` that will create the necessary resources and deploy your application in a single step.
+
+If necessary, log in to Azure using [az login](/cli/azure/authenticate-azure-cli).
+
+```azurecli
+az login
+```
+
+Create the webapp and other resources, then deploy your code to Azure using [az webapp up](/cli/azure/webapp#az-webapp-up).
+
+```azurecli
+az webapp up --runtime GO:1.18 --sku B1
+```
+
+* The `--runtime` parameter specifies what version of Go your app is running. This example uses Go 1.18. To list all available runtimes, use the command `az webapp list-runtimes --os linux --output table`.
+* The `--sku` parameter defines the size (CPU, memory) and cost of the app service plan. This example uses the B1 (Basic) service plan, which will incur a small cost in your Azure subscription. For a full list of App Service plans, view the [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) page.
+* You can optionally specify a name with the argument `--name <app-name>`. If you don't provide one, then a name will be automatically generated.
+* You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az-appservice-list-locations) command.
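Putting the optional arguments together, an explicit invocation might look like this (the app name and region are placeholders):

```azurecli
az webapp up --runtime GO:1.18 --sku B1 --name <app-name> --location westeurope
```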
+
+The command may take a few minutes to complete. While the command is running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives the message, "You can launch the app at http://&lt;app-name&gt;.azurewebsites.net", which is the app's URL on Azure.
+
+<pre>
+The webapp '&lt;app-name>' doesn't exist
+Creating Resource group '&lt;group-name>' ...
+Resource group creation complete
+Creating AppServicePlan '&lt;app-service-plan-name>' ...
+Creating webapp '&lt;app-name>' ...
+Creating zip with contents of dir /home/tulika/myGoApp ...
+Getting scm site credentials for zip deployment
+Starting zip deployment. This operation can take a while to complete ...
+Deployment endpoint responded with status code 202
+You can launch the app at http://&lt;app-name>.azurewebsites.net
+{
+ "URL": "http://&lt;app-name>.azurewebsites.net",
+ "appserviceplan": "&lt;app-service-plan-name>",
+ "location": "centralus",
+ "name": "&lt;app-name>",
+ "os": "&lt;os-type>",
+ "resourcegroup": "&lt;group-name>",
+ "runtime_version": "go|1.18",
+ "runtime_version_detected": "0.0",
+ "sku": "FREE",
+ "src_path": "&lt;your-folder-location>"
+}
+</pre>
++
+## 3 - Browse to the app
+
+Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. If you see a default app page, wait a minute and refresh the browser.
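You can also check the endpoint from a terminal; for example (replace `<app-name>` with the name from the previous step):

```bash
curl https://<app-name>.azurewebsites.net/world
# Expected response: Hello, world!
```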
+
+The Go sample code is running a Linux container in App Service using a built-in image.
+
+**Congratulations!** You've deployed your Go app to App Service.
+
+## 4 - Clean up resources
+
+When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, and all related resources:
+
+```azurecli-interactive
+az group delete --resource-group <resource-group-name>
+```
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure an App Service app](./configure-common.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Deploy from Azure Container Registry](./tutorial-custom-container.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Map a custom domain name](./app-service-web-tutorial-custom-domain.md)
app-service Quickstart Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html.md
cd html-docs-hello-world
az webapp up --location westeurope --name <app_name> --html ```
+> [!NOTE]
+> If you want to host your static content on a Linux-based App Service instance, configure PHP as your runtime using the `--runtime` and `--os-type` flags:
+>
+> `az webapp up --location westeurope --name <app_name> --runtime "PHP:8.1" --os-type linux`
+>
+> The PHP container includes a web server that is suitable to host static HTML content.
++ The `az webapp up` command does the following actions:
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django or Flask web app with PostgreSQL' description: Create a Python Django or Flask web app with a PostgreSQL database and deploy it to Azure. The tutorial uses either the Django or Flask framework and the app is hosted on Azure App Service on Linux.-- ms.devlang: python Previously updated : 03/09/2022 Last updated : 10/07/2022 # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
-In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. The Python app is hosted in a fully managed **[Azure App Service](./overview.md#app-service-on-linux)** which supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment. You can start with a basic pricing tier that can be scaled up at any later time.
+In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment.
**To complete this tutorial, you'll need:**

* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
* Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/)
-* [Python 3.7 or higher](https://www.python.org/downloads/) installed locally.
-* [PostgreSQL](https://www.postgresql.org/download/) installed locally.
-## 1 - Sample application
+## Sample application
-Sample Python applications using the Flask and Django framework are provided to help you follow along with this tutorial. Download or clone one of the sample applications to your local workstation.
+Sample Python applications using the Flask and Django frameworks are provided to help you follow along with this tutorial. To deploy them without running them locally, skip this part.
+
+To run the application locally, make sure you have [Python 3.7 or higher](https://www.python.org/downloads/) and [PostgreSQL](https://www.postgresql.org/download/) installed locally. Then, download or clone the app:
### [Flask](#tab/flask)
git clone https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app
git clone https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git ``` --
-To run the application locally, navigate into the application folder:
+--
-### [Flask](#tab/flask)
-
-```bash
-cd msdocs-flask-postgresql-sample-app
-```
-
-### [Django](#tab/django)
-
-```bash
-cd msdocs-django-postgresql-sample-app
-```
---
-Create a virtual environment for the app:
--
-Install the dependencies:
-
-```Console
-pip install -r requirements.txt
-```
-
-> [!NOTE]
-> If you are following along with this tutorial with your own app, look at the *requirements.txt* file description in each project's *README.md* file ([Flask](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md), [Django](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/README.md)) to see what packages you'll need.
-
-This sample application requires an *.env* file describing how to connect to your local PostgreSQL instance. Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. This tutorial assumes the database name is *restaurant*. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
+Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
```
DBNAME=<database name>
DBHOST=<database host>
DBUSER=<db-user-name>
DBPASS=<db-password>
```
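
If you don't already have a local database to point `DBNAME` at, you can create an empty one with the standard PostgreSQL client tools. This is only a sketch, assuming `createdb` is on your PATH and your local server listens on the default port; substitute your own user and database names:

```bash
# Create an empty database on the local PostgreSQL server
createdb -h localhost -U <db-user-name> <database name>
```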
-For Django, you can use SQLite locally instead of PostgreSQL by following the instructions in the comments of the [*settings.py*](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/azureproject/settings.py) file.
-
-Create the `restaurant` and `review` database tables:
-
-### [Flask](#tab/flask)
-
-```Console
-flask db init
-flask db migrate -m "initial migration"
-```
-
-### [Django](#tab/django)
-
-```Console
-python manage.py migrate
-```
---
-Run the app:
+Run the sample application with the following commands:
### [Flask](#tab/flask)
-```Console
+```bash
+# Clone the sample
+git clone https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app
+cd msdocs-flask-postgresql-sample-app
+# Create and activate a virtual environment
+python3 -m venv .venv # In CMD on Windows, run "py -m venv .venv" instead
+source .venv/bin/activate # In CMD on Windows, run ".venv\Scripts\activate" instead
+# Install dependencies
+pip install -r requirements.txt
+# Run database migration
+flask db upgrade
+# Run the app at http://127.0.0.1:5000
flask run ``` ### [Django](#tab/django)
-```Console
+```bash
+# Clone the sample
+git clone https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git
+cd msdocs-django-postgresql-sample-app
+# Create and activate a virtual environment
+python3 -m venv .venv # In CMD on Windows, run "py -m venv .venv" instead
+source .venv/bin/activate # In CMD on Windows, run ".venv\Scripts\activate" instead
+# Install dependencies
+pip install -r requirements.txt
+# Run database migration
+python manage.py migrate
+# Run the app at http://127.0.0.1:8000
python manage.py runserver ``` --
-### [Flask](#tab/flask)
-
-In a web browser, go to the sample application at `http://127.0.0.1:5000` and add some restaurants and restaurant reviews to see how the app works.
---
-### [Django](#tab/django)
-
-In a web browser, go to the sample application at `http://127.0.0.1:8000` and add some restaurants and restaurant reviews to see how the app works.
----
-> [!TIP]
-> With Django, you can create users with the `python manage.py createsuperuser` command like you would with a typical Django app. For more information, see the documentation for [django django-admin and manage.py](https://docs.djangoproject.com/en/1.8/ref/django-admin/). Use the superuser account to access the `/admin` portion of the web site. For Flask, use an extension such as [Flask-admin](https://github.com/flask-admin/flask-admin) to provide the same functionality.
-
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-
-## 2 - Create a web app in Azure
-
-To host your application in Azure, you need to create Azure App Service web app.
-### [Azure portal](#tab/azure-portal)
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resource.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find App Services in Azure portal." ::: |
-| [!INCLUDE [A screenshot showing the location of the Create button on the App Services page in the Azure portal](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-2.png" alt-text="A screenshot showing the location of the Create button on the App Services page in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing how to fill out the form to create a new App Service in the Azure portal](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-3.png" alt-text="A screenshot showing how to fill out the form to create a new App Service in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing how to select the basic App Service plan in the Azure portal](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-4-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-4.png" alt-text="A screenshot showing how to select the basic App Service plan in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing the location of the Review plus Create button in the Azure portal](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-5.png" alt-text="A screenshot showing the location of the Review plus Create button in the Azure portal." ::: |
-
-### [VS Code](#tab/vscode-aztools)
-
-To create Azure resources in VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
-
-> [!div class="nextstepaction"]
-> [Download Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-1.png" alt-text="A screenshot showing how to find the VS Code Azure extension in VS Code." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-2.png" alt-text="A screenshot showing how to create a new web app in VS Code." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-3.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to name a new web app." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4a-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4a.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to create a new resource group." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4b-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4b.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to name a new resource group." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-5.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to set the runtime stack of a web app in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-6.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-6.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to set location for new web app resource in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7a-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7a.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to create a new App Service plan in Azure." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7b-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7b.png" alt-text="A screenshot showing how to use the search box in the top tool bar in VS Code to name a new App Service plan in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-8.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-8-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-8.png" alt-text="A screenshot showing how to use the search box in the top tool bar in VS Code to select a pricing tier for a web app in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-9.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-9-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-9.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to skip configuring Application Insights for a web app in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10a-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10a.png" alt-text="A screenshot showing deployment with Visual Studio Code and View Output Button." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10b-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10b.png" alt-text="A screenshot showing deployment with Visual Studio Code and how to view App Service in Azure portal." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10c-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10c.png" alt-text="A screenshot showing the default App Service web page when no app has been deployed." ::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-----
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-
-## 3 - Create the PostgreSQL database in Azure
-
-You can create a PostgreSQL database in Azure using the [Azure portal](https://portal.azure.com/), Visual Studio Code, or the Azure CLI.
-
-### [Azure portal](#tab/azure-portal)
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure Database for PostgreSQL resource.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find Postgres Services in Azure](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Postgres Services in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing the location of the Create button on the Azure Database for PostgreSQL servers page in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-2.png" alt-text="A screenshot showing the location of the Create button on the Azure Database for PostgreSQL servers page in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing the location of the Create button on the Azure Database for PostgreSQL Flexible server deployment option page in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3.png" alt-text="A screenshot showing the location of the Create Flexible Server button on the Azure Database for PostgreSQL deployment option page in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing how to fill out the form to create a new Azure Database for PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4.png" alt-text="A screenshot showing how to fill out the form to create a new Azure Database for PostgreSQL in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing how to select and configure the compute and storage for PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5.png" alt-text="A screenshot showing how to select and configure the basic database service plan in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing creating administrator account information for the PostgreSQL Flexible server in in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6.png" alt-text="Creating administrator account information for the PostgreSQL Flexible server in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing adding current IP as a firewall rule for the PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7.png" alt-text="A screenshot showing adding current IP as a firewall rule for the PostgreSQL Flexible server in the Azure portal." ::: |
--
-### [VS Code](#tab/vscode-aztools)
-
-Follow these steps to create your Azure Database for PostgreSQL resource using the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) in Visual Studio Code.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Open Azure Extension - Database in VS Code](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-1.png" alt-text="A screenshot showing how to open Azure Extension for Database in VS Code." ::: |
-| [!INCLUDE [Create database server in VS Code](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-2-240px.png" alt-text="A screenshot showing how create a database server in VSCode." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-2.png"::: |
-| [!INCLUDE [Azure portal - create new resource](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-3-240px.png" alt-text="A screenshot how to create a new resource in VS Code." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-3.png"::: |
-| [!INCLUDE [Azure portal - create new resource](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4a-240px.png" alt-text="A screenshot showing how to create a new resource in the VS Code - server name." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4a.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4b-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - SKU." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4b.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4c-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - admin account name." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4c.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4d-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - admin account password." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4d.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4e-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - resource group." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4e.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4f-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - location." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4f.png":::|
-| [!INCLUDE [Configure access for the database in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5a-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5a.png" alt-text="A screenshot showing how to configure access for a database by configuring a firewall rule in VS Code." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5b-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5b.png" alt-text="A screenshot showing how to select the correct PostgreSQL server to add a firewall rule in VS Code." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5c-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5c.png" alt-text="A screenshot showing a dialog box asking to add firewall rule for local IP address in VS Code." :::|
-| [!INCLUDE [Create a new Azure resource in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-6.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-6.png" alt-text="A screenshot showing how to create a PostgreSQL database server in VS Code." ::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Run `az login` to sign in to and follow these steps to create your Azure Database for PostgreSQL resource.
----
+--
+
+## 1. Create App Service and PostgreSQL
+
+In this step, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for PostgreSQL. For the creation process, you'll specify:
+
+* The **Name** for the web app. It's the name used as part of the DNS name for your web app in the form of `https://<app-name>.azurewebsites.net`.
+* The **Region** where you want the app to run.
+* The **Runtime stack** for the app. It's where you select the version of Python to use for your app.
+* The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app.
+* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
+
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+
+ :::column span="2":::
+ **Step 1.** In the Azure portal:
+ 1. Enter "web app database" in the search bar at the top of the Azure portal.
+ 1. Select the item labeled **Web App + Database** under the **Marketplace** heading.
+ You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the **Create Web App + Database** page, fill out the form as follows.
+ 1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-python-postgres-tutorial**.
+ 1. *Region* &rarr; Any Azure region near you.
+ 1. *Name* &rarr; **msdocs-python-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
+ 1. *Runtime stack* &rarr; **Python 3.9**.
+ 1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
+ 1. **PostgreSQL - Flexible Server** is selected by default as the database engine.
+ 1. Select **Review + create**.
+ 1. After validation completes, select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ - **Resource group** &rarr; The container for all the created resources.
+ - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
+ - **App Service** &rarr; Represents your app and runs in the App Service plan.
+ - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
+ - **Azure Database for PostgreSQL flexible server** &rarr; Accessible only from within the virtual network. A database and a user are created for you on the server.
+ - **Private DNS zone** &rarr; Enables DNS resolution of the PostgreSQL server in the virtual network.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png":::
+ :::column-end:::
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 4 - Allow web app to access the database
-
-After the Azure Database for PostgreSQL server is created, configure access to the server from the web app by adding a firewall rule. This can be done through the Azure portal or the Azure CLI.
-
-If you're working in VS Code, right-click the database server and select **Open in Portal** to go to the Azure portal. Or, go to the [Azure Cloud Shell](https://shell.azure.com) and run the Azure CLI commands.
-### [Azure portal](#tab/azure-portal-access)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [A screenshot showing the location and adding a firewall rule in the Azure portal](<./includes/tutorial-python-postgresql-app/add-access-to-postgres-from-web-app-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/add-access-to-postgres-from-web-app-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/add-access-to-postgres-from-web-app-portal-1.png" alt-text="A screenshot showing how to add access from other Azure services to a PostgreSQL database in the Azure portal." ::: |
-
-### [Azure CLI](#tab/azure-cli-access)
----
+## 2. Verify connection settings
+
+The creation wizard generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings).
+
+ :::column span="2":::
+    **Step 1.** In the App Service page, in the left menu, select **Configuration**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png":::
+ :::column-end:::
+ :::column span="2":::
+    **Step 2.** In the **Application settings** tab of the **Configuration** page, verify that the settings `DBNAME`, `DBHOST`, `DBUSER`, and `DBPASS` are present. They'll be injected into the runtime environment as environment variables.
+    App settings are a good way to keep connection secrets out of your code repository. You can also list them from the Azure CLI, as shown in the sketch after these steps.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png":::
+ :::column-end:::
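+
+If you prefer a terminal, you can also list the same app settings with the Azure CLI. This is a sketch, assuming the app name you chose and the resource group created earlier in this tutorial:
+
+```azurecli-interactive
+az webapp config appsettings list --name <app-name> --resource-group msdocs-python-postgres-tutorial
+```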
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 5 - Connect the web app to the database
-
-With the web app and PostgreSQL database created, the next step is to connect the web app to the PostgreSQL database in Azure.
-
-The web app code uses database information in four environment variables named `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS` to connect to the PostgresSQL server.
-
-### [Azure portal](#tab/azure-portal)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Azure portal connect app to postgres step 1](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-1.png" alt-text="A screenshot showing how to navigate to App Settings in the Azure portal." ::: |
-| [!INCLUDE [Azure portal connect app to postgres step 2](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-2.png" alt-text="A screenshot showing how to configure the App Settings in the Azure portal." ::: |
-
-### [VS Code](#tab/vscode-aztools)
+## 3. Deploy sample code
-To configure environment variables for the web app from VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
+In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action.
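+
+If you'd rather wire up the same GitHub Actions deployment from a terminal, recent versions of the Azure CLI include a command for it. The following is only a sketch under that assumption; `<github-user>/<repository-name>` and `<app-name>` are placeholders for your fork and your app:
+
+```azurecli-interactive
+az webapp deployment github-actions add --repo "<github-user>/<repository-name>" --branch main --name <app-name> --resource-group msdocs-python-postgres-tutorial --login-with-github
+```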
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [VS Code connect app to postgres step 1](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension.png" alt-text="A screenshot showing how to locate the Azure Tools extension in VS Code." ::: |
-| [!INCLUDE [VS Code connect app to postgres step 2](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-create-setting-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-create-setting.png" alt-text="A screenshot showing how to add a setting to the App Service in VS Code." ::: |
-| [!INCLUDE [VS Code connect app to postgres step 3](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-a-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-a.png" alt-text="A screenshot showing adding setting name for app service to connect to PostgreSQL database in VS Code." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-b-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-b.png" alt-text="A screenshot showing adding setting value for app service to connect to PostgreSQL database in VS Code." ::: |
-
-### [Azure CLI](#tab/azure-cli)
-----
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-
-## 6 - Deploy your application code to Azure
-
-Azure App service supports multiple methods to deploy your application code to Azure including support for GitHub Actions and all major CI/CD tools. This article focuses on how to deploy your code from your local workstation to Azure.
-
-### [Deploy using VS Code](#tab/vscode-aztools-deploy)
-
-To deploy a web app from VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [VS Code deploy step 1](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension.png" alt-text="A screenshot showing how to locate the Azure Tools extension in VS Code." ::: |
-| [!INCLUDE [VS Code deploy step 2](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-1.png" alt-text="A screenshot showing how to deploy a web app in VS Code." ::: |
-| [!INCLUDE [VS Code deploy step 3](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-2.png" alt-text="A screenshot showing how to deploy a web app in VS Code: selecting the code to deploy." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-3.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box to confirm deployment." ::: |
-| [!INCLUDE [VS Code deploy step 4](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-4-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-4.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box to choose to always deploy to the app service." ::: |
-| [!INCLUDE [VS Code deploy step 5](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-5.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box with choice to browse to website." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-6.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box with choice to view deployment details." ::: |
-
-### [Deploy using Local Git](#tab/local-git-deploy)
-
+### [Flask](#tab/flask)
-### [Deploy using a ZIP file](#tab/zip-deploy)
+ :::column span="2":::
+ **Step 1.** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app).
+ 1. Select **Fork**.
+ 1. Select **Create fork**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer.
+ See the environment variables being used in the production environment, including the app settings that you saw in the configuration page.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-3.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5.** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+ 1. In **Repository**, select **msdocs-flask-postgresql-sample-app**.
+ 1. In **Branch**, select **main**.
+ 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-5.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6.** In the Deployment Center page:
+ 1. Select **Logs**. A deployment run is already started.
+ 1. In the log item for the deployment run, select **Build/Deploy Logs**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-6.png" alt-text="A screenshot showing how to open deployment logs in the deployment center (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-6.png":::
+ :::column-end:::
+ :::column span="2":::
+    **Step 7.** You're taken to your GitHub repository, where you can see that the GitHub action is running. The workflow file defines two separate stages: build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-7.png" alt-text="A screenshot showing a GitHub run in progress (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-7.png":::
+ :::column-end:::
+### [Django](#tab/django)
--
+ :::column span="2":::
+ **Step 1.** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app).
+ 1. Select **Fork**.
+ 1. Select **Create fork**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer.
+ See the environment variables being used in the production environment, including the app settings that you saw in the configuration page.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-3.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5.** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+ 1. In **Repository**, select **msdocs-django-postgresql-sample-app**.
+ 1. In **Branch**, select **main**.
+ 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-5.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6.** In the Deployment Center page:
+ 1. Select **Logs**. A deployment run is already started.
+ 1. In the log item for the deployment run, select **Build/Deploy Logs**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-6.png" alt-text="A screenshot showing how to open deployment logs in the deployment center (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-6.png":::
+ :::column-end:::
+ :::column span="2":::
+    **Step 7.** You're taken to your GitHub repository, where you can see that the GitHub action is running. The workflow file defines two separate stages: build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-7.png" alt-text="A screenshot showing a GitHub run in progress (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-7.png":::
+ :::column-end:::
+
+--
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 7 - Migrate app database
-
-With the code deployed and the database in place, the app is almost ready to use. First, you need to establish the necessary schema in the database itself. You do this by "migrating" the data models in the Django app to the database.
-
-**Step 1.** Create SSH session and connect to web app server.
-
-### [Azure portal](#tab/azure-portal)
-
-Navigate to page for the App Service instance in the Azure portal.
-
-1. Select **SSH**, under **Development Tools** on the left side
-2. Then **Go** to open an SSH console on the web app server. (It may take a minute to connect for the first time as the web app container needs to start.)
-
-### [VS Code](#tab/vscode-aztools)
-
-In VS Code, you can use the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), which must be installed and be signed into Azure from VS Code.
-
-In the **App Service** section of the Azure Tools extension:
-
-1. Locate your web app and right-click to bring up the context menu.
-2. Select **SSH into Web App** to open an SSH terminal window.
-
-### [Azure CLI](#tab/azure-cli)
----
-> [!NOTE]
-> If you cannot connect to the SSH session, then the app itself has failed to start. **Check the diagnostic logs** for details. For example, if you haven't created the necessary app settings in the previous section, the logs will indicate `KeyError: 'DBNAME'`.
-
-**Step 2.** In the SSH session, run the following command to migrate the models into the database schema (you can paste commands using **Ctrl**+**Shift**+**V**):
+## 4. Generate database schema
### [Flask](#tab/flask)
-When you deploy the Flask sample app to Azure App Service, the database tables are created automatically in Azure PostgreSQL. If the tables aren't created, try the following command:
-
-```bash
-# Create database tables
-flask db init
-```
+With the PostgreSQL database protected by the virtual network, the easiest way to run [Flask database migrations](https://flask-migrate.readthedocs.io/en/latest/) is in an SSH session with the App Service container.
+
+ :::column span="2":::
+    **Step 1.** Back in the App Service page, in the left menu, select **SSH** (under **Development Tools**), and then select **Go** to open an SSH session in the App Service container.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the SSH terminal, run `flask db upgrade`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
+ Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-2.png":::
+ :::column-end:::
### [Django](#tab/django)
-```bash
-# Create database tables
-python manage.py migrate
-```
---
-If you encounter any errors related to connecting to the database, check the values of the application settings of the App Service created in the previous section, namely `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`. Without those settings, the migrate command can't communicate with the database.
+With the PostgreSQL database protected by the virtual network, the easiest way to run [Django database migrations](https://docs.djangoproject.com/en/4.1/topics/migrations/) is in an SSH session with the App Service container.
+
+ :::column span="2":::
+    **Step 1.** Back in the App Service page, in the left menu, select **SSH** (under **Development Tools**), and then select **Go** to open an SSH session in the App Service container.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the SSH terminal, run `python manage.py migrate`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
+ Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-2.png":::
+ :::column-end:::
> [!TIP] > In an SSH session, for Django you can also create users with the `python manage.py createsuperuser` command like you would with a typical Django app. For more information, see the documentation for [django django-admin and manage.py](https://docs.djangoproject.com/en/1.8/ref/django-admin/). Use the superuser account to access the `/admin` portion of the web site. For Flask, use an extension such as [Flask-admin](https://github.com/flask-admin/flask-admin) to provide the same functionality.
+--
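+
+As an alternative to the portal's SSH blade, you can open the same SSH session from a local terminal with the Azure CLI and then run the migration command for your framework there. A sketch, assuming the resource group created earlier in this tutorial:
+
+```azurecli-interactive
+az webapp ssh --name <app-name> --resource-group msdocs-python-postgres-tutorial
+```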
+ Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 8 - Browse to the app
+## 5. Browse to the app
+
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **Overview**.
+ 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png":::
+ :::column-end:::
+ :::column span="2":::
+    **Step 2.** Add a few restaurants and reviews to the list.
+    Congratulations, you're running a secure data-driven Python web app in Azure App Service, with connectivity to Azure Database for PostgreSQL.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Flask web app with PostgreSQL running in Azure showing restaurants and restaurant reviews." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png":::
+ :::column-end:::
-Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. It can take a minute or two for the app to start, so if you see a default app page, wait a minute and refresh the browser.
+Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-When you see your sample web app, it's running in a Linux container in App Service using a built-in image **Congratulations!** You've deployed your Python app to App Service.
+## 6. Stream diagnostic logs
+
+Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample app includes `print()` statements to demonstrate this capability as shown below.
### [Flask](#tab/flask)

### [Django](#tab/django)
-
+--
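To make the idea concrete, here's a minimal sketch (not the sample's exact code) of the kind of `print()` logging these statements perform; the route and messages are illustrative assumptions:

```python
# A minimal sketch, not the sample's exact code: a Flask route that writes a
# message with print(), which App Service captures in the log stream.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # Anything printed to stdout shows up under Log stream in the portal.
    print('Request for index page received')
    return 'Hello from App Service!'
```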
+
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **App Service logs**.
+ 1. Under **Application logging**, select **File System**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-2.png":::
+ :::column-end:::
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 9 - Stream diagnostic logs
+## 7. Clean up resources
+
+When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
+
+ :::column span="2":::
+ **Step 1.** In the search bar at the top of the Azure portal:
+ 1. Enter the resource group name.
+ 1. Select the resource group.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the resource group page, select **Delete resource group**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.**
+ 1. Enter the resource group name to confirm your deletion.
+ 1. Select **Delete**.
+ :::column-end:::
+ :::column:::
+    :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-3.png":::
+ :::column-end:::
-Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample app includes `print()` statements to demonstrate this capability as shown below.
-
-### [Flask](#tab/flask)
-
+Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-### [Django](#tab/django)
+## Frequently asked questions
+- [How much does this setup cost?](#how-much-does-this-setup-cost)
+- [How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-postgresql-server-thats-secured-behind-the-virtual-network-with-other-tools)
+- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions)
+- [How is the Django sample configured to run on Azure App Service?](#how-is-the-django-sample-configured-to-run-on-azure-app-service)
+- [I can't connect to the SSH session](#i-cant-connect-to-the-ssh-session)
+- [I get an error when running database migrations](#i-get-an-error-when-running-database-migrations)
-
+#### How much does this setup cost?
-You can access the console logs generated from inside the container that hosts the app on Azure.
+Pricing for the created resources is as follows:
-### [Azure portal](#tab/azure-portal)
+- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
+- The PostgreSQL flexible server is created in the lowest burstable tier **Standard_B1ms**, with the minimum storage size, which can be scaled up or down. See [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
+- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
+- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Stream logs from Azure portal 1](<./includes/tutorial-python-postgresql-app/stream-logs-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/stream-logs-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/stream-logs-azure-portal-1.png" alt-text="A screenshot showing how to set application logging in the Azure portal." ::: |
-| [!INCLUDE [Stream logs from Azure portal 2](<./includes/tutorial-python-postgresql-app/stream-logs-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/stream-logs-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/stream-logs-azure-portal-2.png" alt-text="A screenshot showing how to stream logs in the Azure portal." ::: |
+#### How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?
-### [VS Code](#tab/vscode-aztools)
+- For basic access from a command-line tool, you can run `psql` from the app's SSH terminal (a connectivity-check sketch follows this list).
+- To connect from a desktop tool, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
+- You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network.
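As an alternative to `psql` for the first option, a short Python check run in the same SSH session can confirm connectivity. This is a hedged sketch: it assumes the `psycopg2` driver is available in the app's environment and reuses the tutorial's `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS` app settings; the sample apps' own database code may differ.

```python
# Hedged connectivity check, assuming psycopg2 is installed in the app environment.
# The environment variable names mirror the app settings used in this tutorial.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ['DBHOST'],
    dbname=os.environ['DBNAME'],
    user=os.environ['DBUSER'],
    password=os.environ['DBPASS'],
    sslmode='require',  # Azure Database for PostgreSQL typically requires TLS
)
with conn.cursor() as cur:
    cur.execute('SELECT version();')
    print(cur.fetchone()[0])
conn.close()
```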
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Stream logs from VS Code 1](<./includes/tutorial-python-postgresql-app/stream-logs-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/stream-logs-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/stream-logs-visual-studio-code-1.png" alt-text="A screenshot showing how to set application logging in VS Code." ::: |
-| [!INCLUDE [Stream logs from VS Code 2](<./includes/tutorial-python-postgresql-app/stream-logs-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/stream-logs-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/stream-logs-visual-studio-code-2.png" alt-text="A screenshot showing VS Code output window." ::: |
+#### How does local app development work with GitHub Actions?
-### [Azure CLI](#tab/azure-cli)
+Taking the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push them to GitHub. For example:
+```terminal
+git add .
+git commit -m "<some-message>"
+git push origin main
+```
--
+#### How is the Django sample configured to run on Azure App Service?
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
+> [!NOTE]
+> If you are following along with this tutorial with your own app, look at the *requirements.txt* file description in each project's *README.md* file ([Flask](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md), [Django](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/README.md)) to see what packages you'll need.
-## Clean up resources
+The [Django sample application](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app) configures settings in the *azureproject/production.py* file so that it can run in Azure App Service. These changes are common to deploying Django to production, and not specific to App Service. A consolidated sketch of these settings follows the list below.
-You can leave the app and database running as long as you want for further development work and skip ahead to [Next steps](#next-steps).
+- Django validates the HTTP_HOST header in incoming requests. The sample code uses the [`WEBSITE_HOSTNAME` environment variable in App Service](reference-app-settings.md#app-environment) to add the app's domain name to Django's [ALLOWED_HOSTS](https://docs.djangoproject.com/en/4.1/ref/settings/#allowed-hosts) setting.
-However, when you're finished with the sample app, you can remove all of the resources for the app from Azure to ensure you don't incur other charges and keep your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="6" highlight="3":::
-### [Azure portal](#tab/azure-portal)
+- Django doesn't support [serving static files in production](https://docs.djangoproject.com/en/4.1/howto/static-files/deployment/). For this tutorial, you use [WhiteNoise](https://whitenoise.evans.io/) to enable serving the files. The WhiteNoise package was already installed with requirements.txt, and its middleware is added to the list.
-Follow these steps while signed-in to the Azure portal to delete a resource group.
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="11-14" highlight="14":::
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Remove resource group Azure portal 1](<./includes/tutorial-python-postgresql-app/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-1.png" alt-text="A screenshot showing how to find resource group in the Azure portal." ::: |
-| [!INCLUDE [Remove resource group Azure portal 2](<./includes/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2.png" alt-text="A screenshot showing how to delete a resource group in the Azure portal." ::: |
-| [!INCLUDE [Remove resource group Azure portal 3](<./includes/tutorial-python-postgresql-app/remove-resource-group-azure-portal-3.md>)] | |
+ Then the static file settings are configured according to the Django documentation.
-### [VS Code](#tab/vscode-aztools)
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="23-24":::
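Putting the fragments above together, the following is a hedged sketch of a Django production settings module built around the same ideas: trusting the App Service hostname and serving static files with WhiteNoise. It isn't the sample's actual *production.py*; the module layout, middleware ordering, and storage settings are assumptions.

```python
# Hedged sketch of a production settings module; not the sample's actual file.
import os

from .settings import *  # start from the project's base settings (assumed layout)

# Trust only the hostname App Service assigns to the app via WEBSITE_HOSTNAME.
ALLOWED_HOSTS = [os.environ['WEBSITE_HOSTNAME']] if 'WEBSITE_HOSTNAME' in os.environ else []

# WhiteNoise serves static files from the app process; it's conventionally placed
# right after SecurityMiddleware, but it's prepended here for brevity.
MIDDLEWARE = ['whitenoise.middleware.WhiteNoiseMiddleware'] + MIDDLEWARE

# Static file settings per the Django docs: collectstatic gathers files into STATIC_ROOT.
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
```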
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Remove resource group VS Code 1](<./includes/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-1.png" alt-text="A screenshot showing how to delete a resource group in VS Code." ::: |
-| [!INCLUDE [Remove resource group VS Code 2](<./includes/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-2.png" alt-text="A screenshot showing how to finish deleting a resource in VS Code." ::: |
+For more information, see [Production settings for Django apps](configure-language-python.md#production-settings-for-django-apps).
-### [Azure CLI](#tab/azure-cli)
+#### I can't connect to the SSH session
+If you can't connect to the SSH session, then the app itself has failed to start. Check the [diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you see an error like `KeyError: 'DBNAME'`, it may mean that the environment variable is missing (you may have removed the app setting).
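As a hedged illustration of why a missing setting surfaces this way: if the settings module reads required values straight from the environment, a removed app setting raises `KeyError` when the app starts, so the container never comes up and SSH can't connect. The sample's actual code may read the values differently.

```python
# Illustrative only: settings code that requires the DB* environment variables.
# If the DBNAME app setting was removed, os.environ['DBNAME'] raises KeyError
# at startup, which prevents the container (and therefore SSH) from starting.
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['DBNAME'],
        'HOST': os.environ['DBHOST'],
        'USER': os.environ['DBUSER'],
        'PASSWORD': os.environ['DBPASS'],
    }
}
```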
-
+#### I get an error when running database migrations
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
+If you encounter any errors related to connecting to the database, check if the app settings (`DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`) have been changed. Without those settings, the migrate command can't communicate with the database.
## Next steps
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
The Private link configuration defines the infrastructure used by Application Ga
- **Frontend IP Configuration**: The frontend IP address that private link should forward traffic to on Application Gateway.
- **Private IP address settings**: specify at least one IP address.
1. Select **Add**.
-1. Within your **Application Gateways** properties blade, obtain and make a note of the **Resource ID**, you will require this if setting up a Private Endpoint within a diffrerent Azure AD tenant
+1. Within your **Application Gateways** properties blade, obtain and make a note of the **Resource ID**, you will require this if setting up a Private Endpoint within a different Azure AD tenant
**Configure Private Endpoint**
A private endpoint is a network interface that uses a private IP address from th
> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respected frontend IP configuration. Frontend IP configurations without an associated listener will not be shown as a _Target sub-resource_. > [!Note]
-> If you are setting up the **Private Endpoint** from within another tenant, you will need to utilise the Azure Application Gateway Resource ID, along with sub-resource as either _appGwPublicFrontendIp_ or _appGwPrivateFrontendIp_, depending upon your Azure Application Gateway Private Link Frontend IP Configuration.
+> If you are provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID, along with the sub-resource that corresponds to your frontend configuration. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the resource ID would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
# [Azure PowerShell](#tab/powershell)
applied-ai-services Overview Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview-experiment.md
- Title: "Overview: What is Azure Form Recognizer?"-
-description: Azure Form Recognizer service that analyzes and extracts text, table and data, maps field relationships as key-value pairs, and returns a structured JSON output from your forms and documents.
----- Previously updated : 10/10/2022-
-recommendations: false
---
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-# Overview: What is Azure Form Recognizer?
-
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* Concepts articles:
-
-| Model type | Model name |
-||--|
-|**Document analysis models**| &#9679; [**Read model**](concept-read.md)</br> &#9679; [**General document model**](concept-general-document.md)</br> &#9679; [**Layout model**](concept-layout.md) </br> |
-| **Prebuilt models** | &#9679; [**W-2 form model**](concept-w2.md) </br>&#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
-| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)|
-
-## Which Form Recognizer model should I use?
-
-This section will help you decide which Form Recognizer v3.0 supported model you should use for your application:
-
-| Type of document | Data to extract |Document format | Your best solution |
-| --|-| -|-|
-|**A text-based document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|</li></ul>The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read model**](concept-read.md)|
-|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
-|**A structured or semi-structured document that includes content formatted as fields and values**, like a credit application or survey form.|You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| The form or document is a standardized format commonly used in your business or industry and printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).|[**General document model**](concept-general-document.md)
-|**U.S. W-2 form**|You want to extract key information such as salary, wages, and taxes withheld from US W2 tax forms.</li></ul> |The W-2 document is in United States English (en-US) text.|[**W-2 model**](concept-w2.md)
-|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md)
- |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
-|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
-|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.</li></ul>|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)|
-|**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)|
-
->[!Tip]
->
-> * If you're still unsure which model to use, try the General Document model.
-> * The General Document model is powered by the Read OCR model to detect lines, words, locations, and languages.
-> * General document extracts all the same fields as Layout model (pages, tables, styles) and also extracts key-value pairs.
-
-## Form Recognizer models and development options
--
-> [!NOTE]
-> The following models and development options are supported by the Form Recognizer service v3.0. You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
-
-| Model | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[ **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul> |
-|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|
-|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul> |
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
---
- >[!TIP]
- >
- > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
- > * The v3.0 Studio supports any model trained with v2.1 labeled data.
- > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-
-The following models are supported by Form Recognizer v2.1. Use the links in the table to learn more about each model and browse the API references.
-
-| Model| Description | Development options |
-|-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
--
-## How to use Form Recognizer documentation
-
-This documentation contains the following article types:
-
-* [**Concepts**](concept-layout.md) provide in-depth explanations of the service functionality and features.
-* [**Quickstarts**](quickstarts/try-sdk-rest-api.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to-guides/try-sdk-rest-api.md) contain instructions for using the service in more specific or customized ways.
-* [**Tutorials**](tutorial-ai-builder.md) are longer guides that show you how to use the service as a component in broader business solutions.
-
-## Data privacy and security
-
- As with all the cognitive services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
-
-## Next steps
--
-> [!div class="checklist"]
->
-> * Try our [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more.
-> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.
---
-> [!div class="checklist"]
->
-> * Try our [**Sample Labeling online tool**](https://aka.ms/fott-2.1-ga/)
-> * Follow our [**client library / REST API quickstart**](./quickstarts/try-sdk-rest-api.md) to get started extracting data from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) to learn more.
-> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.
-
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Title: What is Azure Form Recognizer?
+ Title: "Overview: What is Azure Form Recognizer?"
-description: The Azure Form Recognizer service allows you to identify and extract key/value pairs and table data from your form documents, as well as extract major information from sales receipts and business cards.
+description: Azure Form Recognizer service that analyzes and extracts text, table and data, maps field relationships as key-value pairs, and returns a structured JSON output from your forms and documents.
Previously updated : 10/06/2022 Last updated : 10/12/2022 recommendations: false
-adobe-target: true
-adobe-target-activity: DocsExpΓÇô463504ΓÇôA/BΓÇôDocs/FormRecognizerΓÇôDecisionTreeΓÇôFY23Q1
-adobe-target-experience: Experience B
-adobe-target-content: ./overview-experiment
-#Customer intent: As a developer of form-processing software, I want to learn what the Form Recognizer service does so I can determine if I should use it.
+ <!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 --> # What is Azure Form Recognizer?
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* Concepts articles:
++
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the Concepts articles:
| Model type | Model name | ||--|
Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-
## Which Form Recognizer model should I use?
-This section will help you decide which Form Recognizer v3.0 supported model you should use for your application:
+This section will help you decide which **Form Recognizer v3.0** supported model you should use for your application:
| Type of document | Data to extract |Document format | Your best solution | | --|-| -|-|
This section will help you decide which Form Recognizer v3.0 supported model you
## Form Recognizer models and development options - > [!NOTE]
-> The following models and development options are supported by the Form Recognizer service v3.0. You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
+> The following models and development options are supported by the Form Recognizer service v3.0.
+
+You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
| Model | Description |Automation use cases | Development options | |-|--|-|--|
-|[ **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul> |
-|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|
-|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul> |
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
+|[**Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
+|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
+|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
+|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> |
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
::: moniker-end ::: moniker range="form-recog-2.1.0"++
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the Concepts articles:
+
+| Model type | Model name |
+||--|
+|**Document analysis model**| &#9679; [**Layout model**](concept-layout.md) </br> |
+| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br> |
+| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)|
+
+## Which Form Recognizer model should I use?
+
+This section helps you decide which Form Recognizer v2.1 supported model to use for your application:
+
+| Type of document | Data to extract |Document format | Your best solution |
+| --|-| -|-|
+|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
+|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices. |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md)
 |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt. |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
+|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
+|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)|
+|**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)|
+
+## Form Recognizer models and development options
>[!TIP] >
- > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+ > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-The following models are supported by Form Recognizer v2.1. Use the links in the table to learn more about each model and browse the API references.
+> [!NOTE]
+> The following models and development options are supported by the Form Recognizer service v2.1.
+
+Use the links in the table to learn more about each model and browse the API references:
| Model| Description | Development options | |-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
-## How to use Form Recognizer documentation
-
-This documentation contains the following article types:
-
-* [**Concepts**](concept-layout.md) provide in-depth explanations of the service functionality and features.
-* [**Quickstarts**](quickstarts/try-sdk-rest-api.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to-guides/try-sdk-rest-api.md) contain instructions for using the service in more specific or customized ways.
-* [**Tutorials**](tutorial-ai-builder.md) are longer guides that show you how to use the service as a component in broader business solutions.
- ## Data privacy and security As with all the cognitive services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
Previously updated : 04/06/2022 Last updated : 06/13/2022 # Azure Arc-enabled SQL Managed Instance - disaster recovery
-To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up failover groups.
+To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up Azure failover groups.
## Background
-The distributed availability groups used in Azure Arc-enabled SQL Managed Instance is the same technology that is in SQL Server. Because Azure Arc-enabled SQL Managed Instance runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups).
+Azure failover groups use the same distributed availability groups technology that is in SQL Server. Because Azure Arc-enabled SQL Managed Instance runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups).
> [!NOTE] > - The Azure Arc-enabled SQL Managed Instance in both geo-primary and geo-secondary sites needs to be identical in terms of compute and capacity, as well as the service tier it's deployed in. > - Distributed availability groups can be set up for either the General Purpose or Business Critical service tier.
-To configure disaster recovery:
+To configure an Azure failover group:
1. Create custom resource for distributed availability group at the primary site 1. Create custom resource for distributed availability group at the secondary site
-1. Copy the mirroring certificates
+1. Copy the binary data from the mirroring certificates
1. Set up the distributed availability group between the primary and secondary sites The following image shows a properly configured distributed availability group: ![A properly configured distributed availability group](.\media\business-continuity\dag.png)
-### Configure distributed availability groups
+### Configure Azure failover group
1. Provision the managed instance in the primary site.
The following image shows a properly configured distributed availability group:
az sql mi-arc create --name <primaryinstance> --tier bc --replicas 3 --k8s-namespace <namespace> --use-k8s ```
-2. Provision the managed instance in the secondary site and configure as a disaster recovery instance. At this point, the system databases are not part of the contained availability group.
+2. Switch context to the secondary cluster by running ```kubectl config use-context <secondarycluster>``` and provision the managed instance in the secondary site that will be the disaster recovery instance. At this point, the system databases are not part of the contained availability group.
> [!NOTE] > - It is important to specify `--license-type DisasterRecovery` **during** the Azure Arc SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
The following image shows a properly configured distributed availability group:
az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s ```
-3. Copy the mirroring certificates from each site to a location that's accessible to both the geo-primary and geo-secondary instances.
+3. Mirroring certificates - The binary data inside the Mirroring Certificate property of the Arc SQL MI is needed for the Instance Failover Group CR (Custom Resource) creation.
- ```azurecli
- az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file $HOME/sqlcerts/<name>.pemΓÇï --k8s-namespace <namespace> --use-k8s
- az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file $HOME/sqlcerts/<name>.pem --k8s-namespace <namespace> --use-k8s
- ```
+ This can be achieved in a few ways:
- Example:
+ (a) If using the `az` CLI, generate the mirroring certificate file first, and then point to that file while configuring the Instance Failover Group so the binary data is read from the file and copied over into the CR. The cert files are not needed after the failover group (FOG) is created.
- ```azurecli
- az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pemΓÇï --k8s-namespace my-namespace --use-k8s
- az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
- ```
+ (b) If using ```kubectl```, directly copy and paste the binary data from the Arc SQL MI CR into the yaml file that will be used to create the Instance Failover Group.
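+
+ For option (b), here's a minimal sketch of locating that binary data with `kubectl`. The `sqlmi` short name and reading the value straight from the instance custom resource are assumptions based on common Arc data services usage; the exact property path can vary by release, so verify it in your cluster.
+
+ ```console
+ # Assumption: 'sqlmi' resolves to the Arc SQL Managed Instance custom resource in your cluster.
+ # Inspect the instance CR, locate the mirroring certificate value, and paste that binary data
+ # into the failover group CR yaml before applying it.
+ kubectl get sqlmi <primaryinstance> -n <namespace> -o yaml
+ ```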
++
+ Using (a) above:
+
+ Create the mirroring certificate file for primary instance:
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+ Connect to the secondary cluster and create the mirroring certificate file for secondary instance:
+
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+ Once the mirroring certificate files are created, copy the certificate from the secondary instance to a shared/local path on the primary instance cluster and vice-versa.
4. Create the failover group resource on both sites.
The following image shows a properly configured distributed availability group:
```azurecli az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary DAG resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+ ```azurecli
+ az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
+ ```
+ On the secondary instance, run the following command to set up the FOG CR. The `--partner-mirroring-cert-file` in this case should point to a path that has the mirroring certificate file generated from the primary instance as described in 3(a) above.
+ ```azurecli
az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary DAG resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s ``` Example:- ```azurecli
- az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
- az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s ```
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a [CSI volume](https://kubernetes-csi.github.io/docs/). For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets.
-Benefits of the Azure Key Vault Secrets Provider extension include the folllowing:
+Benefits of the Azure Key Vault Secrets Provider extension include the following:
- Mounts secrets/keys/certs to pod using a CSI Inline volume - Supports pod portability with the SecretProviderClass CRD
Benefits of the Azure Key Vault Secrets Provider extension include the folllowin
- A cluster with a supported Kubernetes distribution that has already been [connected to Azure Arc](quickstart-connect-cluster.md). The following Kubernetes distributions are currently supported for this scenario: - Cluster API Azure
+ - Azure Kubernetes Service (AKS) clusters on Azure Stack HCI
- AKS hybrid clusters provisioned from Azure - Google Kubernetes Engine - OpenShift Kubernetes Distribution
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure
- Only one instance of Open Service Mesh can be deployed on an Azure Arc-connected Kubernetes cluster. - Support is available for the two most recently released minor versions of Arc-enabled Open Service Mesh. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases. - The following Kubernetes distributions are currently supported:
- - AKS Engine
+ - AKS (Azure Kubernetes Service) Engine
+ - AKS clusters on Azure Stack HCI
- AKS hybrid clusters provisioned from Azure - Cluster API Azure - Google Kubernetes Engine
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
CD pipeline manipulates PRs in the GitOps repository. It needs a Service Connect
--set orchestratorPAT=<Azure Repos PAT token> ``` > [!NOTE]
-> `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Read` permissions.
+> `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Full` permissions.
3. Configure Flux to send notifications to GitOps connector: ```console
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 08/24/2022 Last updated : 09/26/2022
$HOME\.KVA\.ssh\logkey
To run the `az arcappliance logs` command, the path to the kubeconfig must be provided. The kubeconfig is generated after successful completion of the `az arcappliance deploy` command and is placed in the same directory as the CLI command in ./kubeconfig or as specified in `--outfile` (if the parameter was passed).
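
For example, a sketch of collecting logs with the generated kubeconfig (the `vmware` provider and the `--kubeconfig` parameter name are assumptions; substitute the provider and paths from your own deployment):

```azurecli
# Assumption: resource bridge deployed on VMware, with the kubeconfig generated in the working directory.
az arcappliance logs vmware --kubeconfig ./kubeconfig --out-dir ./logs
```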
-If `az arcappliance deploy` was not completed, then the kubeconfig file may exist but may be empty or missing data, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs instead. The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane Endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 10.97.176.27", the command to use for logs collection would be:
+If `az arcappliance deploy` was not completed, then the kubeconfig file may exist but may be empty or missing data, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs instead. The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane Endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 10.97.176.27", the command to use for logs collection would be:
```azurecli az arcappliance logs hci --out-dir c:\logs --ip 10.97.176.27
When the appliance is deployed to a host resource pool, there is no high availab
### Restricted outbound connectivity
-If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs listed below are not blocked.
+Make sure the URLs listed below are added to your allowlist.
-URLS:
+#### Proxy URLs used by appliance agents and services
-| Agent resource | Description |
-|||
-|`https://mcr.microsoft.com`|Microsoft container registry|
-|`https://*.his.arc.azure.com`|Azure Arc Identity service|
-|`https://*.dp.kubernetesconfiguration.azure.com`|Azure Arc configuration service|
-|`https://*.servicebus.windows.net`|Cluster connect|
-|`https://guestnotificationservice.azure.com` |Guest notification service|
-|`https://*.dp.prod.appliances.azure.com`|Resource bridge data plane service|
-|`https://ecpacr.azurecr.io` |Resource bridge container image download |
-|`.blob.core.windows.net`<br> `*.dl.delivery.mp.microsoft.com`<br> `*.do.dsp.mp.microsoft.com` |Resource bridge image download |
-|`https://azurearcfork8sdev.azurecr.io` |Azure Arc for Kubernetes container image download |
-|`adhs.events.data.microsoft.com ` |Required diagnostic data sent to Microsoft from control plane nodes|
-|`v20.events.data.microsoft.com` |Required diagnostic data sent to Microsoft from the Azure Stack HCI or Windows Server host|
+|**Service**|**Port**|**URL**|**Direction**|**Notes**|
+|--|--|--|--|--|
+|Microsoft container registry | 443 | `https://mcr.microsoft.com`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images for installation. |
+|Azure Arc Identity service | 443 | `https://*.his.arc.azure.com` | Appliance VM IP and Control Plane IP need outbound connection. | Manages identity and access control for Azure resources |
+|Azure Arc configuration service | 443 | `https://*.dp.kubernetesconfiguration.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used for Kubernetes cluster configuration.|
+|Cluster connect service | 443 | `https://*.servicebus.windows.net` | Appliance VM IP and Control Plane IP need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
+|Guest Notification service| 443 | `https://guestnotificationservice.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used to connect on-premises resources to Azure.|
+|SFS API endpoint | 443 | `msk8s.api.cdp.microsoft.com` | Host machine, Appliance VM IP and Control Plane IP need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
+|Resource bridge (appliance) Dataplane service| 443 | `https://*.dp.prod.appliances.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Communicate with resource provider in Azure.|
+|Resource bridge (appliance) container image download| 443 | `*.blob.core.windows.net, https://ecpacr.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. |
+|Resource bridge (appliance) image download| 80 | `*.dl.delivery.mp.microsoft.com`| Host machine, Appliance VM IP and Control Plane IP need outbound connection. | Download the Arc Resource Bridge OS images. |
+|Azure Arc for Kubernetes container image download| 443 | `https://azurearcfork8sdev.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. |
+|ADHS telemetry service | 443 | `adhs.events.data.microsoft.com` | Appliance VM IP and Control Plane IP need outbound connection. | Runs inside the appliance/mariner OS. Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any Kubernetes control plane. |
+|Microsoft events data service | 443 | `v20.events.data.microsoft.com` | Appliance VM IP and Control Plane IP need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
-URLs used by other Arc agents:
+#### Used by other Arc agents
-|Agent resource | Description |
-|||
-|`https://management.azure.com` |Azure Resource Manager|
-|`https://login.microsoftonline.com` |Azure Active Directory|
+|**Service**|**URL**|
+|--|--|
+|Azure Resource Manager| `https://management.azure.com`|
+|Azure Active Directory| `https://login.microsoftonline.com`|
### Azure Arc resource bridge is unreachable
When deploying the resource bridge on VMware Vcenter, you may get an error sayin
If you don't see your problem here or you can't resolve your issue, try one of the following channels for support:
-* Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
+- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
-* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
+- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
Title: Troubleshoot Azure Arc-enabled servers agent connection issues description: This article tells how to troubleshoot and resolve issues with the Connected Machine agent that arise with Azure Arc-enabled servers when trying to connect to the service. Previously updated : 07/16/2021 Last updated : 10/13/2022
Use the following table to identify and resolve issues when configuring the Azur
| Error code | Probable cause | Suggested remediation | ||-|--| | AZCM0000 | The action was successful | N/A |
-| AZCM0001 | An unknown error occurred | Contact Microsoft Support for further assistance |
-| AZCM0011 | The user canceled the action (CTRL+C) | Retry the previous command |
-| AZCM0012 | The access token provided is invalid | Obtain a new access token and try again |
-| AZCM0013 | The tags provided are invalid | Check that the tags are enclosed in double quotes, separated by commas, and that any names or values with spaces are enclosed in single quotes: `--tags "SingleName='Value with spaces',Location=Redmond"`
-| AZCM0014 | The cloud is invalid | Specify a supported cloud: `AzureCloud` or `AzureUSGovernment` |
-| AZCM0015 | The correlation ID specified isn't a valid GUID | Provide a valid GUID for `--correlation-id` |
-| AZCM0016 | Missing a mandatory parameter | Review the output to identify which parameters are missing |
-| AZCM0017 | The resource name is invalid | Specify a name that only uses alphanumeric characters, hyphens and/or underscores. The name cannot end with a hyphen or underscore. |
-| AZCM0018 | The command was executed without administrative privileges | Retry the command with administrator or root privileges in an elevated command prompt or console session. |
-| AZCM0041 | The credentials supplied are invalid | For device logins, verify the user account specified has access to the tenant and subscription where the server resource will be created. For service principal logins, check the client ID and secret for correctness, the expiration date of the secret, and that the service principal is from the same tenant where the server resource will be created. |
-| AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has access to create Azure Arc-enabled server resources in the specified resource group. |
-| AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has access to delete Azure Arc-enabled server resources in the specified resource group. If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. |
+| AZCM0001 | An unknown error occurred | Contact Microsoft Support for assistance. |
+| AZCM0011 | The user canceled the action (CTRL+C) | Retry the previous command. |
+| AZCM0012 | The access token is invalid | If authenticating via access token, obtain a new token and try again. If authenticating via service principal or device logins, contact Microsoft Support for assistance. |
+| AZCM0016 | Missing a mandatory parameter | Review the error message in the output to identify which parameters are missing. For the complete syntax of the command, run `azcmagent <command> --help`. |
+| AZCM0018 | The command was executed without administrative privileges | Retry the command in an elevated user context (administrator/root). |
+| AZCM0019 | The path to the configuration file is incorrect | Ensure the path to the configuration file is correct and try again. |
+| AZCM0023 | The value provided for a parameter (argument) is invalid | Review the error message for more specific information. Refer to the syntax of the command (`azcmagent <command> --help`) for valid values or expected format for the arguments. |
+| AZCM0026 | There is an error in network configuration or some critical services are temporarily unavailable | Check if the required endpoints are reachable (for example, hostnames are resolvable, endpoints are not blocked). If the network is configured for Private Link Scope, a Private Link Scope resource ID must be provided for onboarding using the `--private-link-scope` parameter. |
+| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created<sup>[1](#footnote3)</sup>.<br> For service principal logins, check the client ID and secret for correctness, the expiration date of the secret<sup>[2](#footnote4)</sup>, and that the service principal is from the same tenant where the server resource will be created<sup>[1](#footnote3)</sup>.<br> <a name="footnote3"></a><sup>1</sup>See [How to find your Azure Active Directory tenant ID](/azure/active-directory/fundamentals/active-directory-how-to-find-tenant).<br> <a name="footnote4"></a><sup>2</sup>In the Azure portal, open Azure Active Directory and select the App registration blade. Select the application to be used and the Certificates and secrets within it. Check whether the expiration date has passed. If it has, create new credentials with sufficient roles and try again. See [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). |
+| AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Review the error message in the output to identify the cause of the failure to create resource and the suggested remediation. For permission issues, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions) for more information. |
+| AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has permissions to delete Azure Arc-enabled server resources in the specified resource group; see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions).<br> If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. |
| AZCM0044 | A resource with the same name already exists | Specify a different name for the `--resource-name` parameter or delete the existing Azure Arc-enabled server in Azure and try again. |
-| AZCM0061 | Unable to reach the agent service | Verify you're running the command in an elevated user context (administrator/root) and that the HIMDS service is running on your server. |
-| AZCM0062 | An error occurred while connecting the server | Review other error codes in the output for more specific information. If the error occurred after the Azure resource was created, you need to delete the Arc server from your resource group before retrying. |
-| AZCM0063 | An error occurred while disconnecting the server | Review other error codes in the output for more specific information. If you continue to encounter this error, you can delete the resource in Azure, and then run `azcmagent disconnect --force-local-only` on the server to disconnect the agent. |
-| AZCM0064 | The agent service is not responding | Check the status of the `himds` service to ensure it is running. Start the service if it is not running. If it is running, wait a minute then try again. |
-| AZCM0065 | An internal agent communication error occurred | Contact Microsoft Support for assistance |
-| AZCM0066 | The agent web service is not responding or unavailable | Contact Microsoft Support for assistance |
-| AZCM0067 | The agent is already connected to Azure | Run `azcmagent disconnect` to remove the current connection, then try again. |
-| AZCM0068 | An internal error occurred while disconnecting the server from Azure | Contact Microsoft Support for assistance |
-| AZCM0070 | Unable to obtain local config | The Hybrid Instance Metadata service (HIMDS) might not be running. Check the status of your HIMDS service (for Windows) or the HIMDS daemon (for Linux). |
+| AZCM0062 | An error occurred while connecting the server | Review the error message in the output for more specific information. If the error occurred after the Azure resource was created, delete this resource before retrying. |
+| AZCM0063 | An error occurred while disconnecting the server | Review the error message in the output for more specific information. If this error persists, delete the resource in Azure, and then run `azcmagent disconnect --force-local-only` on the server. |
+| AZCM0067 | The machine is already connected to Azure | Run `azcmagent disconnect` to remove the current connection, then try again. |
+| AZCM0068 | Subscription name was provided, and an error occurred while looking up the corresponding subscription GUID. | Retry the command with the subscription GUID instead of subscription name. |
+| AZCM0061<br>AZCM0064<br>AZCM0065<br>AZCM0066<br>AZCM0070<br> | The agent service is not responding or unavailable | Verify the command is run in an elevated user context (administrator/root). Ensure that the HIMDS service is running (start or restart HIMDS as needed; see the restart example after this table), then try the command again. |
| AZCM0081 | An error occurred while downloading the Azure Active Directory managed identity certificate | If this message is encountered while attempting to connect the server to Azure, the agent won't be able to communicate with the Azure Arc service. Delete the resource in Azure and try connecting again. |
-| AZCM0101 | The command was not parsed successfully | Run `azcmagent <command> --help` to review the correct command syntax |
-| AZCM0102 | Unable to retrieve the computer hostname | Run `hostname` to check for any system-level error messages, then contact Microsoft Support. |
-| AZCM0103 | An error occurred while generating RSA keys | Contact Microsoft Support for assistance |
-| AZCM0104 | Failed to read system information | Verify the identity used to run `azcmagent` has administrator/root privileges on the system and try again. |
+| AZCM0101 | The command was not parsed successfully | Run `azcmagent <command> --help` to review the command syntax. |
+| AZCM0102 | An error occurred while retrieving the computer hostname | Retry the command and specify a resource name (with the `--resource-name` or `-n` parameter). Use only alphanumeric characters, hyphens, and/or underscores; note that the resource name cannot end with a hyphen or underscore. |
+| AZCM0103 | An error occurred while generating RSA keys | Contact Microsoft Support for assistance. |
+| AZCM0105 | An error occurred while downloading the Azure Active Directory managed identity certificate | Delete the resource created in Azure and try again. |
+| AZCM0147-<br>AZCM0152 | An error occurred while installing Azcmagent on Windows | Review the error message in the output for more specific information. |
+| AZCM0127-<br>AZCM0146 | An error occurred while installing Azcmagent on Linux | Review the error message in the output for more specific information. |
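+
+For the agent-service errors above (AZCM0061, AZCM0064, AZCM0065, AZCM0066, AZCM0070), restarting the Hybrid Instance Metadata service is often enough. A minimal sketch, assuming the default service name `himds` and an elevated session:
+
+```console
+# Windows (elevated PowerShell)
+Restart-Service himds
+
+# Linux (systemd-based distributions)
+sudo systemctl restart himds
+```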
## Agent verbose log
Before following the troubleshooting steps described later in this article, the
### Windows
-The following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an interactive installation.
+Following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an interactive installation.
```console & "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "resourceGroupName" --tenant-id "tenantID" --location "regionName" --subscription-id "subscriptionID" --verbose ```
-The following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an at-scale installation using a service principal.
+Following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an at-scale installation using a service principal.
```console & "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
The following is an example of the command to enable verbose logging with the Co
### Linux
-The following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an interactive installation.
+Following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an interactive installation.
>[!NOTE] >You must have *root* access permissions on Linux machines to run **azcmagent**.
The following is an example of the command to enable verbose logging with the Co
azcmagent connect --resource-group "resourceGroupName" --tenant-id "tenantID" --location "regionName" --subscription-id "subscriptionID" --verbose ```
-The following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an at-scale installation using a service principal.
+Following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an at-scale installation using a service principal.
```bash azcmagent connect \
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
First, the script deploys a virtual appliance called [Azure Arc resource bridge
### Azure Arc Resource Bridge -- Azure Arc Resource Bridge IP needs access to the URLs listed [here](../vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md#resource-bridge-networking-requirements).
+- Azure Arc resource bridge IP needs access to the URLs listed [here](../vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md#resource-bridge-networking-requirements).
### vCenter Server
First, the script deploys a virtual appliance called [Azure Arc resource bridge
- A virtual network that can provide internet access, directly or through a proxy. It must also be possible for VMs on this network to communicate with the vCenter server on TCP port (usually 443). -- At least one free IP address on the above network that isn't in the DHCP range. At least three free IP addresses if there's no DHCP server on the network.
+- At least three free static IP addresses on the above network. If you have a DHCP server on the network, the IP addresses must be outside the DHCP range.
- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs.
A typical onboarding that uses the script takes 30 to 60 minutes. During the pro
| **vCenter FQDN/Address** | Enter the fully qualified domain name for the vCenter Server instance (or an IP address). For example: **10.160.0.1** or **nyc-vcenter.contoso.com**. | | **vCenter Username** | Enter the username for the vSphere account. The required permissions for the account are listed in the [prerequisites](#prerequisites). | | **vCenter password** | Enter the password for the vSphere account. |
-| **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge's VM should be deployed. |
-| **Network selection** | Select the name of the virtual network or segment to which the VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
-| **Static IP / DHCP** | If you have DHCP server in your network and want to use it, enter **y**. Otherwise, enter **n**. If you are using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc Resource Bridge VM for DNS resolution. VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br> 6. **VLAN ID** (optional) |
-| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge's VM will be deployed. |
-| **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge's VM. |
+| **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge VM should be deployed. |
+| **Network selection** | Select the name of the virtual network or segment to which the Azure Arc resource bridge VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
+| **Static IP / DHCP** | For deploying Azure Arc resource bridge, the preferred configuration is to use Static IP. Enter **n** to select static IP configuration. While not recommended, if you have DHCP server in your network and want to use it instead, enter **y**. If you are using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). If you use DHCP, the cluster configuration IP address still needs to be a static IP address. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br> 6. **VLAN ID** (optional) |
+| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge VM will be deployed. |
+| **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge VM. |
| **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. | | **VM template Name** | Provide a name for the VM template that will be created in your vCenter Server instance based on the downloaded OVA file. For example: **arc-appliance-template**. | | **Control Plane IP** address | Provide a static IP address that's outside the DHCP range but still available on the network. Ensure that this IP address isn't assigned to any other machine on the network. Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. Control Plane IP must have internet access. |
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
The following firewall URL exceptions are needed for the Azure Arc resource brid
| Azure Arc for K8s container image download | 443 | https://azurearcfork8sdev.azurecr.io | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. | | ADHS telemetry service | 443 | adhs.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any K8s control plane. | | Microsoft events data service | 443 | v20.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
+| vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the vCenter server to communicate with the Appliance VM and the control plane.|
## Azure permissions required
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
recommendations: false
# Guide for running C# Azure Functions in an isolated process
-This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in Azure Functions. Running out-of-process lets you decouple your function code from the Azure Functions runtime. Isolated process C# functions run on .NET 6.0, .NET 7.0, and .NET Framework 4.8 (preview support). [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 7.0.
+This article is an introduction to using C# to develop .NET isolated process functions, which run Azure Functions in an isolated process. Running in an isolated process lets you decouple your function code from the Azure Functions runtime. See [supported versions](#supported-versions) for the .NET versions that Azure Functions supports in an isolated process. [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 7.0.
| Getting started | Concepts| Samples | |--|--|--|
A [HostBuilder] is used to build and return a fully initialized [IHost] instance
### Configuration
-The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run out-of-process, which includes the following functionality:
+The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run in an isolated process, which includes the following functionality:
+ Default set of converters. + Set the default [JsonSerializerOptions] to ignore casing on property names.
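+
+For example, a minimal `Program.cs` sketch for an isolated process app; it mirrors the standard worker startup pattern and adds nothing beyond the defaults described above:
+
+```csharp
+using Microsoft.Extensions.Hosting;
+
+// Build the host for the isolated worker. ConfigureFunctionsWorkerDefaults registers the
+// default converters and JSON serializer settings described above.
+var host = new HostBuilder()
+    .ConfigureFunctionsWorkerDefaults()
+    .Build();
+
+host.Run();
+```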
azure-maps Webgl Custom Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md
description: How to add a custom WebGL layer to a map using the Azure Maps Web SDK. Previously updated : 09/23/2022 Last updated : 10/17/2022
map.layers.add(new atlas.layer.WebGLLayer("layerId",
This sample renders a triangle on the map using a WebGL layer.
-<!-- Insert example here -->
- ![A screenshot showing a triangle rendered on a map, using a WebGL layer.](./media/how-to-webgl-custom-layer/triangle.png)
+For a fully functional sample with source code, see [Simple 2D WebGL layer][Simple 2D WebGL layer] in the Azure Maps Samples.
+ The map's camera matrix is used to project spherical Mercator point to gl coordinates. Mercator point \[0, 0\] represents the top left corner of the Mercator world and \[1, 1\] represents the bottom right corner.
to load a [glTF][glTF] file and render it on the map using [three.js][threejs].
You need to add the following script files. ```html
-<script src="https://unpkg.com/three@0.102.0/build/three.min.js"></script>
-
-<script src="https://unpkg.com/three@0.102.0/examples/js/loaders/GLTFLoader.js"></script>
+<script src="https://unpkg.com/three@latest/build/three.min.js"></script>
+<script src="https://unpkg.com/three@latest/examples/js/loaders/GLTFLoader.js"></script>
``` This sample renders an animated 3D parrot on the map.
-<!-- Insert example here -->
- ![A screenshot showing an an animated 3D parrot on the map.](./media/how-to-webgl-custom-layer/3d-parrot.gif)
+For a fully functional sample with source code, see [Three custom WebGL layer][Three custom WebGL layer] in the Azure Maps Samples.
+ The `onAdd` function loads a `.glb` file into memory and instantiates three.js objects such as Camera, Scene, Light, and a `THREE.WebGLRenderer`.
a single frame by calling `map.triggerRepaint()` in the `render` function.
> - To enable anti-aliasing simply set `antialias` to `true` as one of the style options while creating the map.
+## Render a 3D model using babylon.js
+
+[Babylon.js][babylonjs] is one of the world's leading WebGL-based graphics engines. The following example shows how to load a GLTF file and render it on the map using babylon.js.
+
+You need to add the following script files.
+
+```html
+<script src="https://cdn.babylonjs.com/babylon.js"></script>
+<script src="https://cdn.babylonjs.com/loaders/babylonjs.loaders.min.js"></script>
+```
+
+This sample renders a satellite tower on the map.
+
+The `onAdd` function instantiates a BABYLON engine and a scene. It then loads a `.gltf` file using BABYLON.SceneLoader.
+
+The `render` function calculates the projection matrix of the camera and renders the model to the scene.
+
+![A screenshot showing an example of rendering a 3D model using babylon.js.](./media/how-to-webgl-custom-layer/render-3d-model.png)
+
+For a fully functional sample with source code, see [Babylon custom WebGL layer][Babylon custom WebGL layer] in the Azure Maps Samples.
+ ## Render a deck.gl layer A WebGL layer can be used to render layers from the [deck.gl][deckgl]
within a certain time range.
You need to add the following script file. ```html
-<script src="https://unpkg.com/deck.gl@8.8.9/dist.min.js"></script>
+<script src="https://unpkg.com/deck.gl@latest/dist.min.js"></script>
``` Define a layer class that extends `atlas.layer.WebGLLayer`.
class DeckGLLayer extends atlas.layer.WebGLLayer {
} ```
-This sample renders an arc-layer from the [deck.gl][deckgl] library.
+
+This sample renders an arc-layer from the [deck.gl][deckgl] library.
![A screenshot showing an arc-layer from the Deck G L library.](./media/how-to-webgl-custom-layer/arc-layer.png)
+For a fully functional sample with source code, see [Deck GL custom WebGL layer][Deck GL custom WebGL layer] in the Azure Maps Samples.
+ ## Next steps Learn more about the classes and methods used in this article:
Learn more about the classes and methods used in this article:
[deckgl]: https://deck.gl/
[glTF]: https://www.khronos.org/gltf/
[OpenGL ES]: https://www.khronos.org/opengles/
+[babylonjs]: https://www.babylonjs.com/
[WebGLLayer]: /javascript/api/azure-maps-control/atlas.layer.webgllayer
[WebGLLayerOptions]: /javascript/api/azure-maps-control/atlas.webgllayeroptions
[WebGLRenderer interface]: /javascript/api/azure-maps-control/atlas.webglrenderer
[MercatorPoint]: /javascript/api/azure-maps-control/atlas.data.mercatorpoint
+[Simple 2D WebGL layer]: https://samples.azuremaps.com/?sample=simple-2d-webgl-layer
+[Deck GL custom WebGL layer]: https://samples.azuremaps.com/?sample=deck-gl-custom-webgl-layer
+[Three custom WebGL layer]: https://samples.azuremaps.com/?sample=three-custom-webgl-layer
+[Babylon custom WebGL layer]: https://samples.azuremaps.com/?sample=babylon-custom-webgl-layer
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that Azure Monitor Agent and the
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>|
|:---|:---:|:---:|:---:|
-| AlmaLinux 8.5 | X<sup>3</sup> | | |
-| AlmaLinux 8 | X | X | |
+| AlmaLinux 8 | X<sup>3</sup> | X | |
| Amazon Linux 2017.09 | | X | |
| Amazon Linux 2 | | X | |
| CentOS Linux 8 | X | X | |
| CentOS Linux 7 | X<sup>3</sup> | X | X |
| CentOS Linux 6 | | X | |
-| CentOS Linux 6.5+ | | X | X |
-| CBL-Mariner 2.0 | X | | |
+| CBL-Mariner 2.0 | X<sup>3</sup> | | |
| Debian 11 | X<sup>3</sup> | | |
| Debian 10 | X | X | |
| Debian 9 | X | X | X |
| Debian 8 | | X | |
-| Debian 7 | | | X |
| OpenSUSE 15 | X | | |
-| OpenSUSE 13.1+ | | | X |
| Oracle Linux 8 | X | X | |
| Oracle Linux 7 | X | X | X |
| Oracle Linux 6 | | X | |
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
# Migrate to workspace-based Application Insights resources
-This guide will walk you through migrating a classic Application Insights resource to a workspace-based resource. Workspace-based resources support full integration between Application Insights and Log Analytics. Workspace-based resources send Application Insights telemetry to a common Log Analytics workspace. This behavior allows you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
+This article walks you through migrating a classic Application Insights resource to a workspace-based resource. Workspace-based resources support full integration between Application Insights and Log Analytics. Workspace-based resources send Application Insights telemetry to a common Log Analytics workspace. This behavior allows you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
-Workspace-based resources enable common Azure role-based access control (Azure RBAC) across your resources, and eliminate the need for cross-app/workspace queries.
+Workspace-based resources enable common Azure role-based access control across your resources and eliminate the need for cross-app/workspace queries.
-**Workspace-based resources are currently available in all commercial regions and Azure US Government.**
+Workspace-based resources are currently available in all commercial regions and Azure US Government.
## New capabilities
-Workspace-based Application Insights allows you to take advantage of all the latest capabilities of Azure Monitor and Log Analytics, including:
+Workspace-based Application Insights allows you to take advantage of the latest capabilities of Azure Monitor and Log Analytics:
-* [Customer-Managed Keys (CMK)](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys that only you have access to.
-* [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints.
-* [Bring Your Own Storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over:
- - Encryption-at-rest policy
- - Lifetime management policy
- - Network access for all data associated with Application Insights Profiler and Snapshot Debugger
-* [Commitment Tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the Pay-As-You-Go price. Otherwise, Pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
-* Faster data ingestion via Log Analytics streaming ingestion.
+* [Customer-managed keys](../logs/customer-managed-keys.md) provide encryption at rest for your data with encryption keys that only you have access to.
+* [Azure Private Link](../logs/private-link-security.md) allows you to securely link the Azure platform as a service (PaaS) to your virtual network by using private endpoints.
+* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over:
+ - Encryption-at-rest policy.
+ - Lifetime management policy.
+ - Network access for all data associated with Application Insights Profiler and Snapshot Debugger.
+* [Commitment tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the pay-as-you-go price. Otherwise, pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
+* Data is ingested faster via Log Analytics streaming ingestion.
> [!NOTE]
-> After migrating to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources may be stored in a common Log Analytics workspace. You will still be able to pull data from a specific Application Insights resource, as described under [Understanding log queries](#understanding-log-queries).
+> After you migrate to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources might be stored in a common Log Analytics workspace. You'll still be able to pull data from a specific Application Insights resource, as described in the section [Understand log queries](#understand-log-queries).
## Migration process

When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate will change the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data. Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table).
-The migration process is **permanent, and cannot be reversed**. Once you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. However, once you migrate you're able to change the target workspace as often as needed.
-If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource, use the [workspace-based resource creation guide](create-workspace-resource.md).
+*The migration process is permanent and can't be reversed*. After you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. After you migrate, you can change the target workspace as often as needed.
-## Pre-requisites
+If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource, see the [Workspace-based resource creation guide](create-workspace-resource.md).
-- A Log Analytics workspace with the access control mode set to the **`use resource or workspace permissions`** setting.
+## Prerequisites
- - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **`workspace based permissions`** setting. To learn more about Log Analytics workspace access control, consult the [access control mode guidance](../logs/manage-access.md#access-control-mode)
+- A Log Analytics workspace with the access control mode set to the **Use resource or workspace permissions** setting:
- - If you don't already have an existing Log Analytics Workspace, [consult the Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
+ - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **workspace-based permissions** setting. To learn more about Log Analytics workspace access control, see the [Access control mode guidance](../logs/manage-access.md#access-control-mode).
+
+ - If you don't already have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
-- **Continuous export is not supported for workspace-based resources** and must be disabled.
-Once the migration is complete, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs.
+- **Continuous export** isn't supported for workspace-based resources and must be disabled. After the migration is finished, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs.
> [!CAUTION]
- > * Diagnostics settings uses a different export format/schema than continuous export, migrating will break any existing integrations with Stream Analytics.
- > * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export))
+ > * Diagnostic settings use a different export format/schema than continuous export. Migrating will break any existing integrations with Azure Stream Analytics.
+ > * Diagnostic settings export might increase costs. For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export).
-- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource.
+- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored after you migrate your Application Insights resource.
> [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period after migration, you will need to adjust your [workspace retention settings](/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
- > - If youΓÇÖve selected data retention greater than 90 days on data ingested into the Classic Application Insights resource prior to migration, data retention will continue to be billed to through that Application Insights resource until that data exceeds the retention period.
- > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, then use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
+ > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until the data exceeds the retention period.
+ > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
-- Understand [Workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
+- Understand [workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
## Migrate your resource
-This section walks through migrating a classic Application Insights resource to a workspace-based resource.
+To migrate a classic Application Insights resource to a workspace-based resource:
-1. From your Application Insights resource, select **Properties** under the **Configure** heading in the left-hand menu bar.
+1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left.
-![Properties highlighted in red box](./media/convert-classic-resource/properties.png)
+ ![Screenshot that shows Properties under the Configure heading.](./media/convert-classic-resource/properties.png)
-2. Select **`Migrate to Workspace-based`**.
-
-![Migrate resource button](./media/convert-classic-resource/migrate.png)
+1. Select **Migrate to Workspace-based**.
-3. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription, or in a different subscription that shares the same Azure AD tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource.
+ ![Screenshot that shows the Migrate to Workspace-based resource button.](./media/convert-classic-resource/migrate.png)
-> [!NOTE]
-> Migrating to a workspace-based resource can take up to 24 hours, but is usually faster than that. Please rely on accessing data through your Application Insights resource while waiting for the migration process to complete. Once completed, you will start seeing new data stored in the Log Analytics workspace tables.
+1. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription or a different subscription that shares the same Azure Active Directory tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource.
-![Migration wizard UI with option to select targe workspace](./media/convert-classic-resource/migration.png)
-
-Once your resource is migrated, you'll see the corresponding workspace info in the **Overview** pane:
+ > [!NOTE]
+ > Migrating to a workspace-based resource can take up to 24 hours, but the process is usually faster than that. Rely on accessing data through your Application Insights resource while you wait for the migration process to finish. After it's finished, you'll see new data stored in the Log Analytics workspace tables.
+
+ ![Screenshot that shows the Migration wizard UI with the option to select target workspace.](./media/convert-classic-resource/migration.png)
+
+ After your resource is migrated, you'll see the corresponding workspace information in the **Overview** pane:
-![Workspace Name](./media/create-workspace-resource/workspace-name.png)
+ ![Screenshot that shows the Workspace Name](./media/create-workspace-resource/workspace-name.png)
-Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
+ Selecting the blue link text takes you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
> [!TIP]
-> After migrating to a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
+> After you migrate to a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
-## Understanding log queries
+## Understand log queries
-We still provide full backwards compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience.
+We still provide full backward compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience.
-To write queries against the [new workspace-based table structure/schema](#workspace-based-resource-changes), you must first navigate to your Log Analytics workspace.
+To write queries against the [new workspace-based table structure/schema](#workspace-based-resource-changes), you must first go to your Log Analytics workspace.
To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](#appmetrics).
-If you have multiple Application Insights resources store telemetry in one Log Analytics workspace, but you only want to query data from one specific Application Insights resource, you have two options:
+If you have multiple Application Insights resources that store telemetry in one Log Analytics workspace, but you want to query data from one specific Application Insights resource, you have two options:
-- Option 1: Go to the desired Application Insights resource and open the **Logs** tab. All queries from this tab will automatically pull data from the selected Application Insights resource.-- Option 2: Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and open the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in ```_ResourceId``` property that is available in all application specific tables.
+- **Option 1:** Go to the desired Application Insights resource and select the **Logs** tab. All queries from this tab will automatically pull data from the selected Application Insights resource.
+- **Option 2:** Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and select the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in `_ResourceId` property that's available in all application-specific tables.
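The same `_ResourceId` filter also applies if you query programmatically instead of in the portal. The following minimal sketch is not part of the migration steps; it assumes the `azure-monitor-query` and `azure-identity` Python packages and placeholder subscription, resource group, application, and workspace IDs.

```python
# Illustrative sketch only: filter workspace data down to a single
# Application Insights resource by using the built-in _ResourceId column.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
AppRequests
| where _ResourceId =~ "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.insights/components/<app-name>"
| take 10
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    print(len(table.rows), "rows returned")
```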
-Notice that if you query directly from the Log Analytics workspace, you'll only see data that is ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource.
+Notice that if you query directly from the Log Analytics workspace, you'll only see data that's ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource.
> [!NOTE]
-> If you rename your Application Insights resource after migrating to workspace-based model, the Application Insights Logs tab will no longer show the telemetry collected before renaming. You will be able to see all data (old and new) on the Logs tab of the associated Log Analytics resource.
+> If you rename your Application Insights resource after you migrate to the workspace-based model, the Application Insights **Logs** tab will no longer show the telemetry collected before renaming. You can see all old and new data on the **Logs** tab of the associated Log Analytics resource.
## Programmatic resource migration
+This section helps you migrate your resources.
+ ### Azure CLI To access the preview Application Insights Azure CLI commands, you first need to run:
To access the preview Application Insights Azure CLI commands, you first need to
az extension add -n application-insights ```
-If you don't run the `az extension add` command, you'll see an error message that states: `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
+If you don't run the `az extension add` command, you'll see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
-Now you can run the following to create your Application Insights resource:
+Now you can run the following code to create your Application Insights resource:
```azurecli az monitor app-insights component update --app
az monitor app-insights component update --app
az monitor app-insights component update --app your-app-insights-resource-name -g your_resource_group --workspace "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/test1234/providers/microsoft.operationalinsights/workspaces/test1234555" ```
-For the full Azure CLI documentation for this command, consult the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-update).
+For the full Azure CLI documentation for this command, see the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-update).
### Azure PowerShell
-The `Update-AzApplicationInsights` PowerShell command doesn't currently support migrating a classic Application Insights resource to workspace-based. To create a workspace-based resource with PowerShell, you can use the Azure Resource Manager templates below and deploy with PowerShell.
+The `Update-AzApplicationInsights` PowerShell command doesn't currently support migrating a classic Application Insights resource to workspace based. To create a workspace-based resource with PowerShell, you can use the following Azure Resource Manager templates and deploy with PowerShell.
### Azure Resource Manager templates
+This section provides templates.
+ #### Template file ```json
The `Update-AzApplicationInsights` PowerShell command doesn't currently support
```
-## Modifying the associated workspace
+## Modify the associated workspace
-Once a workspace-based Application Insights resource has been created, you can modify the associated Log Analytics Workspace.
+After a workspace-based Application Insights resource has been created, you can modify the associated Log Analytics workspace.
From within the Application Insights resource pane, select **Properties** > **Change Workspace** > **Log Analytics Workspaces**. ## Frequently asked questions
+This section provides answers to common questions.
+ ### Is there any implication on the cost from migration?
-There's usually no difference, with a couple of exceptions.
+There's usually no difference, with a couple of exceptions:
+ - Migrated Application Insights resources can use [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers) to reduce cost if the data volumes in the workspace are high enough.
- Grandfathered Application Insights resources will no longer get 1 GB per month free from the original Application Insights pricing model.

### How will telemetry capping work?

You can set a [daily cap on the Log Analytics workspace](../logs/daily-cap.md#application-insights).
-There's no strict (billing-wise) capping available.
+There's no strict billing capping available.
### How will ingestion-based sampling work?
There are no changes to ingestion-based sampling.
No. We merge data during query time.
-### Will my old logs queries continue to work?
+### Will my old log queries continue to work?
Yes, they'll continue to work.
Yes, they'll continue to work.
Yes, they'll continue to work.
-### Will migration impact AppInsights API accessing data?
+### Will migration affect AppInsights API accessing data?
-No, migration won't impact existing API access to data. After migration, you'll be able to access data directly from a workspace using a [slightly different schema](#workspace-based-resource-changes).
+No. Migration won't affect existing API access to data. After migration, you'll be able to access data directly from a workspace by using a [slightly different schema](#workspace-based-resource-changes).
### Will there be any impact on Live Metrics or other monitoring experiences?
-No, there's no impact to [Live Metrics](live-stream.md#live-metrics-monitor--diagnose-with-1-second-latency) or other monitoring experiences.
+No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor--diagnose-with-1-second-latency) or other monitoring experiences.
-### What happens with Continuous export after migration?
+### What happens with continuous export after migration?
Continuous export doesn't support workspace-based resources.
-You'll need to switch to [Diagnostic Settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
+You'll need to switch to [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
## Troubleshooting
+This section offers troubleshooting tips for common issues.
+ ### Access mode
-**Error message:** *The selected workspace is configured with workspace-based access mode. Some APM features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI.*
+**Error message:** "The selected workspace is configured with workspace-based access mode. Some APM features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI."
-In order for your workspace-based Application Insights resource to operate properly you need to change the access control mode of your target Log Analytics workspace to the **resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For detailed instructions, consult the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience will remain blocked.
+For your workspace-based Application Insights resource to operate properly, you need to change the access control mode of your target Log Analytics workspace to the **Resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For instructions, see the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience will remain blocked.
-If you canΓÇÖt change the access control mode for security reasons for your current target workspace, we recommend creating a new Log Analytics workspace to use for the migration.
+If you can't change the access control mode for security reasons for your current target workspace, create a new Log Analytics workspace to use for the migration.
### Continuous export
-**Error message:** *Continuous Export needs to be disabled before continuing. After migration, use Diagnostic Settings for export.*
+**Error message:** "Continuous Export needs to be disabled before continuing. After migration, use Diagnostic Settings for export."
-The legacy continuous export functionality isn't supported for workspace-based resources. Prior to migrating you need to disable continuous export.
+The legacy **Continuous export** functionality isn't supported for workspace-based resources. Prior to migrating, you need to disable continuous export.
-1. From your Application Insights resource view, under the **Configure** heading select **Continuous Export**.
+1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**.
- ![Continuous export menu item](./media/convert-classic-resource/continuous-export.png)
+ ![Screenshot that shows the Continuous export menu item.](./media/convert-classic-resource/continuous-export.png)
-2. Select **Disable**.
+1. Select **Disable**.
- ![Continuous export disable button](./media/convert-classic-resource/disable.png)
+ ![Screenshot that shows the Continuous export Disable button.](./media/convert-classic-resource/disable.png)
-- Once you have selected disable, you can navigate back to the migration UI. If the edit continuous export page prompts you that your settings won't be saved, you can select ok for this prompt as it doesn't pertain to disabling/enabling continuous export.
+ - After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings won't be saved, select **OK** for this prompt because it doesn't pertain to disabling or enabling continuous export.
-- Once you've successfully migrated your Application Insights resource to workspace-based, you can use Diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostic settings** > **add diagnostic setting** from within your Application Insights resource. You can select all tables, or a subset of tables to archive to a storage account, or to stream to Azure Event Hubs. For detailed guidance on diagnostic settings, refer to the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md).
+ - After you've successfully migrated your Application Insights resource to workspace based, you can use diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostics settings** > **Add diagnostic setting** from within your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md).
### Retention settings
-**Warning Message:** *Your customized Application Insights retention settings won't apply to data sent to the workspace. You'll need to reconfigure these separately.*
+**Warning message:** "Your customized Application Insights retention settings won't apply to data sent to the workspace. You'll need to reconfigure these separately."
-You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you may want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data.
+You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you might want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data.
-You can check your current retention settings for Log Analytics under **General** > **Usage and estimated costs** > **Data Retention** from within the Log Analytics UI. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource.
+You can check your current retention settings for Log Analytics under **General** > **Usage and estimated costs** > **Data Retention** from within the Log Analytics UI. This setting will affect how long any new ingested data is stored after you migrate your Application Insights resource.
## Workspace-based resource changes
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separate from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This simplifies your configuration by allowing you to analyze data across multiple solutions more easily, and to use the capabilities of workspaces.
+Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This arrangement simplifies your configuration by allowing you to analyze data across multiple solutions more easily, and to use the capabilities of workspaces.
### Classic data structure
-The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data isn't stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
+
+The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data isn't stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
> [!NOTE] > The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
The structure of a Log Analytics workspace is described in [Log Analytics worksp
|:|:|:| | availabilityResults | AppAvailabilityResults | Summary data from availability tests.| | browserTimings | AppBrowserTimings | Data about client performance, such as the time taken to process the incoming data.|
-| dependencies | AppDependencies | Calls from the application to other components (including external components) recorded via TrackDependency() ΓÇô for example, calls to REST API, database or a file system. |
+| dependencies | AppDependencies | Calls from the application to other components (including external components) recorded via `TrackDependency()`. Examples are calls to the REST API or database or a file system. |
| customEvents | AppEvents | Custom events created by your application. | | customMetrics | AppMetrics | Custom metrics created by your application. | | pageViews | AppPageViews| Data about each website view with browser information. |
-| performanceCounters | AppPerformanceCounters | Performance measurements from the compute resources supporting the application, for example, Windows performance counters. |
+| performanceCounters | AppPerformanceCounters | Performance measurements from the compute resources that support the application. An example is Windows performance counters. |
| requests | AppRequests | Requests received by your application. For example, a separate request record is logged for each HTTP request that your web app receives. |
-| exceptions | AppExceptions | Exceptions thrown by the application runtime, captures both server side and client-side (browsers) exceptions. |
-| traces | AppTraces | Detailed logs (traces) emitted through application code/logging frameworks recorded via TrackTrace(). |
+| exceptions | AppExceptions | Exceptions thrown by the application runtime. Captures both server side and client-side (browsers) exceptions. |
+| traces | AppTraces | Detailed logs (traces) emitted through application code/logging frameworks recorded via `TrackTrace()`. |
> [!CAUTION]
-> Do not take a production dependency on the Log Analytics tables, until you see new telemetry records show up directly in Log Analytics. This might take up to 24 hours after the migration process started.
+> Don't take a production dependency on the Log Analytics tables until you see new telemetry records show up directly in Log Analytics. It might take up to 24 hours after the migration process started for records to appear.
### Table schemas
-The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
+The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
-Most of the columns have the same name with different capitalization. Since KQL is case-sensitive, you'll need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it's a workspace-based resource. The new property names are required for when querying from within the context of the Log Analytics workspace experience.
+Most of the columns have the same name with different capitalization. Since KQL is case sensitive, you'll need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it's a workspace-based resource. The new property names are required when you query from within the context of the Log Analytics workspace experience.
#### AppAvailabilityResults
Legacy table: customMetrics
|valueSum|real|ValueSum|real| > [!NOTE]
-> Older versions of Application Insights SDKs used to report standard deviation (valueStdDev) in the metrics pre-aggregation. Due to little adoption in metrics analysis, the field was removed and is no longer aggregated by the SDKs. If the value is received by the Application Insights data collection end point, it gets dropped during ingestion and is not sent to the Log Analytics workspace. If you are interested in using standard deviation in your analysis, we recommend using queries against Application Insights raw events.
+> Older versions of Application Insights SDKs used to report standard deviation (`valueStdDev`) in the metrics pre-aggregation. Because adoption in metrics analysis was light, the field was removed and is no longer aggregated by the SDKs. If the value is received by the Application Insights data collection endpoint, it gets dropped during ingestion and isn't sent to the Log Analytics workspace. If you're interested in using standard deviation in your analysis, we recommend using queries against Application Insights raw events.
#### AppPageViews
Legacy table: traces
## Next steps * [Explore metrics](../essentials/metrics-charts.md)
-* [Write Analytics queries](../logs/log-query-overview.md)
+* [Write Log Analytics queries](../logs/log-query-overview.md)
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs
-description: Provides instructions to wire up OpenCensus Python with Azure Monitor
+description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor.
Last updated 8/19/2022 ms.devlang: python
Azure Monitor supports distributed tracing, metric collection, and logging of Python applications.
-Microsoft's supported solution for tracking and exporting data for your Python applications is through the [Opencensus Python SDK](#introducing-opencensus-python-sdk) via the [Azure Monitor exporters](#instrument-with-opencensus-python-sdk-with-azure-monitor-exporters).
+Microsoft's supported solution for tracking and exporting data for your Python applications is through the [OpenCensus Python SDK](#introducing-opencensus-python-sdk) via the [Azure Monitor exporters](#instrument-with-opencensus-python-sdk-with-azure-monitor-exporters).
-Any other telemetry SDKs for Python are UNSUPPORTED and are NOT recommended by Microsoft to use as a telemetry solution.
+Any other telemetry SDKs for Python *are unsupported and aren't recommended* by Microsoft for use as a telemetry solution.
-You may have noted that OpenCensus is converging into [OpenTelemetry](https://opentelemetry.io/). However, we continue to recommend OpenCensus while OpenTelemetry gradually matures.
+OpenCensus is converging into [OpenTelemetry](https://opentelemetry.io/). We continue to recommend OpenCensus while OpenTelemetry gradually matures.
> [!NOTE]
-> A preview [OpenTelemetry-based Python offering](opentelemetry-enable.md?tabs=python) is available. [Learn more](opentelemetry-overview.md).
+> A preview [OpenTelemetry-based Python offering](opentelemetry-enable.md?tabs=python) is available. To learn more, see the [OpenTelemetry overview](opentelemetry-overview.md).
## Prerequisites -- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+You need an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-## Introducing Opencensus Python SDK
+## Introducing OpenCensus Python SDK
-[OpenCensus](https://opencensus.io) is a set of open source libraries to allow collection of distributed tracing, metrics and logging telemetry. Through the use of [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you will be able to send this collected telemetry to Application insights. This article walks you through the process of setting up OpenCensus and Azure Monitor Exporters for Python to send your monitoring data to Azure Monitor.
+[OpenCensus](https://opencensus.io) is a set of open-source libraries to allow collection of distributed tracing, metrics, and logging telemetry. By using [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you can send this collected telemetry to Application Insights. This article walks you through the process of setting up OpenCensus and Azure Monitor exporters for Python to send your monitoring data to Azure Monitor.
## Instrument with OpenCensus Python SDK with Azure Monitor exporters
Install the OpenCensus Azure Monitor exporters:
python -m pip install opencensus-ext-azure ```
-The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They are `trace`, `metrics`, and `logs`. For more information on these telemetry types, see [the data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
+The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They're `trace`, `metrics`, and `logs`. For more information on these telemetry types, see the [Data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
## Telemetry type mappings
-Here are the exporters that OpenCensus provides mapped to the types of telemetry that you see in Azure Monitor.
+OpenCensus maps the following exporters to the types of telemetry that you see in Azure Monitor.
| Pillar of observability | Telemetry type in Azure Monitor | Explanation | |-||--|
Here are the exporters that OpenCensus provides mapped to the types of telemetry
90 ```
-1. Although entering values is helpful for demonstration purposes, ultimately we want to emit the log data to Azure Monitor. Pass your connection string directly into the exporter. Or, you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
+1. Entering values is helpful for demonstration purposes, but we want to emit the log data to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
```python import logging
Here are the exporters that OpenCensus provides mapped to the types of telemetry
1. The exporter sends log data to Azure Monitor. You can find the data under `traces`.
- > [!NOTE]
- > In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you'll see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md).
+ In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you'll see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md).
> [!NOTE]
- > The root logger is configured with the level of WARNING. That means any logs that you send that have less of a severity are ignored, and in turn, won't be sent to Azure Monitor. For more information, see [documentation](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel).
+ > The root logger is configured with the level of `warning`. That means any logs that you send that have less severity are ignored, and in turn, won't be sent to Azure Monitor. For more information, see [Logging documentation](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel).
-1. You can also add custom properties to your log messages in the *extra* keyword argument by using the custom_dimensions field. These properties appear as key-value pairs in `customDimensions` in Azure Monitor.
+1. You can also add custom properties to your log messages in the `extra` keyword argument by using the `custom_dimensions` field. These properties appear as key-value pairs in `customDimensions` in Azure Monitor.
> [!NOTE]
- > For this feature to work, you need to pass a dictionary to the custom_dimensions field. If you pass arguments of any other type, the logger ignores them.
+ > For this feature to work, you need to pass a dictionary to the `custom_dimensions` field. If you pass arguments of any other type, the logger ignores them.
```python import logging
Here are the exporters that OpenCensus provides mapped to the types of telemetry
``` > [!NOTE]
-> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. [Learn More](./statsbeat.md).
+> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. To learn more, see [Statsbeat in Application Insights](./statsbeat.md).
#### Configure logging for Django applications
-You can configure logging explicitly in your application code like above for your Django applications, or you can specify it in Django's logging configuration. This code can go into whatever file you use for Django settings configuration. For how to configure Django settings, see [Django settings](https://docs.djangoproject.com/en/4.0/topics/settings/). For more information on configuring logging, see [Django logging](https://docs.djangoproject.com/en/4.0/topics/logging/).
+You can configure logging explicitly in your application code like the preceding for your Django applications, or you can specify it in Django's logging configuration. This code can go into whatever file you use for Django settings configuration. For information on how to configure Django settings, see [Django settings](https://docs.djangoproject.com/en/4.0/topics/settings/). For more information on how to configure logging, see [Django logging](https://docs.djangoproject.com/en/4.0/topics/logging/).
```json LOGGING = {
logger.warning("this will be tracked")
#### Send exceptions
-OpenCensus Python doesn't automatically track and send `exception` telemetry. They're sent through `AzureLogHandler` by using exceptions through the Python logging library. You can add custom properties just like with normal logging.
+OpenCensus Python doesn't automatically track and send `exception` telemetry. It's sent through `AzureLogHandler` by using exceptions through the Python logging library. You can add custom properties like you do with normal logging.
```python import logging
try:
except Exception: logger.exception('Captured an exception.', extra=properties) ```
-Because you must log exceptions explicitly, it's up to the user how they want to log unhandled exceptions. OpenCensus doesn't place restrictions on how a user wants to do this, as long as they explicitly log an exception telemetry.
+
+Because you must log exceptions explicitly, it's up to you how to log unhandled exceptions. OpenCensus doesn't place restrictions on how to do this logging, but you must explicitly log exception telemetry.
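One possible pattern, shown here only as a minimal sketch with a placeholder connection string, is to route unhandled exceptions through the same logger by installing an exception hook.

```python
# Illustrative sketch: send unhandled exceptions through the Python logging
# library so that AzureLogHandler exports them as exception telemetry.
import logging
import sys

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"))

def handle_unhandled_exception(exc_type, exc_value, exc_traceback):
    # Passing exc_info makes the handler emit exception telemetry
    # instead of a plain trace message.
    logger.error("Unhandled exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = handle_unhandled_exception
```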
#### Send events
-You can send `customEvent` telemetry in exactly the same way that you send `trace` telemetry except by using `AzureEventHandler` instead.
+You can send `customEvent` telemetry in exactly the same way that you send `trace` telemetry, except by using `AzureEventHandler` instead.
```python import logging
logger.info('Hello, World!')
#### Sampling
-For information on sampling in OpenCensus, take a look at [sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
+For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
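As a quick orientation, the following minimal sketch (placeholder connection string) configures fixed-rate sampling with the OpenCensus `ProbabilitySampler`; see the linked article for the full guidance.

```python
# Illustrative sketch: export roughly half of the traces by configuring a
# fixed-rate (probability) sampler on the tracer.
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

tracer = Tracer(
    exporter=AzureExporter(
        connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"),
    sampler=ProbabilitySampler(rate=0.5),
)

with tracer.span(name="sampled-operation"):
    pass  # only about 50% of these spans are exported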
#### Log correlation
-For details on how to enrich your logs with trace context data, see OpenCensus Python [logs integration](./correlation.md#log-correlation).
+For information on how to enrich your logs with trace context data, see OpenCensus Python [logs integration](./correlation.md#log-correlation).
#### Modify telemetry
-For details on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
-
+For information on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
### Metrics
-OpenCensus.stats supports 4 aggregation methods but provides partial support for Azure Monitor:
+OpenCensus.stats supports four aggregation methods but provides partial support for Azure Monitor:
-- **Count:** The count of the number of measurement points. The value is cumulative, can only increase and resets to 0 on restart. -- **Sum:** A sum up of the measurement points. The value is cumulative, can only increase and resets to 0 on restart. -- **LastValue:** Keeps the last recorded value, drops everything else.-- **Distribution:** Histogram distribution of the measurement points. This method is **NOT supported by the Azure Exporter**.
+- **Count**: The count of the number of measurement points. The value is cumulative, can only increase, and resets to 0 on restart.
+- **Sum**: A sum up of the measurement points. The value is cumulative, can only increase, and resets to 0 on restart.
+- **LastValue**: Keeps the last recorded value and drops everything else.
+- **Distribution**: Histogram distribution of the measurement points. *This method is not supported by the Azure exporter*.
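For reference, the following minimal sketch shows one measure recorded under the three aggregations that the Azure exporter supports; the measure and view names are illustrative, not taken from the article's own sample.

```python
# Illustrative sketch: define Count, Sum, and LastValue views over one measure.
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import view as view_module

m_latency = measure_module.MeasureFloat("request_latency", "Request latency", "ms")

count_view = view_module.View(
    "latency_count", "Number of latency measurements", [], m_latency,
    aggregation_module.CountAggregation())
sum_view = view_module.View(
    "latency_sum", "Total latency", [], m_latency,
    aggregation_module.SumAggregation())
last_view = view_module.View(
    "latency_last", "Most recent latency", [], m_latency,
    aggregation_module.LastValueAggregation())
```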
-### Count Aggregation example
+### Count aggregation example
-1. First, let's generate some local metric data. We'll create a simple metric to track the number of times the user selects the **Enter** key.
+1. First, let's generate some local metric data. We'll create a metric to track the number of times the user selects the **Enter** key.
```python from datetime import datetime
OpenCensus.stats supports 4 aggregation methods but provides partial support for
Point(value=ValueLong(7), timestamp=2019-10-09 20:58:07.138614) ```
-1. Although entering values is helpful for demonstration purposes, ultimately we want to emit the metric data to Azure Monitor. Pass your connection string directly into the exporter. Or, you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
+1. Entering values is helpful for demonstration purposes, but we want to emit the metric data to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
```python from datetime import datetime
OpenCensus.stats supports 4 aggregation methods but provides partial support for
main() ```
-1. The exporter sends metric data to Azure Monitor at a fixed interval. The default is every 15 seconds. To modify the export interval, pass in `export_interval` as a parameter in seconds to `new_metrics_exporter()`. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The value is cumulative, can only increase and resets to 0 on restart. You can find the data under `customMetrics`, but `customMetrics` properties valueCount, valueSum, valueMin, valueMax, and valueStdDev are not effectively used.
+1. The exporter sends metric data to Azure Monitor at a fixed interval. The default is every 15 seconds. To modify the export interval, pass in `export_interval` as a parameter in seconds to `new_metrics_exporter()`. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The value is cumulative, can only increase, and resets to 0 on restart.
-### Setting custom dimensions in metrics
+ You can find the data under `customMetrics`, but the `customMetrics` properties `valueCount`, `valueSum`, `valueMin`, `valueMax`, and `valueStdDev` aren't effectively used.
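For example, the following minimal sketch (placeholder connection string) raises the export interval from the default 15 seconds to 60 seconds.

```python
# Illustrative sketch: pass export_interval (in seconds) to the metrics exporter.
from opencensus.ext.azure import metrics_exporter

exporter = metrics_exporter.new_metrics_exporter(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000",
    export_interval=60,
)
```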
-Opencensus Python SDK allows adding custom dimensions to your metrics telemetry by the way of `tags`, which are essentially a dictionary of key/value pairs.
+### Set custom dimensions in metrics
+
+The OpenCensus Python SDK allows you to add custom dimensions to your metrics telemetry by using `tags`, which are like a dictionary of key-value pairs.
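Before the numbered steps, here's a consolidated sketch of the flow (the measure name and tag value are illustrative; `prompt_view` matches the view name used later in this section):

```python
# Consolidated sketch: record a metric with a custom "url" dimension.
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module
from opencensus.tags import tag_map as tag_map_module

prompt_measure = measure_module.MeasureInt("prompts", "Number of prompts", "prompts")

# The view lists the tag keys ("url") it breaks the metric down by.
prompt_view = view_module.View(
    "prompt_view", "Number of prompts", ["url"],
    prompt_measure, aggregation_module.CountAggregation())

stats = stats_module.stats
stats.view_manager.register_view(prompt_view)

# The tag map acts as the pool of tag key/value pairs available when recording.
tmap = tag_map_module.TagMap()
tmap.insert("url", "http://example.com")

mmap = stats.stats_recorder.new_measurement_map()
mmap.measure_int_put(prompt_measure, 1)
mmap.record(tmap)  # this record carries the custom dimension {"url": "http://example.com"}
```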
1. Insert the tags that you want to use into the tag map. The tag map acts like a sort of "pool" of all available tags you can use.
Opencensus Python SDK allows adding custom dimensions to your metrics telemetry
... ```
-1. For a specific `View`, specify the tags you want to use when recording metrics with that view via the tag key.
+1. For a specific `View`, specify the tags you want to use when you're recording metrics with that view via the tag key.
```python ...
Opencensus Python SDK allows adding custom dimensions to your metrics telemetry
... ```
-1. Be sure to use the tag map when recording in the measurement map. The tag keys that are specified in the `View` must be found in the tag map used to record.
+1. Be sure to use the tag map when you're recording in the measurement map. The tag keys that are specified in the `View` must be found in the tag map used to record.
```python ...
Opencensus Python SDK allows adding custom dimensions to your metrics telemetry
... ```
-1. Under the `customMetrics` table, all metrics records emitted using the `prompt_view` will have custom dimensions `{"url":"http://example.com"}`.
+1. Under the `customMetrics` table, all metrics records emitted by using `prompt_view` will have custom dimensions `{"url":"http://example.com"}`.
-1. To produce tags with different values using the same keys, create new tag maps for them.
+1. To produce tags with different values by using the same keys, create new tag maps for them.
```python ...
Opencensus Python SDK allows adding custom dimensions to your metrics telemetry
#### Performance counters
-By default, the metrics exporter sends a set of performance counters to Azure Monitor. You can disable this by setting the `enable_standard_metrics` flag to `False` in the constructor of the metrics exporter.
+By default, the metrics exporter sends a set of performance counters to Azure Monitor. You can disable this capability by setting the `enable_standard_metrics` flag to `False` in the constructor of the metrics exporter.
```python ...
exporter = metrics_exporter.new_metrics_exporter(
... ```
-These performance counters are currently sent:
+The following performance counters are currently sent:
- Available Memory (bytes)
- CPU Processor Time (percentage)
These performance counters are currently sent:
- Process CPU Usage (percentage)
- Process Private Bytes (bytes)
-You should be able to see these metrics in `performanceCounters`. For more information, see [performance counters](./performance-counters.md).
+You should be able to see these metrics in `performanceCounters`. For more information, see [Performance counters](./performance-counters.md).
#### Modify telemetry
For information on how to modify tracked telemetry before it's sent to Azure Mon
### Tracing > [!NOTE]
-> In OpenCensus, `tracing` refers to [distributed tracing](./distributed-tracing.md). The `AzureExporter` sends `requests` and `dependency` telemetry to Azure Monitor.
+> In OpenCensus, `tracing` refers to [distributed tracing](./distributed-tracing.md). The `AzureExporter` class sends `requests` and `dependency` telemetry to Azure Monitor.
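The numbered steps that follow build this up piece by piece. As a compact, hedged preview (the connection string is a placeholder), a tracer wired to the Azure exporter looks roughly like this:

```python
# Sketch only: send one span to Azure Monitor as dependency telemetry.
# Replace the placeholder connection string with your own.
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

tracer = Tracer(
    exporter=AzureExporter(
        connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"),
    sampler=ProbabilitySampler(1.0))  # sample 100% of traces for the demo

with tracer.span(name="test"):
    print("Hello, World!")  # work done inside the span is timed and exported
```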
1. First, let's generate some trace data locally. In Python IDLE, or your editor of choice, enter the following code:
For information on how to modify tracked telemetry before it's sent to Azure Mon
main() ```
-1. Running the code repeatedly prompts you to enter a value. With each entry, the value is printed to the shell. The OpenCensus Python Module generates a corresponding piece of `SpanData`. The OpenCensus project defines a [trace as a tree of spans](https://opencensus.io/core-concepts/tracing/).
+1. Running the code repeatedly prompts you to enter a value. With each entry, the value is printed to the shell. The OpenCensus Python module generates a corresponding piece of `SpanData`. The OpenCensus project defines a [trace as a tree of spans](https://opencensus.io/core-concepts/tracing/).
```output Enter a value: 4
For information on how to modify tracked telemetry before it's sent to Azure Mon
[SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='f3f9f9ee6db4740a', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:46.157732Z', end_time='2019-06-27T18:21:47.269583Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)] ```
-1. Although entering values is helpful for demonstration purposes, ultimately we want to emit `SpanData` to Azure Monitor. Pass your connection string directly into the exporter. Or, you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
+1. Entering values is helpful for demonstration purposes, but we want to emit `SpanData` to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
```python from opencensus.ext.azure.trace_exporter import AzureExporter
For information on how to modify tracked telemetry before it's sent to Azure Mon
main() ```
-1. Now when you run the Python script, you should still be prompted to enter values, but only the value is being printed in the shell. The created `SpanData` is sent to Azure Monitor. You can find the emitted span data under `dependencies`. For more information about outgoing requests, see OpenCensus Python [dependencies](./opencensus-python-dependency.md).
-For more information on incoming requests, see OpenCensus Python [requests](./opencensus-python-request.md).
+1. Now when you run the Python script, you should still be prompted to enter values, but only the value is being printed in the shell. The created `SpanData` is sent to Azure Monitor. You can find the emitted span data under `dependencies`.
+
+ For more information about outgoing requests, see OpenCensus Python [dependencies](./opencensus-python-dependency.md). For more information on incoming requests, see OpenCensus Python [requests](./opencensus-python-request.md).
#### Sampling
-For information on sampling in OpenCensus, take a look at [sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
+For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
#### Trace correlation
-For more information on telemetry correlation in your trace data, take a look at OpenCensus Python [telemetry correlation](./correlation.md#telemetry-correlation-in-opencensus-python).
+For more information on telemetry correlation in your trace data, see OpenCensus Python [telemetry correlation](./correlation.md#telemetry-correlation-in-opencensus-python).
#### Modify telemetry
For more information on how to modify tracked telemetry before it's sent to Azur
## Configure Azure Monitor exporters
-As shown, there are three different Azure Monitor exporters that support OpenCensus. Each one sends different types of telemetry to Azure Monitor. To see what types of telemetry each exporter sends, see the following list.
+As shown, there are three different Azure Monitor exporters that support OpenCensus. Each one sends different types of telemetry to Azure Monitor. To see what types of telemetry each exporter sends, see the following table.
-Each exporter accepts the same arguments for configuration, passed through the constructors. You can see details about each one here:
+Each exporter accepts the same arguments for configuration, passed through the constructors. You can see information about each one here:
-- `connection_string`: The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.-- `credential`: Credential class used by AAD authentication. See `Authentication` section below.-- `enable_standard_metrics`: Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.-- `export_interval`: Used to specify the frequency in seconds of exporting. Defaults to 15s.-- `grace_period`: Used to specify the timeout for shutdown of exporters in seconds. Defaults to 5s.-- `instrumentation_key`: The instrumentation key used to connect to your Azure Monitor resource.-- `logging_sampling_rate`: Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to 1.0.-- `max_batch_size`: Specifies the maximum size of telemetry that's exported at once.-- `proxies`: Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).-- `storage_path`: A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is $USER + `.opencensus` + `.azure` + `python-file-name`.-- `timeout`: Specifies the networking timeout to send telemetry to the ingestion service in seconds. Defaults to 10s.
+|Configuration argument|Description|
+|:---|:---|
+`connection_string`| The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.|
+`credential`| Credential class used by Azure Active Directory authentication. See the "Authentication" section that follows.|
+`enable_standard_metrics`| Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.|
+`export_interval`| Used to specify the frequency in seconds of exporting. Defaults to `15s`.|
+`grace_period`| Used to specify the timeout for shutdown of exporters in seconds. Defaults to `5s`.|
+`instrumentation_key`| The instrumentation key used to connect to your Azure Monitor resource.|
+`logging_sampling_rate`| Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to `1.0`.|
+`max_batch_size`| Specifies the maximum size of telemetry that's exported at once.|
+`proxies`| Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).|
+`storage_path`| A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is `$USER` + `.opencensus` + `.azure` + `python-file-name`.|
+`timeout`| Specifies the networking timeout to send telemetry to the ingestion service in seconds. Defaults to `10s`.|
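As an illustration (the values below are placeholders, not recommendations), several of these arguments can be combined in a single constructor call:

```python
# Sketch only: a log exporter configured with a few of the arguments above.
# Replace the placeholder connection string with your own.
from opencensus.ext.azure.log_exporter import AzureLogHandler

handler = AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000",
    export_interval=30.0,       # upload every 30 seconds instead of the default 15
    logging_sampling_rate=0.5,  # export roughly half of the log records
    timeout=20.0)               # allow 20 seconds for calls to the ingestion service
```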
## Integrate with Azure Functions
-Users who want to capture custom telemetry in Azure Functions environments are encouraged to used the OpenCensus Python Azure Functions [extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure/tree/main/extensions/functions#opencensus-python-azure-functions-extension). More details can be found in this [document](../../azure-functions/functions-reference-python.md#log-custom-telemetry).
+To capture custom telemetry in Azure Functions environments, use the OpenCensus Python Azure Functions [extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure/tree/main/extensions/functions#opencensus-python-azure-functions-extension). For more information, see the [Azure Functions Python developer guide](../../azure-functions/functions-reference-python.md#log-custom-telemetry).
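At a high level, the extension is configured once at module scope and then correlates telemetry from each invocation. The sketch below follows the extension's readme; the import path and the `PYTHON_ENABLE_WORKER_EXTENSIONS` app setting are assumptions to verify against the linked documentation:

```python
# Sketch only: enable the OpenCensus extension in an Azure Functions app.
# Assumes the opencensus-extension-azure-functions package is installed and
# the PYTHON_ENABLE_WORKER_EXTENSIONS=1 app setting is configured (verify both
# against the linked extension documentation).
import logging

import azure.functions as func
from opencensus.extension.azure.functions import OpenCensusExtension

OpenCensusExtension.configure()

def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    logging.info("Invocation %s will emit correlated custom telemetry.",
                 context.invocation_id)
    return func.HttpResponse("OK")
```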
## Authentication (preview)
+
> [!NOTE]
-> Authentication feature is available starting from `opencensus-ext-azure` v1.1b0
+> The authentication feature is available starting from `opencensus-ext-azure` v1.1b0.
-Each of the Azure Monitor exporters supports configuration of securely sending telemetry payloads via OAuth authentication with Azure Active Directory (AAD).
-For more information, check out the [Authentication](./azure-ad-authentication.md) documentation.
+Each of the Azure Monitor exporters supports configuration of securely sending telemetry payloads via OAuth authentication with Azure Active Directory. For more information, see the [Authentication documentation](./azure-ad-authentication.md).
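As a hedged sketch (assuming the `azure-identity` package; the connection string is a placeholder), a credential is passed through the `credential` argument listed in the table above:

```python
# Sketch only: authenticate the exporter with a managed identity.
# Replace the placeholder connection string with your own.
from azure.identity import ManagedIdentityCredential
from opencensus.ext.azure.trace_exporter import AzureExporter

exporter = AzureExporter(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000",
    credential=ManagedIdentityCredential())
```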
## View your data with queries

You can view the telemetry data that was sent from your application through the **Logs (Analytics)** tab.
-![Screenshot of the overview pane with "Logs (Analytics)" selected in a red box](./media/opencensus-python/0010-logs-query.png)
+![Screenshot of the Overview pane with the Logs (Analytics) tab selected.](./media/opencensus-python/0010-logs-query.png)
In the list under **Active**:
In the list under **Active**:
- For telemetry sent with the Azure Monitor metrics exporter, sent metrics appear under `customMetrics`.
- For telemetry sent with the Azure Monitor logs exporter, logs appear under `traces`. Exceptions appear under `exceptions`.
-For more detailed information about how to use queries and logs, see [Logs in Azure Monitor](../logs/data-platform-logs.md).
+For more information about how to use queries and logs, see [Logs in Azure Monitor](../logs/data-platform-logs.md).
## Learn more about OpenCensus for Python

* [OpenCensus Python on GitHub](https://github.com/census-instrumentation/opencensus-python)
* [Customization](https://github.com/census-instrumentation/opencensus-python/blob/master/README.rst#customization)
-* [Azure Monitor Exporters on GitHub](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)
-* [OpenCensus Integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
-* [Azure Monitor Sample Applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
+* [Azure Monitor exporters on GitHub](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)
+* [OpenCensus integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
+* [Azure Monitor sample applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
## Troubleshooting
For more detailed information about how to use queries and logs, see [Logs in Az
## Next steps

* [Tracking outgoing requests](./opencensus-python-dependency.md)
-* [Tracking out-going requests](./opencensus-python-request.md)
+* [Tracking incoming requests](./opencensus-python-request.md)
* [Application map](./app-map.md)
* [End-to-end performance monitoring](../app/tutorial-performance.md)
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: OpenTelemetry with Azure Monitor overview
-description: Provides an overview of how to use OpenTelemetry with Azure Monitor.
+description: This article provides an overview of how to use OpenTelemetry with Azure Monitor.
Last updated 10/11/2021
# OpenTelemetry overview
-Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, have asked for vendor-neutral instrumentation, and we're delighted to partner with the OpenTelemetry community to create consistent APIs/SDKs across languages.
+Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, have asked for vendor-neutral instrumentation, and we're pleased to partner with the OpenTelemetry community to create consistent APIs and SDKs across languages.
-Microsoft worked together with project stakeholders from two previously popular open-source telemetry projects, [OpenCensus](https://opencensus.io/) and [OpenTracing](https://opentracing.io/), to help create a single project--OpenTelemetry. OpenTelemetry includes contributions from all major cloud and Application Performance Management (APM) vendors and lives within the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) of which Microsoft is a Platinum Member.
+Microsoft worked with project stakeholders from two previously popular open-source telemetry projects, [OpenCensus](https://opencensus.io/) and [OpenTracing](https://opentracing.io/). Together, we helped to create a single project, OpenTelemetry. OpenTelemetry includes contributions from all major cloud and Application Performance Management (APM) vendors and lives within the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/). Microsoft is a Platinum Member of the CNCF.
## Concepts

Telemetry, the data collected to observe your application, can be broken into three types or "pillars":
-1. Distributed Tracing
-2. Metrics
-3. Logs
-Initially the OpenTelemetry community took on Distributed Tracing. Metrics and Logs are still in progress. A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter **preview** offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) **only include Distributed Tracing**.
+- Distributed Tracing
+- Metrics
+- Logs
-There are several sources that explain the three pillars in detail including the [OpenTelemetry community website](https://opentelemetry.io/docs/concepts/data-collection/), [OpenTelemetry Specifications](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md), and [Distributed Systems Observability](https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html) by Cindy Sridharan.
+Initially, the OpenTelemetry community took on Distributed Tracing. Metrics and Logs are still in progress. A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include Distributed Tracing.
+
+The following sources explain the three pillars:
+
+- [OpenTelemetry community website](https://opentelemetry.io/docs/concepts/data-collection/)
+- [OpenTelemetry specifications](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md)
+- [Distributed Systems Observability](https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html) by Cindy Sridharan
In the following sections, we'll cover some telemetry collection basics.
-### Instrumenting your application
+### Instrument your application
-At a basic level, "instrumentingΓÇ¥ is simply enabling an application to capture telemetry.
+At a basic level, "instrumenting" is simply enabling an application to capture telemetry.
There are two methods to instrument your application:
-1. Manual Instrumentation
-2. Automatic Instrumentation (Auto-Instrumentation)
-Manual instrumentation is coding against the OpenTelemetry API. In the context of an end user, it typically refers to installing a language-specific SDK in an application. Manual instrumentation packages consist of our [Azure Monitor OpenTelemetry-based exporter **preview** offerings for .NET, Python, and JavaScript](opentelemetry-enable.md).
+- Manual instrumentation
+- Automatic instrumentation (auto-instrumentation)
+
+Manual instrumentation is coding against the OpenTelemetry API. In the context of a user, it typically refers to installing a language-specific SDK in an application. Manual instrumentation packages consist of [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md).
> [!IMPORTANT]
-> ΓÇ£ManualΓÇ¥ does **NOT** mean youΓÇÖll be required to write complex code to define spans for distributed traces (though it remains an option). A rich and growing set of instrumentation libraries maintained by OpenTelemetry contributors will enable you to effortlessly capture telemetry signals across common frameworks and libraries. A subset of OpenTelemetry Instrumentation Libraries will be supported by Azure Monitor, informed by customer feedback. Additionally, we are working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/).
+> "Manual" doesn't mean you'll be required to write complex code to define spans for distributed traces, although it remains an option. A rich and growing set of instrumentation libraries maintained by OpenTelemetry contributors will enable you to effortlessly capture telemetry signals across common frameworks and libraries.
+>
+> A subset of OpenTelemetry instrumentation libraries will be supported by Azure Monitor, informed by customer feedback. We're also working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/).
+
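To make manual instrumentation concrete, here's a minimal sketch that assumes the Python preview exporter package (`azure-monitor-opentelemetry-exporter`) and uses a placeholder connection string:

```python
# Sketch only: manual instrumentation with the OpenTelemetry API and the
# Azure Monitor trace exporter preview for Python.
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

trace.set_tracer_provider(TracerProvider())
exporter = AzureMonitorTraceExporter.from_connection_string(
    "InstrumentationKey=00000000-0000-0000-0000-000000000000")
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(exporter))

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("hello"):
    print("Hello, World!")  # captured as a span and exported to Azure Monitor
```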
+Auto-instrumentation enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. The Azure Monitor OpenTelemetry-based auto-instrumentation offering consists of the [Java 3.X OpenTelemetry-based GA offering](java-in-process-agent.md). We continue to invest in it informed by customer feedback. The OpenTelemetry community is also experimenting with C# and Python auto-instrumentation, but Azure Monitor is focused on creating a simple and effective manual instrumentation story in the near term.
-On the other hand, auto-instrumentation is enabling telemetry collection through configuration without touching the application's code. While more convenient, it tends to be less configurable and itΓÇÖs not available in all languages. Azure MonitorΓÇÖs OpenTelemetry-based auto-instrumentation offering consists of the [Java 3.X OpenTelemetry-based GA offering](java-in-process-agent.md), and we continue to invest in it informed by customer feedback. The OpenTelemetry community is also experimenting with C# and Python auto-instrumentation, but Azure Monitor is focused on creating a simple and effective manual instrumentation story in the near-term.
+### Send your telemetry
-### Sending your telemetry
+There are two ways to send your data to Azure Monitor (or any vendor):
-There are also two ways to send your data to Azure Monitor (or any vendor).
-1. Direct Exporter
-2. Via an Agent
+- Via a direct exporter
+- Via an agent
-A direct exporter sends telemetry in-process (from the applicationΓÇÖs code) directly to Azure MonitorΓÇÖs ingestion endpoint. The main advantage of this approach is onboarding simplicity.
+A direct exporter sends telemetry in-process (from the application's code) directly to the Azure Monitor ingestion endpoint. The main advantage of this approach is onboarding simplicity.
-**All Azure MonitorΓÇÖs currently supported OpenTelemetry-based offerings use a direct exporter**.
+*All currently supported OpenTelemetry-based offerings in Azure Monitor use a direct exporter*.
-Alternatively, sending telemetry via an agent will provide a path for any OpenTelemetry supported language to send to Azure Monitor via [OTLP](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md). Receiving OTLP will enable customers to observe applications written in languages beyond our [supported languages](platforms.md).
+Alternatively, sending telemetry via an agent will provide a path for any OpenTelemetry-supported language to send to Azure Monitor via the [OpenTelemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md). Receiving OTLP will enable customers to observe applications written in languages beyond our [supported languages](platforms.md).
> [!NOTE]
-> Some customers have begun to use the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md) as an agent alternative even though Microsoft doesnΓÇÖt officially support ΓÇ£Via an AgentΓÇ¥ approach for application monitoring yet. In the meantime, the open source community has contributed an OpenTelemetry-Collector Azure Monitor Exporter that some customers are using to send data to Azure Monitor Application Insights.
+> Some customers have begun to use the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md) as an agent alternative even though Microsoft doesn't officially support the "via an agent" approach for application monitoring yet. In the meantime, the open-source community has contributed an OpenTelemetry-Collector Azure Monitor exporter that some customers are using to send data to Azure Monitor Application Insights.
## Terms
-See [glossary](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/glossary.md) in the OpenTelemetry specifications.
+For terminology, see the [glossary](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/glossary.md) in the OpenTelemetry specifications.
-Some legacy terms in Application Insights are confusing given the industry convergence on OpenTelemetry. The table below highlights these differences. Eventually Application Insights terms will be replaced by OpenTelemetry terms.
+Some legacy terms in Application Insights are confusing because of the industry convergence on OpenTelemetry. The following table highlights these differences. Eventually, Application Insights terms will be replaced by OpenTelemetry terms.
Application Insights | OpenTelemetry |
-Auto-Collectors | Instrumentation Libraries
-Channel | Exporter
-Codeless / Agent-based | Auto-Instrumentation
+Auto-collectors | Instrumentation libraries
+Channel | Exporter
+Codeless / Agent-based | Auto-instrumentation
Traces | Logs
+## Next steps
-## Next step
+The following articles provide language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. The available functionality and limitations of each offering are explained so that you can determine whether OpenTelemetry is right for your project.
-The following pages consist of language-by-language guidance to enable and configure MicrosoftΓÇÖs OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project.
-- [.NET](opentelemetry-enable.md)
+- [.NET](opentelemetry-enable.md)
- [Java](java-in-process-agent.md)
- [JavaScript](opentelemetry-enable.md)
- [Python](opentelemetry-enable.md)
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
Title: Azure Application Insights Overview Dashboard | Microsoft Docs
-description: Monitor applications with Azure Application Insights and Overview Dashboard functionality.
+ Title: Application Insights Overview dashboard | Microsoft Docs
+description: Monitor applications with Application Insights and Overview dashboard functionality.
Last updated 06/03/2019 # Application Insights Overview dashboard
-Application Insights has always provided a summary overview pane to allow quick, at-a-glance assessment of your application's health and performance. The new overview dashboard provides a faster more flexible experience.
+Application Insights has always provided a summary overview pane to allow quick, at-a-glance assessment of your application's health and performance. The new **Overview** dashboard provides a faster, more flexible experience.
## How do I test out the new experience?
-The new overview dashboard now launches by default:
+The new **Overview** dashboard now launches by default.
-![Overview Preview Pane](./media/overview-dashboard/overview.png)
+![Screenshot that shows the Overview preview pane.](./media/overview-dashboard/overview.png)
## Better performance

Time range selection has been simplified to a one-click interface.
-![Time range](./media/overview-dashboard/app-insights-overview-dashboard-03.png)
+![Screenshot that shows the time range.](./media/overview-dashboard/app-insights-overview-dashboard-03.png)
-Overall performance has been greatly increased. You have one-click access to popular features like **Search** and **Analytics**. Each default dynamically updating KPI tile provides insight into corresponding Application Insights features. To learn more about failed requests select **Failures** under the **Investigate** header:
+Overall performance has been greatly increased. You have one-click access to popular features like **Search** and **Analytics**. Each default dynamically updating KPI tile provides insight into corresponding Application Insights features. To learn more about failed requests, under **Investigate**, select **Failures**.
-![Failures](./media/overview-dashboard/app-insights-overview-dashboard-04.png)
+![Screenshot that shows failures.](./media/overview-dashboard/app-insights-overview-dashboard-04.png)
## Application dashboard
-Application dashboard leverages the existing dashboard technology within Azure to provide a fully customizable single pane view of your application health and performance.
+The application dashboard uses the existing dashboard technology within Azure to provide a fully customizable single pane view of your application health and performance.
-To access the default dashboard select _Application Dashboard_ in the upper left corner.
+To access the default dashboard, select **Application Dashboard** in the upper-left corner.
-![Screenshot shows the Application Dashboard button highlighted.](./media/overview-dashboard/app-insights-overview-dashboard-05.png)
+![Screenshot that shows the Application Dashboard button.](./media/overview-dashboard/app-insights-overview-dashboard-05.png)
-If this is your first time accessing the dashboard, it will launch a default view:
+If this is your first time accessing the dashboard, it opens a default view.
-![Dashboard view](./media/overview-dashboard/0001-dashboard.png)
+![Screenshot that shows the Dashboard view.](./media/overview-dashboard/0001-dashboard.png)
-You can keep the default view if you like it. Or you can also add, and delete from the dashboard to best fit the needs of your team.
+You can keep the default view if you like it. Or you can add and remove tiles to best fit the needs of your team.
> [!NOTE]
-> All users with access to the Application Insights resource share the same Application dashboard experience. Changes made by one user will modify the view for all users.
+> All users with access to the Application Insights resource share the same **Application Dashboard** experience. Changes made by one user will modify the view for all users.
-To navigate back to the overview experience just select:
+To go back to the overview experience, select the **Overview** button.
-![Overview Button](./media/overview-dashboard/app-insights-overview-dashboard-07.png)
+![Screenshot that shows the Overview button.](./media/overview-dashboard/app-insights-overview-dashboard-07.png)
## Troubleshooting
-There is currently a limit of 30 days of data for data displayed in a dashboard.If you select a time filter beyond 30 days, or if you select **Configure tile settings** and set a custom time range in excess of 30 days your dashboard will not display beyond 30 days of data, even with the default data retention of 90 days. There is currently no workaround for this behavior.
+Currently, there's a limit of 30 days of data displayed in a dashboard. If you select a time filter beyond 30 days, or if you select **Configure tile settings** and set a custom time range in excess of 30 days, your dashboard won't display beyond 30 days of data. This is the case even with the default data retention of 90 days. There's currently no workaround for this behavior.
-The default Application Dashboard is created during Application Insights resource creation. If you move or rename your Application Insights instance, then queries on the dashboard will fail with Resource not found errors as the dashboard queries rely on the original resource URI. Delete the default dashboard, then from the Application Insights Overview resource menu select Application Dashboard again and the default dashboard will be re-created with the new resource name. Make other custom edits to the dashboard as needed.
+The default **Application Dashboard** is created during Application Insights resource creation. If you move or rename your Application Insights instance, queries on the dashboard will fail with "Resource not found" errors because the dashboard queries rely on the original resource URI. Delete the default dashboard. On the Application Insights **Overview** resource menu, select **Application Dashboard** again. The default dashboard will be re-created with the new resource name. Make other custom edits to the dashboard as needed.
## Next steps

- [Funnels](./usage-funnels.md)
- [Retention](./usage-retention.md)
-- [User Flows](./usage-flows.md)-
+- [User flows](./usage-flows.md)
azure-monitor Tutorial Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-alert.md
Title: Send alerts from Azure Application Insights | Microsoft Docs
-description: Tutorial to send alerts in response to errors in your application using Azure Application Insights.
+description: Tutorial shows how to send alerts in response to errors in your application by using Application Insights.
Last updated 04/10/2019
-# Monitor and alert on application health with Azure Application Insights
+# Monitor and alert on application health with Application Insights
-Azure Application Insights allows you to monitor your application and send you alerts when it is either unavailable, experiencing failures, or suffering from performance issues. This tutorial takes you through the process of creating tests to continuously check the availability of your application.
+Application Insights allows you to monitor your application and sends you alerts when it's unavailable, experiencing failures, or suffering from performance issues. This tutorial takes you through the process of creating tests to continuously check the availability of your application.
-You learn how to:
+You'll learn how to:
> [!div class="checklist"]
-> * Create availability test to continuously check the response of the application
-> * Send mail to administrators when a problem occurs
+> * Create availability tests to continuously check the response of the application.
+> * Send mail to administrators when a problem occurs.
## Prerequisites
-To complete this tutorial:
-
-Create an [Application Insights resource](../app/create-new-resource.md).
+To complete this tutorial, create an [Application Insights resource](../app/create-new-resource.md).
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create availability test
-Availability tests in Application Insights allow you to automatically test your application from various locations around the world. In this tutorial, you will perform a url test to ensure that your web application is available. You could also create a complete walkthrough to test its detailed operation.
+Availability tests in Application Insights allow you to automatically test your application from various locations around the world. In this tutorial, you'll perform a URL test to ensure that your web application is available. You could also create a complete walkthrough to test its detailed operation.
+
+1. Select **Application Insights** and then select your subscription.
-1. Select **Application Insights** and then select your subscription.
+1. Under the **Investigate** menu, select **Availability**. Then select **Create test**.
-2. Select **Availability** under the **Investigate** menu and then click **Create test**.
+ ![Screenshot that shows adding an availability test.](media/tutorial-alert/add-test-001.png)
- ![Add availability test](media/tutorial-alert/add-test-001.png)
+1. Enter a name for the test and leave the other defaults. This selection will trigger requests for the application URL every 5 minutes from five different geographic locations.
-3. Type in a name for the test and leave the other defaults. This selection will trigger requests for the application url every 5 minutes from five different geographic locations.
+1. Select **Alerts** to open the **Alerts** dropdown where you can define details for how to respond if the test fails. Choose **Near-realtime** and set the status to **Enabled.**
-4. Select **Alerts** to open the **Alerts** dropdown where you can define details for how to respond if the test fails. Choose **Near-realtime** and set the status to **Enabled.**
+ Enter an email address to notify when the alert criteria are met. Optionally, you can enter the address of a webhook to call when the alert criteria are met.
- Type in an email address to send when the alert criteria is met. You could optionally type in the address of a webhook to call when the alert criteria is met.
+ ![Screenshot that shows creating a test.](media/tutorial-alert/create-test-001.png)
- ![Create test](media/tutorial-alert/create-test-001.png)
+1. Return to the test panel, select the ellipses, and edit the alert to enter the configuration for your near-realtime alert.
-5. Return to the test panel, select the ellipses and edit alert to enter the configuration for your near-realtime alert.
+ ![Screenshot that shows editing an alert.](media/tutorial-alert/edit-alert-001.png)
- ![Edit alert](media/tutorial-alert/edit-alert-001.png)
+1. Set failed locations to greater than or equal to 3. Create an [action group](../alerts/action-groups.md) to configure who gets notified when your alert threshold is breached.
-6. Set failed locations to greater than or equal to 3. Create an [action group](../alerts/action-groups.md) to configure who gets notified when your alert threshold is breached.
+ ![Screenshot that shows saving alert UI.](media/tutorial-alert/save-alert-001.png)
- ![Save alert UI](media/tutorial-alert/save-alert-001.png)
+1. After you've configured your alert, select the test name to view details from each location. Tests can be viewed in both line graph and scatter plot format to visualize the successes and failures for a given time range.
-7. Once you have configured your alert, click on the test name to view details from each location. Tests can be viewed in both line graph and scatter plot format to visualize the success/failures for a given time range.
+ ![Screenshot that shows test details.](media/tutorial-alert/test-details-001.png)
- ![Test details](media/tutorial-alert/test-details-001.png)
+1. To see the details of any test, select its dot in the scatter chart to open the **End-to-end transaction details** screen. The following example shows the details for a failed request.
-8. You can drill down into the details of any test by clicking on its dot in the scatter chart. This will launch the end-to-end transaction details view. The example below shows the details for a failed request.
+ ![Screenshot that shows test results.](media/tutorial-alert/test-result-001.png)
- ![Test result](media/tutorial-alert/test-result-001.png)
-
## Next steps

Now that you've learned how to alert on issues, advance to the next tutorial to learn how to analyze how users are interacting with your application.

> [!div class="nextstepaction"]
> [Understand users](./tutorial-users.md)
azure-monitor Container Insights Prometheus Metrics Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-metrics-addon.md
This article describes how to configure Container insights to send Prometheus me
## Prerequisites

-- The cluster must be [onboarded to Container insights](container-insights-enable-aks.md).
- The cluster must use [managed identity authentication](container-insights-enable-aks.md#migrate-to-managed-identity-authentication).
- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor Workspace.
  - Microsoft.ContainerService
Use any of the following methods to install the metrics addon on your cluster an
Managed Prometheus can be enabled in the Azure portal through either Container insights or an Azure Monitor workspace.
+### Prerequisites
+
+- The cluster must be [onboarded to Container insights](container-insights-enable-aks.md).
+ #### Enable from Container insights 1. Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster.
Use the following procedure to install the Azure Monitor agent and the metrics a
#### Prerequisites - Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.-- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](https://learn.microsoft.com/cli/azure/azure-cli-extensions-overview).
+- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
- Azure CLI version 2.41.0 or higher is required for this feature. #### Install metrics addon
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
For more functionality, create a diagnostic setting to send the activity log to
For details on how to create a diagnostic setting, see [Create diagnostic settings to send platform logs and metrics to different destinations](./diagnostic-settings.md). > [!NOTE]
-> Entries in the activity log are system generated and can't be changed or deleted.
+> * Entries in the Activity Log are system generated and can't be changed or deleted.
+> * Entries in the Activity Log represent control plane changes, like a virtual machine restart. Entries that aren't control plane changes should be written to [Azure Resource Logs](https://learn.microsoft.com/azure/azure-monitor/essentials/resource-logs) instead.
## Retention period
Log profiles are the legacy method for sending the activity log to storage or ev
#### [PowerShell](#tab/powershell)
-If a log profile already exists, you first must remove the existing log profile and then create a new one.
+If a log profile already exists, you first must remove the existing log profile, and then create a new one.
1. Use `Get-AzLogProfile` to identify if a log profile exists. If a log profile exists, note the `Name` property.
This sample PowerShell script creates a log profile that writes the activity log
#### [CLI](#tab/cli)
-If a log profile already exists, you first must remove the existing log profile and then create a log profile.
+If a log profile already exists, you first must remove the existing log profile, and then create a log profile.
1. Use `az monitor log-profiles list` to identify if a log profile exists. 1. Use `az monitor log-profiles delete --name "<log profile name>` to remove the log profile by using the value from the `name` property.
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
description: Overview of Azure Monitor workspace, which is a unique environment
Previously updated : 05/09/2022 Last updated : 10/05/2022 # Azure Monitor workspace (preview)
The following table lists the contents of Azure Monitor workspaces. This table w
| Prometheus metrics | Native platform metrics<br>Native custom metrics<br>Prometheus metrics | - ## Workspace design A single Azure Monitor workspace can collect data from multiple sources, but there may be circumstances where you require multiple workspaces to address your particular business requirements. Azure Monitor workspace design is similar to [Log Analytics workspace design](../logs/workspace-design.md). There are several reasons that you may consider creating additional workspaces including the following. -- If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant.-- Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations.-- You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies.
+- Azure tenants. If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant.
+- Azure regions. Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations.
+- Data ownership. You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies.
+- Workspace limits. See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for current capacity limits related to Azure Monitor workspaces.
+- Multiple environments. You may have Azure Monitor workspaces supporting different environments such as test, pre-production, and production.
+
+> [!NOTE]
+> You cannot currently query across multiple Azure Monitor workspaces.
+
+## Workspace limits
+Workspace limits currently apply only to Prometheus metrics, because that's the only data currently stored in Azure Monitor workspaces.
Many customers will choose an Azure Monitor workspace design to match their Log Analytics workspace design. Since Azure Monitor workspaces currently only contain Prometheus metrics, and metric data is typically not as sensitive as log data, you may choose to further consolidate your Azure Monitor workspaces for simplicity.
++
## Create an Azure Monitor workspace

In addition to the methods below, you may be given the option to create a new Azure Monitor workspace in the Azure portal as part of a configuration that requires one. For example, when you configure Azure Monitor managed service for Prometheus, you can select an existing Azure Monitor workspace or create a new one.
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
+
+ Title: Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana
+description: Details on how to configure Azure Monitor managed service for Prometheus (preview) as data source for both Azure Managed Grafana and self-hosted Grafana in an Azure virtual machine.
++ Last updated : 09/28/2022++
+# Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana using managed system identity
+
+[Azure Monitor managed service for Prometheus (preview)](prometheus-metrics-overview.md) allows you to collect and analyze metrics at scale using a [Prometheus](https://aka.ms/azureprometheus-promio)-compatible monitoring solution. The most common way to analyze and present Prometheus data is with a Grafana dashboard. This article explains how to configure Prometheus as a data source for both [Azure Managed Grafana](../../managed-grafan) and [self-hosted Grafana](https://grafana.com/) running in an Azure virtual machine using managed system identity authentication.
++
+## Azure Managed Grafana
+The following sections describe how to configure Azure Monitor managed service for Prometheus (preview) as a data source for Azure Managed Grafana.
+
+> [!IMPORTANT]
+> This section describes the manual process for adding an Azure Monitor managed service for Prometheus data source to Azure Managed Grafana. You can achieve the same functionality by linking the Azure Monitor workspace and Grafana workspace as described in [Link a Grafana workspace](azure-monitor-workspace-overview.md#link-a-grafana-workspace).
+
+### Configure system identity
+Your Grafana workspace requires the following:
+
+- System managed identity enabled
+- *Monitoring Data Reader* role for the Azure Monitor workspace
+
+Both of these settings are configured by default when you create your Grafana workspace. Verify these settings on the **Identity** page for your Grafana workspace.
+++
+**Configure from Grafana workspace**<br>
+Use the following steps to allow access to all Azure Monitor workspaces in a resource group or subscription:
+
+1. Open the **Identity** page for your Grafana workspace in the Azure portal.
+2. If **Status** is **No**, change it to **Yes**.
+3. Click **Azure role assignments** to review the existing access in your subscription.
+4. If **Monitoring Data Reader** is not listed for your subscription or resource group:
+ 1. Click **+ Add role assignment**.
+ 2. For **Scope**, select either **Subscription** or **Resource group**.
+ 3. For **Role**, select **Monitoring Data Reader**.
+ 4. Click **Save**.
++
+**Configure from Azure Monitor workspace**<br>
+Use the following steps to allow access to only a specific Azure Monitor workspace:
+
+1. Open the **Access Control (IAM)** page for your Azure Monitor workspace in the Azure portal.
+2. Click **Add role assignment**.
+3. Select **Monitoring Data Reader** and click **Next**.
+4. For **Assign access to**, select **Managed identity**.
+5. Click **+ Select members**.
+6. For **Managed identity**, select **Azure Managed Grafana**.
+7. Select your Grafana workspace and then click **Select**.
+8. Click **Review + assign** to save the configuration.
+
+### Create Prometheus data source
+
+Azure Managed Grafana supports Azure authentication by default.
+
+1. Open the **Overview** page for your Azure Monitor workspace in the Azure portal.
+2. Copy the **Query endpoint**, which you'll need in a step below.
+3. Open your Azure Managed Grafana workspace in the Azure portal.
+4. Click on the **Endpoint** to view the Grafana workspace.
+5. Select **Configuration** and then **Data source**.
+6. Click **Add data source** and then **Prometheus**.
+7. For **URL**, paste in the query endpoint for your Azure Monitor workspace.
+8. Select **Azure Authentication** to turn it on.
+9. For **Authentication** under **Azure Authentication**, select **Managed Identity**.
+10. Scroll to the bottom of the page and click **Save & test**.
+++
+## Self-managed Grafana
+The following sections describe how to configure Azure Monitor managed service for Prometheus (preview) as a data source for self-managed Grafana on an Azure virtual machine.
+### Configure system identity
+Azure virtual machines support both system-assigned and user-assigned identities. The following steps configure a system-assigned identity.
+
+**Configure from Azure virtual machine**<br>
+Use the following steps to allow access to all Azure Monitor workspaces in a resource group or subscription:
+
+1. Open the **Identity** page for your virtual machine in the Azure portal.
+2. If **Status** is **No**, change it to **Yes**.
+3. Click **Azure role assignments** to review the existing access in your subscription.
+4. If **Monitoring Data Reader** is not listed for your subscription or resource group:
+ 1. Click **+ Add role assignment**.
+ 2. For **Scope**, select either **Subscription** or **Resource group**.
+ 3. For **Role**, select **Monitoring Data Reader**.
+ 4. Click **Save**.
+
+**Configure from Azure Monitor workspace**<br>
+Use the following steps to allow access to only a specific Azure Monitor workspace:
+
+1. Open the **Access Control (IAM)** page for your Azure Monitor workspace in the Azure portal.
+2. Click **Add role assignment**.
+3. Select **Monitoring Data Reader** and click **Next**.
+4. For **Assign access to**, select **Managed identity**.
+5. Click **+ Select members**.
+6. For **Managed identity**, select **Virtual machine**.
+7. Select your virtual machine and then click **Select**.
+8. Click **Review + assign** to save the configuration.
++++
+### Create Prometheus data source
+
+Versions 9.x and greater of Grafana support Azure Authentication, but it's not enabled by default. To enable this feature, you need to update your Grafana configuration. To determine where your Grafana.ini file is and how to edit your Grafana config, see the Grafana Labs documentation. Once you know where your configuration file lives on your VM, make the following update:
++
+1. Locate and open the *Grafana.ini* file on your virtual machine.
+2. Under the `[auth]` section of the configuration file, change the `azure_auth_enabled` setting to `true`.
+3. Open the **Overview** page for your Azure Monitor workspace in the Azure portal.
+4. Copy the **Query endpoint**, which you'll need in a step below.
+5. Open your self-hosted Grafana instance in a browser.
+6. Sign in to your Grafana workspace.
+7. Select **Configuration** and then **Data source**.
+8. Click **Add data source** and then **Prometheus**.
+9. For **URL**, paste in the query endpoint for your Azure Monitor workspace.
+10. Select **Azure Authentication** to turn it on.
+11. For **Authentication** under **Azure Authentication**, select **Managed Identity**.
+12. Scroll to the bottom of the page and click **Save & test**.
++++
+## Next steps
+
+- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
+- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Azure Monitor managed service for Prometheus can currently collect data from any
## Grafana integration
-The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan). Connect your Azure Monitor workspace to a Grafana workspace so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
+The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace). Link your Azure Monitor workspace to a Grafana workspace so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
## Alerts Azure Monitor managed service for Prometheus adds a new Prometheus alert type for creating alerts using PromQL queries. You can view fired and resolved Prometheus alerts in the Azure portal along with other alert types. Prometheus alerts are configured with the same [alert rules](https://aka.ms/azureprometheus-promio-alertrules) used by Prometheus. For your AKS cluster, you can use a [set of predefined Prometheus alert rules]
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
There are two approaches to investigating the amount of data collected for Appli
> [!NOTE]
-> Queries against Application Insights table except `SystemEvents` will work for both a workspace-based and classic Application Insights resource, since [backwards compatibility](../app/convert-classic-resource.md#understanding-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
+> Queries against Application Insights tables, except `SystemEvents`, will work for both workspace-based and classic Application Insights resources, since [backwards compatibility](../app/convert-classic-resource.md#understand-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
**Dependency operations generate the most data volume in the last 30 days (workspace-based or classic)**
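As an illustrative sketch of the kind of query that heading describes — assuming a workspace-based resource, where the `dependencies` table exposes the `_BilledSize` column; the linked article's exact query may differ:

```kusto
// Approximate billed volume of dependency telemetry per operation over the last 30 days.
dependencies
| where timestamp > ago(30d)
| summarize billedBytes = sum(_BilledSize) by operation_Name
| sort by billedBytes desc
```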
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-p
By default, all tables in your Log Analytics workspace are Analytics tables, and available for query and alerts. You can currently configure the following tables for Basic Logs:
-- All tables created with or converted to the [Data Collection Rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md)
+- All custom tables created with or migrated to the [Data Collection Rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md)
- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) - Used in [Container Insights](../containers/container-insights-overview.md) and includes verbose text-based log records.
- [AppTraces](/azure/azure-monitor/reference/tables/apptraces) - Freeform Application Insights traces.
- [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/ContainerAppConsoleLogs) - Logs generated by Container Apps, within a Container App environment.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 08/08/2022 Last updated : 10/13/2022 - # What's new in Azure Monitor documentation
This article lists significant changes to Azure Monitor documentation.
## September 2022 +
+### Agents
+
+| Article | Description |
+|||
+|[Azure Monitor Agent overview](https://docs.microsoft.com/azure/azure-monitor/agents/agents-overview)|Added Azure Monitor Agent support for ARM64-based virtual machines for a number of distributions. <br><br>Azure Monitor Agent and legacy agents don't support machines and appliances that run heavily customized or stripped-down versions of operating system distributions. <br><br>Azure Monitor Agent versions 1.15.2 and higher now support syslog RFC formats, including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).|
+
+### Alerts
+
+| Article | Description |
+|||
+|[Convert ITSM actions that send events to ServiceNow to secure webhook actions](https://docs.microsoft.com/azure/azure-monitor/alerts/itsm-convert-servicenow-to-webhook)|As of September 2022, we're starting the 3-year process of deprecating support for using ITSM actions to send events to ServiceNow. Learn how to convert ITSM actions that send events to ServiceNow to secure webhook actions.|
+|[Create a new alert rule](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-create-new-alert-rule)|Added descriptions of all available monitoring services to the create a new alert rule and alert processing rules pages. <br><br>Added support for regional processing for metric alert rules that monitor a custom metric with the scope defined as one of the supported regions. <br><br> Clarified that selecting the **Automatically resolve alerts** setting makes log alerts stateful.|
+|[Types of Azure Monitor alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-types)|Azure Database for PostgreSQL - Flexible Servers is supported for monitoring multiple resources.|
+|[Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-log-api-switch)|The process of moving legacy log alert rules management from the legacy API to the current API is now supported by the government cloud.|
+
+### Application insights
+
+| Article | Description |
+|||
+|[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent)|New OpenTelemetry `@WithSpan` annotation guidance.|
+|[Capture Application Insights custom metrics with .NET and .NET Core](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-asp-net-custom-metrics)|Tutorial steps and images have been updated.|
+|[Configuration options - Azure Monitor Application Insights for Java](https://learn.microsoft.com/azure/azure-monitor/app/java-in-process-agent)|Connection string guidance updated.|
+|[Enable Application Insights for ASP.NET Core applications](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-asp-net-core)|Tutorial steps and images have been updated.|
+|[Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](https://docs.microsoft.com/azure/azure-monitor/app/opentelemetry-enable)|Our product feedback link at the bottom of each document has been fixed.|
+|[Filter and preprocess telemetry in the Application Insights SDK](https://docs.microsoft.com/azure/azure-monitor/app/api-filtering-sampling)|Added sample initializer to control which client IP gets used as part of geo-location mapping.|
+|[Java Profiler for Azure Monitor Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-profiler)|Our new Java Profiler was announced at Ignite. Read all about it!|
+|[Release notes for Azure Web App extension for Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/web-app-extension-release-notes)|Added release notes for 2.8.44 and 2.8.43.|
+|[Resource Manager template samples for creating Application Insights resources](https://docs.microsoft.com/azure/azure-monitor/app/resource-manager-app-resource)|Fixed inaccurate tagging of workspace-based resources as still in Preview.|
+|[Unified cross-component transaction diagnostics](https://docs.microsoft.com/azure/azure-monitor/app/transaction-diagnostics)|A complete FAQ section is added to help troubleshoot Azure portal errors, such as "error retrieving data".|
+|[Upgrading from Application Insights Java 2.x SDK](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-upgrade-from-2x)|Additional upgrade guidance added. Java 2.x has been deprecated.|
+|[Using Azure Monitor Application Insights with Spring Boot](https://docs.microsoft.com/azure/azure-monitor/app/java-spring-boot)|Configuration options have been updated.|
+
+### Autoscale
+| Article | Description |
+|||
+|[Autoscale with multiple profiles](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-multiprofile)|New article: Using multiple profiles in autoscale with CLI, PowerShell, and templates.|
+|[Flapping in Autoscale](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-flapping)|New article: Flapping in autoscale.|
+|[Understand Autoscale settings](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-understanding-settings)|Clarified how often autoscale runs.|
+
+### Change analysis
+| Article | Description |
+|||
+|[Troubleshoot Azure Monitor's Change Analysis](https://docs.microsoft.com/azure/azure-monitor/change/change-analysis-troubleshoot)|Added a section about partial data, and how to mitigate it, to the troubleshooting guide.|
+
+### Essentials
+| Article | Description |
+|||
+|[Structure of transformation in Azure Monitor (preview)](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-transformations-structure)|New KQL functions supported.|
+
+### Virtual Machines
+| Article | Description |
+|||
+|[Migrate from Service Map to Azure Monitor VM insights](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-migrate-from-service-map)|Added a new article with guidance for migrating from the Service Map solution to Azure Monitor VM insights.|
+
### Network Insights
| Article | Description |
This article lists significant changes to Azure Monitor documentation.
|[Network Insights](../network-watcher/network-insights-overview.md)| Onboarded the new topology experience to Network Insights in Azure Monitor.|
-## August 2022
+### Visualizations
+| Article | Description |
+|||
+|[Access deprecated Troubleshooting guides in Azure Workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-access-troubleshooting-guide)|New article: Access deprecated Troubleshooting guides in Azure Workbooks.|
+## August 2022
+
### Agents
| Article | Description |
This article lists significant changes to Azure Monitor documentation.
|:|:|
|[Create Azure Monitor alert rules](alerts/alerts-create-new-alert-rule.md)|Added support for data processing in a specified region, for action groups and for metric alert rules that monitor a custom metric.|
-### Application-insights
+### Application insights
| Article | Description |
|||
This article lists significant changes to Azure Monitor documentation.
|[Autoscale in Microsoft Azure](autoscale/autoscale-overview.md)|Updated conceptual diagrams|
|[Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)](autoscale/autoscale-predictive.md)|Predictive autoscale (preview) is now available in all regions|
-### Change-analysis
+### Change analysis
| Article | Description |
|||
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) |
| Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) |
| Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-05-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-05-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-05-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) |
| Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) |
| Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
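As a hedged Bicep sketch (the parameter name is assumed, not from the table), one of these `list*` functions could be called on an existing resource using the 2022-10-01 API version shown above:

```bicep
param workspaceName string

// Reference an existing Azure Machine Learning workspace.
resource mlWorkspace 'Microsoft.MachineLearningServices/workspaces@2022-10-01' existing = {
  name: workspaceName
}

// listKeys() executes at deployment time; returned as an output here purely for illustration.
// Avoid emitting secrets from real templates.
output workspaceKeys object = mlWorkspace.listKeys()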
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) |
| Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) |
| Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-05-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-05-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-05-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) |
| Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) |
| Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
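For the ARM template (JSON) equivalent, a hedged outputs excerpt might look like the following. `workspaceName` is an assumed parameter, this is an excerpt rather than a complete template, and returning secrets as outputs is shown for illustration only:

```json
{
  "outputs": {
    "workspaceKeys": {
      "type": "object",
      "value": "[listKeys(resourceId('Microsoft.MachineLearningServices/workspaces', parameters('workspaceName')), '2022-10-01')]"
    }
  }
}
```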
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
vSAN datastores use data-at-rest encryption by default using keys stored in Azur
## Datastore capacity expansion options
-The vSAN datastore capacity can be expanded by connecting Azure storage resources such as [Azure NetApp Files volumes as datastores](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts). Virtual machines can be migrated between vSAN and Azure NetApp Files datastores using storage vMotion. Azure NetApp Files datastores can be replicated to other regions using storage based [Cross-region replication](/azure/azure-netapp-files/cross-region-replication-introduction) for testing, development and failover purposes.
+The vSAN datastore capacity can be expanded by connecting Azure storage resources such as [Azure NetApp Files volumes as datastores](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts). Virtual machines can be migrated between vSAN and Azure NetApp Files datastores using storage vMotion.
Azure NetApp Files is available in [Ultra, Premium and Standard performance tiers](/azure/azure-netapp-files/azure-netapp-files-service-levels) to allow for adjusting performance and cost to the requirements of the workloads.
## Azure storage integration
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
VMware HCX Connector deploys a subset of virtual appliances (automated) that req
- vMotion
- Replication
- Uplink
+
+ > [!NOTE]
+ > * Azure VMware Solution connected via VPN should set Uplink Network Profile MTUs to 1350 to account for IPsec overhead.
+ > * Azure VMware Solution defaults to an MTU of 1500, which is sufficient for most ExpressRoute implementations.
+ > * If your ExpressRoute provider does not support jumbo frames, the MTU may need to be lowered in ExpressRoute setups as well.
+ > * Changes to MTU should be performed on both HCX Connector (on-premises) and HCX Cloud Manager (Azure VMware Solution) network profiles.
1. Under **Infrastructure**, select **Interconnect** > **Multi-Site Service Mesh** > **Network Profiles** > **Create Network Profile**.
backup Backup Azure Enhanced Soft Delete About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md
+
+ Title: Overview of enhanced soft delete for Azure Backup (preview)
+description: This article gives an overview of enhanced soft delete for Azure Backup.
++ Last updated : 10/13/2022+++++
+# About Enhanced soft delete for Azure Backup (preview)
+
+[Soft delete](backup-azure-security-feature-cloud.md) for Azure Backup enables you to recover your backup data even after it's deleted. This is useful when:
+
+- You've accidentally deleted backup data and you need it back.
+- Backup data is maliciously deleted by ransomware or bad actors.
+
+*Basic soft delete* has been available for Recovery Services vaults for some time; *enhanced soft delete* now provides additional data protection capabilities.
+
+In this article, you'll learn about:
+
+>[!div class="checklist"]
+>- What's soft delete?
+>- What's enhanced soft delete?
+>- States of soft delete setting
+>- Soft delete retention
+>- Soft deleted items reregistration
+>- Pricing
+>- Supported scenarios
+
+## What's soft delete?
+
+[Soft delete](backup-azure-security-feature-cloud.md) primarily delays permanent deletion of backup data and gives you an opportunity to recover data after deletion. This deleted data is retained for a specified duration (*14*-*180* days) called the soft delete retention period.
+
+After deletion (while the data is in the soft deleted state), if you need the deleted data, you can undelete it. This returns the data to the *stop protection with retain data* state. You can then use it to perform restore operations, or you can resume backups for this instance.
+
+The following diagram shows the flow of a backup item (or a backup instance) that gets deleted:
++
+## What's enhanced soft delete?
+
+The key benefits of enhanced soft delete are:
+
+- **Always-on soft delete**: You can now opt to make soft delete always-on (irreversible). Once you opt in, you can't disable the soft delete settings for the vault. [Learn more](#states-of-soft-delete-settings).
+- **Configurable soft delete retention**: You can now specify the retention duration for deleted backup data, ranging from *14* to *180* days. By default, the retention duration is set to *14* days (as per basic soft delete) for the vault, and you can extend it as required.
+
+ >[!Note]
+ >Soft delete doesn't cost you anything for the first 14 days of retention; however, you're charged for the retention period beyond 14 days. [Learn more](#states-of-soft-delete-settings).
+- **Re-registration of soft deleted items**: You can now register items that are in the soft deleted state with another vault. However, you can't register the same item with two vaults for active backups.
+- **Soft delete and reregistration of backup containers**: You can now unregister backup containers (which you can soft delete) if you've deleted all backup items in the container. You can then register such soft deleted containers to other vaults. This applies only to applicable workloads, including SQL Server in Azure VM backup, SAP HANA in Azure VM backup, and backup of on-premises servers.
+- **Soft delete across workloads**: Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, operational backup for blobs, and disk and VM snapshot backups.
+
+## States of soft delete settings
+
+The following table lists the soft delete properties for vaults:
+
+| State | Description |
+| | |
+| **Disabled** | Deleted items aren't retained in the soft deleted state, and are permanently deleted. |
| **Enabled** | This is the default state for a new vault. <br><br> Deleted items are retained for the specified soft delete retention period, and are permanently deleted after the expiry of the soft delete retention duration. <br><br> Disabling soft delete immediately purges deleted data. |
+| **Enabled and always-on** | Deleted items are retained for the specified soft delete retention period, and are permanently deleted after the expiry of the soft delete retention duration. <br><br> Once you opt for this state, soft delete can't be disabled. |
+
+## Soft delete retention
+
+Soft delete retention is the retention period (in days) of a deleted item in the soft deleted state. Once the soft delete retention period elapses (from the date of deletion), the item is permanently deleted, and you can't undelete it. You can choose a soft delete retention period between *14* and *180* days. Longer durations allow you to recover data from threats that may take time to identify (for example, Advanced Persistent Threats).
+
+>[!Note]
+>Soft delete retention for *14* days involves no cost. However, [regular backup charges apply for additional retention days](#pricing).
+>
+>By default, soft delete retention is set to *14* days and you can change it any time. However, the *soft delete retention period* that is active at the time of the deletion governs retention of the item in soft deleted state.
+
+## Soft deleted items reregistration
+
+If a backup item/container is in soft deleted state, you can register it to a vault different from the original one where the soft deleted data belongs.
+
+>[!Note]
+>You can't actively protect one item in two vaults simultaneously. So, if you start protecting a backup container using another vault, you can no longer re-protect the same backup container in the previous vault.
+
+## Pricing
+
+Soft deleted data involves no retention cost for the default duration of *14* days. Retention of soft deleted data beyond the default period incurs regular backup charges.
+
+For example, you've deleted backups for one of the instances in the vault that has soft delete retention of *60* days. If you want to recover the soft deleted data after *50* days of deletion, the pricing is:
+
+- Standard rates (similar rates apply when the instance is in *stop protection with retain data* state) are applicable for the first *36* days (*50* days of data retained in soft deleted state minus *14* days of default soft delete retention).
+
+- No charges for the remaining *14* days, which fall within the default soft delete retention period.
+
+## Supported scenarios
+
+- Enhanced soft delete is currently available in the following regions: West Central US, Australia East, North Europe.
+- It's supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults.
+- All existing Recovery Services vaults in the preview regions are upgraded with an option to use enhanced soft delete.
+
+## Next steps
+
+[Configure and manage enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-configure-manage.md).
backup Backup Azure Enhanced Soft Delete Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-configure-manage.md
+
+ Title: Configure and manage enhanced soft delete for Azure Backup (preview)
+description: This article describes how to configure and manage enhanced soft delete for Azure Backup.
+ Last updated : 10/13/2022+++++
+# Configure and manage enhanced soft delete in Azure Backup (preview)
+
+This article describes how to configure and use enhanced soft delete to protect your data and recover backups, if they're deleted.
+
+In this article, you'll learn about:
+
+>[!div class="checklist"]
+>- Before you start
+>- Enable soft delete with always-on state
+>- Delete a backup item
+>- Recover a soft-deleted backup item
+>- Unregister containers
+>- Disable soft delete
+
+## Before you start
+
+- Enhanced soft delete is supported for Recovery Services vaults and Backup vaults.
+- It's supported for new and existing vaults.
+- All existing Recovery Services vaults in the [preview regions](backup-azure-enhanced-soft-delete-about.md#supported-scenarios) are upgraded with an option to use enhanced soft delete.
++
+## Enable soft delete with always-on state
+
+Soft delete is enabled by default for all new vaults you create. To make enabled settings irreversible, select **Enable Always-on Soft Delete**.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to **Recovery Services vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-inline.png" alt-text="Screenshot showing you how to open Soft Delete blade." lightbox="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-expanded.png":::
+
+ The soft delete settings for cloud and hybrid workloads are already enabled, unless you've explicitly disabled them earlier.
+
+1. If soft delete settings are disabled for any workload type in the **Soft Delete** blade, select the respective checkboxes to enable them.
+
+ >[!Note]
+ >Enabling soft delete for hybrid workloads also enables other security settings, such as multi-factor authentication and alert notifications for backup of workloads running on on-premises servers.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >- There is no cost for soft delete for *14* days. However, deleted instances in soft delete state are charged if the soft delete retention period is *>14* days. Learn about [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+ >- Once configured, the soft delete retention period applies to all soft deleted instances of cloud and hybrid workloads in the vault.
+
+1. Select the **Enable Always-on Soft delete** checkbox to enable soft delete and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete.png" alt-text="Screenshot showing you how to enable always-on state of soft delete.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to **Backup vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties.png" alt-text="Screenshot showing you how to open soft delete blade for Backup vault.":::
+
+ Soft delete is enabled by default with the checkboxes selected.
+
+1. If you've explicitly disabled soft delete for any workload type in the **Soft Delete** blade earlier, select the checkboxes to enable them.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >There is no cost for enabling soft delete for *14* days. However, you're charged for the soft delete instances if soft delete retention period is *>14* days. Learn about the [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+
+1. Select the **Enable Always-on Soft Delete** checkbox to enable soft delete always-on and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete-backup-vault.png" alt-text="Screenshot showing you how to enable always-on state for Backup vault.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+++
+## Delete a backup item
+
+You can delete backup items/instances even if the soft delete settings are enabled. However, if soft delete is enabled, deleted items aren't permanently deleted immediately; they stay in the soft deleted state for the [configured retention period](#enable-soft-delete-with-always-on-state). Soft delete delays permanent deletion of backup data by retaining deleted data for *14*-*180* days.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the *backup item* that you want to delete.
+1. Select **Stop backup**.
+1. On the **Stop Backup** page, select **Delete Backup Data** from the drop-down list to delete all backups for the instance.
+1. Provide the applicable information, and then select **Stop backup** to delete all backups for the instance.
+
+ Once the *delete* operation completes, the backup item is moved to soft deleted state. In **Backup items**, the soft deleted item is marked in *Red*, and the last backup status shows that backups are disabled for the item.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/soft-deleted-backup-items-marked-red-inline.png" alt-text="Screenshot showing the soft deleted backup items marked red." lightbox="./media/backup-azure-enhanced-soft-delete/soft-deleted-backup-items-marked-red-expanded.png":::
+
+ In the item details, the soft deleted item shows no recovery point. Also, a notification appears to mention the state of the item, and the number of days left before the item is permanently deleted. You can select **Undelete** to recover the soft deleted items.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/soft-deleted-item-shows-no-recovery-point-inline.png" alt-text="Screenshot showing the soft deleted backup item that shows no recovery point." lightbox="./media/backup-azure-enhanced-soft-delete/soft-deleted-item-shows-no-recovery-point-expanded.png":::
+
+>[!Note]
+>While the item is in the soft deleted state, no recovery points are cleaned up when they expire per the backup policy.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. In the **Backup center**, go to the *backup instance* that you want to delete.
+
+1. Select **Stop backup**.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/stop-backup-for-backup-vault-items-inline.png" alt-text="Screenshot showing how to initiate the stop backup process for backup items in Backup vault." lightbox="./media/backup-azure-enhanced-soft-delete/stop-backup-for-backup-vault-items-expanded.png":::
+
+ You can also select **Delete** in the instance view to delete backups.
+
+1. On the **Stop Backup** page, select **Delete Backup Data** from the drop-down list to delete all backups for the instance.
+
+1. Provide the applicable information, and then select **Stop backup** to initiate the deletion of the backup instance.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-stop-backup-process.png" alt-text="Screenshot showing how to stop the backup process.":::
+
+ Once deletion completes, the instance appears as *Soft deleted*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/deleted-backup-items-marked-soft-deleted-inline.png" alt-text="Screenshot showing the deleted backup items marked as Soft Deleted." lightbox="./media/backup-azure-enhanced-soft-delete/deleted-backup-items-marked-soft-deleted-expanded.png":::
+++
+## Recover a soft-deleted backup item
+
+If a backup item/instance is soft deleted, you can recover it before it's permanently deleted.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the *backup item* that you want to retrieve from the *soft deleted* state.
+
+ You can also use the **Backup center** to go to the item by applying the filter **Protection status == Soft deleted** in the *Backup instances*.
+
+1. Select **Undelete** corresponding to the *soft deleted item*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-recover-backup-items-inline.png" alt-text="Screenshot showing how to start recovering backup items from soft delete state." lightbox="./media/backup-azure-enhanced-soft-delete/start-recover-backup-items-expanded.png":::
+
+1. In the **Undelete** *backup item* blade, select **Undelete** to recover the deleted item.
+
+ All recovery points now appear and the backup item changes to *Stop protection with retain data* state. However, backups don't resume automatically. To continue taking backups for this item, select **Resume backup**.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to the *deleted backup instance* that you want to recover.
+
+ You can also use the **Backup center** to go to the *instance* by applying the filter **Protection status == Soft deleted** in the *Backup instances*.
+
+1. Select **Undelete** corresponding to the *soft deleted instance*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-recover-deleted-backup-vault-items-inline.png" alt-text="Screenshot showing how to start recovering deleted backup vault items from soft delete state." lightbox="./media/backup-azure-enhanced-soft-delete/start-recover-deleted-backup-vault-items-expanded.png":::
+
+1. In the **Undelete** *backup instance* blade, select **Undelete** to recover the item.
+
+ All recovery points appear and the backup item changes to *Stop protection with retain data* state. However, backups don't resume automatically. To continue taking backups for this instance, select **Resume backup**.
+
+>[!Note]
+>Undeleting a soft deleted item reinstates the backup item into Stop backup with retain data state and doesn't automatically restart scheduled backups. You need to explicitly [resume backups](backup-azure-manage-vms.md#resume-protection-of-a-vm) if you want to continue taking new backups. Resuming backup will also clean up expired recovery points, if any.
+++
+## Unregister containers
+
+In the case of workloads that group multiple backup items into a container, you can unregister a container if all its backup items are either deleted or soft deleted.
+
+Here are some points to note:
+
+- You can unregister a container only if it has no protected items, that is, all backup items inside it are either deleted or soft deleted.
+
+- Unregistering a container while its backup items are soft deleted (not permanently deleted) will change the state of the container to Soft deleted.
+
+- You can re-register containers that are in soft deleted state to another vault. However, in such scenarios, the existing backups (that are soft deleted) will continue to be in the original vault and will be permanently deleted when the soft delete retention period expires.
+
+- You can also *undelete* the container. Once undeleted, it's re-registered to the original vault.
+
+ You can undelete a container only if it's not registered to another vault. If it's registered, then you need to unregister it with the vault before performing the *undelete* operation.
+
+## Disable soft delete
+
+Follow these steps:
+
+1. Go to your *vault* > **Properties**.
+
+1. On the **Properties** page, under **Soft delete**, select **Update**.
+1. In the **Soft Delete settings** blade, clear the **Enable soft delete** checkbox to disable soft delete.
+
+>[!Note]
+>You can't disable soft delete if **Enable Always-on Soft Delete** is enabled for this vault.
+
+## Next steps
+
+[About Enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-about.md).
backup Backup Azure Immutable Vault Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-concept.md
+
+ Title: Concept of Immutable vault for Azure Backup (preview)
+description: This article explains the concept of Immutable vault for Azure Backup, and how it helps protect data from malicious actors.
+++ Last updated : 09/15/2022++++
+# Immutable vault for Azure Backup (preview)
+
+Immutable vault can help you protect your backup data by blocking any operations that could lead to loss of recovery points. Further, you can lock the Immutable vault setting to make it irreversible to prevent any malicious actors from disabling immutability and deleting backups.
+
+## Before you start
+
+- Immutable vault is currently in preview and is available in the following regions: East US, West US, West US 2, West Central US, North Europe, Brazil South, Japan East.
+- Immutable vault is currently supported for Recovery Services vaults only.
+- Enabling Immutable vault blocks you from performing specific operations on the vault and its protected items. See the [restricted operations](#restricted-operations).
+- Enabling immutability for the vault is a reversible operation. However, you can choose to make it irreversible to prevent malicious actors from disabling it (if they could disable it, they could then perform destructive operations). Learn about [making Immutable vault irreversible](#making-immutability-irreversible).
+- Immutable vault applies to all the data in the vault. Therefore, all instances that are protected in the vault have immutability applied to them.
+- Immutability doesn't apply to operational backups, such as operational backup of blobs, files, and disks.
+
+## How does immutability work?
+
+While Azure Backup stores data in isolation from production workloads, it allows performing management operations to help you manage your backups, including those operations that allow you to delete recovery points. However, in certain scenarios, you may want to make the backup data immutable by preventing any such operations that, if used by malicious actors, could lead to the loss of backups. The Immutable vault setting on your vault enables you to block such operations to ensure that your backup data is protected, even if any malicious actors try to delete them to affect the recoverability of data.
+
+## Making immutability irreversible
+
+The immutability of a vault is a reversible setting that allows you to disable immutability (which would allow deletion of backup data) if needed. However, once you're satisfied with the impact of immutability, we recommend that you lock the vault to make the Immutable vault setting irreversible, so that bad actors can't disable it. The Immutable vault setting accepts the following three states.
+
+| State of Immutable vault setting | Description |
+| | |
+| **Disabled** | The vault doesn't have immutability enabled and no operations are blocked. |
+| **Enabled** | The vault has immutability enabled and doesn't allow operations that could result in loss of backups. <br><br> However, the setting can be disabled. |
+| **Enabled and locked** | The vault has immutability enabled and doesn't allow operations that could result in loss of backups. <br><br> As the Immutable vault setting is now locked, it can't be disabled. <br><br> Note that immutability locking is irreversible, so ensure that you take a well-informed decision when opting to lock. |
+
+## Restricted operations
+
+Immutable vault prevents you from performing the following operations on the vault that could lead to loss of data:
+
+| Operation type | Description |
+| | |
+| **Stop protection with delete data** | A protected item can't have its recovery points deleted before their respective expiry date. However, you can still stop the protection of the instances while retaining data forever or until their expiry. |
+| **Modify backup policy to reduce retention** | Any actions that reduce the retention period in a backup policy are disallowed on Immutable vault. However, you can make policy changes that result in the increase of retention. You can also make changes to the schedule of a backup policy. |
+| **Change backup policy to reduce retention** | Any attempt to replace a backup policy associated with a backup item with another policy with retention lower than the existing one is blocked. However, you can replace a policy with the one that has higher retention. |
+
+## Next steps
+
+- Learn [how to manage operations of Azure Backup vault immutability (preview)](backup-azure-immutable-vault-how-to-manage.md).
+
backup Backup Azure Immutable Vault How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-how-to-manage.md
+
+ Title: How to manage Azure Backup Immutable vault operations (preview)
+description: This article explains how to manage Azure Backup Immutable vault operations.
++ Last updated : 09/15/2022++++
+# Manage Azure Backup Immutable vault operations (preview)
+
+[Immutable vault](backup-azure-immutable-vault-concept.md) can help you protect your backup data by blocking any operations that could lead to loss of recovery points. Further, you can lock the Immutable vault setting to make it irreversible to prevent any malicious actors from disabling immutability and deleting backups.
+
+In this article, you'll learn how to:
+
+> [!div class="checklist"]
+>
+> - Enable Immutable vault
+> - Perform operations on Immutable vault
+> - Disable immutability
+
+## Enable Immutable vault
+
+You can enable immutability for a vault through its properties. Follow these steps:
+
+1. Go to the Recovery Services vault for which you want to enable immutability.
+
+1. In the vault, go to **Properties** -> **Immutable vault**, and then select **Settings**.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/enable-immutable-vault-settings.png" alt-text="Screenshot showing how to open the Immutable vault settings.":::
+
+1. On **Immutable vault**, select the checkbox for **Enable vault immutability** to enable immutability for the vault.
+
+ At this point, immutability of the vault is reversible, and it can be disabled, if needed.
+
+1. Once you enable immutability, the option to lock the immutability for the vault appears.
+
+ This makes the immutability setting for the vault irreversible. While this helps secure the backup data in the vault, we recommend that you make a well-informed decision when opting to lock. You can first test and validate that the current settings of the vault, backup policies, and so on meet your requirements, and lock the immutability setting later.
+
+1. Select **Apply** to save the changes.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/backup-azure-enable-immutability.png" alt-text="Screenshot showing how to enable the Immutable vault settings.":::
+
+## Perform operations on Immutable vault
+
+As per the [Restricted operations](backup-azure-immutable-vault-concept.md#restricted-operations), certain operations are restricted on Immutable vault. However, other operations on the vault or the items it contains remain unaffected.
+
+### Perform restricted operations
+
+[Restricted operations](backup-azure-immutable-vault-concept.md#restricted-operations) are disallowed on the vault. Consider the following example when trying to modify a policy to reduce its retention in a vault with immutability enabled.
+
+Consider a policy with a daily backup point retention of *35 days* and weekly backup point retention of *two weeks*, as shown in the following screenshot.
++
+Now, let's try to reduce the retention of daily backup points to *30 days*, reducing by *5 days*, and save the policy.
+
+You'll see that the operation fails with the information that the vault has immutability enabled, and therefore, any changes that could reduce retention of recovery points are disallowed.
++
+Now, let's try to increase the retention of daily backup points to *40 days*, increasing by *5 days*, and save the policy.
+
+This time, the operation succeeds because no recovery points can be deleted as part of this update.
++
+## Disable immutability
+
+You can disable immutability only for vaults that have immutability enabled, but not locked. To disable immutability for such vaults, follow these steps:
+
+1. Go to the Recovery Services vault for which you want to disable immutability.
+
+1. In the vault, go to **Properties** -> **Immutable vault**, and then select **Settings**.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/disable-immutable-vault-settings.png" alt-text="Screenshot showing how to open the Immutable vault settings to disable.":::
+
+1. In the **Immutable vault** blade, clear the checkbox for **Enable vault Immutability**.
+
+1. Select **Apply** to save the changes.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/backup-azure-disable-immutability.png" alt-text="Screenshot showing how to disable the Immutable vault settings.":::
+
+## Next steps
+
+- Learn [about Immutable vault for Azure Backup (preview)](backup-azure-immutable-vault-concept.md).
backup Enable Multi User Authorization Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/enable-multi-user-authorization-quickstart.md
Follow these steps:
## Next steps
-- [Protect against unauthorized (protected) operations](multi-user-authorization.md#protect-against-unauthorized-protected-operations)
+- [Protected operations using MUA](multi-user-authorization.md?pivots=vaults-recovery-services-vault#protected-operations-using-mua)
- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
- [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval)
- [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
Title: Multi-user authorization using Resource Guard
description: An overview of Multi-user authorization using Resource Guard.
Previously updated : 06/08/2022 Last updated : 09/15/2022
# Multi-user authorization using Resource Guard
-Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization.
+Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults and Backup vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization.
+
+>[!Note]
+>Multi-user authorization using Resource Guard for Backup vault is in preview.
## How does MUA for Backup work?
-Azure Backup uses the Resource Guard as an authorization service for a Recovery Services vault. Therefore, to perform a critical operation (described below) successfully, you must have sufficient permissions on the associated Resource Guard as well.
+Azure Backup uses the Resource Guard as an additional authorization mechanism for a Recovery Services vault or a Backup vault. Therefore, to perform a critical operation (described below) successfully, you must have sufficient permissions on the associated Resource Guard as well.
> [!Important]
-> To function as intended, the Resource Guard must be owned by a different user, and the vault admin must not have Contributor permissions. You can place Resource Guard in a subscription or tenant different from the one containing the Recovery Services vault to provide better protection.
+> To function as intended, the Resource Guard must be owned by a different user, and the vault admin must not have Contributor permissions. You can place Resource Guard in a subscription or tenant different from the one containing the vaults to provide better protection.
## Critical operations
-The following table lists the operations defined as critical operations and can be protected by a Resource Guard. You can choose to exclude certain operations from being protected using the Resource Guard when associating vaults with it. Note that operations denoted as Mandatory cannot be excluded from being protected using the Resource Guard for vaults associated with it. Also, the excluded critical operations would apply to all vaults associated with a Resource Guard.
+The following table lists the operations defined as critical operations and can be protected by a Resource Guard. You can choose to exclude certain operations from being protected using the Resource Guard when associating vaults with it.
+
+>[!Note]
+>You can't exclude the operations denoted as Mandatory from being protected using the Resource Guard for vaults associated with it. Also, the excluded critical operations apply to all vaults associated with a Resource Guard.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
-**Operation** | **Mandatory/Optional**
+**Operation** | **Mandatory/ Optional**
 |
Disable soft delete | Mandatory
Disable MUA protection | Mandatory
-Modify backup policy (reduced retention) | Optional: Can be excluded
-Modify protection (reduced retention) | Optional: Can be excluded
-Stop protection with delete data | Optional: Can be excluded
-Change MARS security PIN | Optional: Can be excluded
+Modify backup policy (reduced retention) | Optional
+Modify protection (reduced retention) | Optional
+Stop protection with delete data | Optional
+Change MARS security PIN | Optional
+
+# [Backup vault (preview)](#tab/backup-vault)
+
+**Operation** | **Mandatory/ Optional**
+ |
+Disable MUA protection | Mandatory
+Delete backup instance | Optional
++ ### Concepts and process
-The concepts and the processes involved when using MUA for Backup are explained below.
+
+The concepts and the processes involved when using MUA for Azure Backup are explained below.
Let's consider the following two users for a clear understanding of the process and responsibilities. These two roles are referenced throughout this article.
-**Backup admin**: Owner of the Recovery Services vault and performs management operations on the vault. To begin with, the Backup admin must not have any permissions on the Resource Guard.
+**Backup admin**: Owner of the Recovery Services vault or the Backup vault who performs management operations on the vault. To begin with, the Backup admin must not have any permissions on the Resource Guard.
**Security admin**: Owner of the Resource Guard and serves as the gatekeeper of critical operations on the vault. Hence, the Security admin controls permissions that the Backup admin needs to perform critical operations on the vault. Following is a diagrammatic representation for performing a critical operation on a vault that has MUA configured using a Resource Guard.
-Here is the flow of events in a typical scenario:
+Here's the flow of events in a typical scenario:
-1. The Backup admin creates the Recovery Services vault.
-1. The Security admin creates the Resource Guard. The Resource Guard can be in a different subscription or a different tenant with respect to the Recovery Services vault. It must be ensured that the Backup admin does not have Contributor permissions on the Resource Guard.
+1. The Backup admin creates the Recovery Services vault or the Backup vault.
+1. The Security admin creates the Resource Guard. The Resource Guard can be in a different subscription or a different tenant with respect to the vault. It must be ensured that the Backup admin doesn't have Contributor permissions on the Resource Guard.
1. The Security admin grants the **Reader** role to the Backup Admin for the Resource Guard (or a relevant scope). The Backup admin requires the reader role to enable MUA on the vault.
-1. The Backup admin now configures the Recovery Services vault to be protected by MUA via the Resource Guard.
+1. The Backup admin now configures the vault to be protected by MUA via the Resource Guard.
1. Now, if the Backup admin wants to perform a critical operation on the vault, they need to request access to the Resource Guard. The Backup admin can contact the Security admin for details on gaining access to perform such operations. They can do this using Privileged Identity Management (PIM) or other processes as mandated by the organization. 1. The Security admin temporarily grants the **Contributor** role on the Resource Guard to the Backup admin to perform critical operations. 1. Now, the Backup admin initiates the critical operation. 1. The Azure Resource Manager checks if the Backup admin has sufficient permissions or not. Since the Backup admin now has Contributor role on the Resource Guard, the request is completed.
- - If the Backup admin did not have the required permissions/roles, the request would have failed.
+
+ If the Backup admin didn't have the required permissions/roles, the request would have failed.
+ 1. The security admin ensures that the privileges to perform critical operations are revoked after authorized actions are performed or after a defined duration. Using JIT tools [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) may be useful in ensuring this. >[!NOTE]
->- MUA provides protection on the above listed operations performed on the Recovery Services vaults only. Any operations performed directly on the data source (i.e., the Azure resource/workload that is protected) are beyond the scope of the Resource Guard.
->- This feature is currently available via the Azure portal only.
->- This feature is currently supported for Recovery Services vaults only and not available for Backup vaults.
+>MUA provides protection on the above listed operations performed on the vaulted backups only. Any operations performed directly on the data source (that is, the Azure resource/workload that is protected) are beyond the scope of the Resource Guard.
## Usage scenarios
-The following table depicts scenarios for creating your Resource Guard and Recovery Services vault (RS vault), along with the relative protection offered by each.
+The following table lists the scenarios for creating your Resource Guard and vaults (Recovery Services vault and Backup vault), along with the relative protection offered by each.
>[!Important]
> The Backup admin must not have Contributor permissions to the Resource Guard in any scenario.

**Usage scenario** | **Protection due to MUA** | **Ease of implementation** | **Notes**
 | | |
-RS vault and Resource Guard are **in the same subscription.** </br> The Backup admin does not have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Resource level permissions/ roles need to be ensured are correctly assigned.
-RS vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin does not have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that that permissions/ roles are correctly assigned for the resource or the subscription.
-RS vault and Resource Guard are **in different tenants.** </br> The Backup admin does not have access to the Resource Guard, the corresponding subscription, or the corresponding tenant.| Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since requires two tenants or directories to test. | Ensure that permissions/ roles are correctly assigned for the resource, the subscription or the directory.
-
- >[!NOTE]
- > For this article, we will demonstrate creation of the Resource Guard in a different tenant that offers maximum protection. In terms of requesting and approving requests for performing critical operations, this article demonstrates the same using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
+Vault and Resource Guard are **in the same subscription.** </br> The Backup admin doesn't have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Ensure that resource-level permissions/roles are correctly assigned.
+Vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin doesn't have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that permissions/roles are correctly assigned for the resource or the subscription.
+Vault and Resource Guard are **in different tenants.** </br> The Backup admin doesn't have access to the Resource Guard, the corresponding subscription, or the corresponding tenant. | Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since it requires two tenants or directories to test. | Ensure that permissions/roles are correctly assigned for the resource, the subscription, or the directory.
## Next steps
-[Configure Multi-user authorization using Resource Guard](multi-user-authorization.md)
+[Configure Multi-user authorization using Resource Guard](multi-user-authorization.md).
backup Multi User Authorization Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-tutorial.md
Follow these steps:
Follow these steps: 1. In the Resource Guard created above, go to **Properties**.
-2. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard. Note that the operations **Disable soft delete** and **Remove MUA protection** cannot be disabled.
+1. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard.
+
+ >[!Note]
+ >The operations **Disable soft delete** and **Remove MUA protection** cannot be disabled.
1. Optionally, you can also update the description for the Resource Guard using this blade.
-1. Click **Save**.
+1. Select **Save**.
## Assign permissions to the Backup admin on the Resource Guard to enable MUA
Follow these steps:
## Next steps -- [Protect against unauthorized (protected) operations](multi-user-authorization.md#protect-against-unauthorized-protected-operations)
+- [Protected operations using MUA](multi-user-authorization.md?pivots=vaults-recovery-services-vault#protected-operations-using-mua)
- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management) - [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval) - [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
Title: Configure Multi-user authorization using Resource Guard description: This article explains how to configure Multi-user authorization using Resource Guard. Previously updated : 05/05/2022
+zone_pivot_groups: backup-vaults-recovery-services-vault-backup-vault
Last updated : 09/15/2022 # Configure Multi-user authorization using Resource Guard in Azure Backup
-This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Recovery Services vaults
-This document includes the following:
+++
+This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Recovery Services vaults.
+
+This article demonstrates Resource Guard creation in a different tenant that offers maximum protection. It also demonstrates how to request and approve requests for performing critical operations using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
+
+This document includes the following sections:
>[!div class="checklist"] >- Before you start >- Testing scenarios >- Create a Resource Guard >- Enable MUA on a Recovery Services vault
->- Protect against unauthorized operations on a vault
+>- Protected operations on a vault using MUA
>- Authorize critical operations on a vault >- Disable MUA on a Recovery Services vault
This document includes the following:
- Ensure the Resource Guard and the Recovery Services vault are in the same Azure region. - Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.-- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the providers - **Microsoft.RecoveryServices** and **Microsoft.DataProtection** . For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the providers - **Microsoft.RecoveryServices** and **Microsoft.DataProtection** . For more information, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
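As a minimal sketch of that registration step (not part of the original article), the following Python example uses the `azure-mgmt-resource` and `azure-identity` packages; the subscription ID is a placeholder, and you'd run it once per subscription involved:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder: the subscription that contains the vault or the Resource Guard.
subscription_id = "00000000-0000-0000-0000-000000000000"

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Register both resource providers; registration can take a few minutes to complete.
for namespace in ("Microsoft.RecoveryServices", "Microsoft.DataProtection"):
    provider = client.providers.register(namespace)
    print(namespace, provider.registration_state)
```

You can also register the providers from the portal or the CLI; the linked article on resource providers covers those options.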
-Learn about various [MUA usage scenarios](multi-user-authorization-concept.md#usage-scenarios).
+Learn about various [MUA usage scenarios](./multi-user-authorization-concept.md?tabs=recovery-services-vault#usage-scenarios).
## Create a Resource Guard The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** as the vault. However, it should be in the **same region** as the vault. The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it. For the following example, create the Resource Guard in a tenant different from the vault tenant.
-1. In the Azure portal, go to the directory under which you wish to create the Resource Guard.
+1. In the Azure portal, go to the directory under which you want to create the Resource Guard.
:::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings.":::
-1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down.
+1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down list.
:::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
- - Click **Create** to start creating a Resource Guard.
+ - Select **Create** to start creating a Resource Guard.
- In the create blade, fill in the required details for this Resource Guard. - Make sure the Resource Guard is in the same Azure regions as the Recovery Services vault.
- - Also, it is helpful to add a description of how to get or request access to perform actions on associated vaults when needed. This description would also appear in the associated vaults to guide the backup admin on getting the required permissions. You can edit the description later if needed, but having a well-defined description at all times is encouraged.
+ - Also, it's helpful to add a description of how to get or request access to perform actions on associated vaults when needed. This description would also appear in the associated vaults to guide the backup admin on getting the required permissions. You can edit the description later if needed, but having a well-defined description at all times is encouraged.
1. On the **Protected operations** tab, select the operations you need to protect using this resource guard.
- You can also [select the operations to be protected after creating the resource guard](#select-operations-to-protect-using-resource-guard).
+ You can also [select the operations for protection after creating the resource guard](?pivots=vaults-recovery-services-vault#select-operations-to-protect-using-resource-guard).
1. Optionally, add any tags to the Resource Guard as per the requirements
-1. Click **Review + Create**.
-1. Follow notifications for status and successful creation of the Resource Guard.
+1. Select **Review + Create**.
+
+ Follow notifications for status and successful creation of the Resource Guard.
### Select operations to protect using Resource Guard Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you can exempt certain operations from falling under the purview of MUA using Resource Guard. The security admin can perform the following steps: 1. In the Resource Guard created above, go to **Properties**.
-2. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard. Note that the operations **Disable soft delete** and **Remove MUA protection** cannot be disabled.
-3. Optionally, you can also update the description for the Resource Guard using this blade.
-4. Click **Save**.
+2. Select **Disable** for operations that you want to exclude from being authorized using the Resource Guard.
+
+ >[!Note]
+ > You can't disable the protected operations - **Disable soft delete** and **Remove MUA protection**.
+1. Optionally, you can also update the description for the Resource Guard using this blade.
+1. Select **Save**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-properties.png" alt-text="Screenshot showing demo resource guard properties.":::
Choose the operations you want to protect using the Resource Guard out of all su
To enable MUA on a vault, the admin of the vault must have **Reader** role on the Resource Guard or subscription containing the Resource Guard. To assign the **Reader** role on the Resource Guard:
-1. In the Resource Guard created above, go to the Access Control (IAM) blade, and then go to **Add role assignment**.
+1. In the Resource Guard created above, go to the **Access Control (IAM)** blade, and then go to **Add role assignment**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control.":::
-1. Select **Reader** from the list of built-in roles and click **Next** on the bottom of the screen.
+1. Select **Reader** from the list of built-in roles and select **Next** on the bottom of the screen.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-inline.png" alt-text="Screenshot showing demo resource guard-add role assignment." lightbox="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-expanded.png":::
-1. Click **Select members** and add the Backup admin's email ID to add them as the **Reader**. Since the Backup admin is in another tenant in this case, they will be added as guests to the tenant containing the Resource Guard.
+1. Click **Select members** and add the Backup admin's email ID to add them as the **Reader**. As the Backup admin is in another tenant in this case, they'll be added as guests to the tenant containing the Resource Guard.
1. Click **Select** and then proceed to **Review + assign** to complete the role assignment.
Now that the Backup admin has the Reader role on the Resource Guard, they can ea
:::image type="content" source="./media/multi-user-authorization/test-vault-properties.png" alt-text="Screenshot showing the Recovery services vault properties.":::
-1. Now you are presented with the option to enable MUA and choose a Resource Guard using one of the following ways:
+1. Now, you're presented with the option to enable MUA and choose a Resource Guard using one of the following ways:
1. You can either specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard you have **Reader** access to and that it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen: :::image type="content" source="./media/multi-user-authorization/resource-guard-rg-inline.png" alt-text="Screenshot showing the Resource Guard." lightbox="./media/multi-user-authorization/resource-guard-rg-expanded.png":::
- 1. Or you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+ 1. Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
1. Click **Select Resource Guard** 1. Click on the dropdown and select the directory the Resource Guard is in.
Now that the Backup admin has the Reader role on the Resource Guard, they can ea
:::image type="content" source="./media/multi-user-authorization/testvault1-multi-user-authorization-inline.png" alt-text="Screenshot showing multi-user authorization." lightbox="./media/multi-user-authorization/testvault1-multi-user-authorization-expanded.png" :::
-1. Click **Save** once done to enable MUA
+1. Select **Save** once done to enable MUA.
:::image type="content" source="./media/multi-user-authorization/testvault1-enable-mua.png" alt-text="Screenshot showing how to enable Multi-user authentication.":::
-## Protect against unauthorized (protected) operations
+## Protected operations using MUA
-Once you have enabled MUA, the operations in scope will be restricted on the vault, if the Backup admin tries to perform them without having the required role (i.e., Contributor role) on the Resource Guard.
+Once you have enabled MUA, the operations in scope are restricted on the vault if the Backup admin tries to perform them without having the required role (that is, the Contributor role) on the Resource Guard.
>[!NOTE]
- >It is highly recommended that you test your setup after enabling MUA to ensure that protected operations are blocked as expected and to ensure that MUA is correctly configured.
+ >We highly recommend that you test your setup after enabling MUA to ensure that protected operations are blocked as expected and to ensure that MUA is correctly configured.
The following example illustrates what happens when the Backup admin tries to perform such a protected operation (disabling soft delete is shown here; other protected operations have a similar experience). The following steps are performed by a Backup admin without the required permissions.
-1. To disable soft delete, go to the Recovery Services Vault > Properties > Security Settings and click **Update**, which brings up the Security Settings.
-1. Disable the soft delete using the slider. You are informed that this is a protected operation, and you need to verify their access to the Resource Guard.
+1. To disable soft delete, go to the Recovery Services vault > **Properties** > **Security Settings** and select **Update**, which brings up the Security Settings.
+1. Disable the soft delete using the slider. You're informed that this is a protected operation, and that you need to verify your access to the Resource Guard.
1. Select the directory containing the Resource Guard and Authenticate yourself. This step may not be required if the Resource Guard is in the same directory as the vault.
-1. Proceed to click **Save**. The request fails with an error informing them about not having sufficient permissions on the Resource Guard to let you perform this operation.
+1. Proceed to select **Save**. The request fails with an error informing you that you don't have sufficient permissions on the Resource Guard to perform this operation.
:::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the Test Vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png"::: ## Authorize critical (protected) operations using Azure AD Privileged Identity Management
-The following sub-sections discuss authorizing these requests using PIM. There are cases where you may need to perform critical operations on your backups and MUA can help you ensure that these are performed only when the right approvals or permissions exist. As discussed earlier, the Backup admin needs to have a Contributor role on the Resource Guard to perform critical operations that are in the Resource Guard scope. One of the ways to allow just-in-time for such operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+The following sections discuss authorizing these requests using PIM. There are cases where you may need to perform critical operations on your backups and MUA can help you ensure that these are performed only when the right approvals or permissions exist. As discussed earlier, the Backup admin needs to have a Contributor role on the Resource Guard to perform critical operations that are in the Resource Guard scope. One of the ways to allow just-in-time (JIT) access for such operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
>[!NOTE]
-> Though using Azure AD PIM is the recommended approach, you can use manual or custom methods to manage access for the Backup admin on the Resource Guard. For managing access to the Resource Guard manually, use the 'Access control (IAM)' setting on the left navigation bar of the Resource Guard and grant the **Contributor** role to the Backup admin.
+>Though using Azure AD PIM is the recommended approach, you can use manual or custom methods to manage access for the Backup admin on the Resource Guard. For managing access to the Resource Guard manually, use the 'Access control (IAM)' setting on the left navigation bar of the Resource Guard and grant the **Contributor** role to the Backup admin.
### Create an eligible assignment for the Backup admin (if using Azure AD Privileged Identity Management)
The Security admin can use PIM to create an eligible assignment for the Backup a
1. In the security tenant (which contains the Resource Guard), go to **Privileged Identity Management** (search for this in the search bar in the Azure portal) and then go to **Azure Resources** (under **Manage** on the left menu). 1. Select the resource (the Resource Guard or the containing subscription/RG) to which you want to assign the **Contributor** role.
- 1. If you don't see the corresponding resource in the list of resources, ensure you add the containing subscription to be managed by PIM.
+ If you don't see the corresponding resource in the list of resources, ensure you add the containing subscription to be managed by PIM.
1. In the selected resource, go to **Assignments** (under **Manage** on the left menu) and go to **Add assignments**. :::image type="content" source="./media/multi-user-authorization/add-assignments.png" alt-text="Screenshot showing how to add assignments.":::
-1. In the Add assignments
+1. In the Add assignments:
1. Select the role as Contributor.
- 1. Go to Select members and add the username (or email IDs) of the Backup admin
- 1. Click Next
+ 1. Go to **Select members** and add the username (or email IDs) of the Backup admin.
+ 1. Select **Next**.
:::image type="content" source="./media/multi-user-authorization/add-assignments-membership.png" alt-text="Screenshot showing how to add assignments-membership.":::
-1. In the next screen
+1. In the next screen:
1. Under assignment type, choose **Eligible**. 1. Specify the duration for which the eligible permission is valid.
- 1. Click **Assign** to finish creating the eligible assignment.
+ 1. Select **Assign** to finish creating the eligible assignment.
:::image type="content" source="./media/multi-user-authorization/add-assignments-setting.png" alt-text="Screenshot showing how to add assignments-setting."::: ### Set up approvers for activating Contributor role By default, the setup above may not have an approver (and an approval flow requirement) configured in PIM. To ensure that approvers are required for allowing only authorized requests to go through, the security admin must perform the following steps.
-Note if this is not configured, any requests will be automatically approved without going through the security admins or a designated approver's review. More details on this can be found [here](../active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md)
+
+> [!Note]
+> If this isn't configured, any requests will be automatically approved without going through the security admins or a designated approver's review. More details on this can be found [here](../active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md)
1. In Azure AD PIM, select **Azure Resources** on the left navigation bar and select your Resource Guard.
Note if this is not configured, any requests will be automatically approved with
:::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add contributor.":::
-1. If the setting named **Approvers** shows None or displays incorrect approvers, click **Edit** to add the reviewers who would need to review and approve the activation request for the Contributor role.
+1. If the setting named **Approvers** shows *None* or displays incorrect approvers, select **Edit** to add the reviewers who would need to review and approve the activation request for the Contributor role.
-1. In the **Activation** tab, select **Require approval to activate** and add the approver(s) who need to approve each request. You can also select other security options like using MFA and mandating ticket options to activate the Contributor role. Optionally, select relevant settings in the **Assignment** and **Notification** tabs as per your requirements.
+1. On the **Activation** tab, select **Require approval to activate** and add the approver(s) who need to approve each request. You can also select other security options like using MFA and mandating ticket options to activate the Contributor role. Optionally, select relevant settings on the **Assignment** and **Notification** tabs as per your requirements.
:::image type="content" source="./media/multi-user-authorization/edit-role-settings.png" alt-text="Screenshot showing how to edit role setting.":::
-1. Click **Update** once done.
+1. Select **Update** once done.
### Request activation of an eligible assignment to perform critical operations After the security admin creates an eligible assignment, the Backup admin needs to activate the assignment for the Contributor role to be able to perform protected actions. The following actions are performed by the **Backup admin** to activate the role assignment. 1. Go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md). If the Resource Guard is in another directory, switch to that directory and then go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
-1. Go to My roles > Azure resources on the left menu.
-1. The Backup admin can see an Eligible assignment for the contributor role. Click **Activate** to activate it.
+1. Go to **My roles** > **Azure resources** on the left menu.
+1. The Backup admin can see an Eligible assignment for the contributor role. Select **Activate** to activate it.
1. The Backup admin is informed via portal notification that the request is sent for approval. :::image type="content" source="./media/multi-user-authorization/identity-management-myroles-inline.png" alt-text="Screenshot showing to activate eligible assignments." lightbox="./media/multi-user-authorization/identity-management-myroles-expanded.png":::
Once the Backup admin raises a request for activating the Contributor role, the
1. In the security tenant, go to [Azure AD Privileged Identity Management.](../active-directory/privileged-identity-management/pim-configure.md) 1. Go to **Approve Requests**. 1. Under **Azure resources**, the request raised by the Backup admin requesting activation as a **Contributor** can be seen.
-1. Review the request. If genuine, select the request and click **Approve** to approve it.
+1. Review the request. If genuine, select the request and select **Approve** to approve it.
1. The Backup admin is informed by email (or other organizational alerting mechanisms) that their request is now approved. 1. Once approved, the Backup admin can perform protected operations for the requested period.
The following screenshot shows an example of disabling soft delete for an MUA-en
Disabling MUA is a protected operation, and hence, is protected using MUA. This means that the Backup admin must have the required Contributor role in the Resource Guard. Details on obtaining this role are described here. Following is a summary of steps to disable MUA on a vault. 1. The Backup admin requests the Security admin for **Contributor** role on the Resource Guard. They can request this using the methods approved by the organization such as JIT procedures, like [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md), or other internal tools and procedures. 1. The Security admin approves the request (if they find it worthy of being approved) and informs the Backup admin. Now the Backup admin has the 'Contributor' role on the Resource Guard.
-1. The Backup admin goes to the vault -> **Properties** -> **Multi-user Authorization**.
-1. Click **Update**
- 1. Uncheck the Protect with Resource Guard check box
+1. The Backup admin goes to the vault > **Properties** > **Multi-user Authorization**.
+1. Select **Update**.
+ 1. Clear the **Protect with Resource Guard** checkbox.
1. Choose the Directory that contains the Resource Guard and verify access using the Authenticate button (if applicable).
- 1. After **authentication**, click **Save**. With the right access, the request should be successfully completed.
+ 1. After **authentication**, select **Save**. With the right access, the request should be successfully completed.
:::image type="content" source="./media/multi-user-authorization/disable-mua.png" alt-text="Screenshot showing to disable multi-user authentication.":::++++
+This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Backup vault (preview).
+
+>[!Note]
+>Multi-user authorization using Resource Guard for Backup vault is in preview.
+
+This article demonstrates Resource Guard creation in a different tenant that offers maximum protection. It also demonstrates how to request and approve requests for performing critical operations using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
+
+This document includes the following sections:
+
+>[!div class="checklist"]
+>- Before you start
+>- Testing scenarios
+>- Create a Resource Guard
+>- Enable MUA on a Backup vault
+>- Protected operations on a vault using MUA
+>- Authorize critical operations on a vault
+>- Disable MUA on a Backup vault
+
+>[!NOTE]
+>Multi-user authorization for Azure Backup is available in all public Azure regions.
+
+## Before you start
+
+- Ensure the Resource Guard and the Backup vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure that your subscriptions containing the Backup vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the provider **Microsoft.DataProtection**. For more information, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+
+Learn about various [MUA usage scenarios](./multi-user-authorization-concept.md?tabs=backup-vault#usage-scenarios).
+
+## Create a Resource Guard
+
+The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** as the vault. However, it should be in the **same region** as the vault.
+
+The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
+
+To create the Resource Guard in a tenant different from the vault tenant as a Security admin, follow these steps:
+
+1. In the Azure portal, go to the directory under which you want to create the Resource Guard.
+
+ :::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings to configure for Backup vault.":::
+
+1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down list.
+
+ :::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards for Backup vault." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
+
+ 1. Select **Create** to create a Resource Guard.
+ 1. In the Create blade, fill in the required details for this Resource Guard.
+ - Ensure that the Resource Guard is in the same Azure regions as the Backup vault.
+ - Add a description on how to request access to perform actions on associated vaults when needed. This description appears in the associated vaults to guide the Backup admin on how to get the required permissions.
+
+1. On the **Protected operations** tab, select the operations you need to protect using this resource guard under the **Backup vault** tab.
+
+ Currently, the **Protected operations** tab includes only the *Delete backup instance* option to disable.
+
+   You can also [select the operations for protection after creating the resource guard](?pivots=vaults-backup-vault#select-operations-to-protect-using-resource-guard).
+
+ :::image type="content" source="./media/multi-user-authorization/backup-vault-select-operations-for-protection.png" alt-text="Screenshot showing how to select operations for protecting using Resource Guard.":::
+
+1. Optionally, add any tags to the Resource Guard as per the requirements.
+1. Select **Review + Create** and then follow the notifications to monitor the status and a successful creation of the Resource Guard.
+
+### Select operations to protect using Resource Guard
+
+After creating the Resource Guard, the Security admin can also choose the operations to protect using the Resource Guard from among all supported critical operations. By default, all supported critical operations are enabled. However, the Security admin can exempt certain operations from falling under the purview of MUA using Resource Guard.
+
+To select the operations for protection, follow these steps:
+
+1. In the Resource Guard that you've created, go to **Properties** > **Backup vault** tab.
+1. Select **Disable** for the operations that you want to exclude from being authorized.
+
+ You can't disable the **Remove MUA protection** operation.
+
+1. Optionally, in the **Backup vaults** tab, update the description for the Resource Guard.
+1. Select **Save**.
+
+ :::image type="content" source="./media/multi-user-authorization/demo-resource-guard-properties-backup-vault-inline.png" alt-text="Screenshot showing demo resource guard properties for Backup vault." lightbox="./media/multi-user-authorization/demo-resource-guard-properties-backup-vault-expanded.png":::
+
+## Assign permissions to the Backup admin on the Resource Guard to enable MUA
+
+The Backup admin must have **Reader** role on the Resource Guard or subscription that contains the Resource Guard to enable MUA on a vault. The Security admin needs to assign this role to the Backup admin.
+
+To assign the **Reader** role on the Resource Guard, follow these steps:
+
+1. In the Resource Guard created above, go to the **Access Control (IAM)** blade, and then go to **Add role assignment**.
+
+ :::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control for Backup vault.":::
+
+1. Select **Reader** from the list of built-in roles and select **Next** on the bottom of the screen.
+
+ :::image type="content" source="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-inline.png" alt-text="Screenshot showing demo resource guard-add role assignment for Backup vault." lightbox="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-expanded.png":::
+
+1. Click **Select members** and add the Backup admin's email ID to assign the **Reader** role.
+
+ As the Backup admins are in another tenant, they'll be added as guests to the tenant that contains the Resource Guard.
+
+1. Click **Select** > **Review + assign** to complete the role assignment.
+
+ :::image type="content" source="./media/multi-user-authorization/demo-resource-guard-select-members-inline.png" alt-text="Screenshot showing demo resource guard-select members to protect the backup items in Backup vault." lightbox="./media/multi-user-authorization/demo-resource-guard-select-members-expanded.png":::
+
+## Enable MUA on a Backup vault
+
+Once the Backup admin has the Reader role on the Resource Guard, they can enable multi-user authorization on the vaults they manage by following these steps:
+
+1. Go to the Backup vault for which you want to configure MUA.
+1. On the left panel, select **Properties**.
+1. Go to **Multi-User Authorization** and select **Update**.
+
+ :::image type="content" source="./media/multi-user-authorization/test-backup-vault-properties.png" alt-text="Screenshot showing the Backup vault properties.":::
+
+1. To enable MUA and choose a Resource Guard, perform one of the following actions:
+
+   - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and that it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
+
+ :::image type="content" source="./media/multi-user-authorization/resource-guard-rg-inline.png" alt-text="Screenshot showing the Resource Guard for Backup vault protection." lightbox="./media/multi-user-authorization/resource-guard-rg-expanded.png":::
+
+ - Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+
+ 1. Click **Select Resource Guard**.
+ 1. Select the drop-down and select the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
+ 1. After authentication, choose the **Resource Guard** from the list displayed.
+
+ :::image type="content" source="./media/multi-user-authorization/test-backup-vault-1-multi-user-authorization.png" alt-text="Screenshot showing multi-user authorization enabled on Backup vault.":::
+
+1. Select **Save** to enable MUA.
+
+ :::image type="content" source="./media/multi-user-authorization/testvault1-enable-mua.png" alt-text="Screenshot showing how to enable Multi-user authentication.":::
+
+## Protected operations using MUA
+
+Once the Backup admin enables MUA, the operations in scope will be restricted on the vault, and the operations fail if the Backup admin tries to perform them without having the **Contributor** role on the Resource Guard.
+
+>[!NOTE]
+>We highly recommend that you test your setup after enabling MUA to ensure that:
+>- Protected operations are blocked as expected.
+>- MUA is correctly configured.
+
+To perform a protected operation (disabling MUA), follow these steps:
+
+1. Go to the vault > **Properties** in the left pane.
+1. Clear the checkbox to disable MUA.
+
+ You'll receive a notification that it's a protected operation, and you need to have access to the Resource Guard.
+
+1. Select the directory containing the Resource Guard and authenticate yourself.
+
+ This step may not be required if the Resource Guard is in the same directory as the vault.
+
+1. Select **Save**.
+
+ The request fails with an error that you don't have sufficient permissions on the Resource Guard to perform this operation.
+
+ :::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the test Backup vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::
+
+## Authorize critical (protected) operations using Azure AD Privileged Identity Management
+
+There are scenarios where you may need to perform critical operations on your backups, and MUA helps you ensure that they're performed only with the right approvals or permissions. The following sections explain how to authorize the critical operation requests using Privileged Identity Management (PIM).
+
+The Backup admin must have a Contributor role on the Resource Guard to perform critical operations in the Resource Guard scope. One of the ways to allow just-in-time (JIT) operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+
+>[!NOTE]
+>We recommend using Azure AD PIM. However, you can also use manual or custom methods to manage access for the Backup admin on the Resource Guard. To manually manage access to the Resource Guard, use the *Access control (IAM)* setting on the left pane of the Resource Guard and grant the **Contributor** role to the Backup admin.
+
+### Create an eligible assignment for the Backup admin using Azure AD Privileged Identity Management
+
+The **Security admin** can use PIM to create an eligible assignment for the Backup admin as a Contributor to the Resource Guard. This enables the Backup admin to raise a request (for the Contributor role) when they need to perform a protected operation.
+
+To create an eligible assignment, follow these steps:
++
+1. Sign into the [Azure portal](https://portal.azure.com).
+1. Go to the security tenant of the Resource Guard, and in the search bar, enter **Privileged Identity Management**.
+1. In the left pane, under **Manage**, select **Azure resources**.
+1. Select the resource (the Resource Guard or the containing subscription/RG) to which you want to assign the Contributor role.
+
+   If you don't find the corresponding resource in the list, ensure that you add the containing subscription to be managed by PIM.
+
+1. Select the resource and go to **Manage** > **Assignments** > **Add assignments**.
+
+ :::image type="content" source="./media/multi-user-authorization/add-assignments.png" alt-text="Screenshot showing how to add assignments to protect a Backup vault.":::
+
+1. In the Add assignments:
+ 1. Select the role as Contributor.
+ 1. Go to **Select members** and add the username (or email IDs) of the Backup admin.
+ 1. Select **Next**.
+
+ :::image type="content" source="./media/multi-user-authorization/add-assignments-membership.png" alt-text="Screenshot showing how to add assignments-membership to protect a Backup vault.":::
+
+1. Under **Assignment type**, select **Eligible** and specify the duration for which the eligible permission is valid.
+1. Select **Assign** to complete creating the eligible assignment.
+
+ :::image type="content" source="./media/multi-user-authorization/add-assignments-setting.png" alt-text="Screenshot showing how to add assignments-setting to protect a Backup vault.":::
+
+### Set up approvers for activating Contributor role
+
+By default, the above setup may not have an approver (and an approval flow requirement) configured in PIM. To ensure that only authorized requests to activate the **Contributor** role are approved, the Security admin must set up approvers by following these steps:
+
+>[!Note]
+>If the approver setup isn't configured, the requests are automatically approved without going through the Security admins or a designated approver's review. [Learn more](../active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md).
+
+1. In Azure AD PIM, select **Azure Resources** on the left pane and select your Resource Guard.
+
+1. Go to **Settings** > **Contributor** role.
+
+ :::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add a contributor.":::
+
+1. If the **Approvers** setting shows *None* or displays incorrect approvers, select **Edit** to add the reviewers who must review and approve the activation request for the *Contributor* role.
+
+1. On the **Activation** tab, select **Require approval to activate** to add the approver(s) who must approve each request.
+1. Select other security options, such as multifactor authentication (MFA) and mandating a ticket, to activate the *Contributor* role.
+1. Select the appropriate options on the **Assignment** and **Notification** tabs as per your requirements.
+
+ :::image type="content" source="./media/multi-user-authorization/edit-role-settings.png" alt-text="Screenshot showing how to edit the role setting.":::
+
+1. Select **Update** to finish setting up approvers for activating the *Contributor* role.
+
+### Request activation of an eligible assignment to perform critical operations
+
+After the Security admin creates an eligible assignment, the Backup admin needs to activate the role assignment for the Contributor role to perform protected actions.
+
+To activate the role assignment, follow these steps:
+
+1. Go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md). If the Resource Guard is in another directory, switch to that directory and then go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+1. Go to **My roles** > **Azure resources** in the left pane.
+1. Select **Activate** to activate the eligible assignment for *Contributor* role.
+
+ A notification appears notifying that the request is sent for approval.
+
+ :::image type="content" source="./media/multi-user-authorization/identity-management-myroles-inline.png" alt-text="Screenshot showing how to activate eligible assignments." lightbox="./media/multi-user-authorization/identity-management-myroles-expanded.png":::
+
+### Approve activation requests to perform critical operations
+
+Once the Backup admin raises a request for activating the Contributor role, the **Security admin** must review and approve the request.
+
+To review and approve the request, follow these steps:
+
+1. In the security tenant, go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+1. Go to **Approve Requests**.
+1. Under **Azure resources**, you can see the request awaiting approval.
+
+   Review the request and select **Approve** if it's genuine.
+
+After the approval, the Backup admin receives a notification, via email or other internal alerting options, that the request is approved. Now, the Backup admin can perform the protected operations for the requested period.
+
+## Perform a protected operation after approval
+
+Once the Security admin approves the Backup admin's request for the Contributor role on the Resource Guard, they can perform protected operations on the associated vault. If the Resource Guard is in another directory, the Backup admin must authenticate themselves.
+
+>[!NOTE]
+>If the access was assigned using a JIT mechanism, the Contributor role is retracted at the end of the approved period. Otherwise, the Security admin manually removes the **Contributor** role that was assigned to the Backup admin for performing the critical operation.
+
+The following screenshot shows an example of [disabling soft delete](backup-azure-security-feature-cloud.md#disabling-soft-delete-using-azure-portal) for an MUA-enabled vault.
++
+## Disable MUA on a Backup vault
+
+Disabling MUA is a protected operation that must be performed by the Backup admin only. To do this, the Backup admin must have the required *Contributor* role in the Resource Guard. To obtain this permission, the Backup admin must first request the Security admin for the Contributor role on the Resource Guard using the just-in-time (JIT) procedure, such as [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) or internal tools.
+
+The Security admin then approves the request if it's genuine and informs the Backup admin, who now has the Contributor role on the Resource Guard. Learn more on [how to get this role](?pivots=vaults-backup-vault#assign-permissions-to-the-backup-admin-on-the-resource-guard-to-enable-mua).
+
+To disable MUA, the Backup admin must follow these steps:
+
+1. Go to vault > **Properties** > **Multi-user Authorization**.
+1. Select **Update** and clear the **Protect with Resource Guard** checkbox.
+1. Select **Authenticate** (if applicable) to choose the Directory that contains the Resource Guard and verify access.
+1. Select **Save** to complete the process of disabling the MUA.
+
+ :::image type="content" source="./media/multi-user-authorization/disable-mua.png" alt-text="Screenshot showing how to disable multi-user authorization.":::
+++
+## Next steps
+
+[Learn more about Multi-user authorization using Resource Guard](multi-user-authorization-concept.md).
+
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 09/14/2022 Last updated : 10/14/2022
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary - October 2022
+ - [Multi-user authorization using Resource Guard for Backup vault (in preview)](#multi-user-authorization-using-resource-guard-for-backup-vault-in-preview)
+ - [Enhanced soft delete for Azure Backup (preview)](#enhanced-soft-delete-for-azure-backup-preview)
+ - [Immutable vault for Azure Backup (in preview)](#immutable-vault-for-azure-backup-in-preview)
- [SAP HANA instance snapshot backup support (preview)](#sap-hana-instance-snapshot-backup-support-preview) - [SAP HANA System Replication database backup support (preview)](#sap-hana-system-replication-database-backup-support-preview) - September 2022 - [Built-in Azure Monitor alerting for Azure Backup is now generally available](#built-in-azure-monitor-alerting-for-azure-backup-is-now-generally-available) - June 2022
- - [Multi-user authorization using Resource Guard is now generally available](#multi-user-authorization-using-resource-guard-is-now-generally-available)
+ - [Multi-user authorization using Resource Guard for Recovery Services vault is now generally available](#multi-user-authorization-using-resource-guard-for-recovery-services-vault-is-now-generally-available)
- May 2022 - [Archive tier support for Azure Virtual Machines is now generally available](#archive-tier-support-for-azure-virtual-machines-is-now-generally-available) - February 2022
You can learn more about the new releases by bookmarking this page or by [subscr
- [Back up Azure Database for PostgreSQL is now generally available](#back-up-azure-database-for-postgresql-is-now-generally-available) - October 2021 - [Archive Tier support for SQL Server/ SAP HANA in Azure VM from Azure portal](#archive-tier-support-for-sql-server-sap-hana-in-azure-vm-from-azure-portal)
- - [Multi-user authorization using Resource Guard (in preview)](#multi-user-authorization-using-resource-guard-in-preview)
+ - [Multi-user authorization using Resource Guard for Recovery Services vault (in preview)](#multi-user-authorization-using-resource-guard-for-recovery-services-vault-in-preview)
- [Multiple backups per day for Azure Files (in preview)](#multiple-backups-per-day-for-azure-files-in-preview) - [Azure Backup Metrics and Metrics Alerts (in preview)](#azure-backup-metrics-and-metrics-alerts-in-preview) - July 2021
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Multi-user authorization using Resource Guard for Backup vault (in preview)
+
+Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Backup vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
+
+For more information, see [MUA for Backup vault](multi-user-authorization-concept.md?tabs=backup-vault).
+
+## Enhanced soft delete for Azure Backup (preview)
+
+Enhanced soft delete provides improvements to the existing [soft delete](backup-azure-security-feature-cloud.md) feature. With enhanced soft delete, you can now make soft delete irreversible to prevent malicious actors from disabling it and deleting backups.
+
+You can also customize soft delete retention period (for which soft deleted data must be retained). Enhanced soft delete is available for Recovery Services vaults and Backup vaults.
+
+For more information, see [Enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
+
+## Immutable vault for Azure Backup (in preview)
+
+Azure Backup now supports immutable vaults that help you ensure that recovery points, once created, can't be deleted before their expiry, as defined by the backup policy in effect when the recovery point was created. You can also choose to make the immutability irreversible to offer maximum protection to your backup data, thus helping you protect your data better against various threats, including ransomware attacks and malicious actors.
+
+For more information, see the [concept of Immutable vault for Azure Backup (preview)](backup-azure-immutable-vault-concept.md).
+ ## SAP HANA instance snapshot backup support (preview) Azure Backup now supports SAP HANA instance snapshot backup that provides a cost-effective backup solution using Managed disk incremental snapshots. Because instant backup uses snapshot, the effect on the database is minimum.
If you're currently using the [classic alerts solution](backup-azure-monitoring-
For more information, see [Switch to Azure Monitor based alerts for Azure Backup](move-to-azure-monitor-alerts.md). -
-## Multi-user authorization using Resource Guard is now generally available
+## Multi-user authorization using Resource Guard for Recovery Services vault is now generally available
Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
Also, the support is extended via Azure CLI for the above workloads, along with
For more information, see [Archive Tier support in Azure Backup](archive-tier-support.md).
-## Multi-user authorization using Resource Guard (in preview)
+## Multi-user authorization using Resource Guard for Recovery Services vault (in preview)
Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
batch Batch Certificate Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-certificate-migration-guide.md
Previously updated : 10/07/2022 Last updated : 10/12/2022 # Migrate Batch account certificates to Azure Key Vault
Certificates are often required in various scenarios such as decrypting a secret
After the certificates feature in Azure Batch is retired on February 29, 2024, a certificate in Batch won't work as expected. After that date, you'll no longer be able to add certificates to a Batch account or link these certificates to Batch pools. Pools that continue to use this feature after this date may not behave as expected such as updating certificate references or the ability to install existing certificate references.
-## Alternative: Use Azure Key Vault VM Extension with Pool User-assigned Managed Identity
+## Alternative: Use Azure Key Vault VM extension with pool user-assigned managed identity
Azure Key Vault is a fully managed Azure service that provides controlled access to store and manage secrets, certificates, tokens, and keys. Key Vault provides security at the transport layer by ensuring that any data flow from the key vault to the client application is encrypted. Azure Key Vault gives you a secure way to store essential access information and to set fine-grained access control. You can manage all secrets from one dashboard. Choose to store a key in either software-protected or hardware-protected hardware security modules (HSMs). You also can set Key Vault to auto-renew certificates.
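To make the shape of this alternative concrete, the following is a rough Python sketch of the settings a Key Vault VM extension typically carries when paired with a pool's user-assigned managed identity. Treat it as illustrative only: the field names follow the Key Vault VM extension documentation as best recalled, and the extension name, vault URL, certificate name, and identity client ID are placeholders rather than values from this article.

```python
# Illustrative settings for the Azure Key Vault VM extension on a Batch pool VM.
# Verify field names against the current Key Vault VM extension schema.
key_vault_extension = {
    "name": "KVExample",                       # hypothetical extension name
    "publisher": "Microsoft.Azure.KeyVault",
    "type": "KeyVaultForLinux",                # use "KeyVaultForWindows" on Windows pools
    "settings": {
        "secretsManagementSettings": {
            "pollingIntervalInS": "3600",      # how often to re-fetch certificates
            "observedCertificates": [
                "https://<your-key-vault>.vault.azure.net/secrets/<your-cert>"
            ],
        },
        "authenticationSettings": {
            # Client ID of the user-assigned managed identity added to the pool.
            "msiClientId": "<managed-identity-client-id>",
        },
    },
}
```

The complete guide referenced below covers the end-to-end setup, including assigning the managed identity to the pool.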
For a complete guide on how to enable Azure Key Vault VM Extension with Pool Use
- Do `CloudServiceConfiguration` pools support Azure Key Vault VM extension and managed identity on pools?
- No. `CloudServiceConfiguration` pools will be [retired](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/) on the same date as Azure Batch account certificate retirement on February 29, 2024. We recommend that you migrate to `VirtualMachinceConfiguration` pools before that date where you'll be able to use these solutions.
+ No. `CloudServiceConfiguration` pools will be [retired](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/) on the same date as Azure Batch account certificate retirement on February 29, 2024. We recommend that you migrate to `VirtualMachineConfiguration` pools before that date where you'll be able to use these solutions.
- Do user subscription pool allocation Batch accounts support Azure Key Vault?
batch Batch Tls 101 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-tls-101-migration-guide.md
Previously updated : 08/16/2022 Last updated : 10/12/2022 # Migrate client code to TLS 1.2 in Batch
-To support security best practices and remain in compliance with industry standards, Azure Batch will retire Transport Layer Security (TLS) 1.0 and TLS 1.1 in Azure Batch on *March 31, 2023*. Learn how to migrate to TLS 1.2 in the client code you manage by using Batch.
+To support security best practices and remain in compliance with industry standards, Azure Batch will retire Transport Layer Security (TLS) 1.0 and TLS 1.1 in Azure Batch on *March 31, 2023*. Learn how to migrate to TLS 1.2 in your Batch service client code.
## End of support for TLS 1.0 and TLS 1.1 in Batch TLS versions 1.0 and TLS 1.1 are known to be susceptible to BEAST and POODLE attacks and to have other Common Vulnerabilities and Exposures (CVE) weaknesses. TLS 1.0 and TLS 1.1 don't support the modern encryption methods and cipher suites that the Payment Card Industry (PCI) compliance standards recommends. Microsoft is participating in an industry-wide push toward the exclusive use of TLS version 1.2 or later.
-Most customers have already migrated to TLS 1.2. Customers who continue to use TLS 1.0 or TLS 1.1 can be identified via existing BatchOperation data. If you're using TLS 1.0 or TLS 1.1, to avoid disruption to your Batch workflows, update existing workflows to use TLS 1.2.
+If you've already migrated to use TLS 1.2 in your Batch client applications, then this retirement doesn't apply to you. Only API requests that go directly to the Batch service via the data plane API (not management plane) are impacted. API requests at the management plane layer are routed through ARM and are subject to ARM TLS minimum version requirements. We recommend that you migrate to TLS 1.2 across Batch data plane or management plane API calls for security best practices, if possible.
## Alternative: Use TLS 1.2
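As a rough illustration (not the Batch SDK itself, and with placeholder account and region names), the following Python sketch verifies that a client negotiates TLS 1.2 or later with a Batch account's data plane endpoint:

```python
import socket
import ssl

# Placeholders: substitute your Batch account name and region.
host = "<account>.<region>.batch.azure.com"

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0 and TLS 1.1

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # expect "TLSv1.2" or "TLSv1.3"
```

Most modern runtimes negotiate TLS 1.2 by default; the key change in older client code is removing any setting that pins the protocol to TLS 1.0 or 1.1.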
For more information, see [TLS best practices for the .NET Framework](/dotnet/fr
- Why do I need to upgrade to TLS 1.2?
- TLS 1.0 and TLS 1.1 have security issues that are fixed in TLS 1.2. TLS 1.2 has been available since 2008. TLS 1.2 is the current default version in most development frameworks.
+ TLS 1.0 and TLS 1.1 are considered insecure and have security issues that are addressed in TLS 1.2. TLS 1.2 has been available since 2008. TLS 1.2 is widely adopted as the minimum version for securing communication channels using TLS.
-- What happens if I don't upgrade?
+- What happens if I don't upgrade?
- After the feature retirement from Azure Batch, your client application won't work until you upgrade the code to use TLS 1.2.
+ After the feature retirement from Azure Batch, your client application won't be able to communicate with Batch data plane API services unless you upgrade to TLS 1.2.
-- Will upgrading to TLS 1.2 affect the performance of my application?
+- Does upgrading to TLS 1.2 affect the performance of my application?
- Upgrading to TLS 1.2 won't affect your application's performance.
+ Upgrading to TLS 1.2 generally shouldn't affect your application's performance.
- How do I know if I'm using TLS 1.0 or TLS 1.1?
- To determine the TLS version you're using, check the audit log for your Batch deployment.
+ To determine the TLS version you're using, check your client application logs and the audit log for your Batch deployment.
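+
+As a quick local check, a sketch assuming outbound HTTPS access from the machine running your client, you can see which TLS version your Python runtime negotiates with a host:
+
+```python
+import socket
+import ssl
+
+hostname = "<account>.<region>.batch.azure.com"  # placeholder Batch account host
+
+context = ssl.create_default_context()
+with socket.create_connection((hostname, 443)) as sock:
+    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
+        # Prints the negotiated protocol, for example "TLSv1.2" or "TLSv1.3".
+        print(tls_sock.version())
+```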
## Next steps
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Specialized or made up words might have unique pronunciations. These words can b
You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md?tabs=stt-tts). > [!NOTE]
-> You can either use a pronunciation data file on its own, or you can add pronunciation within a structured text data file. The Speech service doesn't support training a model where you select both of those datasets as input.
+> You can use a pronunciation file alongside any other training dataset except structured text training data. To use pronunciation data with structured text, it must be within a structured text file.
The spoken form is the phonetic sequence spelled out. It can be composed of letters, words, syllables, or a combination of all three. This table includes some examples:
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
## What's new?
-* Speech SDK 1.23.0 and Speech CLI 1.23.0 were released in July 2022. See details below.
+* Speech SDK 1.24.0 and Speech CLI 1.24.0 were released in October 2022. See details below.
* Custom speech-to-text container v3.1.0 was released in March 2022, with support to get display models. * TTS Service: in August 2022, five new voices were released in public preview. * TTS Service: in September 2022, all prebuilt neural voices were upgraded to high-fidelity voices with a 48 kHz sample rate.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported style.
### Style degree
-The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with the `styledegree` attribute to make the speech more expressive or subdued.
+The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with the `styledegree` attribute to make the speech more expressive or subdued.
For a list of neural voices that support speaking style degree, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
This SSML snippet illustrates how the `styledegree` attribute is used to change
### Role
-Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed.
+Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed.
For a list of supported roles per neural voice, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
To define how multiple entities are read, you can create a custom lexicon, which
The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text that describes the [orthography](https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography). The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text that describes how the `lexeme` is pronounced. When the `alias` and `phoneme` elements are provided with the same `grapheme` element, `alias` has higher priority. > [!IMPORTANT]
-> The `lexeme` element is case sensitive in the custom lexicon. For example, if you only provide a phoneme for the `lexeme` "Hello," it won't work for the `lexeme` "hello."
+> The `lexeme` element is case sensitive in the custom lexicon. For example, if you only provide a phoneme for the `lexeme` "Hello," it won't work for the `lexeme` "hello."
The lexicon contains the necessary `xml:lang` attribute to indicate which locale it applies to. One custom lexicon is limited to one locale by design, so if you apply it to a different locale, it won't work.
The `say-as` element is optional. It indicates the content type, such as number
| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional | The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if the `format` column isn't empty in the table below.+ | interpret-as | format | Interpretation | | -- | | -- | | `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." |
Only one background audio file is allowed per SSML document. You can intersperse
> [!NOTE] > The `mstts:backgroundaudio` element should be put in front of all `voice` elements, i.e., the first child of the `speak` element.
->
+>
> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](long-audio-api.md). **Syntax**
A viseme is the visual description of a phoneme in spoken language. It defines t
| `type` | Specifies the type of viseme output.<ul><li>`redlips_front` – lip-sync with viseme ID and audio offset output </li><li>`FacialExpression` – blend shapes output</li></ul> | Required | > [!NOTE]
-> Currently, `redlips_front` only supports neural voices in `en-US` locale, and `FacialExpression` supports neural voices in `en-US` and `zh-CN` locales.
+> Currently, `redlips_front` only supports neural voices in `en-US` locale, and `FacialExpression` supports neural voices in `en-US` and `zh-CN` locales.
**Example**
cognitive-services What Is Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/what-is-dictionary.md
Previously updated : 12/06/2021 Last updated : 10/11/2022
You can train a model using only dictionary data. To do so, select only the dict
## Recommendations -- Dictionaries aren't a substitute for training a model using training data. We recommended letting the system learn from your training data for better results. However, when sentences or compound nouns must be rendered as-is, use a dictionary.-- The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context within that sentence is lost or limited for translating the rest of the sentence. The result is that while the phrase or word within the sentence will translate according to the provided dictionary, the overall translation quality of the sentence will often suffer.-- The phrase dictionary works well for compound nouns like product names ("Microsoft SQL Server"), proper names ("City of Hamburg"), or features of the product ("pivot table"). It doesn't work equally well for verbs or adjectives because those words are typically highly inflected in the source or in the target language. Best practice is to avoid phrase dictionary entries for anything but compound nouns.-- If you're using a phrase dictionary, capitalization and punctuation are important. Dictionary entries will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation as specified in the source dictionary file. Also the translations will reflect the capitalization and punctuation provided in the target dictionary file. For example, if you trained an English to Spanish system that uses a phrase dictionary that specifies "US" in the source file, and "EE.UU." in the target file. When you request translation of a sentence that includes the word "us" (not capitalized), it will NOT return a match from the dictionary. However, if you request translation of a sentence that contains the word "US" (capitalized), it will match the dictionary and the translation will contain "EE.UU." The capitalization and punctuation in the translation may be different than specified in the dictionary target file, and may be different from the capitalization and punctuation in the source. It follows the rules of the target language.-- If you're using a sentence dictionary, the end of sentence punctuation is ignored. For example, if your source dictionary contains "this sentence ends with punctuation!", then any translation requests containing "this sentence ends with punctuation" would match.-- If a word appears more than once in a dictionary file, the system will always use the last entry provided. Thus, your dictionary shouldn't contain multiple translations of the same word.
+- Dictionaries aren't a substitute for training a model using training data. For better results, we recommend letting the system learn from your training data. However, when sentences or compound nouns must be translated verbatim, use a dictionary.
+
+- The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context of that sentence is lost or limited for translating the rest of the sentence. The result is that, while the phrase or word within the sentence will translate according to the provided dictionary, the overall translation quality of the sentence often suffers.
+
+- The phrase dictionary works well for compound nouns like product names ("_Microsoft SQL Server_"), proper names ("_City of Hamburg_"), or product features ("_pivot table_"). It doesn't work as well for verbs or adjectives because those words are typically highly contextual within the source or target language. The best practice is to avoid phrase dictionary entries for anything but compound nouns.
+
+- If you're using a phrase dictionary, capitalization and punctuation are important. Dictionary entries are case- and punctuation-sensitive. Custom Translator will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation marks as specified in the source dictionary file. Also, translations will reflect the capitalization and punctuation provided in the target dictionary file.
+
+ **Example**
+
+ - Suppose you're training an English-to-Spanish system that uses a phrase dictionary, and you specify "_SQL server_" in the source file and "_Microsoft SQL Server_" in the target file. When you request the translation of a sentence that contains the phrase "_SQL server_", Custom Translator will match the dictionary entry, and the translation will contain "_Microsoft SQL Server_."
+ - When you request translation of a sentence that includes the same phrase but **doesn't** match what is in your source file, such as "_sql server_", "_sql Server_" or "_SQL Server_", it **won't** return a match from your dictionary.
+ - The translation follows the rules of the target language as specified in your phrase dictionary.
+
+- If you're using a sentence dictionary, end-of-sentence punctuation is ignored.
+
+ **Example**
+
+ - If your source dictionary contains "_This sentence ends with punctuation!_", then any translation requests containing "_This sentence ends with punctuation_" will match.
+
+- Your dictionary should contain unique source lines. If a source line (a word, phrase, or sentence) appears more than once in a dictionary file, the system will always use the **last entry** provided and return the target when a match is found.
+
+- Avoid adding phrases that consist of only numbers or are two- or three-letter words, such as acronyms, in the source dictionary file.
## Next steps
cognitive-services Multi Region Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/custom-features/multi-region-deployment.md
+
+ Title: Deploy custom language projects to multiple regions in Azure Cognitive Service for Language
+
+description: Learn about deploying your language projects to multiple regions.
++++++ Last updated : 10/11/2022++++
+# Deploy custom language projects to multiple regions
+
+> [!NOTE]
+> This article applies to the following custom features in Azure Cognitive Service for Language:
+> * [Conversational language understanding](../../conversational-language-understanding/overview.md)
+> * [Custom text classification](../../custom-text-classification/overview.md)
+> * [Custom NER](../../custom-named-entity-recognition/overview.md)
+> * [Orchestration workflow](../../orchestration-workflow/overview.md)
+
+Custom Language service features enable you to deploy your project to more than one region, making it much easier to access your project globally while managing only one instance of your project in one place.
+
+Before you deploy a project, you can assign **deployment resources** in other regions. Each deployment resource is a different Language resource from the one you use to author your project. You deploy to those resources and then target your prediction requests to each resource in its respective region, and your queries are served directly from that region.
+
+When creating a deployment, you can select which of your assigned deployment resources and their corresponding regions you would like to deploy to. The model you deploy is then replicated to each region and accessible with its own endpoint dependent on the deployment resource's custom subdomain.
+
+## Example
+
+Suppose you want to make sure your project, which is used as part of a customer support chatbot, is accessible by customers across the US and India. You would author a project with the name **ContosoSupport** using a _West US 2_ Language resource named **MyWestUS2**. Before deployment, you would assign two deployment resources to your project - **MyEastUS** and **MyCentralIndia** in _East US_ and _Central India_, respectively.
+
+When deploying your project, you would select all three regions for deployment: the original _West US 2_ region and the assigned _East US_ and _Central India_ regions.
+
+You would now have three different endpoint URLs to access your project in all three regions:
+* West US 2: `https://mywestus2.cognitiveservices.azure.com/language/:analyze-conversations`
+* East US: `https://myeastus.cognitiveservices.azure.com/language/:analyze-conversations`
+* Central India: `https://mycentralindia.cognitiveservices.azure.com/language/:analyze-conversations`
+
+The same request body to each of those different URLs serves the exact same response directly from that region.
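+
+As an illustrative sketch only, the request body below is simplified and the `api-version` is a placeholder; check the conversation analysis runtime API reference for the exact schema. The same payload can be sent to each regional endpoint, and each region answers locally:
+
+```python
+import requests
+
+endpoints = [
+    "https://mywestus2.cognitiveservices.azure.com/language/:analyze-conversations",
+    "https://myeastus.cognitiveservices.azure.com/language/:analyze-conversations",
+    "https://mycentralindia.cognitiveservices.azure.com/language/:analyze-conversations",
+]
+
+# Simplified, illustrative request body for the ContosoSupport project.
+body = {
+    "kind": "Conversation",
+    "analysisInput": {
+        "conversationItem": {"id": "1", "participantId": "user", "text": "Cancel my Contoso subscription"}
+    },
+    "parameters": {"projectName": "ContosoSupport", "deploymentName": "production"},
+}
+
+for url in endpoints:
+    response = requests.post(
+        url,
+        params={"api-version": "<api-version>"},  # placeholder; use the current runtime API version
+        headers={"Ocp-Apim-Subscription-Key": "<key-for-this-region>"},  # each regional resource has its own key
+        json=body,
+    )
+    print(url, response.status_code)
+```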
+
+## Validations and requirements
+
+Assigning deployment resources requires Microsoft Azure Active Directory (Azure AD) authentication. Azure AD is used to confirm you have access to the resources you're interested in assigning to your project for multi-region deployment. In the Language Studio, you can automatically [enable Azure AD authentication](https://aka.ms/rbac-language) by assigning yourself the _Cognitive Services Language Owner_ role on your original resource. To use Azure AD authentication programmatically, learn more from the [Cognitive Services documentation](/azure/cognitive-services/authentication?tabs=powershell&tryIt=true&source=docs#authenticate-with-azure-active-directory).
+
+Your project name and resource are used as its main identifiers. Therefore, a given project name can only be used once per Language resource. Any other project with the same name won't be deployable to that resource.
+
+For example, if a project **ContosoSupport** was created by resource **MyWestUS2** in _West US 2_ and deployed to resource **MyEastUS** in _East US_, the resource **MyEastUS** cannot create a different project called **ContosoSupport** and deploy a project to that region. Similarly, your collaborators cannot then create a project **ContosoSupport** with resource **MyCentralIndia** in _Central India_ and deploy it to either **MyWestUS2** or **MyEastUS**.
+
+You can only swap deployments that are available in the exact same regions; otherwise, swapping will fail.
+
+If you remove an assigned resource from your project, all of the project deployments to that resource will then be deleted.
+
+> [!NOTE]
+> Orchestration workflow only:
+>
+> You **cannot** assign deployment resources to orchestration workflow projects with custom question answering or LUIS connections. You subsequently cannot add custom question answering or LUIS connections to projects that have assigned resources.
+>
+> For multi-region deployment to work as expected, the connected CLU projects **must also be deployed** to the same regional resources you've deployed the orchestration workflow project to. Otherwise the orchestration workflow project will attempt to route a request to a deployment in its region that doesn't exist.
+
+Some regions are only available for deployment and not for authoring projects.
+
+## Next steps
+
+Learn how to deploy models for:
+* [Conversational language understanding](../../conversational-language-understanding/how-to/deploy-model.md)
+* [Custom text classification](../../custom-text-classification/how-to/deploy-model.md)
+* [Custom NER](../../custom-named-entity-recognition/how-to/deploy-model.md)
+* [Orchestration workflow](../../orchestration-workflow/how-to/deploy-model.md)
cognitive-services Project Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/custom-features/project-versioning.md
+
+ Title: Conversational Language Understanding Project Versioning
+
+description: Learn how versioning works in conversational language understanding
++++++ Last updated : 10/10/2022+++++
+# Project versioning
+
+> [!NOTE]
+> This article applies to the following custom features in Azure Cognitive Service for Language:
+> * Conversational language understanding
+> * Custom text classification
+> * Custom NER
+> * Orchestration workflow
+
+Building your project typically happens in increments. You may add, remove, or edit intents, entities, labels, and data at each stage. Every time you train, a snapshot of your current project state is taken to produce a model. The snapshot is saved with that model and can be loaded back at any time. Every model acts as its own version of the project.
+
+For example, if your project has 10 intents and/or entities, with 50 training documents or utterances, it can be trained to create a model named **v1**. Afterwards, you might make changes to the project that alter the amount of training data. The project can be trained again to create a new model named **v2**. If you don't like the changes you've made in **v2** and would like to continue from where you left off in model **v1**, then you would just need to load the model data from **v1** back into the project. Loading a model's data is possible through both the Language Studio and the API. Once complete, the project will have the original amount and types of training data.
+
+If the project data is not saved in a trained model, it can be lost. For example, if you loaded model **v1**, your project now has the data that was used to train it. If you then made changes, didn't train, and loaded model **v2**, you would lose those changes as they weren't saved to any specific snapshot.
+
+If you overwrite a model with a new snapshot of data, you won't be able to revert back to any previous state of that model.
+
+You always have the option to locally export the data for every model.
+
+## Data location
+
+The data for your model versions will be saved in different locations, depending on the custom feature you're using.
+
+# [Custom NER](#tab/custom-ner)
+
+In custom named entity recognition, the data being saved to the snapshot is the labels file.
+
+# [Custom text classification](#tab/custom-text-classification)
+
+In custom text classification, the data being saved to the snapshot is the labels file.
+
+# [Orchestration workflow](#tab/orchestration-workflow)
+
+In orchestration workflow, you do not version or store the assets of the connected intents as part of the orchestration snapshot - those are managed separately. The only snapshot being taken is of the connection itself and the intents and utterances that do not have connections, including all the test data.
+
+# [Conversational language understanding](#tab/clu)
+
+In conversational language understanding, the data being saved to the snapshot are the intents and utterances included in the project.
++++
+## Next steps
+Learn how to load or export model data for:
+* [Conversational language understanding](../../conversational-language-understanding/how-to/view-model-evaluation.md#export-model-data)
+* [Custom text classification](../../custom-text-classification/how-to/view-model-evaluation.md)
+* [Custom NER](../../custom-named-entity-recognition/how-to/view-model-evaluation.md)
+* [Orchestration workflow](../../orchestration-workflow/how-to/view-model-evaluation.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/language-support.md
Previously updated : 10/06/2022 Last updated : 10/13/2022
See the following service-level language support articles for information on mod
* [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support) * [Text Analytics for health](../text-analytics-for-health/language-support.md) * [Summarization](../summarization/language-support.md?tabs=document-summarization)
-* [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization)|
+* [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization)
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
# Model lifecycle
-Language service features utilize AI models that are versioned. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they are retired. Use this article for information on that process, and what you can expect for your applications.
+Language service features utilize AI models that are versioned. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they're retired. Use this article for information on that process, and what you can expect for your applications.
## Prebuilt features
Language service features utilize AI models that are versioned. We update the la
Our standard (not customized) language service features are built upon AI models that we call pre-trained models. We update the language service with new model versions every few months to improve model accuracy, support, and quality.
-As new models and functionalities become available, older less accurate models are deprecated. To ensure you are using the latest model version and avoid interruptions to your applications, we highly recommend using the default model-version parameter (`latest`) in your API calls. After their deprecation date, pre-built model versions will no longer be functional and your implementation may be broken.
+As new models and functionalities become available, older less accurate models are deprecated. To ensure you're using the latest model version and avoid interruptions to your applications, we highly recommend using the default model-version parameter (`latest`) in your API calls. After their deprecation date, pre-built model versions will no longer be functional, and your implementation may be broken.
Stable (not preview) model versions are deprecated six months after the release of another stable model version. Features in preview don't maintain a minimum retirement period and may be deprecated at any time.
Use the table below to find which model versions are supported by each feature:
| Sentiment Analysis and opinion mining | `2021-10-01`, `2022-06-01*` | `2019-10-01`, `2020-04-01` | | Language Detection | `2021-11-20*` | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` | | Entity Linking | `2021-06-01*` | `2019-10-01`, `2020-02-01` |
-| Named Entity Recognition (NER) | `2021-06-01*` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2021-01-15` |
+| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2021-01-15` |
| Personally Identifiable Information (PII) detection | `2020-07-01`, `2021-01-15*` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2020-07-01` | | PII detection for conversations (Preview) | `2022-05-15-preview**` | | | Question answering | `2021-10-01*` | |
Use the table below to find which model versions are supported by each feature:
As new training configs and new functionality become available; older and less accurate configs are retired, see the following timelines for configs expiration:
-New configs are being released every few months. So, training configs expiration of any publicly available config is **six months** after its release. If you have assigned a trained model to a deployment, this deployment expires after **twelve months** from the training config expiration.
+New configs are released every few months. Any publicly available training config expires **six months** after its release. If you have assigned a trained model to a deployment, that deployment expires **twelve months** after the training config expiration. If your models are about to expire, you can retrain and redeploy them with the latest training configuration version.
-After training config version expires, API calls will return an error when called or used if called with an expired config version. By default, training requests will use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want.
+After a training config version expires, API calls that use the expired config version will return an error. By default, training requests use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want.
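+
+For illustration only, a training request that pins the config version might carry a body like the following sketch; the field name `trainingConfigVersion` comes from this article, while the other fields are placeholders, so check the authoring API reference for your feature for the full request shape.
+
+```python
+# Illustrative training-job request body; trainingConfigVersion is the relevant field here.
+training_request = {
+    "modelLabel": "my-model-v2",            # hypothetical model name
+    "trainingConfigVersion": "2022-05-01",  # pin a specific config instead of the default (latest)
+}
+```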
> [!Tip] > It's recommended to use the latest supported config version
Use the table below to find which model versions are supported by each feature:
| Feature | Supported Training config versions | Training config expiration | Deployment expiration | ||--|||
-| Custom text classification | `2022-05-01` | `10/28/2022` | `10/28/2023` |
+| Custom text classification | `2022-05-01` | `04/10/2023` | `04/28/2024` |
| Conversational language understanding | `2022-05-01` | `10/28/2022` | `10/28/2023` |
-| Custom named entity recognition | `2022-05-01` | `10/28/2022` | `10/28/2023` |
+| Conversational language understanding | `2022-09-01` | `04/10/2023` | `04/28/2024` |
+| Custom named entity recognition | `2022-05-01` | `04/10/2023` | `04/28/2024` |
| Orchestration workflow | `2022-05-01` | `10/28/2022` | `10/28/2023` |
Use the table below to find which model versions are supported by each feature:
When you're making API calls to the following features, you need to specify the `API-VERSION` you want to use to complete your request. It's recommended to use the latest available API version.
-If you are using the [Language Studio](https://aka.ms/languageStudio) for building your project you will be using the latest API version available. If you need to use another API version this is only available directly through APIs.
+If you're using the [Language Studio](https://aka.ms/languageStudio) to build your project, you'll be using the latest available API version. If you need to use another API version, it's only available directly through the APIs.
Use the table below to find which API versions are supported by each feature: | Feature | Supported versions | Latest Generally Available version | Latest preview version | |--||||
-| Custom text classification | `2022-05-01` | `2022-05-01` | |
-| Conversational language understanding | `2022-05-01` | `2022-05-01` | |
-| Custom named entity recognition | `2022-05-01` | `2022-05-01` | |
-| Orchestration workflow | `2022-05-01` | `2022-05-01` | |
-
+| Custom text classification | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
+| Conversational language understanding | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
+| Custom named entity recognition | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
+| Orchestration workflow | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
## Next steps
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/best-practices.md
+
+ Title: Conversational language understanding best practices
+
+description: Apply best practices when using conversational language understanding
++++++ Last updated : 10/11/2022++++
+# Best practices for conversational language understanding
+
+Use the following guidelines to create the best possible projects in conversational language understanding.
+
+## Choose a consistent schema
+
+Schema is the definition of your intents and entities. There are different approaches you could take when defining what you should create as an intent versus an entity. There are some questions you need to ask yourself:
+
+- What actions or queries am I trying to capture from my user?
+- What pieces of information are relevant in each action?
+
+You can typically think of actions and queries as _intents_, and the information required to fulfill those queries as _entities_.
+
+For example, assume you want your customers to cancel subscriptions for various products that you offer through your chatbot. You can create a _Cancel_ intent with various examples like _"Cancel the Contoso service"_, or _"stop charging me for the Fabrikam subscription"_. The user's intent here is to _cancel_; the _Contoso service_ or _Fabrikam subscription_ is the subscription they would like to cancel. Therefore, you can create an entity for _subscriptions_. You can then model your entire project to capture actions as intents and use entities to fill in those actions. This allows you to cancel anything you define as an entity, such as other products. You can then have intents for signing up, renewing, upgrading, etc. that all make use of the _subscriptions_ and other entities.
+
+The above schema design makes it easy for you to extend existing capabilities (canceling, upgrading, signing up) to new targets by creating a new entity.
+
+Another approach is to model the _information_ as intents and _actions_ as entities. Let's take the same example, allowing your customers to cancel subscriptions through your chatbot. You can create an intent for each subscription available, such as _Contoso_ with utterances like _"cancel Contoso"_, _"stop charging me for contoso services"_, _"Cancel the Contoso subscription"_. You would then create an entity to capture the action, _cancel_. You can define different entities for each action or consolidate actions as one entity with a list component to differentiate between actions with different keys.
+
+This schema design makes it easy for you to extend new actions to existing targets by adding new action entities or entity components.
+
+Make sure to avoid trying to funnel all the concepts into intents alone. For example, don't create a _Cancel Contoso_ intent that only serves that one specific action. Intents and entities should work together to capture all the required information from the customer.
+
+You also want to avoid mixing different schema designs. Don't build half of your application with actions as intents and the other half with information as intents. Ensure your schema is consistent to get the best possible results.
++++++++
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/entity-components.md
Previously updated : 05/13/2022 Last updated : 10/11/2022
In Conversational Language Understanding, entities are relevant pieces of inform
## Component types
-An entity component determines a way you can extract the entity. An entity can simply contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
### Learned component
The prebuilt component allows you to select from a library of common types such
:::image type="content" source="../media/prebuilt-component.png" alt-text="A screenshot showing an example of prebuilt components for entities." lightbox="../media/prebuilt-component.png":::
+### Regex component
+
+The regex component matches regular expressions to capture consistent patterns. When added, any text that matches the regular expression will be extracted. You can have multiple regular expressions within the same entity, each with a different key identifier. A matched expression will return the key as part of the prediction response.
+
+In multilingual projects, you can specify a different expression for each language. While using the prediction API, you can specify the language in the input request, which will only match the regular expression associated with that language.
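+
+As an illustration of the kind of pattern you might register, not the authoring API format, a regex component for a flight-number entity could use a pattern like the one below, returning a key such as `flight_number` when it matches:
+
+```python
+import re
+
+# Example pattern you might register as a regex component for a flight-number entity.
+flight_number = re.compile(r"\b[A-Z]{2}\d{3,4}\b")
+
+print(flight_number.findall("Book AB123 and CD4567 to Cairo"))
+# ['AB123', 'CD4567']
+```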
++ ## Entity options
When you do not combine components, the entity will return twice:
:::image type="content" source="../media/separated-overlap-example-1-part-2.svg" alt-text="A screenshot showing the entity returned twice." lightbox="../media/separated-overlap-example-1-part-2.svg":::
+### Required components
+
+An entity can sometimes be defined by multiple components but requires one or more of them to be present. Every component can be set as **required**, which means the entity will **not** be returned if that component wasn't present. For example, if you have an entity with a list component and a required learned component, it is guaranteed that any returned entity includes a learned component; if it doesn't, the entity will not be returned.
+
+Required components are most frequently used with learned components, as they can restrict the other component types to a specific context, which is commonly associated with **roles**. You can also require all components to make sure that every component is present for an entity.
+
+In the Language Studio, every component in an entity has a toggle next to it that allows you to set it as required.
+
+#### Example
+
+Suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for flights, for utterances such as _"Book **two** tickets tomorrow to Cairo"_.
+
+Typically, you would add a prebuilt component for _Quantity.Number_ that already extracts all numbers. However, if your entity was only defined with the prebuilt component, it would also extract other numbers as part of the **Ticket Quantity** entity, such as _"Book **two** tickets tomorrow to Cairo at **3** PM"_.
+
+To resolve this, you would label a learned component in your training data for all the numbers that are meant to be **Ticket Quantity**. The entity now has two components: the prebuilt component that knows all numbers, and the learned component that predicts where the Ticket Quantity is in a sentence. If you require the learned component, you make sure that Ticket Quantity only returns when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned Ticket Quantity entity is both a number and in the correct position.
-> [!NOTE]
-> During public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
## How to use components and options
A common practice is to extend a prebuilt component with a list of values that t
Other times you may be interested in extracting an entity through context such as a **Product** in a retail project. You would label for the learned component of the product to learn _where_ a product is based on its position within the sentence. You may also have a list of products that you already know beforehand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
-When you do not combine components, you simply allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list to the ones extracted through the learned or prebuilt components to handle and treat them differently.
+When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list from the ones extracted through the learned or prebuilt components, so you can handle and treat them differently.
+
+> [!NOTE]
+> Previously during the public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
## Next steps
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-model.md
Previously updated : 04/26/2022 Last updated : 10/12/2022
This can be used to swap your `production` and `staging` deployments when you wa
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++ ## Next steps * Use [prediction API to query your model](call-api.md)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/view-model-evaluation.md
In the **view model details** page, you'll be able to see all your models, with
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++ ## Delete model ### [Language studio](#tab/Language-studio)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
Previously updated : 05/12/2022 Last updated : 10/12/2022
Use this article to learn about the data and service limits when using conversat
|Tier|Description|Limit| |--|--|--|
- |F0|Free tier|You are only allowed **One** Language resource **per subscription**.|
+ |F0|Free tier|You are only allowed **one** F0 Language resource **per subscription**.|
|S |Paid tier|You can have up to 100 Language resources in the S tier per region.|
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
### Regional availability
-Conversational language understanding is only available in some Azure regions. To use conversational language understanding, you must choose a Language resource in one of following regions:
-* Australia East
-* Central India
-* East US
-* East US 2
-* North Europe
-* South Central US
-* Switzerland North
-* UK South
-* West Europe
-* West US 2
-* West US 3
--
+Conversational language understanding is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
## API limits
The following limits are observed for the conversational language understanding.
| Item | Limits | |--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` ,symbols `_ . -`,with no spaces. Maximum allowed length is 50 characters. |
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. | | Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. | | Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/deploy-model.md
Previously updated : 05/09/2022 Last updated : 10/12/2022
After you are done testing a model assigned to one deployment and you want to as
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++ ## Next steps After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
Previously updated : 05/24/2022 Last updated : 10/12/2022
See the [project development lifecycle](../overview.md#project-development-lifec
[!INCLUDE [Model evaluation](../includes/rest-api/model-evaluation.md)] +
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++ ## Delete model ### [Language studio](#tab/language-studio)
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/deploy-model.md
Previously updated : 05/04/2022 Last updated : 10/12/2022
You can swap deployments after you've tested a model assigned to one deployment,
[!INCLUDE [Delete deployment](../includes/rest-api/delete-deployment.md)] +
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When you unassign or remove a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* Use [prediction API to query your model](call-api.md)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/view-model-evaluation.md
Previously updated : 08/09/2022 Last updated : 10/12/2022
See the [project development lifecycle](../overview.md#project-development-lifec
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++ ## Delete model ### [Language studio](#tab/language-studio)
cognitive-services Entity Resolutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/entity-resolutions.md
+
+ Title: Entity resolutions provided by Named Entity Recognition
+
+description: Learn about entity resolutions in the NER feature.
++++++ Last updated : 10/12/2022++++
+# Resolve entities to standard formats
+
+A resolution is a standard format for an entity. Entities can be expressed in various forms, and resolutions provide standard, predictable formats for common quantifiable types. For example, "eighty" and "80" should both resolve to the integer `80`.
+
+You can use NER resolutions to implement actions or retrieve further information. For example, your service can extract datetime entities and pass the resolved dates and times to a meeting scheduling system.
+
+This article documents the resolution objects returned for each entity category or subcategory.
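+
+As a small consumption sketch, the outer entity object below is simplified and only the `resolutions` array follows the formats documented in this article; a client can branch on `resolutionKind` to act on the normalized value:
+
+```python
+# Simplified entity object; the "resolutions" array matches the formats in this article.
+entity = {
+    "text": "January 1 1995",
+    "category": "DateTime",
+    "resolutions": [
+        {
+            "resolutionKind": "DateTimeResolution",
+            "dateTimeSubKind": "Date",
+            "timex": "1995-01-01",
+            "value": "1995-01-01",
+        }
+    ],
+}
+
+for resolution in entity["resolutions"]:
+    if resolution["resolutionKind"] == "DateTimeResolution":
+        # Hand the normalized value to a downstream system, such as a scheduler.
+        print("Normalized date/time:", resolution["value"])
+```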
+
+## Age
+
+Examples: "10 years old", "23 months old", "sixty Y.O."
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "AgeResolution",
+ "unit": "Year",
+ "value": 10
+ }
+ ]
+```
+
+Possible values for "unit":
+- Year
+- Month
+- Week
+- Day
++
+## Currency
+
+Examples: "30 Egyptian pounds", "77 USD"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "CurrencyResolution",
+ "unit": "Egyptian pound",
+ "ISO4217": "EGP",
+ "value": 30
+ }
+ ]
+```
+
+Possible values for "unit" and "ISO4217":
+- [ISO 4217 reference](https://docs.1010data.com/1010dataReferenceManual/DataTypesAndFormats/currencyUnitCodes.html).
+
+## Datetime
+
+Datetime includes several different subtypes that return different response objects.
+
+### Date
+
+Specific days.
+
+Examples: "January 1 1995", "12 april", "7th of October 2022", "tomorrow"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "1995-01-01",
+ "value": "1995-01-01"
+ }
+ ]
+```
+
+Whenever an ambiguous date is provided, you're offered different options for your resolution. For example, "12 April" could refer to any year. Resolution provides this year and the next as options. The `timex` value `XXXX` indicates no year was specified in the query.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-04-12",
+ "value": "2022-04-12"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-04-12",
+ "value": "2023-04-12"
+ }
+ ]
+```
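+
+One possible way to handle the two candidates above, a sketch of client-side logic rather than service behavior, is to prefer the first candidate that isn't in the past:
+
+```python
+from datetime import date
+
+# "value" fields from the two candidate resolutions above.
+candidates = ["2022-04-12", "2023-04-12"]
+
+parsed = [date.fromisoformat(value) for value in candidates]
+# Prefer the first candidate on or after today; otherwise fall back to the last one.
+chosen = next((d for d in parsed if d >= date.today()), parsed[-1])
+print(chosen)
+```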
+
+Ambiguity can occur even for a given day of the week. For example, saying "Monday" could refer to last Monday or this Monday. Once again the `timex` value indicates no year or month was specified, and uses a day of the week identifier (W) to indicate the first day of the week.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-WXX-1",
+ "value": "2022-10-03"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-WXX-1",
+ "value": "2022-10-10"
+ }
+ ]
+```
++
+### Time
+
+Specific times.
+
+Examples: "9:39:33 AM", "seven AM", "20:03"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Time",
+ "timex": "T09:39:33",
+ "value": "09:39:33"
+ }
+ ]
+```
+
+### Datetime
+
+Specific date and time combinations.
+
+Examples: "6 PM tomorrow", "8 PM on January 3rd", "Nov 1 19:30"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "2022-10-07T18",
+ "value": "2022-10-07 18:00:00"
+ }
+ ]
+```
+
+Similar to dates, you can have ambiguous datetime entities. For example, "May 3rd noon" could refer to any year. Resolution provides this year and the next as options. The `timex` value **XXXX** indicates no year was specified.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "XXXX-05-03T12",
+ "value": "2022-05-03 12:00:00"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "XXXX-05-03T12",
+ "value": "2023-05-03 12:00:00"
+ }
+ ]
+```
+
+### Datetime ranges
+
+A datetime range is a period with a beginning and end date, time, or datetime.
+
+Examples: "from january 3rd 6 AM to april 25th 8 PM 2022", "between Monday to Thursday", "June", "the weekend"
+
+The "duration" parameter indicates the time passed in seconds (S), minutes (M), hours (H), or days (D). This parameter is only returned when an explicit start and end datetime are in the query. "Next week" would only return with "begin" and "end" parameters for the week.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "TemporalSpanResolution",
+ "duration": "PT2702H",
+ "begin": "2022-01-03 06:00:00",
+ "end": "2022-04-25 20:00:00"
+ }
+ ]
+```
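+
+A small helper sketch, assuming only the S/M/H/D designators described above appear in the value, can turn the `duration` string into a `timedelta`:
+
+```python
+import re
+from datetime import timedelta
+
+def parse_duration(value: str) -> timedelta:
+    """Parse durations such as "PT2702H" or "P3DT4H30M" into a timedelta."""
+    match = re.fullmatch(r"P(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?", value)
+    if not match:
+        raise ValueError(f"Unrecognized duration: {value}")
+    days, hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
+    return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)
+
+print(parse_duration("PT2702H"))  # 112 days, 14:00:00
+```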
+
+### Set
+
+A set is a recurring datetime period. Sets don't resolve to exact values, as they don't indicate an exact datetime.
+
+Examples: "every Monday at 6 PM", "every Thursday", "every weekend"
+
+For "every Monday at 6 PM", the `timex` value indicates no specified year with the starting **XXXX**, then every Monday through **WXX-1** to indicate the first day of every week, and finally **T18** to indicate 6 PM.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Set",
+ "timex": "XXXX-WXX-1T18",
+ "value": "not resolved"
+ }
+ ]
+```
+
+## Dimensions
+
+Examples: "24 km/hr", "44 square meters", "sixty six kilobytes"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "SpeedResolution",
+ "unit": "KilometersPerHour",
+ "value": 24
+ }
+ ]
+```
+
+Possible values for "resolutionKind" and their "unit" values:
+
+- **AreaResolution**:
+ - SquareKilometer
+ - SquareHectometer
+ - SquareDecameter
+ - SquareMeter
+ - SquareDecimeter
+ - SquareCentimeter
+ - SquareMillimeter
+ - SquareInch
+ - SquareFoot
+ - SquareMile
+ - SquareYard
+ - Acre
+
+- **InformationResolution**:
+ - Bit
+ - Kilobit
+ - Megabit
+ - Gigabit
+ - Terabit
+ - Petabit
+ - Byte
+ - Kilobyte
+ - Megabyte
+ - Gigabyte
+ - Terabyte
+ - Petabyte
+
+- **LengthResolution**:
+ - Kilometer
+ - Hectometer
+ - Decameter
+ - Meter
+ - Decimeter
+ - Centimeter
+ - Millimeter
+ - Micrometer
+ - Nanometer
+ - Picometer
+ - Mile
+ - Yard
+ - Inch
+ - Foot
+ - Light year
+ - Pt
+
+- **SpeedResolution**:
+ - MetersPerSecond
+ - KilometersPerHour
+ - KilometersPerMinute
+ - KilometersPerSecond
+ - MilesPerHour
+ - Knot
+ - FootPerSecond
+ - FootPerMinute
+ - YardsPerMinute
+ - YardsPerSecond
+ - MetersPerMillisecond
+ - CentimetersPerMillisecond
+ - KilometersPerMillisecond
+
+- **VolumeResolution**:
+ - CubicMeter
+ - CubicCentimeter
+ - CubicMillimiter
+ - Hectoliter
+ - Decaliter
+ - Liter
+ - Deciliter
+ - Centiliter
+ - Milliliter
+ - CubicYard
+ - CubicInch
+ - CubicFoot
+ - CubicMile
+ - FluidOunce
+ - Teaspoon
+ - Tablespoon
+ - Pint
+ - Quart
+ - Cup
+ - Gill
+ - Pinch
+ - FluidDram
+ - Barrel
+ - Minim
+ - Cord
+ - Peck
+ - Bushel
+ - Hogshead
+
+- **WeightResolution**:
+ - Kilogram
+ - Gram
+ - Milligram
+ - Microgram
+ - Gallon
+ - MetricTon
+ - Ton
+ - Pound
+ - Ounce
+ - Grain
+ - Pennyweight
+ - LongTonBritish
+ - ShortTonUS
+ - ShortHundredweightUS
+ - Stone
+ - Dram
++
+## Number
+
+Examples: "27", "one hundred and three", "38.5", "2/3", "33%"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "NumberResolution",
+ "numberKind": "Integer",
+ "value": 27
+ }
+ ]
+```
+
+Possible values for "numberKind":
+- Integer
+- Decimal
+- Fraction
+- Power
+- Percent
++
+## Ordinal
+
+Examples: "3rd", "first", "last"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "OrdinalResolution",
+ "offset": "3",
+ "relativeTo": "Start",
+ "value": "3"
+ }
+ ]
+```
+
+Possible values for "relativeTo":
+- Start
+- End
+
+## Temperature
+
+Examples: "88 deg fahrenheit", "twenty three degrees celsius"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "TemperatureResolution",
+ "unit": "Fahrenheit",
+ "value": 88
+ }
+ ]
+```
+
+Possible values for "unit":
+- Celsius
+- Fahrenheit
+- Kelvin
+- Rankine
++++
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Use this article to learn which natural languages are supported by the NER featu
## NER language support
-| Language | Language code | Starting with model version: | Notes |
-|:-|:-:|:-:|::|
-| Arabic* | `ar` | 2019-10-01 | |
-| Chinese-Simplified | `zh-hans` | 2021-01-15 | `zh` also accepted |
-| Chinese-Traditional* | `zh-hant` | 2019-10-01 | |
-| Czech* | `cs` | 2019-10-01 | |
-| Danish* | `da` | 2019-10-01 | |
-| Dutch* | `nl` | 2019-10-01 | |
-| English | `en` | 2019-10-01 | |
-| Finnish* | `fi` | 2019-10-01 | |
-| French | `fr` | 2021-01-15 | |
-| German | `de` | 2021-01-15 | |
-| Hebrew | `he` | 2022-10-01 | |
-| Hindi | `hi` | 2022-10-01 | |
-| Hungarian* | `hu` | 2019-10-01 | |
-| Italian | `it` | 2021-01-15 | |
-| Japanese | `ja` | 2021-01-15 | |
-| Korean | `ko` | 2021-01-15 | |
-| Norwegian (Bokmål)* | `no` | 2019-10-01 | `nb` also accepted |
-| Polish* | `pl` | 2019-10-01 | |
-| Portuguese (Brazil) | `pt-BR` | 2021-01-15 | |
-| Portuguese (Portugal) | `pt-PT` | 2021-01-15 | `pt` also accepted |
-| Russian* | `ru` | 2019-10-01 | |
-| Spanish | `es` | 2020-04-01 | |
-| Swedish* | `sv` | 2019-10-01 | |
-| Turkish* | `tr` | 2019-10-01 | |
+| Language | Language code | Starting with model version: | Supports entity resolution | Notes |
+|:-|:-:|:-:|:--:|::|
+| Arabic* | `ar` | 2019-10-01 | | |
+| Chinese-Simplified | `zh-hans` | 2021-01-15 | ✓ | `zh` also accepted |
+| Chinese-Traditional* | `zh-hant` | 2019-10-01 | | |
+| Czech* | `cs` | 2019-10-01 | | |
+| Danish* | `da` | 2019-10-01 | | |
+| Dutch* | `nl` | 2019-10-01 | ✓ | |
+| English | `en` | 2019-10-01 | ✓ | |
+| Finnish* | `fi` | 2019-10-01 | | |
+| French | `fr` | 2021-01-15 | ✓ | |
+| German | `de` | 2021-01-15 | ✓ | |
+| Hebrew | `he` | 2022-10-01 | | |
+| Hindi | `hi` | 2022-10-01 | ✓ | |
+| Hungarian* | `hu` | 2019-10-01 | | |
+| Italian | `it` | 2021-01-15 | ✓ | |
+| Japanese | `ja` | 2021-01-15 | ✓ | |
+| Korean | `ko` | 2021-01-15 | | |
+| Norwegian (Bokmål)* | `no` | 2019-10-01 | | `nb` also accepted |
+| Polish* | `pl` | 2019-10-01 | | |
+| Portuguese (Brazil) | `pt-BR` | 2021-01-15 | ✓ | |
+| Portuguese (Portugal) | `pt-PT` | 2021-01-15 | | `pt` also accepted |
+| Russian* | `ru` | 2019-10-01 | | |
+| Spanish | `es` | 2020-04-01 | ✓ | |
+| Swedish* | `sv` | 2019-10-01 | | |
+| Turkish* | `tr` | 2019-10-01 | ✓ | |
## Next steps
-[PII feature overview](overview.md)
+[NER feature overview](overview.md)
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/deploy-model.md
Previously updated : 05/20/2022 Last updated : 10/12/2022
This can be used to swap your `production` and `staging` deployments when you wa
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++ ## Next steps Use [prediction API to query your model](call-api.md)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/view-model-evaluation.md
Previously updated : 04/26/2022 Last updated : 10/12/2022
In the **view model details** page, you'll be able to see all your models, with
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++ ## Delete model ### [Language studio](#tab/Language-studio)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/service-limits.md
Previously updated : 05/18/2022 Last updated : 10/12/2022
Use this article to learn about the data and service limits when using orchestra
## Language resource limits
-* Your Language resource has to be created in one of the [supported regions](#regional-support).
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
* Pricing tiers
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
* Project names have to be unique within the same resource across all custom features.
-## Regional support
-
-Orchestration workflow is only available in some Azure regions. To use orchestration workflow, you must choose a Language resource in one of following regions:
-
-* West US 2
-* East US
-* East US 2
-* West US 3
-* South Central US
-* West Europe
-* North Europe
-* UK south
-* Australia East
+## Regional availability
+
+Orchestration workflow is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
## API limits
The following limits are observed for orchestration workflow.
| Attribute | Limits | |--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` ,symbols `_ . -`,with no spaces. Maximum allowed length is 50 characters. |
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. | | Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. | | Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* [Sentiment analysis](./sentiment-opinion-mining/language-support.md) * [Key phrase extraction](./key-phrase-extraction/language-support.md) * [Named entity recognition](./key-phrase-extraction/language-support.md)
+* [Multi-region deployment](./concepts/custom-features/multi-region-deployment.md) and [project asset versioning](./concepts/custom-features/project-versioning.md) for:
+ * [Conversational language understanding](./conversational-language-understanding/overview.md)
+ * [Orchestration workflow](./orchestration-workflow/overview.md)
+ * [Custom text classification](./custom-text-classification/overview.md)
+ * [Custom named entity recognition](./custom-named-entity-recognition/overview.md).
+* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an additional ability to influence entity predictions.
+* [Entity resolution](./named-entity-recognition/concepts/entity-resolutions.md) in named entity recognition
## September 2022
communication-services Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights.md
# Communications Services Insights Preview ## Overview
-Within your Communications Resource, we have provided an **Insights Preview** feature that displays a number of data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights is made possible via [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](enable-logging.md), and to enable Workbooks, you will need to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination.
+Within your Communications Resource, we have provided an **Insights Preview** feature that displays a number of data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights are made possible via [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](enable-logging.md), and to enable Workbooks, you will need to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination.
:::image type="content" source="media\workbooks\insights-overview-2.png" alt-text="Communication Services Insights":::
The **SMS** tab displays the operations and results for SMS usage through an Azu
:::image type="content" source="media\workbooks\sms.png" alt-text="SMS tab":::
+The **Email** tab displays delivery status, email size, and email count:
+[Screenshot displays email count, size and email delivery status level that illustrate email insights]
+ ## Editing dashboards The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
Editing these dashboards does not modify the **Insights** tab, but rather create
:::image type="content" source="media\workbooks\workbooks-tab.png" alt-text="Workbooks tab":::
-For an in-depth description of workbooks, please refer to the [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md) documentation.
+For an in-depth description of workbooks, please refer to the [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Azure Ad Api Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/azure-ad-api-permissions.md
# Azure AD permissions for communication as Teams user
-In this article, you will learn about Azure AD permissions available for communication as a Teams user in Azure Communication Services.
+In this article, you will learn about Azure AD permissions available for communication as a Teams user in Azure Communication Services. The Azure AD application for Azure Communication Services provides delegated permissions for chat and calling. Both permissions are required to exchange an Azure AD access token for a Communication Services access token for Teams users.
## Delegated permissions
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Call recording currently supports mixed audio+video MP4 and mixed audio MP3/WAV
| Channel type | Content format | Output | Scenario | Release Stage | ||--|||-|
-| Mixed audio+video | Mp4 | Single file, single channel | Keeping records and meeting notes Coaching and Training | Public Preview |
-| Mixed audio | Mp3 (lossy)/ wav (lossless) | Single file, single channel | Compliance & Adherence Coaching and Training | Public Preview |
-| **Unmixed audio** | wav | Single file, up to 5 wav channels | Quality Assurance Analytics | **Private Preview** |
+| Mixed audio+video | Mp4 | Single file, single channel | keeping records and meeting notes, coaching and training | Public Preview |
+| Mixed audio | Mp3 (lossy)/ wav (lossless) | Single file, single channel | compliance & adherence, coaching and training | Public Preview |
+| **Unmixed audio** | wav | Single file, up to 5 wav channels | quality assurance, advanced analytics | **Private Preview** |
## Run-time Control APIs
-Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation. Also, recordings can be triggered by a user action that tells the server application to start recording. Call Recording APIs are [Out-of-Call APIs](./call-automation-apis.md#out-of-call-apis), using the `serverCallId` to initiate recording. Once a call is created, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. See our [Call Recording Quickstart Sample](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn about retrieving the `serverCallId` from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
+Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation. Also, recordings can be triggered by a user action that tells the server application to start recording. Call Recording APIs use the `serverCallId` to initiate recording. Once a call is created, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. Learn how to [Get `serverCallId`](../../quickstarts/voice-video-calling/get-server-call-id.md) from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
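The following is an illustrative sketch only (the interface and function names here are assumptions, not part of any SDK), showing where the `serverCallId` appears in the event payload:

```typescript
// Illustrative sketch only: the interface and function names are assumptions.
// The serverCallId arrives in the data.serverCallId field of the
// Microsoft.Communication.CallLegStateChanged Event Grid event.
interface CallLegStateChangedEvent {
  eventType: string;              // "Microsoft.Communication.CallLegStateChanged"
  data: { serverCallId: string };
}

function getServerCallId(event: CallLegStateChangedEvent): string {
  // Use this value with the run-time control APIs to start a recording
  return event.data.serverCallId;
}
```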
| Operation | Operates On | Comments | | :-- | : | :-- |
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
```typescript { "id": string, // Unique guid for event
- "topic": string, // Azure Communication Services resource id
- "subject": string, // /recording/call/{call-id}
+ "topic": string, // /subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}
+ "subject": string, // /recording/call/{call-id}/serverCallId/{serverCallId}
"data": { "recordingStorageInfo": { "recordingChunks": [
container-apps Communicate Between Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/communicate-between-microservices.md
Output from the `az acr build` command shows the upload progress of the source c
# [Bash](#tab/bash) ```azurecli
- docker push $ACR_NAME.azurecr.io/albumapp-ui
+
+ docker push "$ACR_NAME.azurecr.io/albumapp-ui"
``` # [PowerShell](#tab/powershell) ```powershell
- docker push $ACR_NAME.azurecr.io/albumapp-ui
+
+ docker push "$ACR_NAME.azurecr.io/albumapp-ui"
```
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
There's no forced tunneling in Container Apps routes.
## Managed resources
-When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you will be billed for the following:
-- Three standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/) if using an internal environment, or four standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/) if using an external environment.
+When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you are billed for the following:
+
+- Two standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/), one for ingress and one for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
+ - Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
container-registry Container Registry Enable Conditional Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-enable-conditional-access-policy.md
+
+ Title: Configure conditional access to your Azure Container Registry
+description: Learn how to configure conditional access to your registry by using Azure CLI and Azure portal.
+ Last updated : 09/13/2021++++
+# Azure Container Registry (ACR) introduces the Conditional Access policy
+
+Azure Container Registry (ACR) gives you the option to create and configure the *Conditional Access policy*.
+
+The [Conditional Access policy](/azure/active-directory/conditional-access/overview) is designed to enforce strong authentication. The authentication is based on location, trusted and compliant devices, user-assigned roles, authorization method, and client applications. The policy helps the organization meet its compliance requirements and keep data and user accounts safe.
+
+Learn more about the [Conditional Access policy](/azure/active-directory/conditional-access/overview) and the [conditions](/azure/active-directory/conditional-access/overview#common-signals) to take into consideration when making [policy decisions](/azure/active-directory/conditional-access/overview#common-decisions).
+
+The Conditional Access policy applies after the first-factor authentication to the Azure Container Registry is complete. Conditional Access for ACR covers user authentication only. The policy lets you choose the controls that block or grant access based on the policy decisions.
+
+The following steps will help create a Conditional Access policy for Azure Container Registry (ACR).
+
+1. Disable authentication-as-arm in ACR - Azure CLI.
+2. Disable authentication-as-arm in the ACR - Azure portal.
+3. Create and configure Conditional Access policy for Azure Container Registry.
+
+## Prerequisites
+
+>* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) version 2.40.0 or later. To find the version, run `az --version`.
+>* Sign in to the [Azure portal](https://portal.azure.com).
+
+## Disable authentication-as-arm in ACR - Azure CLI
+
+Disabling `azureADAuthenticationAsArmPolicy` forces the registry to use the ACR audience token. Use Azure CLI version 2.40.0 or later; run `az --version` to find your version.
+
+1. Run the command to show the current configuration of the registry's policy for authentication using ARM tokens. If the status is `enabled`, both ACR and ARM audience tokens can be used for authentication. If the status is `disabled`, only ACR audience tokens can be used for authentication.
+
+ ```azurecli-interactive
+ az acr config authentication-as-arm show -r <registry>
+ ```
+
+1. Run the command to update the status of the registry's policy.
+
+ ```azurecli-interactive
+ az acr config authentication-as-arm update -r <registry> --status [enabled/disabled]
+ ```
+
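For example, a minimal sketch that checks the current status and then disables ARM audience tokens for a hypothetical registry named `myregistry`:

```azurecli-interactive
# "myregistry" is a hypothetical registry name used for illustration
az acr config authentication-as-arm show -r myregistry

# Disable ARM audience tokens so only ACR audience tokens are accepted
az acr config authentication-as-arm update -r myregistry --status disabled
```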
+## Disable authentication-as-arm in the ACR - Azure portal
+
+Disabling the `authentication-as-arm` property by assigning a built-in policy automatically disables it for current and future registries created within the policy scope. The possible policy scopes are resource group scope and subscription scope within the tenant.
+
+You can disable authentication-as-arm in ACR by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Refer to the ACR built-in policy definitions in the [azure-container-registry-built-in-policy definitions](policy-reference.md).
+3. Assign a built-in policy to disable authentication-as-arm definition - Azure portal.
+
+### Assign a built-in policy definition to disable ARM audience token authentication - Azure portal
+
+You can enable registry's Conditional Access policy in the [Azure portal](https://portal.azure.com).
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your **Azure Container Registry** > **Resource Group** > **Settings** > **Policies**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/01-azure-policies.png" alt-text="Screenshot showing how to navigate Azure policies.":::
+
+1. Navigate to **Azure Policy**. On the **Assignments** page, select **Assign policy**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/02-Assign-policy.png" alt-text="Screenshot showing how to assign a policy.":::
+
+1. Under **Assign policy**, use filters to search for and find the **Scope**, **Policy definition**, and **Assignment name**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/03-Assign-policy-tab.png" alt-text="Screenshot of the assign policy tab.":::
+
+1. Select **Scope** to filter and search for the **Subscription** and **ResourceGroup**, then choose **Select**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/04-select-scope.png" alt-text="Screenshot of the Scope tab.":::
+
+1. Select **Policy definition** to filter and search the built-in policy definitions for the Conditional Access policy.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/05-built-in-policy-definitions.png" alt-text="Screenshot of built-in-policy-definitions.":::
+
+Azure Container Registry has two built-in policy definitions to disable authentication-as-arm:
+
+>* `Container registries should have ARM audience token authentication disabled.` - This policy reports and blocks any non-compliant resources, and also sends a request to update them to a compliant state.
+>* `Configure container registries to disable ARM audience token authentication.` - This policy offers remediation and updates non-compliant resources to a compliant state.
+
+1. Use filters to select and confirm **Scope**, **Policy definition**, and **Assignment name**.
+
+1. Use the filters to limit compliance states or to search for policies.
+
+1. Confirm your settings and set policy enforcement as **enabled**.
+
+1. Select **Review+Create**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/06-enable-policy.png" alt-text="Screenshot showing how to activate a Conditional Access policy.":::
++
+## Create and configure a Conditional Access policy - Azure portal
+
+ACR supports Conditional Access policies for Active Directory users only; it currently doesn't support Conditional Access policies for service principals. To configure a Conditional Access policy for the registry, you must disable `authentication-as-arm` for all the registries within the desired tenant. In this tutorial, we'll create a basic Conditional Access policy for the Azure Container Registry from the Azure portal.
+
+Create a Conditional Access policy and assign your test group of users as follows:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) by using an account with *global administrator* permissions.
+
+1. Search for and select **Azure Active Directory**. Then select **Security** from the menu on the left-hand side.
+
+1. Select **Conditional Access**, select **+ New policy**, and then select **Create new policy**.
+
+ :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select 'New policy' and then select 'Create new policy'." source="media/container-registry-enable-conditional-policy/01-create-conditional-access.png":::
+
+1. Enter a name for the policy, such as *demo*.
+
+1. Under **Assignments**, select the current value under **Users or workload identities**.
+
+ :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select the current value under 'Users or workload identities'." source="media/container-registry-enable-conditional-policy/02-conditional-access-users-and-groups.png":::
+
+1. Under **What does this policy apply to?**, verify and select **Users and groups**.
+
+1. Under **Include**, choose **Select users and groups**, and then select **All users**.
+
+ :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify users." source="media/container-registry-enable-conditional-policy/03-conditional-access-users-groups-select-users.png":::
+
+1. Under **Exclude**, choose **Select users and groups** to exclude the users or groups of your choice.
+
+1. Under **Cloud apps or actions**, choose **Cloud apps**.
+
+1. Under **Include**, choose **Select apps**.
+
+ :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify cloud apps." source="media/container-registry-enable-conditional-policy/04-select-cloud-apps-select-apps.png":::
+
+1. Browse for and select apps to apply Conditional Access, in this case *Azure Container Registry*, then choose **Select**.
+
+ :::image type="content" alt-text="A screenshot of the list of apps, with results filtered, and 'Azure Container Registry' selected." source="media/container-registry-enable-conditional-policy/05-select-azure-container-registry-app.png":::
+
+1. Under **Conditions**, configure the access control level with options such as *User risk level*, *Sign-in risk level*, *Sign-in risk detections (Preview)*, *Device platforms*, *Locations*, *Client apps*, *Time (Preview)*, and *Filter for devices*.
+
+1. Under **Grant**, filter and choose from options to grant or block access during a sign-in event to the Azure portal. In this case, grant access with *Require multifactor authentication*, then choose **Select**.
+
+ >[!TIP]
+ > To configure and grant multi-factor authentication, see [configure and conditions for multi-factor authentication.](/azure/active-directory/authentication/tutorial-enable-azure-mfa#configure-the-conditions-for-multi-factor-authentication)
+
+1. Under **Session**, filter and choose from options to enable any control on session level experience of the cloud apps.
+
+1. After selecting and confirming, under **Enable policy**, select **On**.
+
+1. To apply and activate the policy, select **Create**.
+
+ :::image type="content" alt-text="A screenshot showing how to activate the Conditional Access policy." source="media/container-registry-enable-conditional-policy/06-enable-conditional-access-policy.png":::
+
+You have now created the Conditional Access policy for the Azure Container Registry.
+
+## Next steps
+
+* Learn more about [Azure Policy definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md).
+* Learn more about [common access concerns that Conditional Access policies can help with](/azure/active-directory/conditional-access/concept-conditional-access-policy-common).
+* Learn more about [Conditional Access policy components](/azure/active-directory/conditional-access/concept-conditional-access-policies).
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views.md
You'll be able to add a column to the base table, but you won't be able to remov
### Can we create MV on existing base table?
-No. Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. Create new table after account is onboarded on which materialized views can be defined. MV on existing table is planned for the future.
+No. Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. You would need to create a new table with materialized views defined and move the existing data using [container copy jobs](../intra-account-container-copy.md). Support for materialized views on existing tables is planned for the future.
### What are the conditions on which records won't make it to MV and how to identify such records?
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
For more information, see [Azure Cosmos DB emulator](./local-emulator.md).
* Support for automation platforms (Azure PowerShell, Azure CLI) is planned and not yet available. * In the Data Explorer in the portal, you currently can't view documents in a container with hierarchical partition keys. You can read or edit these documents with the supported .NET v3 or Java v4 SDK version\[s\]. * You can only specify hierarchical partition keys up to three layers in depth.
-* Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later.
+* Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later. To use hierarchical partitions on existing containers, you should create a new container with the hierarchical partition keys set and move the data using [container copy jobs](intra-account-container-copy.md).
* Hierarchical partition keys are currently supported only for API for NoSQL accounts (API for MongoDB and Cassandra aren't currently supported). ## Next steps
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/import-data.md
The Data Migration tool is an open-source solution that imports data to Azure Co
While the import tool includes a graphical user interface (dtui.exe), it can also be driven from the command-line (dt.exe). In fact, there's an option to output the associated command after setting up an import through the UI. You can transform tabular source data, such as SQL Server or CSV files, to create hierarchical relationships (subdocuments) during import. Keep reading to learn more about source options, sample commands to import from each source, target options, and viewing import results. > [!NOTE]
+> We recommend using [container copy jobs](intra-account-container-copy.md) for copying data within the same Azure Cosmos DB account.
+>
> You should only use the Azure Cosmos DB migration tool for small migrations. For large migrations, view our [guide for ingesting data](migration-choices.md). ## <a id="Install"></a>Installation
cosmos-db Migration Choices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migration-choices.md
Last updated 04/02/2022
You can load data from various data sources to Azure Cosmos DB. Since Azure Cosmos DB supports multiple APIs, the targets can be any of the existing APIs. The following are some scenarios where you migrate data to Azure Cosmos DB:
-* Move data from one Azure Cosmos DB container to another container in the same database or a different databases.
-* Moving data between dedicated containers to shared database containers.
-* Move data from an Azure Cosmos DB account located in region1 to another Azure Cosmos DB account in the same or a different region.
+* Move data from one Azure Cosmos DB container to another container within the same Azure Cosmos DB account (in the same database or a different database).
+* Move data from one Azure Cosmos DB account to another Azure Cosmos DB account (in the same region or a different region, and in the same subscription or a different one).
* Move data from a source such as Azure blob storage, a JSON file, Oracle database, Couchbase, DynamoDB to Azure Cosmos DB. In order to support migration paths from the various sources to the different Azure Cosmos DB APIs, there are multiple solutions that provide specialized handling for each migration path. This document lists the available solutions and describes their advantages and limitations.
If you need help with capacity planning, consider reading our [guide to estimati
|Migration type|Solution|Supported sources|Supported targets|Considerations| ||||||
+|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for NoSQL|Azure Cosmos DB for NoSQL|&bull; CLI-based; No set up needed. <br/>&bull; Supports large datasets.|
|Offline|[Data Migration Tool](import-data.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;AWS DynamoDB<br/>&bull;Azure Blob Storage|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB Tables API<br/>&bull;JSON Files |&bull; Easy to set up and supports multiple sources. <br/>&bull; Not suitable for large datasets.| |Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;MongoDB <br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |&bull; Easy to set up and supports multiple sources.<br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>&bull; Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.| |Offline|[Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)|Azure Cosmos DB for NoSQL. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
If you need help with capacity planning, consider reading our [guide to estimati
|Migration type|Solution|Supported sources|Supported targets|Considerations| ||||||
+|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB API for Cassandra | Azure Cosmos DB API for Cassandra| &bull; CLI-based; No set up needed. <br/>&bull; Supports large datasets.|
|Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB API for Cassandra| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.| |Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/> | Azure Cosmos DB API for Cassandra | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.| |Online|[Dual-write proxy + Spark](cassandr)| &bull;Apache Cassandra<br/>|&bull;Azure Cosmos DB API for Cassandra <br/>| &bull; Supports larger datasets, but careful attention required for setup and validation. <br/>&bull; Open-source tools, no purchase required.|
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
Title: Change log for Azure CosmosDB API for MongoDB
+ Title: Change log for Azure Cosmos DB API for MongoDB
description: Notifies our customers of any minor/medium updates that were pushed--- Previously updated : 06/22/2022 ++++ Last updated : 10/12/2022+ # Change log for Azure Cosmos DB for MongoDB+ The Change log for the API for MongoDB is meant to inform you about our feature updates. This document covers more granular updates and complements [Azure Updates](https://azure.microsoft.com/updates/).
-## Azure Cosmos DB's API for MongoDB updates
+## Azure Cosmos DB for MongoDB updates
+
+### Role-based access control (RBAC) (GA)
+
+Azure Cosmos DB for MongoDB now offers built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using RBAC gives you more options for control, security, and auditability of your database account data.
+
+[Learn more](./how-to-setup-rbac.md)
+
+### 16-MB limit per document in Cosmos DB for MongoDB (GA)
+
+The 16-MB document limit in Azure Cosmos DB for MongoDB provides developers the flexibility to store more data per document. This ease-of-use feature will speed up your development process and provide you with more flexibility in certain new application and migration cases.
+
+[Learn more](./feature-support-42.md#data-types)
### Azure Data Studio MongoDB extension for Azure Cosmos DB (Preview)
-You can now use the free and lightweight tool feature to manage and query your MongoDB resources using mongo shell. Azure Data Studio MongoDB extension for Azure Cosmos DB allows you to manage multiple accounts all in one view by
-1. Connecting your Mongo resources
-2. Configuring the database settings
-3. Performing create, read, update, and delete (CRUD) across Windows, macOS, and Linux.
+
+You can now use the free and lightweight tool feature to manage and query your MongoDB resources using mongo shell. Azure Data Studio MongoDB extension for Azure Cosmos DB allows you to manage multiple accounts all in one view by:
+
+1. Connecting your Mongo resources
+1. Configuring the database settings
+1. Performing create, read, update, and delete (CRUD) across Windows, macOS, and Linux
[Learn more](https://aka.ms/cosmosdb-ads)
+### Linux emulator with Azure Cosmos DB for MongoDB
-### Linux emulator with Azure Cosmos DB for MongoDB
-The Azure Cosmos DB Linux emulator with API for MongoDB support provides a local environment that emulates the Azure Cosmos DB service for development purposes on Linux and macOS. Using the emulator, you can develop and test your MongoDB applications locally, without creating an Azure subscription or incurring any costs.
+The Azure Cosmos DB Linux emulator with API for MongoDB support provides a local environment that emulates the Azure Cosmos DB service for development purposes on Linux and macOS. Using the emulator, you can develop and test your MongoDB applications locally, without creating an Azure subscription or incurring any costs.
[Learn more](https://aka.ms/linux-emulator-mongo)
+### 16-MB limit per document in Azure Cosmos DB for MongoDB (Preview)
-### 16-MB limit per document in API for MongoDB (Preview)
-The 16-MB document limit in the Azure Cosmos DB for MongoDB provides developers the flexibility to store more data per document. This ease-of-use feature will speed up your development process in these cases.
+The 16-MB document limit in the Azure Cosmos DB for MongoDB provides developers the flexibility to store more data per document. This ease-of-use feature will speed up your development process in these cases.
[Learn more](./introduction.md) - ### Azure Cosmos DB for MongoDB data plane Role-Based Access Control (RBAC) (Preview)
-The API for MongoDB now offers a built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using this role-based access control (RBAC) allows you access with more options for control, security, and auditability of your database account data.
+
+Azure Cosmos DB for MongoDB now offers built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using RBAC gives you more options for control, security, and auditability of your database account data.
[Learn more](./how-to-setup-rbac.md)
The Azure Cosmos DB for MongoDB version 4.2 includes new aggregation functionali
[Learn more](./feature-support-42.md) ### Support $expr in Mongo 3.6+
-`$expr` allows the use of [aggregation expressions](https://www.mongodb.com/docs/manual/meta/aggregation-quick-reference/#std-label-aggregation-expressions) within the query language.
+
+`$expr` allows the use of [aggregation expressions](https://www.mongodb.com/docs/manual/meta/aggregation-quick-reference/#std-label-aggregation-expressions) within the query language.
`$expr` can build query expressions that compare fields from the same document in a `$match` stage. [Learn more](https://www.mongodb.com/docs/manual/reference/operator/query/expr/)
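For example, a minimal sketch (the `sales` collection and its fields are hypothetical) that matches documents where one field exceeds another field of the same document:

```javascript
// Hypothetical "sales" collection: return documents where spend exceeded budget
db.sales.find( { $expr: { $gt: [ "$spent", "$budget" ] } } )
```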
+### Role-Based Access Control for $merge stage
-### Role-Based Access Control for $merge stage
-* Added Role-Based Access Control(RBAC) for `$merge` stage.
-* `$merge` writes the results of aggregation pipeline to specified collection. The `$merge` operator must be the last stage in the pipeline
+- Added Role-Based Access Control (RBAC) for the `$merge` stage.
+- `$merge` writes the results of the aggregation pipeline to the specified collection. The `$merge` operator must be the last stage in the pipeline, as shown in the sketch below.
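A minimal sketch, assuming hypothetical `salaries` and `budgets` collections:

```javascript
// Aggregate salaries per department and write the output to the "budgets" collection.
// $merge must be the last stage in the pipeline.
db.salaries.aggregate( [
  { $group: { _id: "$department", totalSalary: { $sum: "$salary" } } },
  { $merge: { into: "budgets" } }
] )
```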
[Learn more](https://www.mongodb.com/docs/manual/reference/operator/aggregation/merge/) - ## Next steps - Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB. - Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB. - Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Feature Support 32 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-32.md
Title: Azure Cosmos DB's API for MongoDB (3.2 version) supported features and syntax
-description: Learn about Azure Cosmos DB's API for MongoDB (3.2 version) supported features and syntax.
+ Title: Azure Cosmos DB for MongoDB (3.2 version) supported features and syntax
+description: Learn about Azure Cosmos DB for MongoDB (3.2 version) supported features and syntax.
+++ -- Previously updated : 10/16/2019--+ Last updated : 10/12/2022
-# Azure Cosmos DB's API for MongoDB (3.2 version): supported features and syntax
+# Azure Cosmos DB for MongoDB (3.2 version): supported features and syntax
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, automatic indexing of every field, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, automatic indexing of every field, encryption at rest, backups, and much more.
> [!NOTE] > Version 3.2 of the Azure Cosmos DB for MongoDB has no current plans for end-of-life (EOL). The minimum notice for a future EOL is three years. ## Protocol Support
-All new accounts for Azure Cosmos DB's API for MongoDB are compatible with MongoDB server version **3.6**. This article covers MongoDB version 3.2. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB.
+All new accounts for Azure Cosmos DB for MongoDB are compatible with MongoDB server version **3.6**. This article covers MongoDB version 3.2. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB.
-Azure Cosmos DB's API for MongoDB also offers a seamless upgrade experience for qualifying accounts. Learn more on the [MongoDB version upgrade guide](upgrade-version.md).
+Azure Cosmos DB for MongoDB also offers a seamless upgrade experience for qualifying accounts. Learn more on the [MongoDB version upgrade guide](upgrade-version.md).
## Query language support
-Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
+Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
## Database commands
-Azure Cosmos DB's API for MongoDB supports the following database commands:
+Azure Cosmos DB for MongoDB supports the following database commands:
> [!NOTE]
-> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB.
+> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB for MongoDB.
### Query and write operation commands -- delete-- find-- findAndModify-- getLastError-- getMore-- insert-- update
+- `delete`
+- `find`
+- `findAndModify`
+- `getLastError`
+- `getMore`
+- `insert`
+- `update`
### Authentication commands -- logout-- authenticate-- getnonce
+- `logout`
+- `authenticate`
+- `getnonce`
### Administration commands -- dropDatabase-- listCollections-- drop-- create-- filemd5-- createIndexes-- listIndexes-- dropIndexes-- connectionStatus-- reIndex
+- `dropDatabase`
+- `listCollections`
+- `drop`
+- `create`
+- `filemd5`
+- `createIndexes`
+- `listIndexes`
+- `dropIndexes`
+- `connectionStatus`
+- `reIndex`
### Diagnostics commands -- buildInfo-- collStats-- dbStats-- hostInfo-- listDatabases-- whatsmyuri
+- `buildInfo`
+- `collStats`
+- `dbStats`
+- `hostInfo`
+- `listDatabases`
+- `whatsmyuri`
<a name="aggregation-pipeline"></a>
Azure Cosmos DB's API for MongoDB supports the following database commands:
### Aggregation commands -- aggregate-- count-- distinct
+- `aggregate`
+- `count`
+- `distinct`
### Aggregation stages -- $project-- $match-- $limit-- $skip-- $unwind-- $group-- $sample-- $sort-- $lookup-- $out-- $count-- $addFields
+- `$project`
+- `$match`
+- `$limit`
+- `$skip`
+- `$unwind`
+- `$group`
+- `$sample`
+- `$sort`
+- `$lookup`
+- `$out`
+- `$count`
+- `$addFields`
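A short illustrative sketch combining several of the supported stages (the `volcanoes` collection and its fields are hypothetical):

```javascript
// Group volcanoes by country, average their elevation, and return the top five
db.volcanoes.aggregate( [
  { $match: { Status: "Historical" } },
  { $group: { _id: "$Country", avgElevation: { $avg: "$Elevation" } } },
  { $sort: { avgElevation: -1 } },
  { $limit: 5 }
] )
```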
### Aggregation expressions #### Boolean expressions -- $and-- $or-- $not
+- `$and`
+- `$or`
+- `$not`
#### Set expressions -- $setEquals-- $setIntersection-- $setUnion-- $setDifference-- $setIsSubset-- $anyElementTrue-- $allElementsTrue
+- `$setEquals`
+- `$setIntersection`
+- `$setUnion`
+- `$setDifference`
+- `$setIsSubset`
+- `$anyElementTrue`
+- `$allElementsTrue`
#### Comparison expressions -- $cmp-- $eq-- $gt-- $gte-- $lt-- $lte-- $ne
+- `$cmp`
+- `$eq`
+- `$gt`
+- `$gte`
+- `$lt`
+- `$lte`
+- `$ne`
#### Arithmetic expressions -- $abs-- $add-- $ceil-- $divide-- $exp-- $floor-- $ln-- $log-- $log10-- $mod-- $multiply-- $pow-- $sqrt-- $subtract-- $trunc
+- `$abs`
+- `$add`
+- `$ceil`
+- `$divide`
+- `$exp`
+- `$floor`
+- `$ln`
+- `$log`
+- `$log10`
+- `$mod`
+- `$multiply`
+- `$pow`
+- `$sqrt`
+- `$subtract`
+- `$trunc`
#### String expressions -- $concat-- $indexOfBytes-- $indexOfCP-- $split-- $strLenBytes-- $strLenCP-- $strcasecmp-- $substr-- $substrBytes-- $substrCP-- $toLower-- $toUpper
+- `$concat`
+- `$indexOfBytes`
+- `$indexOfCP`
+- `$split`
+- `$strLenBytes`
+- `$strLenCP`
+- `$strcasecmp`
+- `$substr`
+- `$substrBytes`
+- `$substrCP`
+- `$toLower`
+- `$toUpper`
#### Array expressions -- $arrayElemAt-- $concatArrays-- $filter-- $indexOfArray-- $isArray-- $range-- $reverseArray-- $size-- $slice-- $in
+- `$arrayElemAt`
+- `$concatArrays`
+- `$filter`
+- `$indexOfArray`
+- `$isArray`
+- `$range`
+- `$reverseArray`
+- `$size`
+- `$slice`
+- `$in`
#### Date expressions -- $dayOfYear-- $dayOfMonth-- $dayOfWeek-- $year-- $month-- $week-- $hour-- $minute-- $second-- $millisecond-- $isoDayOfWeek-- $isoWeek
+- `$dayOfYear`
+- `$dayOfMonth`
+- `$dayOfWeek`
+- `$year`
+- `$month`
+- `$week`
+- `$hour`
+- `$minute`
+- `$second`
+- `$millisecond`
+- `$isoDayOfWeek`
+- `$isoWeek`
#### Conditional expressions -- $cond-- $ifNull
+- `$cond`
+- `$ifNull`
## Aggregation accumulators -- $sum-- $avg-- $first-- $last-- $max-- $min-- $push-- $addToSet
+- `$sum`
+- `$avg`
+- `$first`
+- `$last`
+- `$max`
+- `$min`
+- `$push`
+- `$addToSet`
## Operators
Following operators are supported with corresponding examples of their use. Cons
| Operator | Example | | | |
-| $eq | `{ "Volcano Name": { $eq: "Rainier" } }` |
-| $gt | `{ "Elevation": { $gt: 4000 } }` |
-| $gte | `{ "Elevation": { $gte: 4392 } }` |
-| $lt | `{ "Elevation": { $lt: 5000 } }` |
-| $lte | `{ "Elevation": { $lte: 5000 } }` |
-| $ne | `{ "Elevation": { $ne: 1 } }` |
-| $in | `{ "Volcano Name": { $in: ["St. Helens", "Rainier", "Glacier Peak"] } }` |
-| $nin | `{ "Volcano Name": { $nin: ["Lassen Peak", "Hood", "Baker"] } }` |
-| $or | `{ $or: [ { Elevation: { $lt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
-| $and | `{ $and: [ { Elevation: { $gt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
-| $not | `{ "Elevation": { $not: { $gt: 5000 } } }`|
-| $nor | `{ $nor: [ { "Elevation": { $lt: 4000 } }, { "Volcano Name": "Baker" } ] }` |
-| $exists | `{ "Status": { $exists: true } }`|
-| $type | `{ "Status": { $type: "string" } }`|
-| $mod | `{ "Elevation": { $mod: [ 4, 0 ] } }` |
-| $regex | `{ "Volcano Name": { $regex: "^Rain"} }`|
+| `eq` | `{ "Volcano Name": { $eq: "Rainier" } }` |
+| `gt` | `{ "Elevation": { $gt: 4000 } }` |
+| `gte` | `{ "Elevation": { $gte: 4392 } }` |
+| `lt` | `{ "Elevation": { $lt: 5000 } }` |
+| `lte` | `{ "Elevation": { $lte: 5000 } }` |
+| `ne` | `{ "Elevation": { $ne: 1 } }` |
+| `in` | `{ "Volcano Name": { $in: ["St. Helens", "Rainier", "Glacier Peak"] } }` |
+| `nin` | `{ "Volcano Name": { $nin: ["Lassen Peak", "Hood", "Baker"] } }` |
+| `or` | `{ $or: [ { Elevation: { $lt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
+| `and` | `{ $and: [ { Elevation: { $gt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
+| `not` | `{ "Elevation": { $not: { $gt: 5000 } } }`|
+| `nor` | `{ $nor: [ { "Elevation": { $lt: 4000 } }, { "Volcano Name": "Baker" } ] }` |
+| `exists` | `{ "Status": { $exists: true } }`|
+| `type` | `{ "Status": { $type: "string" } }`|
+| `mod` | `{ "Elevation": { $mod: [ 4, 0 ] } }` |
+| `regex` | `{ "Volcano Name": { $regex: "^Rain"} }`|
### Notes In $regex queries, Left-anchored expressions allow index search. However, using 'i' modifier (case-insensitivity) and 'm' modifier (multiline) causes the collection scan in all expressions.
-When there's a need to include '$' or '|', it is best to create two (or more) regex queries.
-For example, given the following original query: ```find({x:{$regex: /^abc$/})```, it has to be modified as follows:
-```find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})```.
+When there's a need to include '$' or '|', it's best to create two (or more) regex queries.
+For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
+`find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})`.
The first part will use the index to restrict the search to those documents beginning with ^abc and the second part will match the exact entries.
-The bar operator '|' acts as an "or" function - the query ```find({x:{$regex: /^abc|^def/})``` matches the documents in which field 'x' has values that begin with "abc" or "def". To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: ```find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })```.
+The bar operator '|' acts as an "or" function - the query `find({x:{$regex: /^abc|^def/})` matches the documents in which field 'x' has values that begin with "abc" or "def". To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: `find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })`.
### Update operators #### Field update operators -- $inc-- $mul-- $rename-- $setOnInsert-- $set-- $unset-- $min-- $max-- $currentDate
+- `$inc`
+- `$mul`
+- `$rename`
+- `$setOnInsert`
+- `$set`
+- `$unset`
+- `$min`
+- `$max`
+- `$currentDate`
#### Array update operators -- $addToSet-- $pop-- $pullAll-- $pull (Note: $pull with condition is not supported)-- $pushAll-- $push-- $each-- $slice-- $sort-- $position
+- `$addToSet`
+- `$pop`
+- `$pullAll`
+- `$pull` (Note: $pull with condition isn't supported)
+- `$pushAll`
+- `$push`
+- `$each`
+- `$slice`
+- `$sort`
+- `$position`
#### Bitwise update operator -- $bit
+- `$bit`
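A short illustrative sketch combining a few of the supported field update operators (the `volcanoes` collection and its fields are hypothetical):

```javascript
// Update one document: set a field, increment a counter, and stamp the change time
db.volcanoes.updateOne(
  { "Volcano Name": "Rainier" },
  {
    $set: { Status: "Historical" },
    $inc: { EruptionCount: 1 },
    $currentDate: { lastModified: true }
  }
)
```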
### Geospatial operators
-Operator | Example | Supported |
- | | |
-$geoWithin | ```{ "Location.coordinates": { $geoWithin: { $centerSphere: [ [ -121, 46 ], 5 ] } } }``` | Yes |
-$geoIntersects | ```{ "Location.coordinates": { $geoIntersects: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }``` | Yes |
-$near | ```{ "Location.coordinates": { $near: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }``` | Yes |
-$nearSphere | ```{ "Location.coordinates": { $nearSphere : [ -121, 46 ], $maxDistance: 0.50 } }``` | Yes |
-$geometry | ```{ "Location.coordinates": { $geoWithin: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }``` | Yes |
-$minDistance | ```{ "Location.coordinates": { $nearSphere : { $geometry: {type: "Point", coordinates: [ -121, 46 ]}, $minDistance: 1000, $maxDistance: 1000000 } } }``` | Yes |
-$maxDistance | ```{ "Location.coordinates": { $nearSphere : [ -121, 46 ], $maxDistance: 0.50 } }``` | Yes |
-$center | ```{ "Location.coordinates": { $geoWithin: { $center: [ [-121, 46], 1 ] } } }``` | Yes |
-$centerSphere | ```{ "Location.coordinates": { $geoWithin: { $centerSphere: [ [ -121, 46 ], 5 ] } } }``` | Yes |
-$box | ```{ "Location.coordinates": { $geoWithin: { $box: [ [ 0, 0 ], [ -122, 47 ] ] } } }``` | Yes |
-$polygon | ```{ "Location.coordinates": { $near: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }``` | Yes |
+| Operator | Example | Supported |
+| | | |
+| `$geoWithin` | `{ "Location.coordinates": { $geoWithin: { $centerSphere: [ [ -121, 46 ], 5 ] } } }` | Yes |
+| `$geoIntersects` | `{ "Location.coordinates": { $geoIntersects: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }` | Yes |
+| `$near` | `{ "Location.coordinates": { $near: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }` | Yes |
+| `$nearSphere` | `{ "Location.coordinates": { $nearSphere : [ -121, 46 ], $maxDistance: 0.50 } }` | Yes |
+| `$geometry` | `{ "Location.coordinates": { $geoWithin: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }` | Yes |
+| `$minDistance` | `{ "Location.coordinates": { $nearSphere : { $geometry: {type: "Point", coordinates: [ -121, 46 ]}, $minDistance: 1000, $maxDistance: 1000000 } } }` | Yes |
+| `$maxDistance` | `{ "Location.coordinates": { $nearSphere : [ -121, 46 ], $maxDistance: 0.50 } }` | Yes |
+| `$center` | `{ "Location.coordinates": { $geoWithin: { $center: [ [-121, 46], 1 ] } } }` | Yes |
+| `$centerSphere` | `{ "Location.coordinates": { $geoWithin: { $centerSphere: [ [ -121, 46 ], 5 ] } } }` | Yes |
+| `$box` | `{ "Location.coordinates": { $geoWithin: { $box: [ [ 0, 0 ], [ -122, 47 ] ] } } }` | Yes |
+| `$polygon` | `{ "Location.coordinates": { $near: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }` | Yes |
## Sort Operations
-When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported.
+When you use the `findOneAndUpdate` operation, sort operations on a single field are supported, but sort operations on multiple fields aren't supported.
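As a sketch (the collection and the `reviewed` field are hypothetical), a single sort key is accepted while two keys in the same `sort` document aren't:

```javascript
// Supported: one sort field.
db.volcanoes.findOneAndUpdate(
  { "Status": "Historical" },
  { $set: { reviewed: true } },
  { sort: { "Elevation": -1 } }
)

// Not supported on this version: multiple sort fields, for example
// { sort: { "Elevation": -1, "Volcano Name": 1 } }
```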
## Other operators
-Operator | Example | Notes
- | | |
-$all | ```{ "Location.coordinates": { $all: [-121.758, 46.87] } }``` |
-$elemMatch | ```{ "Location.coordinates": { $elemMatch: { $lt: 0 } } }``` |
-$size | ```{ "Location.coordinates": { $size: 2 } }``` |
-$comment | ```{ "Location.coordinates": { $elemMatch: { $lt: 0 } }, $comment: "Negative values"}``` |
-$text | | Not supported. Use $regex instead.
+| Operator | Example | Notes
+| | | |
+| `$all` | `{ "Location.coordinates": { $all: [-121.758, 46.87] } }` |
+| `$elemMatch` | `{ "Location.coordinates": { $elemMatch: { $lt: 0 } } }` |
+| `$size` | `{ "Location.coordinates": { $size: 2 } }` |
+| `$comment` | `{ "Location.coordinates": { $elemMatch: { $lt: 0 } }, $comment: "Negative values"}` |
+| `$text` | | Not supported. Use $regex instead.
## Unsupported operators
-The ```$where``` and the ```$eval``` operators are not supported by Azure Cosmos DB.
+The `$where` and the `$eval` operators aren't supported by Azure Cosmos DB.
### Methods
The following methods are supported:
#### Cursor methods
-Method | Example | Notes
- | | |
-cursor.sort() | ```cursor.sort({ "Elevation": -1 })``` | Documents without sort key do not get returned
+| Method | Example | Notes |
+| | | |
+| `cursor.sort()` | `cursor.sort({ "Elevation": -1 })` | Documents without sort key don't get returned |
## Unique indexes

Azure Cosmos DB indexes every field in documents that are written to the database by default. Unique indexes ensure that a specific field doesn't have duplicate values across all documents in a collection, similar to the way uniqueness is preserved on the default `_id` key. You can create custom indexes in Azure Cosmos DB by using the `createIndex` command, including the 'unique' constraint.
-Unique indexes are available for all Azure Cosmos DB accounts using Azure Cosmos DB's API for MongoDB.
+Unique indexes are available for all Azure Cosmos DB accounts using Azure Cosmos DB for MongoDB.
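For example, a minimal sketch of a unique index created with `createIndex`; the collection and field names here are illustrative:

```javascript
// Enforce that no two documents share the same value for "email".
db.users.createIndex({ "email": 1 }, { unique: true })

// A second insert with a duplicate value is rejected with a duplicate key error.
db.users.insertOne({ email: "kai@contoso.com" })
db.users.insertOne({ email: "kai@contoso.com" })   // fails
```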
## Time-to-live (TTL)
Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document.
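As a sketch, a TTL index in the API for MongoDB is typically created on the internal `_ts` timestamp; the collection name and expiration value below are illustrative:

```javascript
// Documents in this collection expire roughly one hour after their last write.
db.events.createIndex({ "_ts": 1 }, { expireAfterSeconds: 3600 })
```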
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+Azure Cosmos DB doesn't yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB doesn't support manual replication commands.
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) that specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) that specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
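As an illustrative sketch (the collection name is assumed), a client-specified write concern is accepted syntactically, but the quorum behavior described above still applies:

```javascript
// The writeConcern option is ignored; the service replicates with quorum writes regardless.
db.events.insertOne(
  { type: "login", at: new Date() },
  { writeConcern: { w: "majority" } }
)
```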
## Sharding
-Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as shardCollection, addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
+Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB doesn't support manual sharding commands, which means you don't have to invoke commands such as shardCollection, addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
## Next steps -- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
cosmos-db Feature Support 36 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-36.md
Title: Azure Cosmos DB's API for MongoDB (3.6 version) supported features and syntax
-description: Learn about Azure Cosmos DB's API for MongoDB (3.6 version) supported features and syntax.
+ Title: Azure Cosmos DB for MongoDB (3.6 version) supported features and syntax
+description: Learn about Azure Cosmos DB for MongoDB (3.6 version) supported features and syntax.
Previously updated : 04/04/2022 Last updated : 10/12/2022
-# Azure Cosmos DB's API for MongoDB (3.6 version): supported features and syntax
+# Azure Cosmos DB for MongoDB (3.6 version): supported features and syntax
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
> [!NOTE]
> Version 3.6 of the Azure Cosmos DB for MongoDB has no current plans for end-of-life (EOL). The minimum notice for a future EOL is three years.

## Protocol Support
-The Azure Cosmos DB's API for MongoDB is compatible with MongoDB server version **3.6** by default for new accounts. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB. Note that when using Azure Cosmos DB's API for MongoDB accounts, the 3.6 version of account has the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of account has the endpoint in the format `*.documents.azure.com`.
+The Azure Cosmos DB for MongoDB is compatible with MongoDB server version **3.6** by default for new accounts. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB accounts, a 3.6 account has an endpoint in the format `*.mongo.cosmos.azure.com`, whereas a 3.2 account has an endpoint in the format `*.documents.azure.com`.
## Query language support
-Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. The following sections show the detailed list of server operations, operators, stages, commands, and options currently supported by Azure Cosmos DB.
+Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. The following sections show the detailed list of server operations, operators, stages, commands, and options currently supported by Azure Cosmos DB.
> [!NOTE]
-> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB.
+> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB for MongoDB.
## Database commands
-Azure Cosmos DB's API for MongoDB supports the following database commands:
+Azure Cosmos DB for MongoDB supports the following database commands:
### Query and write operation commands | Command | Supported | |||
-| [change streams](change-streams.md) | Yes |
-| delete | Yes |
-| eval | No |
-| find | Yes |
-| findAndModify | Yes |
-| getLastError | Yes |
-| getMore | Yes |
-| getPrevError | No |
-| insert | Yes |
-| parallelCollectionScan | No |
-| resetError | No |
-| update | Yes |
+| [`change streams`](change-streams.md) | Yes |
+| `delete` | Yes |
+| `eval` | No |
+| `find` | Yes |
+| `findAndModify` | Yes |
+| `getLastError` | Yes |
+| `getMore` | Yes |
+| `getPrevError` | No |
+| `insert` | Yes |
+| `parallelCollectionScan` | No |
+| `resetError` | No |
+| `update` | Yes |
### Authentication commands | Command | Supported | |||
-| authenticate | Yes |
-| getnonce | Yes |
-| logout | Yes |
+| `authenticate` | Yes |
+| `getnonce` | Yes |
+| `logout` | Yes |
### Administration commands | Command | Supported | |||
-| cloneCollectionAsCapped | No |
-| collMod | No |
-| connectionStatus | No |
-| convertToCapped | No |
-| copydb | No |
-| create | Yes |
-| createIndexes | Yes |
-| currentOp | Yes |
-| drop | Yes |
-| dropDatabase | Yes |
-| dropIndexes | Yes |
-| filemd5 | Yes |
-| killCursors | Yes |
-| killOp | No |
-| listCollections | Yes |
-| listDatabases | Yes |
-| listIndexes | Yes |
-| reIndex | Yes |
-| renameCollection | No |
-
+| `cloneCollectionAsCapped` | No |
+| `collMod` | No |
+| `connectionStatus` | No |
+| `convertToCapped` | No |
+| `copydb` | No |
+| `create` | Yes |
+| `createIndexes` | Yes |
+| `currentOp` | Yes |
+| `drop` | Yes |
+| `dropDatabase` | Yes |
+| `dropIndexes` | Yes |
+| `filemd5` | Yes |
+| `killCursors` | Yes |
+| `killOp` | No |
+| `listCollections` | Yes |
+| `listDatabases` | Yes |
+| `listIndexes` | Yes |
+| `reIndex` | Yes |
+| `renameCollection` | No |
### Diagnostics commands | Command | Supported | |||
-| buildInfo | Yes |
-| collStats | Yes |
-| connPoolStats | No |
-| connectionStatus | No |
-| dataSize | No |
-| dbHash | No |
-| dbStats | Yes |
-| explain | Yes |
-| features | No |
-| hostInfo | Yes |
-| listDatabases | Yes |
-| listCommands | No |
-| profiler | No |
-| serverStatus | No |
-| top | No |
-| whatsmyuri | Yes |
+| `buildInfo` | Yes |
+| `collStats` | Yes |
+| `connPoolStats` | No |
+| `connectionStatus` | No |
+| `dataSize` | No |
+| `dbHash` | No |
+| `dbStats` | Yes |
+| `explain` | Yes |
+| `features` | No |
+| `hostInfo` | Yes |
+| `listDatabases` | Yes |
+| `listCommands` | No |
+| `profiler` | No |
+| `serverStatus` | No |
+| `top` | No |
+| `whatsmyuri` | Yes |
<a name="aggregation-pipeline"></a>
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| aggregate | Yes |
-| count | Yes |
-| distinct | Yes |
-| mapReduce | No |
+| `aggregate` | Yes |
+| `count` | Yes |
+| `distinct` | Yes |
+| `mapReduce` | No |
### Aggregation stages | Command | Supported | |||
-| $addFields | Yes |
-| $bucket | No |
-| $bucketAuto | No |
-| $changeStream | Yes |
-| $collStats | No |
-| $count | Yes |
-| $currentOp | No |
-| $facet | Yes |
-| $geoNear | Yes |
-| $graphLookup | Yes |
-| $group | Yes |
-| $indexStats | No |
-| $limit | Yes |
-| $listLocalSessions | No |
-| $listSessions | No |
-| $lookup | Partial |
-| $match | Yes |
-| $out | Yes |
-| $project | Yes |
-| $redact | Yes |
-| $replaceRoot | Yes |
-| $replaceWith | No |
-| $sample | Yes |
-| $skip | Yes |
-| $sort | Yes |
-| $sortByCount | Yes |
-| $unwind | Yes |
+| `addFields` | Yes |
+| `bucket` | No |
+| `bucketAuto` | No |
+| `changeStream` | Yes |
+| `collStats` | No |
+| `count` | Yes |
+| `currentOp` | No |
+| `facet` | Yes |
+| `geoNear` | Yes |
+| `graphLookup` | Yes |
+| `group` | Yes |
+| `indexStats` | No |
+| `limit` | Yes |
+| `listLocalSessions` | No |
+| `listSessions` | No |
+| `lookup` | Partial |
+| `match` | Yes |
+| `out` | Yes |
+| `project` | Yes |
+| `redact` | Yes |
+| `replaceRoot` | Yes |
+| `replaceWith` | No |
+| `sample` | Yes |
+| `skip` | Yes |
+| `sort` | Yes |
+| `sortByCount` | Yes |
+| `unwind` | Yes |
> [!NOTE] > `$lookup` does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields.
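To illustrate the note above, here's a sketch with hypothetical `orders` and `customers` collections: the correlated `localField`/`foreignField` form works, while the `let`/`pipeline` form returns the `let is not supported` error.

```javascript
// Supported: correlated $lookup on a local/foreign field pair.
db.orders.aggregate([
  {
    $lookup: {
      from: "customers",
      localField: "customerId",
      foreignField: "_id",
      as: "customer"
    }
  }
])

// Not supported: the 3.6-style uncorrelated form with "let" and "pipeline", for example
// { $lookup: { from: "customers", let: { id: "$customerId" }, pipeline: [ ... ], as: "customer" } }
```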
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $and | Yes |
-| $not | Yes |
-| $or | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `or` | Yes |
### Set expressions | Command | Supported | |||
-| $setEquals | Yes |
-| $setIntersection | Yes |
-| $setUnion | Yes |
-| $setDifference | Yes |
-| $setIsSubset | Yes |
-| $anyElementTrue | Yes |
-| $allElementsTrue | Yes |
+| `setEquals` | Yes |
+| `setIntersection` | Yes |
+| `setUnion` | Yes |
+| `setDifference` | Yes |
+| `setIsSubset` | Yes |
+| `anyElementTrue` | Yes |
+| `allElementsTrue` | Yes |
### Comparison expressions
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $cmp | Yes |
-| $eq | Yes |
-| $gt | Yes |
-| $gte | Yes |
-| $lt | Yes |
-| $lte | Yes |
-| $ne | Yes |
-| $in | Yes |
-| $nin | Yes |
+| `cmp` | Yes |
+| `eq` | Yes |
+| `gt` | Yes |
+| `gte` | Yes |
+| `lt` | Yes |
+| `lte` | Yes |
+| `ne` | Yes |
+| `in` | Yes |
+| `nin` | Yes |
### Arithmetic expressions | Command | Supported | |||
-| $abs | Yes |
-| $add | Yes |
-| $ceil | Yes |
-| $divide | Yes |
-| $exp | Yes |
-| $floor | Yes |
-| $ln | Yes |
-| $log | Yes |
-| $log10 | Yes |
-| $mod | Yes |
-| $multiply | Yes |
-| $pow | Yes |
-| $sqrt | Yes |
-| $subtract | Yes |
-| $trunc | Yes |
+| `abs` | Yes |
+| `add` | Yes |
+| `ceil` | Yes |
+| `divide` | Yes |
+| `exp` | Yes |
+| `floor` | Yes |
+| `ln` | Yes |
+| `log` | Yes |
+| `log10` | Yes |
+| `mod` | Yes |
+| `multiply` | Yes |
+| `pow` | Yes |
+| `sqrt` | Yes |
+| `subtract` | Yes |
+| `trunc` | Yes |
### String expressions | Command | Supported | |||
-| $concat | Yes |
-| $indexOfBytes | Yes |
-| $indexOfCP | Yes |
-| $split | Yes |
-| $strLenBytes | Yes |
-| $strLenCP | Yes |
-| $strcasecmp | Yes |
-| $substr | Yes |
-| $substrBytes | Yes |
-| $substrCP | Yes |
-| $toLower | Yes |
-| $toUpper | Yes |
+| `concat` | Yes |
+| `indexOfBytes` | Yes |
+| `indexOfCP` | Yes |
+| `split` | Yes |
+| `strLenBytes` | Yes |
+| `strLenCP` | Yes |
+| `strcasecmp` | Yes |
+| `substr` | Yes |
+| `substrBytes` | Yes |
+| `substrCP` | Yes |
+| `toLower` | Yes |
+| `toUpper` | Yes |
### Text search operator | Command | Supported | |||
-| $meta | No |
+| `meta` | No |
### Array expressions | Command | Supported | |||
-| $arrayElemAt | Yes |
-| $arrayToObject | Yes |
-| $concatArrays | Yes |
-| $filter | Yes |
-| $indexOfArray | Yes |
-| $isArray | Yes |
-| $objectToArray | Yes |
-| $range | Yes |
-| $reverseArray | Yes |
-| $reduce | Yes |
-| $size | Yes |
-| $slice | Yes |
-| $zip | Yes |
-| $in | Yes |
+| `arrayElemAt` | Yes |
+| `arrayToObject` | Yes |
+| `concatArrays` | Yes |
+| `filter` | Yes |
+| `indexOfArray` | Yes |
+| `isArray` | Yes |
+| `objectToArray` | Yes |
+| `range` | Yes |
+| `reverseArray` | Yes |
+| `reduce` | Yes |
+| `size` | Yes |
+| `slice` | Yes |
+| `zip` | Yes |
+| `in` | Yes |
### Variable operators | Command | Supported | |||
-| $map | Yes |
-| $let | Yes |
+| `map` | Yes |
+| `let` | Yes |
### System variables | Command | Supported | |||
-| $$CURRENT | Yes |
-| $$DESCEND | Yes |
-| $$KEEP | Yes |
-| $$PRUNE | Yes |
-| $$REMOVE | Yes |
-| $$ROOT | Yes |
+| `$$CURRENT` | Yes |
+| `$$DESCEND` | Yes |
+| `$$KEEP` | Yes |
+| `$$PRUNE` | Yes |
+| `$$REMOVE` | Yes |
+| `$$ROOT` | Yes |
### Literal operator | Command | Supported | |||
-| $literal | Yes |
+| `literal` | Yes |
### Date expressions | Command | Supported | |||
-| $dayOfYear | Yes |
-| $dayOfMonth | Yes |
-| $dayOfWeek | Yes |
-| $year | Yes |
-| $month | Yes |
-| $week | Yes |
-| $hour | Yes |
-| $minute | Yes |
-| $second | Yes |
-| $millisecond | Yes |
-| $dateToString | Yes |
-| $isoDayOfWeek | Yes |
-| $isoWeek | Yes |
-| $dateFromParts | Yes |
-| $dateToParts | Yes |
-| $dateFromString | Yes |
-| $isoWeekYear | Yes |
+| `dayOfYear` | Yes |
+| `dayOfMonth` | Yes |
+| `dayOfWeek` | Yes |
+| `year` | Yes |
+| `month` | Yes |
+| `week` | Yes |
+| `hour` | Yes |
+| `minute` | Yes |
+| `second` | Yes |
+| `millisecond` | Yes |
+| `dateToString` | Yes |
+| `isoDayOfWeek` | Yes |
+| `isoWeek` | Yes |
+| `dateFromParts` | Yes |
+| `dateToParts` | Yes |
+| `dateFromString` | Yes |
+| `isoWeekYear` | Yes |
### Conditional expressions | Command | Supported | |||
-| $cond | Yes |
-| $ifNull | Yes |
-| $switch | Yes |
+| `cond` | Yes |
+| `ifNull` | Yes |
+| `switch` | Yes |
### Data type operator | Command | Supported | |||
-| $type | Yes |
+| `type` | Yes |
### Accumulator expressions | Command | Supported | |||
-| $sum | Yes |
-| $avg | Yes |
-| $first | Yes |
-| $last | Yes |
-| $max | Yes |
-| $min | Yes |
-| $push | Yes |
-| $addToSet | Yes |
-| $stdDevPop | Yes |
-| $stdDevSamp | Yes |
+| `sum` | Yes |
+| `avg` | Yes |
+| `first` | Yes |
+| `last` | Yes |
+| `max` | Yes |
+| `min` | Yes |
+| `push` | Yes |
+| `addToSet` | Yes |
+| `stdDevPop` | Yes |
+| `stdDevSamp` | Yes |
### Merge operator | Command | Supported | |||
-| $mergeObjects | Yes |
+| `mergeObjects` | Yes |
## Data types | Command | Supported | |||
-| Double | Yes |
-| String | Yes |
-| Object | Yes |
-| Array | Yes |
-| Binary Data | Yes |
-| ObjectId | Yes |
-| Boolean | Yes |
-| Date | Yes |
-| Null | Yes |
-| 32-bit Integer (int) | Yes |
-| Timestamp | Yes |
-| 64-bit Integer (long) | Yes |
-| MinKey | Yes |
-| MaxKey | Yes |
-| Decimal128 | Yes |
-| Regular Expression | Yes |
-| JavaScript | Yes |
-| JavaScript (with scope)| Yes |
-| Undefined | Yes |
+| `Double` | Yes |
+| `String` | Yes |
+| `Object` | Yes |
+| `Array` | Yes |
+| `Binary Data` | Yes |
+| `ObjectId` | Yes |
+| `Boolean` | Yes |
+| `Date` | Yes |
+| `Null` | Yes |
+| `32-bit Integer (int)` | Yes |
+| `Timestamp` | Yes |
+| `64-bit Integer (long)` | Yes |
+| `MinKey` | Yes |
+| `MaxKey` | Yes |
+| `Decimal128` | Yes |
+| `Regular Expression` | Yes |
+| `JavaScript` | Yes |
+| `JavaScript (with scope)` | Yes |
+| `Undefined` | Yes |
## Indexes and index properties
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| Single Field Index | Yes |
-| Compound Index | Yes |
-| Multikey Index | Yes |
-| Text Index | No |
-| 2dsphere | Yes |
-| 2d Index | No |
-| Hashed Index | Yes |
+| `Single Field Index` | Yes |
+| `Compound Index` | Yes |
+| `Multikey Index` | Yes |
+| `Text Index` | No |
+| `2dsphere` | Yes |
+| `2d Index` | No |
+| `Hashed Index` | Yes |
### Index properties | Command | Supported | |||
-| TTL | Yes |
-| Unique | Yes |
-| Partial | No |
-| Case Insensitive | No |
-| Sparse | No |
-| Background | Yes |
+| `TTL` | Yes |
+| `Unique` | Yes |
+| `Partial` | No |
+| `Case Insensitive` | No |
+| `Sparse` | No |
+| `Background` | Yes |
## Operators
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $or | Yes |
-| $and | Yes |
-| $not | Yes |
-| $nor | Yes |
+| `or` | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `nor` | Yes |
### Element operators | Command | Supported | |||
-| $exists | Yes |
-| $type | Yes |
+| `exists` | Yes |
+| `type` | Yes |
### Evaluation query operators | Command | Supported | |||
-| $expr | Yes |
-| $jsonSchema | No |
-| $mod | Yes |
-| $regex | Yes |
-| $text | No (Not supported. Use $regex instead.)|
-| $where | No |
+| `expr` | Yes |
+| `jsonSchema` | No |
+| `mod` | Yes |
+| `regex` | Yes |
+| `text` | No (Not supported. Use $regex instead.)|
+| `where` | No |
In $regex queries, left-anchored expressions allow index search. However, using the 'i' modifier (case-insensitivity) and the 'm' modifier (multiline) causes a collection scan in all expressions.
-When there's a need to include '$' or '|', it is best to create two (or more) regex queries. For example, given the following original query: ```find({x:{$regex: /^abc$/})```, it has to be modified as follows:
+When there's a need to include `$` or `|`, it's best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
`find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})`
-The first part will use the index to restrict the search to those documents beginning with ^abc and the second part will match the exact entries. The bar operator '|' acts as an "or" function - the query ```find({x:{$regex: /^abc |^def/})``` matches the documents in which field 'x' has values that begin with "abc" or "def". To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: ```find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })```.
+The first part will use the index to restrict the search to those documents beginning with ^abc and the second part will match the exact entries. The bar operator `|` acts as an "or" function - the query `find({x:{$regex: /^abc |^def/})` matches the documents in which field `x` has values that begin with `"abc"` or `"def"`. To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: `find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })`.
### Array operators
-| Command | Supported |
+| Command | Supported |
|||
-| $all | Yes |
-| $elemMatch | Yes |
-| $size | Yes |
+| `all` | Yes |
+| `elemMatch` | Yes |
+| `size` | Yes |
### Comment operator
-| Command | Supported |
+| Command | Supported |
|||
-| $comment | Yes |
+| `comment` | Yes |
### Projection operators | Command | Supported | |||
-| $elemMatch | Yes |
-| $meta | No |
-| $slice | Yes |
+| `elemMatch` | Yes |
+| `meta` | No |
+| `slice` | Yes |
### Update operators
The first part will use the index to restrict the search to those documents begi
| Command | Supported | |||
-| $inc | Yes |
-| $mul | Yes |
-| $rename | Yes |
-| $setOnInsert | Yes |
-| $set | Yes |
-| $unset | Yes |
-| $min | Yes |
-| $max | Yes |
-| $currentDate | Yes |
+| `inc` | Yes |
+| `mul` | Yes |
+| `rename` | Yes |
+| `setOnInsert` | Yes |
+| `set` | Yes |
+| `unset` | Yes |
+| `min` | Yes |
+| `max` | Yes |
+| `currentDate` | Yes |
#### Array update operators | Command | Supported | |||
-| $ | Yes |
-| $[]| Yes |
-| $[\<identifier\>]| Yes |
-| $addToSet | Yes |
-| $pop | Yes |
-| $pullAll | Yes |
-| $pull | Yes |
-| $push | Yes |
-| $pushAll | Yes |
-
+| `$` | Yes |
+| `$[]` | Yes |
+| `$[\<identifier\>]` | Yes |
+| `addToSet` | Yes |
+| `pop` | Yes |
+| `pullAll` | Yes |
+| `pull` | Yes |
+| `push` | Yes |
+| `pushAll` | Yes |
#### Update modifiers | Command | Supported | |||
-| $each | Yes |
-| $slice | Yes |
-| $sort | Yes |
-| $position | Yes |
+| `each` | Yes |
+| `slice` | Yes |
+| `sort` | Yes |
+| `position` | Yes |
#### Bitwise update operator | Command | Supported | |||
-| $bit | Yes |
-| $bitsAllSet | No |
-| $bitsAnySet | No |
-| $bitsAllClear | No |
-| $bitsAnyClear | No |
+| `bit` | Yes |
+| `bitsAllSet` | No |
+| `bitsAnySet` | No |
+| `bitsAllClear` | No |
+| `bitsAnyClear` | No |
### Geospatial operators
-Operator | Supported |
- | |
-$geoWithin | Yes |
-$geoIntersects | Yes |
-$near | Yes |
-$nearSphere | Yes |
-$geometry | Yes |
-$minDistance | Yes |
-$maxDistance | Yes |
-$center | No |
-$centerSphere | No |
-$box | No |
-$polygon | No |
+| Operator | Supported |
+| | |
+| `$geoWithin` | Yes |
+| `$geoIntersects` | Yes |
+| `$near` | Yes |
+| `$nearSphere` | Yes |
+| `$geometry` | Yes |
+| `$minDistance` | Yes |
+| `$maxDistance` | Yes |
+| `$center` | No |
+| `$centerSphere` | No |
+| `$box` | No |
+| `$polygon` | No |
## Sort operations
-When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported.
+When you use the `findOneAndUpdate` operation, sort operations on a single field are supported, but sort operations on multiple fields aren't supported.
## Indexing
-The API for MongoDB [supports a variety of indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+
+The API for MongoDB [supports various indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
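For example, a sketch of a compound index that enables sorting on two fields at once; the collection and field names are illustrative:

```javascript
// Create the compound index first, then sort on the same key pattern.
db.volcanoes.createIndex({ "Country": 1, "Elevation": -1 })
db.volcanoes.find({ "Country": "United States" }).sort({ "Country": 1, "Elevation": -1 })
```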
## GridFS
Azure Cosmos DB supports GridFS through any GridFS-compatible MongoDB driver.
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB doesn't support manual replication commands.
## Retryable Writes
-Azure Cosmos DB does not yet support retryable writes. Client drivers must add `retryWrites=false` to their connection string.
+Azure Cosmos DB doesn't yet support retryable writes. Client drivers must add `retryWrites=false` to their connection string.
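A minimal Node.js-style sketch of what that looks like; the connection string placeholder stands in for the value copied from the Azure portal:

```javascript
const { MongoClient } = require("mongodb");

// Append retryWrites=false to the connection string copied from the portal.
const uri = "<primary-connection-string-from-the-Azure-portal>" + "&retryWrites=false";

const client = new MongoClient(uri);
```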
## Sharding
-Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
+Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB doesn't support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
## Sessions
-Azure Cosmos DB does not yet support server-side sessions commands.
+Azure Cosmos DB doesn't yet support server-side sessions commands.
## Time-to-live (TTL)
Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document.
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, it supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords or keys that can be obtained through the connection string pane in the [Azure portal](https://portal.azure.com).
+Azure Cosmos DB doesn't yet support users and roles. However, it supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords or keys that can be obtained through the connection string pane in the [Azure portal](https://portal.azure.com).
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication, all writes are automatically majority quorum by default when using strong consistency. Any write concern specified by the client code is ignored. To learn more, see [Using consistency levels to maximize availability and performance](../consistency-levels.md) article.
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication, all writes are automatically majority quorum by default when using strong consistency. Any write concern specified by the client code is ignored. To learn more, see the [Using consistency levels to maximize availability and performance](../consistency-levels.md) article.
## Next steps - For further information check [Mongo 3.6 version features](https://devblogs.microsoft.com/cosmosdb/azure-cosmos-dbs-api-for-mongodb-now-supports-server-version-3-6/)-- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
cosmos-db Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md
Title: 4.0 server version supported features and syntax in Azure Cosmos DB's API for MongoDB
-description: Learn about Azure Cosmos DB's API for MongoDB 4.0 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.
+ Title: 4.0 server version supported features and syntax in Azure Cosmos DB for MongoDB
+description: Learn about Azure Cosmos DB for MongoDB 4.0 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.
Previously updated : 04/05/2022 Last updated : 10/12/2022
-# Azure Cosmos DB's API for MongoDB (4.0 server version): supported features and syntax
+# Azure Cosmos DB for MongoDB (4.0 server version): supported features and syntax
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
## Protocol Support
-The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB. When using Azure Cosmos DB's API for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
+The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
> [!NOTE]
-> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB.
+> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB for MongoDB.
## Query language support
-Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
+Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
## Database commands
-Azure Cosmos DB's API for MongoDB supports the following database commands:
+Azure Cosmos DB for MongoDB supports the following database commands:
### Query and write operation commands | Command | Supported | |||
-| [change streams](change-streams.md) | Yes |
-| delete | Yes |
-| eval | No |
-| find | Yes |
-| findAndModify | Yes |
-| getLastError | Yes |
-| getMore | Yes |
-| getPrevError | No |
-| insert | Yes |
-| parallelCollectionScan | No |
-| resetError | No |
-| update | Yes |
+| [`change streams`](change-streams.md) | Yes |
+| `delete` | Yes |
+| `eval` | No |
+| `find` | Yes |
+| `findAndModify` | Yes |
+| `getLastError` | Yes |
+| `getMore` | Yes |
+| `getPrevError` | No |
+| `insert` | Yes |
+| `parallelCollectionScan` | No |
+| `resetError` | No |
+| `update` | Yes |
### Transaction commands | Command | Supported | |||
-| abortTransaction | Yes |
-| commitTransaction | Yes |
+| `abortTransaction` | Yes |
+| `commitTransaction` | Yes |
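As a sketch of how `commitTransaction` and `abortTransaction` are driven from the shell through a session (the database and collection names are hypothetical, and the usual server-side transaction limits still apply):

```javascript
const session = db.getMongo().startSession();
const orders = session.getDatabase("storedb").orders;

session.startTransaction();
try {
  orders.insertOne({ _id: 1, status: "created" });
  orders.updateOne({ _id: 1 }, { $set: { status: "paid" } });
  session.commitTransaction();   // issues the commitTransaction command
} catch (e) {
  session.abortTransaction();    // issues the abortTransaction command
} finally {
  session.endSession();
}
```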
### Authentication commands | Command | Supported | |||
-| authenticate | Yes |
-| getnonce | Yes |
-| logout | Yes |
+| `authenticate` | Yes |
+| `getnonce` | Yes |
+| `logout` | Yes |
### Administration commands | Command | Supported | |||
-| cloneCollectionAsCapped | No |
-| collMod | No |
-| connectionStatus | No |
-| convertToCapped | No |
-| copydb | No |
-| create | Yes |
-| createIndexes | Yes |
-| currentOp | Yes |
-| drop | Yes |
-| dropDatabase | Yes |
-| dropIndexes | Yes |
-| filemd5 | Yes |
-| killCursors | Yes |
-| killOp | No |
-| listCollections | Yes |
-| listDatabases | Yes |
-| listIndexes | Yes |
-| reIndex | Yes |
-| renameCollection | No |
+| `cloneCollectionAsCapped` | No |
+| `collMod` | No |
+| `connectionStatus` | No |
+| `convertToCapped` | No |
+| `copydb` | No |
+| `create` | Yes |
+| `createIndexes` | Yes |
+| `currentOp` | Yes |
+| `drop` | Yes |
+| `dropDatabase` | Yes |
+| `dropIndexes` | Yes |
+| `filemd5` | Yes |
+| `killCursors` | Yes |
+| `killOp` | No |
+| `listCollections` | Yes |
+| `listDatabases` | Yes |
+| `listIndexes` | Yes |
+| `reIndex` | Yes |
+| `renameCollection` | No |
### Diagnostics commands | Command | Supported | |||
-| buildInfo | Yes |
-| collStats | Yes |
-| connPoolStats | No |
-| connectionStatus | No |
-| dataSize | No |
-| dbHash | No |
-| dbStats | Yes |
-| explain | Yes |
-| features | No |
-| hostInfo | Yes |
-| listDatabases | Yes |
-| listCommands | No |
-| profiler | No |
-| serverStatus | No |
-| top | No |
-| whatsmyuri | Yes |
+| `buildInfo` | Yes |
+| `collStats` | Yes |
+| `connPoolStats` | No |
+| `connectionStatus` | No |
+| `dataSize` | No |
+| `dbHash` | No |
+| `dbStats` | Yes |
+| `explain` | Yes |
+| `features` | No |
+| `hostInfo` | Yes |
+| `listDatabases` | Yes |
+| `listCommands` | No |
+| `profiler` | No |
+| `serverStatus` | No |
+| `top` | No |
+| `whatsmyuri` | Yes |
## <a name="aggregation-pipeline"></a>Aggregation pipeline
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| aggregate | Yes |
-| count | Yes |
-| distinct | Yes |
-| mapReduce | No |
+| `aggregate` | Yes |
+| `count` | Yes |
+| `distinct` | Yes |
+| `mapReduce` | No |
### Aggregation stages | Command | Supported | |||
-| $addFields | Yes |
-| $bucket | No |
-| $bucketAuto | No |
-| $changeStream | Yes |
-| $collStats | No |
-| $count | Yes |
-| $currentOp | No |
-| $facet | Yes |
-| $geoNear | Yes |
-| $graphLookup | Yes |
-| $group | Yes |
-| $indexStats | No |
-| $limit | Yes |
-| $listLocalSessions | No |
-| $listSessions | No |
-| $lookup | Partial |
-| $match | Yes |
-| $out | Yes |
-| $project | Yes |
-| $redact | Yes |
-| $replaceRoot | Yes |
-| $replaceWith | No |
-| $sample | Yes |
-| $skip | Yes |
-| $sort | Yes |
-| $sortByCount | Yes |
-| $unwind | Yes |
+| `addFields` | Yes |
+| `bucket` | No |
+| `bucketAuto` | No |
+| `changeStream` | Yes |
+| `collStats` | No |
+| `count` | Yes |
+| `currentOp` | No |
+| `facet` | Yes |
+| `geoNear` | Yes |
+| `graphLookup` | Yes |
+| `group` | Yes |
+| `indexStats` | No |
+| `limit` | Yes |
+| `listLocalSessions` | No |
+| `listSessions` | No |
+| `lookup` | Partial |
+| `match` | Yes |
+| `out` | Yes |
+| `project` | Yes |
+| `redact` | Yes |
+| `replaceRoot` | Yes |
+| `replaceWith` | No |
+| `sample` | Yes |
+| `skip` | Yes |
+| `sort` | Yes |
+| `sortByCount` | Yes |
+| `unwind` | Yes |
> [!NOTE] > `$lookup` does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields.
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $and | Yes |
-| $not | Yes |
-| $or | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `or` | Yes |
### Conversion expressions | Command | Supported | |||
-| $convert | Yes |
-| $toBool | Yes |
-| $toDate | Yes |
-| $toDecimal | Yes |
-| $toDouble | Yes |
-| $toInt | Yes |
-| $toLong | Yes |
-| $toObjectId | Yes |
-| $toString | Yes |
+| `convert` | Yes |
+| `toBool` | Yes |
+| `toDate` | Yes |
+| `toDecimal` | Yes |
+| `toDouble` | Yes |
+| `toInt` | Yes |
+| `toLong` | Yes |
+| `toObjectId` | Yes |
+| `toString` | Yes |
### Set expressions | Command | Supported | |||
-| $setEquals | Yes |
-| $setIntersection | Yes |
-| $setUnion | Yes |
-| $setDifference | Yes |
-| $setIsSubset | Yes |
-| $anyElementTrue | Yes |
-| $allElementsTrue | Yes |
+| `setEquals` | Yes |
+| `setIntersection` | Yes |
+| `setUnion` | Yes |
+| `setDifference` | Yes |
+| `setIsSubset` | Yes |
+| `anyElementTrue` | Yes |
+| `allElementsTrue` | Yes |
### Comparison expressions
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $cmp | Yes |
-| $eq | Yes |
-| $gt | Yes |
-| $gte | Yes |
-| $lt | Yes |
-| $lte | Yes |
-| $ne | Yes |
-| $in | Yes |
-| $nin | Yes |
+| `cmp` | Yes |
+| `eq` | Yes |
+| `gt` | Yes |
+| `gte` | Yes |
+| `lt` | Yes |
+| `lte` | Yes |
+| `ne` | Yes |
+| `in` | Yes |
+| `nin` | Yes |
### Arithmetic expressions | Command | Supported | |||
-| $abs | Yes |
-| $add | Yes |
-| $ceil | Yes |
-| $divide | Yes |
-| $exp | Yes |
-| $floor | Yes |
-| $ln | Yes |
-| $log | Yes |
-| $log10 | Yes |
-| $mod | Yes |
-| $multiply | Yes |
-| $pow | Yes |
-| $sqrt | Yes |
-| $subtract | Yes |
-| $trunc | Yes |
+| `abs` | Yes |
+| `add` | Yes |
+| `ceil` | Yes |
+| `divide` | Yes |
+| `exp` | Yes |
+| `floor` | Yes |
+| `ln` | Yes |
+| `log` | Yes |
+| `log10` | Yes |
+| `mod` | Yes |
+| `multiply` | Yes |
+| `pow` | Yes |
+| `sqrt` | Yes |
+| `subtract` | Yes |
+| `trunc` | Yes |
### String expressions | Command | Supported | |||
-| $concat | Yes |
-| $indexOfBytes | Yes |
-| $indexOfCP | Yes |
-| $ltrim | Yes |
-| $rtrim | Yes |
-| $trim | Yes |
-| $split | Yes |
-| $strLenBytes | Yes |
-| $strLenCP | Yes |
-| $strcasecmp | Yes |
-| $substr | Yes |
-| $substrBytes | Yes |
-| $substrCP | Yes |
-| $toLower | Yes |
-| $toUpper | Yes |
+| `concat` | Yes |
+| `indexOfBytes` | Yes |
+| `indexOfCP` | Yes |
+| `ltrim` | Yes |
+| `rtrim` | Yes |
+| `trim` | Yes |
+| `split` | Yes |
+| `strLenBytes` | Yes |
+| `strLenCP` | Yes |
+| `strcasecmp` | Yes |
+| `substr` | Yes |
+| `substrBytes` | Yes |
+| `substrCP` | Yes |
+| `toLower` | Yes |
+| `toUpper` | Yes |
### Text search operator | Command | Supported | |||
-| $meta | No |
+| `meta` | No |
### Array expressions | Command | Supported | |||
-| $arrayElemAt | Yes |
-| $arrayToObject | Yes |
-| $concatArrays | Yes |
-| $filter | Yes |
-| $indexOfArray | Yes |
-| $isArray | Yes |
-| $objectToArray | Yes |
-| $range | Yes |
-| $reverseArray | Yes |
-| $reduce | Yes |
-| $size | Yes |
-| $slice | Yes |
-| $zip | Yes |
-| $in | Yes |
+| `arrayElemAt` | Yes |
+| `arrayToObject` | Yes |
+| `concatArrays` | Yes |
+| `filter` | Yes |
+| `indexOfArray` | Yes |
+| `isArray` | Yes |
+| `objectToArray` | Yes |
+| `range` | Yes |
+| `reverseArray` | Yes |
+| `reduce` | Yes |
+| `size` | Yes |
+| `slice` | Yes |
+| `zip` | Yes |
+| `in` | Yes |
### Variable operators | Command | Supported | |||
-| $map | Yes |
-| $let | Yes |
+| `map` | Yes |
+| `let` | Yes |
### System variables | Command | Supported | |||
-| $$CURRENT | Yes |
-| $$DESCEND | Yes |
-| $$KEEP | Yes |
-| $$PRUNE | Yes |
-| $$REMOVE | Yes |
-| $$ROOT | Yes |
+| `$$CURRENT` | Yes |
+| `$$DESCEND` | Yes |
+| `$$KEEP` | Yes |
+| `$$PRUNE` | Yes |
+| `$$REMOVE` | Yes |
+| `$$ROOT` | Yes |
### Literal operator | Command | Supported | |||
-| $literal | Yes |
+| `literal` | Yes |
### Date expressions | Command | Supported | |||
-| $dayOfYear | Yes |
-| $dayOfMonth | Yes |
-| $dayOfWeek | Yes |
-| $year | Yes |
-| $month | Yes |
-| $week | Yes |
-| $hour | Yes |
-| $minute | Yes |
-| $second | Yes |
-| $millisecond | Yes |
-| $dateToString | Yes |
-| $isoDayOfWeek | Yes |
-| $isoWeek | Yes |
-| $dateFromParts | Yes |
-| $dateToParts | Yes |
-| $dateFromString | Yes |
-| $isoWeekYear | Yes |
+| `dayOfYear` | Yes |
+| `dayOfMonth` | Yes |
+| `dayOfWeek` | Yes |
+| `year` | Yes |
+| `month` | Yes |
+| `week` | Yes |
+| `hour` | Yes |
+| `minute` | Yes |
+| `second` | Yes |
+| `millisecond` | Yes |
+| `dateToString` | Yes |
+| `isoDayOfWeek` | Yes |
+| `isoWeek` | Yes |
+| `dateFromParts` | Yes |
+| `dateToParts` | Yes |
+| `dateFromString` | Yes |
+| `isoWeekYear` | Yes |
### Conditional expressions | Command | Supported | |||
-| $cond | Yes |
-| $ifNull | Yes |
-| $switch | Yes |
+| `cond` | Yes |
+| `ifNull` | Yes |
+| `switch` | Yes |
### Data type operator | Command | Supported | |||
-| $type | Yes |
+| `type` | Yes |
### Accumulator expressions | Command | Supported | |||
-| $sum | Yes |
-| $avg | Yes |
-| $first | Yes |
-| $last | Yes |
-| $max | Yes |
-| $min | Yes |
-| $push | Yes |
-| $addToSet | Yes |
-| $stdDevPop | Yes |
-| $stdDevSamp | Yes |
+| `sum` | Yes |
+| `avg` | Yes |
+| `first` | Yes |
+| `last` | Yes |
+| `max` | Yes |
+| `min` | Yes |
+| `push` | Yes |
+| `addToSet` | Yes |
+| `stdDevPop` | Yes |
+| `stdDevSamp` | Yes |
### Merge operator | Command | Supported | |||
-| $mergeObjects | Yes |
+| `mergeObjects` | Yes |
## Data types
-Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. The 4.0 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from this.
-
-In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ will not benefit from the enhanced performance until they are updated via a write operation through the 4.0+ endpoint.
+Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. The 4.0 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from optimization.
+
+In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ won't benefit from the enhanced performance until they're updated via a write operation through the 4.0+ endpoint.
-16MB document support raises the size limit for your documents from 2MB to 16MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it cannot be disabled. This feature is not compatible with the Azure Synapse Link feature and/or Continuous Backup.
+16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with the Azure Synapse Link feature and/or Continuous Backup.
-Enabling 16MB can be done in the features tab in the Azure portal or programmatically by [adding the "EnableMongo16MBDocumentSupport" capability](how-to-configure-capabilities.md).
+You can enable 16-MB document support from the features tab in the Azure portal or programmatically by [adding the "EnableMongo16MBDocumentSupport" capability](how-to-configure-capabilities.md).
-We recommend enabling Server Side Retry to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance.
+We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance.
| Command | Supported | |||
-| Double | Yes |
-| String | Yes |
-| Object | Yes |
-| Array | Yes |
-| Binary Data | Yes |
-| ObjectId | Yes |
-| Boolean | Yes |
-| Date | Yes |
-| Null | Yes |
-| 32-bit Integer (int) | Yes |
-| Timestamp | Yes |
-| 64-bit Integer (long) | Yes |
-| MinKey | Yes |
-| MaxKey | Yes |
-| Decimal128 | Yes |
-| Regular Expression | Yes |
-| JavaScript | Yes |
-| JavaScript (with scope)| Yes |
-| Undefined | Yes |
+| `Double` | Yes |
+| `String` | Yes |
+| `Object` | Yes |
+| `Array` | Yes |
+| `Binary Data` | Yes |
+| `ObjectId` | Yes |
+| `Boolean` | Yes |
+| `Date` | Yes |
+| `Null` | Yes |
+| `32-bit Integer (int)` | Yes |
+| `Timestamp` | Yes |
+| `64-bit Integer (long)` | Yes |
+| `MinKey` | Yes |
+| `MaxKey` | Yes |
+| `Decimal128` | Yes |
+| `Regular Expression` | Yes |
+| `JavaScript` | Yes |
+| `JavaScript (with scope)` | Yes |
+| `Undefined` | Yes |
## Indexes and index properties
We recommend enabling Server Side Retry to ensure requests with larger documents
| Command | Supported | |||
-| Single Field Index | Yes |
-| Compound Index | Yes |
-| Multikey Index | Yes |
-| Text Index | No |
-| 2dsphere | Yes |
-| 2d Index | No |
-| Hashed Index | Yes |
+| `Single Field Index` | Yes |
+| `Compound Index` | Yes |
+| `Multikey Index` | Yes |
+| `Text Index` | No |
+| `2dsphere` | Yes |
+| `2d Index` | No |
+| `Hashed Index` | Yes |
### Index properties | Command | Supported | |||
-| TTL | Yes |
-| Unique | Yes |
-| Partial | No |
-| Case Insensitive | No |
-| Sparse | No |
-| Background | Yes |
+| `TTL` | Yes |
+| `Unique` | Yes |
+| `Partial` | No |
+| `Case Insensitive` | No |
+| `Sparse` | No |
+| `Background` | Yes |
## Operators
We recommend enabling Server Side Retry to ensure requests with larger documents
| Command | Supported | |||
-| $or | Yes |
-| $and | Yes |
-| $not | Yes |
-| $nor | Yes |
+| `or` | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `nor` | Yes |
### Element operators | Command | Supported | |||
-| $exists | Yes |
-| $type | Yes |
+| `exists` | Yes |
+| `type` | Yes |
### Evaluation query operators | Command | Supported | |||
-| $expr | Yes |
-| $jsonSchema | No |
-| $mod | Yes |
-| $regex | Yes |
-| $text | No (Not supported. Use $regex instead.)|
-| $where | No |
+| `expr` | Yes |
+| `jsonSchema` | No |
+| `mod` | Yes |
+| `regex` | Yes |
+| `text` | No (Not supported. Use $regex instead.) |
+| `where` | No |
In $regex queries, left-anchored expressions allow index search. However, using the 'i' modifier (case-insensitivity) and the 'm' modifier (multiline) causes a collection scan in all expressions.
-When there's a need to include '$' or '|', it is best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
+When there's a need to include '$' or '|', it's best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
`find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})`
The first part will use the index to restrict the search to those documents begi
### Array operators
-| Command | Supported |
+| Command | Supported |
|||
-| $all | Yes |
-| $elemMatch | Yes |
-| $size | Yes |
+| `all` | Yes |
+| `elemMatch` | Yes |
+| `size` | Yes |
### Comment operator
-| Command | Supported |
+| Command | Supported |
|||
-| $comment | Yes |
+| `comment` | Yes |
### Projection operators | Command | Supported | |||
-| $elemMatch | Yes |
-| $meta | No |
-| $slice | Yes |
+| `elemMatch` | Yes |
+| `meta` | No |
+| `slice` | Yes |
### Update operators
The first part will use the index to restrict the search to those documents begi
| Command | Supported | |||
-| $inc | Yes |
-| $mul | Yes |
-| $rename | Yes |
-| $setOnInsert | Yes |
-| $set | Yes |
-| $unset | Yes |
-| $min | Yes |
-| $max | Yes |
-| $currentDate | Yes |
+| `inc` | Yes |
+| `mul` | Yes |
+| `rename` | Yes |
+| `setOnInsert` | Yes |
+| `set` | Yes |
+| `unset` | Yes |
+| `min` | Yes |
+| `max` | Yes |
+| `currentDate` | Yes |
#### Array update operators | Command | Supported | |||
-| $ | Yes |
-| $[]| Yes |
-| $[\<identifier\>]| Yes |
-| $addToSet | Yes |
-| $pop | Yes |
-| $pullAll | Yes |
-| $pull | Yes |
-| $push | Yes |
-| $pushAll | Yes |
+| `$` | Yes |
+| `$[]` | Yes |
+| `$[\<identifier\>]` | Yes |
+| `addToSet` | Yes |
+| `pop` | Yes |
+| `pullAll` | Yes |
+| `pull` | Yes |
+| `push` | Yes |
+| `pushAll` | Yes |
#### Update modifiers | Command | Supported | |||
-| $each | Yes |
-| $slice | Yes |
-| $sort | Yes |
-| $position | Yes |
+| `each` | Yes |
+| `slice` | Yes |
+| `sort` | Yes |
+| `position` | Yes |
#### Bitwise update operator | Command | Supported | |||
-| $bit | Yes |
-| $bitsAllSet | No |
-| $bitsAnySet | No |
-| $bitsAllClear | No |
-| $bitsAnyClear | No |
+| `bit` | Yes |
+| `bitsAllSet` | No |
+| `bitsAnySet` | No |
+| `bitsAllClear` | No |
+| `bitsAnyClear` | No |
### Geospatial operators
-Operator | Supported |
- | |
-$geoWithin | Yes |
-$geoIntersects | Yes |
-$near | Yes |
-$nearSphere | Yes |
-$geometry | Yes |
-$minDistance | Yes |
-$maxDistance | Yes |
-$center | No |
-$centerSphere | No |
-$box | No |
-$polygon | No |
+| Operator | Supported |
+| | |
+| `$geoWithin` | Yes |
+| `$geoIntersects` | Yes |
+| `$near` | Yes |
+| `$nearSphere` | Yes |
+| `$geometry` | Yes |
+| `$minDistance` | Yes |
+| `$maxDistance` | Yes |
+| `$center` | No |
+| `$centerSphere` | No |
+| `$box` | No |
+| `$polygon` | No |
## Sort operations
-When using the `findOneAndUpdate` operation with API for MongoDB version 4.0, sort operations on a single field and multiple fields are supported. Sort operations on multiple fields was a limitation of previous wire protocols.
+When you use the `findOneAndUpdate` operation with API for MongoDB version 4.0, sort operations on a single field and multiple fields are supported. Sort operations on multiple fields were a limitation of previous wire protocols.
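As an illustrative sketch (the collection and fields are assumed), both forms are accepted on a 4.0 account:

```javascript
// Single-field sort, as before.
db.restaurants.findOneAndUpdate(
  { cuisine: "Pizza" },
  { $set: { featured: true } },
  { sort: { rating: -1 } }
)

// Multi-field sort, newly supported with the 4.0 wire protocol.
db.restaurants.findOneAndUpdate(
  { cuisine: "Pizza" },
  { $set: { featured: true } },
  { sort: { rating: -1, name: 1 } }
)
```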
## Indexing
-The API for MongoDB [supports a variety of indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+
+The API for MongoDB [supports various indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
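For illustration, a compound index supports multi-field sorts and a unique index enforces uniqueness, as in this sketch (the `people` collection and its fields are hypothetical):

```javascript
// Compound index for sorting on lastName then firstName.
db.people.createIndex({ lastName: 1, firstName: 1 })

// Unique index to prevent duplicate email addresses.
db.people.createIndex({ email: 1 }, { unique: true })
```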
## GridFS
Azure Cosmos DB supports GridFS through any GridFS-compatible Mongo driver.
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB doesn't support manual replication commands.
## Retryable Writes
-Retryable writes enables MongoDB drivers to automatically retry certain write operations in case of failure, but results in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement.
+Retryable writes enable MongoDB drivers to automatically retry certain write operations if there's a failure, but they result in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement.
-For example, with a sharded collection, sharded on key "country": To delete all the documents with the field city = "NYC", the application will need to execute the operation for all shard key (country) values if Retryable writes is enabled.
+For example, with a sharded collection, sharded on key "country": To delete all the documents with the field **city** = `"NYC"`, the application will need to execute the operation for all shard key (country) values if Retryable writes are enabled.
-- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success**
-- `db.coll.deleteMany({"city": "NYC"})` - **Fails with error `ShardKeyNotFound(61)`**
+- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success**
+- `db.coll.deleteMany({"city": "NYC"})` - Fails with error **ShardKeyNotFound(61)**
-To enable the feature, [add the EnableMongoRetryableWrites capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.
+To enable the feature, [add the EnableMongoRetryableWrites capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.
## Sharding
-Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
+Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB doesn't support manual sharding commands, which means you don't have to invoke commands such as `addShard`, `balancerStart`, and `moveChunk`. You only need to specify the shard key while creating the containers or querying the data.
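For illustration, creating a sharded collection only requires supplying the shard key; there are no manual chunk or balancer commands to run. This is a sketch only, with hypothetical database, collection, and key names, and the exact invocation can vary by tool or driver (some shells provide a helper such as `sh.shardCollection`):

```javascript
// Create a sharded collection by declaring the shard key up front.
db.adminCommand({
  shardCollection: "mydb.orders",
  key: { country: "hashed" }
})
```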
## Sessions
-Azure Cosmos DB does not yet support server-side sessions commands.
+Azure Cosmos DB doesn't yet support server-side session commands.
## Time-to-live (TTL)
Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document.
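As a hedged sketch, TTL in Azure Cosmos DB for MongoDB is commonly enabled by creating an expiring index on the internal `_ts` timestamp field; the collection name and expiry value here are illustrative:

```javascript
// Expire documents one hour after their last update, based on _ts.
db.coll.createIndex({ _ts: 1 }, { expireAfterSeconds: 3600 })
```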
## Transactions
-Multi-document transactions are supported within an unsharded collection. Multi-document transactions are not supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds.
+Multi-document transactions are supported within an unsharded collection. Multi-document transactions aren't supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds.
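For illustration, a multi-document transaction within a single unsharded collection might look like the following sketch; the database, collection, and field names are hypothetical:

```javascript
// Transfer a balance between two documents in one transaction.
const session = db.getMongo().startSession();
session.startTransaction();
try {
  const accounts = session.getDatabase("bank").accounts;
  accounts.updateOne({ _id: "A" }, { $inc: { balance: -100 } });
  accounts.updateOne({ _id: "B" }, { $inc: { balance: 100 } });
  session.commitTransaction();   // must complete within the fixed 5-second timeout
} catch (error) {
  session.abortTransaction();
  throw error;
} finally {
  session.endSession();
}
```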
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+Azure Cosmos DB doesn't yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
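For illustration, a client can still pass a write concern, but the service commits the write with quorum regardless; the collection and document here are hypothetical:

```javascript
// The writeConcern option is accepted but ignored; the write is quorum-committed.
db.coll.insertOne(
  { name: "example" },
  { writeConcern: { w: "majority" } }
)
```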
## Next steps
-- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.
-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.
-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
Title: 4.2 server version supported features and syntax in Azure Cosmos DB for MongoDB
description: Learn about Azure Cosmos DB for MongoDB 4.2 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.
Previously updated : 04/05/2022 Last updated : 10/12/2022
# Azure Cosmos DB for MongoDB (4.2 server version): supported features and syntax
[!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
Azure Cosmos DB is Microsoft's globally distributed multi-model database service, offering [multiple database APIs](../choose-api.md). You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all the enterprise capabilities that Azure Cosmos DB provides.
## Protocol Support
-The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When using Azure Cosmos DB for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
+The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
> [!NOTE] > Th