Updates from: 10/14/2022 01:15:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for Web Application Firewall (WAF).
| ![Screenshot of Azure WAF logo](./medi) provides centralized protection of your web applications from common exploits and vulnerabilities. | ![Screenshot of Cloudflare logo](./medi) is a WAF provider that helps organizations protect against malicious attacks that aim to exploit vulnerabilities such as SQLi and XSS. |
-## Identity verification tools
+## Developer tools
Microsoft partners with the following ISVs for tools that can help with implementation of your authentication solution. | ISV partner | Description and integration walkthroughs | |:-|:--|
-| ![Screenshot of a grit ief editor logo.](./medi) is a tool that saves time during authentication deployment. It supports multiple languages without the need to write code. It also has a no code debugger for user journeys.|
+| ![Screenshot of a grit ief editor logo.](./medi) provides a low code/no code experience for developers to create sophisticated authentication user journeys. The tool comes with an integrated debugger and templates for the most-used scenarios.|
## Additional information
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
To map the pattern supported by certificateUserIds, administrators must use expr
You can use the following expression for mapping to SKI and SHA1-PUKEY: ```
-(Contains([alternativeSecurityId],"x509:\<SKI>")>0,[alternativeSecurityId],Error("No altSecurityIdentities SKI match found."))
-& IIF(Contains([alternativeSecurityId],"x509:\<SHA1-PUKEY>")>0,[alternativeSecurityId],Error("No altSecurityIdentities SHA1-PUKEY match found."))
+IF(IsPresent([alternativeSecurityId]),
+ Where($item,[alternativeSecurityId],BitOr(InStr($item, "x509:<SKI>"),InStr($item, "x509:<SHA1-PUKEY>"))>0),[alternativeSecurityId]
+)
``` ## Look up certificateUserIds using Microsoft Graph queries
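For reference, a minimal Microsoft Graph PowerShell sketch of such a lookup (assuming the Microsoft Graph PowerShell SDK and that `certificateUserIds` is exposed under the user's `authorizationInfo` property; the SKI value shown is a hypothetical placeholder, and filtering on this property is an advanced query):

```powershell
# Minimal sketch: list users whose certificateUserIds contain a given value.
# Assumes the Microsoft Graph PowerShell SDK; the SKI value below is a placeholder.
Connect-MgGraph -Scopes 'User.Read.All'

$value  = "X509:<SKI>aB1cD2eF3gH4"   # hypothetical certificateUserIds value
$filter = [uri]::EscapeDataString("authorizationInfo/certificateUserIds/any(x:x eq '$value')")

# Advanced query: requires ConsistencyLevel: eventual plus $count.
Invoke-MgGraphRequest -Method GET `
    -Uri "v1.0/users?`$filter=$filter&`$count=true" `
    -Headers @{ ConsistencyLevel = 'eventual' }
```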
active-directory Concept Mfa Authprovider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-authprovider.md
Title: Azure Multi-Factor Auth Providers - Azure Active Directory
+ Title: Azure AD Multi-Factor Auth Providers - Azure Active Directory
description: When should you use an Auth Provider with Azure MFA? Previously updated : 11/21/2019 Last updated : 10/10/2022
-# When to use an Azure Multi-Factor Authentication Provider
+# When to use an Azure AD Multi-Factor Authentication provider
> [!IMPORTANT] > Effective September 1st, 2018 new auth providers may no longer be created. Existing auth providers may continue to be used and updated, but migration is no longer possible. Multi-factor authentication will continue to be available as a feature in Azure AD Premium licenses.
-Two-step verification is available by default for global administrators who have Azure Active Directory, and Microsoft 365 users. However, if you wish to take advantage of [advanced features](howto-mfa-mfasettings.md) then you should purchase the full version of Azure Multi-Factor Authentication (MFA).
+Two-step verification is available by default for Global Administrators who have Azure Active Directory, and Microsoft 365 users. However, if you wish to take advantage of [advanced features](howto-mfa-mfasettings.md) then you should purchase the full version of Azure AD Multi-Factor Authentication (MFA).
-An Azure Multi-Factor Auth Provider is used to take advantage of features provided by Azure Multi-Factor Authentication for users who **do not have licenses**.
+An Azure AD Multi-Factor Auth Provider is used to take advantage of features provided by Azure AD Multi-Factor Authentication for users who **do not have licenses**.
## Caveats related to the Azure MFA SDK Note the SDK has been deprecated and will only continue to work until November 14, 2018. After that time, calls to the SDK will fail.
-## What is an MFA Provider?
+## What is an MFA provider?
-There are two types of Auth providers, and the distinction is around how your Azure subscription is charged. The per-authentication option calculates the number of authentications performed against your tenant in a month. This option is best if you have a number of users authenticating only occasionally. The per-user option calculates the number of individuals in your tenant who perform two-step verification in a month. This option is best if you have some users with licenses but need to extend MFA to more users beyond your licensing limits.
+There are two types of Auth providers, and the distinction is around how your Azure subscription is charged. The per-authentication option calculates the number of authentications performed against your tenant in a month. This option is best if some users authenticate only occasionally. The per-user option calculates the number of users who are eligible to perform MFA, which is all users in Azure AD, and all enabled users in MFA Server. This option is best if some users have licenses but you need to extend MFA to more users beyond your licensing limits.
-## Manage your MFA Provider
+## Manage your MFA provider
-You cannot change the usage model (per enabled user or per authentication) after an MFA provider is created.
+You can't change the usage model (per enabled user or per authentication) after an MFA provider is created.
If you purchased enough licenses to cover all users that are enabled for MFA, you can delete the MFA provider altogether.
-If your MFA provider is not linked to an Azure AD tenant, or you link the new MFA provider to a different Azure AD tenant, user settings and configuration options are not transferred. Also, existing Azure MFA Servers need to be reactivated using activation credentials generated through the MFA Provider.
+If your MFA provider isn't linked to an Azure AD tenant, or you link the new MFA provider to a different Azure AD tenant, user settings and configuration options aren't transferred. Also, existing Azure MFA Servers need to be reactivated using activation credentials generated through the MFA Provider.
### Removing an authentication provider
Azure MFA Servers linked to providers will need to be reactivated using credenti
![Delete an auth provider from the Azure portal](./media/concept-mfa-authprovider/authentication-provider-removal.png)
-When you have confirmed that all settings have been migrated, you can browse to the **Azure portal** > **Azure Active Directory** > **Security** > **MFA** > **Providers** and select the ellipses **...** and select **Delete**.
+After you confirm that all settings are migrated, you can browse to the **Azure portal** > **Azure Active Directory** > **Security** > **MFA** > **Providers** and select the ellipses **...** and select **Delete**.
> [!WARNING] > Deleting an authentication provider will delete any reporting information associated with that provider. You may want to save activity reports before deleting your provider.
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Previously updated : 03/18/2022 Last updated : 10/13/2022
# Enable Azure Active Directory self-service password reset at the Windows sign-in screen
-Self-service password reset (SSPR) gives users in Azure Active Directory (Azure AD) the ability to change or reset their password, with no administrator or help desk involvement. Typically, users open a web browser on another device to access the [SSPR portal](https://aka.ms/sspr). To improve the experience on computers that run Windows 7, 8, 8.1, and 10, you can enable users to reset their password at the Windows sign-in screen.
+Self-service password reset (SSPR) gives users in Azure Active Directory (Azure AD) the ability to change or reset their password, with no administrator or help desk involvement. Typically, users open a web browser on another device to access the [SSPR portal](https://aka.ms/sspr). To improve the experience on computers that run Windows 7, 8, 8.1, 10, and 11, you can enable users to reset their password at the Windows sign-in screen.
-![Example Windows 7 and 10 login screens with SSPR link shown](./media/howto-sspr-windows/windows-reset-password.png)
+![Example Windows login screens with SSPR link shown](./media/howto-sspr-windows/windows-reset-password.png)
> [!IMPORTANT] > This tutorial shows an administrator how to enable SSPR for Windows devices in an enterprise.
The following limitations apply to using SSPR from the Windows sign-in screen:
- Hybrid Azure AD joined machines must have network connectivity line of sight to a domain controller to use the new password and update cached credentials. This means that devices must either be on the organization's internal network or on a VPN with network access to an on-premises domain controller. - If using an image, prior to running sysprep ensure that the web cache is cleared for the built-in Administrator prior to performing the CopyProfile step. More information about this step can be found in the support article [Performance poor when using custom default user profile](https://support.microsoft.com/help/4056823/performance-issue-with-custom-default-user-profile). - The following settings are known to interfere with the ability to use and reset passwords on Windows 10 devices:
- - If Ctrl+Alt+Del is required by policy in Windows 10, **Reset password** won't work.
- If lock screen notifications are turned off, **Reset password** won't work. - *HideFastUserSwitching* is set to enabled or 1 - *DontDisplayLastUserName* is set to enabled or 1
The following limitations apply to using SSPR from the Windows sign-in screen:
> These limitations also apply to Windows Hello for Business PIN reset from the device lock screen. >
-## Windows 10 password reset
+## Windows 11 and 10 password reset
-To configure a Windows 10 device for SSPR at the sign-in screen, review the following prerequisites and configuration steps.
+To configure a Windows 11 or 10 device for SSPR at the sign-in screen, review the following prerequisites and configuration steps.
-### Windows 10 prerequisites
+### Windows 11 and 10 prerequisites
- An administrator [must enable Azure AD self-service password reset from the Azure portal](tutorial-enable-sspr.md). - Users must register for SSPR before using this feature at [https://aka.ms/ssprsetup](https://aka.ms/ssprsetup)
To configure a Windows 10 device for SSPR at the sign-in screen, review the foll
- Azure AD joined - Hybrid Azure AD joined
-### Enable for Windows 10 using Microsoft Endpoint Manager
+### Enable for Windows 11 and 10 using Microsoft Endpoint Manager
Deploying the configuration change to enable SSPR from the login screen using Microsoft Endpoint Manager is the most flexible method. Microsoft Endpoint Manager allows you to deploy the configuration change to a specific group of machines you define. This method requires Microsoft Endpoint Manager enrollment of the device.
Deploying the configuration change to enable SSPR from the login screen using Mi
1. Sign in to the [Azure portal](https://portal.azure.com) and select **Endpoint Manager**. 1. Create a new device configuration profile by going to **Device configuration** > **Profiles**, then select **+ Create Profile**
- - For **Platform** choose *Windows 10 and later*
+ - For **Platform** choose *Windows 11 and later*
- For **Profile type**, choose *Custom*
-1. Select **Create**, then provide a meaningful name for the profile, such as *Windows 10 sign-in screen SSPR*
+1. Select **Create**, then provide a meaningful name for the profile, such as *Windows 11 sign-in screen SSPR*
Optionally, provide a meaningful description of the profile, then select **Next**. 1. Under *Configuration settings*, select **Add** and provide the following OMA-URI setting to enable the reset password link:
Deploying the configuration change to enable SSPR from the login screen using Mi
1. Configure applicability rules as desired for your environment, such as to *Assign profile if OS edition is Windows 10 Enterprise*, then select **Next**. 1. Review your profile, then select **Create**.
-### Enable for Windows 10 using the Registry
+### Enable for Windows 11 and 10 using the Registry
To enable SSPR at the sign-in screen using a registry key, complete the following steps:
To enable SSPR at the sign-in screen using a registry key, complete the followin
"AllowPasswordReset"=dword:00000001 ```
-### Troubleshooting Windows 10 password reset
+### Troubleshooting Windows 11 and 10 password reset
If you have problems with using SSPR from the Windows sign-in screen, the Azure AD audit log includes information about the IP address and *ClientType* where the password reset occurred, as shown in the following example output: ![Example Windows 7 password reset in the Azure AD Audit log](media/howto-sspr-windows/windows-7-sspr-azure-ad-audit-log.png)
-When users reset their password from the sign-in screen of a Windows 10 device, a low-privilege temporary account called `defaultuser1` is created. This account is used to keep the password reset process secure.
+When users reset their password from the sign-in screen of a Windows 11 or 10 device, a low-privilege temporary account called `defaultuser1` is created. This account is used to keep the password reset process secure.
The account itself has a randomly generated password, which is validated against an organization's password policy, doesn't show up for device sign-in, and is automatically removed after the user resets their password. Multiple `defaultuser` profiles may exist but can be safely ignored.
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
The following device attributes can be used with the filter for devices conditio
| | | | | | deviceId | Equals, NotEquals, In, NotIn | A valid deviceId that is a GUID | (device.deviceid -eq "498c4de7-1aee-4ded-8d5d-000000000000") | | displayName | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith, Contains, NotContains, In, NotIn | Any string | (device.displayName -contains "ABC") |
-| deviceOwnership | Equals, NotEquals | Supported values are "Personal" for bring your own devices and "Company" for corprate owned devices | (device.deviceOwnership -eq "Company") |
+| deviceOwnership | Equals, NotEquals | Supported values are "Personal" for bring your own devices and "Company" for corporate owned devices | (device.deviceOwnership -eq "Company") |
| isCompliant | Equals, NotEquals | Supported values are "True" for compliant devices and "False" for non compliant devices | (device.isCompliant -eq "True") | | manufacturer | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith, Contains, NotContains, In, NotIn | Any string | (device.manufacturer -startsWith "Microsoft") | | mdmAppId | Equals, NotEquals, In, NotIn | A valid MDM application ID | (device.mdmAppId -in ["0000000a-0000-0000-c000-000000000000"]) |
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Previously updated : 08/30/2021 Last updated : 09/30/2022 -+ # Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow
-The OAuth 2.0 On-Behalf-Of flow (OBO) serves the use case where an application invokes a service/web API, which in turn needs to call another service/web API. The idea is to propagate the delegated user identity and permissions through the request chain. For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform, on behalf of the user.
+The on-behalf-of (OBO) flow describes the scenario of a web API using an identity other than its own to call another web API. Referred to as delegation in OAuth, the intent is to pass a user's identity and permissions through the request chain.
-The OBO flow only works for user principals at this time. A service principal cannot request an app-only token, send it to an API, and have that API exchange that for another token that represents that original service principal. Additionally, the OBO flow is focused on acting on another party's behalf, known as a delegated scenario - this means that it uses only delegated *scopes*, and not application *roles*, for reasoning about permissions. *Roles* remain attached to the principal (the user) in the flow, never the application operating on the users behalf.
+For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform. It only uses delegated *scopes* and not application *roles*. *Roles* remain attached to the principal (the user) and never to the application operating on the user's behalf. This prevents the user from gaining permission to resources they shouldn't have access to.
-This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
+This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also refer to the [sample apps that use MSAL](sample-v2-code.md) for examples.
[!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)] ## Client limitations
-As of May 2018, some implicit-flow derived `id_token` can't be used for OBO flow. Single-page apps (SPAs) should pass an **access** token to a middle-tier confidential client to perform OBO flows instead.
+If a service principal requested an app-only token and sent it to an API, that API couldn't exchange it for a token that represents the original service principal. This is because the OBO flow only works for user principals. Instead, it must use the [client credentials flow](v2-oauth2-client-creds-grant-flow.md) to get an app-only token. Single-page apps (SPAs) should pass an access token to a middle-tier confidential client to perform OBO flows instead.
-If a client uses the implicit flow to get an id_token, and that client also has wildcards in a reply URL, the id_token can't be used for an OBO flow. However, access tokens acquired through the implicit grant flow can still be redeemed by a confidential client even if the initiating client has a wildcard reply URL registered.
+If a client uses the implicit flow to get an id_token and also has wildcards in a reply URL, the id_token can't be used for an OBO flow. A wildcard is a URL that ends with a `*` character. For example, if `https://myapp.com/*` was the reply URL, the id_token can't be used because it isn't specific enough to identify the client. This would prevent the token from being issued. However, access tokens acquired through the implicit grant flow can be redeemed by a confidential client, even if the initiating client has a wildcard reply URL registered. This is because the confidential client can identify the client that acquired the access token. The confidential client can then use the access token to acquire a new access token for the downstream API.
-Additionally, applications with custom signing keys cannot be used as middle-tier API's in the OBO flow (this includes enterprise applications configured for single sign-on). This will result in an error because tokens signed with a key controlled by the client cannot be safely accepted.
+Additionally, applications with custom signing keys can't be used as middle-tier APIs in the OBO flow. This includes enterprise applications configured for single sign-on. If the middle-tier API uses a custom signing key, the downstream API won't be able to validate the signature of the access token that is passed to it. This will result in an error because tokens signed with a key controlled by the client can't be safely accepted.
## Protocol diagram
-Assume that the user has been authenticated on an application using the [OAuth 2.0 authorization code grant flow](v2-oauth2-auth-code-flow.md) or another login flow. At this point, the application has an access token *for API A* (token A) with the user's claims and consent to access the middle-tier web API (API A). Now, API A needs to make an authenticated request to the downstream web API (API B).
+Assume that the user has been authenticated on an application using the [OAuth 2.0 authorization code grant flow](v2-oauth2-auth-code-flow.md) or another sign-in flow. At this point, the application has an access token for *API A* (token A) with the user's claims and consent to access the middle-tier web API (API A). Now, API A needs to make an authenticated request to the downstream web API (API B).
The steps that follow constitute the OBO flow and are explained with the help of the following diagram.
The steps that follow constitute the OBO flow and are explained with the help of
1. Token B is set by API A in the authorization header of the request to API B. 1. Data from the secured resource is returned by API B to API A, then to the client.
-In this scenario, the middle-tier service has no user interaction to get the user's consent to access the downstream API. Therefore, the option to grant access to the downstream API is presented upfront as a part of the consent step during authentication. To learn how to set this up for your app, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application).
+In this scenario, the middle-tier service has no user interaction to get the user's consent to access the downstream API. Therefore, the option to grant access to the downstream API is presented upfront as part of the consent step during authentication. To learn how to implement this in your app, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application).
## Middle-tier access token request
When using a shared secret, a service-to-service access token request contains t
| `grant_type` | Required | The type of token request. For a request using a JWT, the value must be `urn:ietf:params:oauth:grant-type:jwt-bearer`. | | `client_id` | Required | The application (client) ID that [the Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page has assigned to your app. | | `client_secret` | Required | The client secret that you generated for your app in the Azure portal - App registrations page. The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
-| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications cannot redeem a token for a different app (so e.g. if a client sends an API a token meant for MS Graph, the API cannot redeem it using OBO. It should instead reject the token). |
+| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications can't redeem a token for a different app (for example, if a client sends an API a token meant for Microsoft Graph, the API can't redeem it using OBO. It should instead reject the token). |
| `scope` | Required | A space separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). | | `requested_token_use` | Required | Specifies how the request should be processed. In the OBO flow, the value must be set to `on_behalf_of`. | #### Example
-The following HTTP POST requests an access token and refresh token with `user.read` scope for the https://graph.microsoft.com web API.
+The following HTTP POST requests an access token and refresh token with `user.read` scope for the https://graph.microsoft.com web API. The request is authenticated with the client secret and is made by a confidential client.
```HTTP //line breaks for legibility only
client_id=535fb089-9ff3-47b6-9bfb-4f1264799865
### Second case: Access token request with a certificate
-A service-to-service access token request with a certificate contains the following parameters:
+A service-to-service access token request with a certificate contains the following parameters in addition to the parameters from the previous example:
| Parameter | Type | Description | | | | |
A service-to-service access token request with a certificate contains the follow
| `client_id` | Required | The application (client) ID that [the Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page has assigned to your app. | | `client_assertion_type` | Required | The value must be `urn:ietf:params:oauth:client-assertion-type:jwt-bearer`. | | `client_assertion` | Required | An assertion (a JSON web token) that you need to create and sign with the certificate you registered as credentials for your application. To learn how to register your certificate and the format of the assertion, see [certificate credentials](active-directory-certificate-credentials.md). |
-| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications cannot redeem a token for a different app (so e.g. if a client sends an API a token meant for MS Graph, the API cannot redeem it using OBO. It should instead reject the token). |
+| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications can't redeem a token for a different app (for example, if a client sends an API a token meant for MS Graph, the API can't redeem it using OBO. It should instead reject the token). |
| `requested_token_use` | Required | Specifies how the request should be processed. In the OBO flow, the value must be set to `on_behalf_of`. | | `scope` | Required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md).|
-Notice that the parameters are almost the same as in the case of the request by shared secret except that the `client_secret` parameter is replaced by two parameters: a `client_assertion_type` and `client_assertion`.
+Notice that the parameters are almost the same as in the case of the request by shared secret except that the `client_secret` parameter is replaced by two parameters: a `client_assertion_type` and `client_assertion`. The `client_assertion_type` parameter is set to `urn:ietf:params:oauth:client-assertion-type:jwt-bearer` and the `client_assertion` parameter is set to the JWT token that is signed with the private key of the certificate.
#### Example
-The following HTTP POST requests an access token with `user.read` scope for the https://graph.microsoft.com web API with a certificate.
+The following HTTP POST requests an access token with `user.read` scope for the https://graph.microsoft.com web API with a certificate. Instead of a client secret, the request is authenticated with a client assertion signed by the certificate's private key, and it is made by a confidential client.
```HTTP // line breaks for legibility only
A success response is a JSON OAuth 2.0 response with the following parameters.
### Success response example
-The following example shows a success response to a request for an access token for the https://graph.microsoft.com web API.
+The following example shows a success response to a request for an access token for the https://graph.microsoft.com web API. The response contains an access token and a refresh token.
```json {
The following example shows a success response to a request for an access token
} ```
-The above access token is a v1.0-formatted token for Microsoft Graph. This is because the token format is based on the **resource** being accessed and unrelated to the endpoints used to request it. The Microsoft Graph is setup to accept v1.0 tokens, so the Microsoft identity platform produces v1.0 access tokens when a client requests tokens for Microsoft Graph. Other apps may indicate that they want v2.0-format tokens, v1.0-format tokens, or even proprietary or encrypted token formats. Both the v1.0 and v2.0 endpoints can emit either format of token - this way the resource can always get the right format of token regardless of how or where the token was requested by the client.
+This access token is a v1.0-formatted token for Microsoft Graph. This is because the token format is based on the **resource** being accessed and unrelated to the endpoints used to request it. The Microsoft Graph is set up to accept v1.0 tokens, so the Microsoft identity platform produces v1.0 access tokens when a client requests tokens for Microsoft Graph. Other apps may indicate that they want v2.0-format tokens, v1.0-format tokens, or even proprietary or encrypted token formats. Both the v1.0 and v2.0 endpoints can emit either format of token. This way, the resource can always get the right format of token regardless of how or where the token was requested by the client.
[!INCLUDE [remind-not-to-validate-access-tokens](includes/remind-not-to-validate-access-tokens.md)]
A service-to-service request for a SAML assertion contains the following paramet
| assertion |required | The value of the access token used in the request.| | client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. | | client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
-| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). SAML itself doesn't have a concept of scopes, but here it is used to identify the target SAML application for which you want to receive a token. For this OBO flow, the scope value must always be the SAML Entity ID with `/.default` appended. For example, in case the SAML application's Entity ID is `https://testapp.contoso.com`, then the requested scope should be `https://testapp.contoso.com/.default`. In case the Entity ID doesn't start with a URI scheme such as `https:`, Azure AD prefixes the Entity ID with `spn:`. In that case you must request the scope `spn:<EntityID>/.default`, for example `spn:testapp/.default` in case the Entity ID is `testapp`. Note that the scope value you request here determines the resulting `Audience` element in the SAML token, which may be important to the SAML application receiving the token. |
+| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). SAML itself doesn't have a concept of scopes, but is used to identify the target SAML application for which you want to receive a token. For this OBO flow, the scope value must always be the SAML Entity ID with `/.default` appended. For example, in case the SAML application's Entity ID is `https://testapp.contoso.com`, then the requested scope should be `https://testapp.contoso.com/.default`. In case the Entity ID doesn't start with a URI scheme such as `https:`, Azure AD prefixes the Entity ID with `spn:`. In that case you must request the scope `spn:<EntityID>/.default`, for example `spn:testapp/.default` in case the Entity ID is `testapp`. The scope value you request here determines the resulting `Audience` element in the SAML token, which may be important to the SAML application receiving the token. |
| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be `on_behalf_of`. | | requested_token_type | required | Specifies the type of token requested. The value can be `urn:ietf:params:oauth:token-type:saml2` or `urn:ietf:params:oauth:token-type:saml1` depending on the requirements of the accessed resource. | The response contains a SAML token encoded in UTF8 and Base64url. -- **SubjectConfirmationData for a SAML assertion sourced from an OBO call**: If the target application requires a `Recipient` value in `SubjectConfirmationData`, then the value must be configured as the first non-wildcard Reply URL in the resource application configuration. Since the default Reply URL isn't used to determine the `Recipient` value, you might have to reorder the Reply URLs in the application configuration.
+- **SubjectConfirmationData for a SAML assertion sourced from an OBO call**: If the target application requires a `Recipient` value in `SubjectConfirmationData`, then the value must be configured as the first non-wildcard Reply URL in the resource application configuration. Since the default Reply URL isn't used to determine the `Recipient` value, you might have to reorder the Reply URLs in the application configuration to ensure that the first non-wildcard Reply URL is used. For more information, see [Reply URLs](reply-url.md).
- **The SubjectConfirmationData node**: The node can't contain an `InResponseTo` attribute since it's not part of a SAML response. The application receiving the SAML token must be able to accept the SAML assertion without an `InResponseTo` attribute. - **API permissions**: You have to [add the necessary API permissions](quickstart-configure-app-access-web-apis.md) on the middle-tier application to allow access to the SAML application, so that it can request a token for the `/.default` scope of the SAML application. - **Consent**: Consent must have been granted to receive a SAML token containing user data on an OAuth flow. For information, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application) below.
The response contains a SAML token encoded in UTF8 and Base64url.
## Gaining consent for the middle-tier application
-Depending on the architecture or usage of your application, you may consider different strategies for ensuring that the OBO flow is successful. In all cases, the ultimate goal is to ensure proper consent is given so that the client app can call the middle-tier app, and the middle tier app has permission to call the back-end resource.
-
-> [!NOTE]
-> Previously the Microsoft account system (personal accounts) did not support the "known client applications" field, nor could it show combined consent. This has been added and all apps in the Microsoft identity platform can use the known client application approach for getting consent for OBO calls.
+The goal of the OBO flow is to ensure proper consent is given so that the client app can call the middle-tier app and the middle-tier app has permission to call the back-end resource. Depending on the architecture or usage of your application, you may want to consider the following to ensure that the OBO flow is successful.
### .default and combined consent
-The middle tier application adds the client to the [known client applications list](reference-app-manifest.md#knownclientapplications-attribute) (`knownClientApplications`) in its manifest. If a consent prompt is triggered by the client, the consent flow will be both for itself and the middle tier application. On the Microsoft identity platform, this is done using the [`.default` scope](v2-permissions-and-consent.md#the-default-scope). When triggering a consent screen using known client applications and `.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
+The middle tier application adds the client to the [known client applications list](reference-app-manifest.md#knownclientapplications-attribute) (`knownClientApplications`) in its manifest. If a consent prompt is triggered by the client, the consent flow will be both for itself and the middle tier application. On the Microsoft identity platform, this is done using the [`.default` scope](v2-permissions-and-consent.md#the-default-scope). The `.default` scope is a special scope that is used to request consent to access all the scopes that the application has permissions for. This is useful when the application needs to access multiple resources, but the user should only be prompted for consent once.
+
+When triggering a consent screen using known client applications and `.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
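As an illustration, here is a hedged sketch that adds a client to a middle-tier API's `knownClientApplications` list by using the Microsoft Graph PowerShell SDK (both IDs below are placeholders; the app registration manifest is the authoritative place to verify the change):

```powershell
# Minimal sketch: register a client in the middle-tier API's knownClientApplications list.
# Assumes the Microsoft Graph PowerShell SDK; both IDs are placeholders.
Connect-MgGraph -Scopes 'Application.ReadWrite.All'

$middleTierObjectId = '11111111-1111-1111-1111-111111111111'   # middle-tier app registration object ID
$clientAppId        = '22222222-2222-2222-2222-222222222222'   # client's application (client) ID

Update-MgApplication -ApplicationId $middleTierObjectId `
    -Api @{ KnownClientApplications = @([guid]$clientAppId) }
```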
-The resource service (API) identified in the request should be the API for which the client application is requesting an access token as a result of the user's sign-in. For example, `scope=openid https://middle-tier-api.example.com/.default` (to request an access token for the middle tier API), or `scope=openid offline_access .default` (when a resource is not identified, it defaults to Microsoft Graph).
+The resource service (API) identified in the request should be the API for which the client application is requesting an access token as a result of the user's sign-in. For example, `scope=openid https://middle-tier-api.example.com/.default` (to request an access token for the middle tier API), or `scope=openid offline_access .default` (when a resource isn't identified, it defaults to Microsoft Graph).
-Regardless of which API is identified in the authorization request, the consent prompt will be a combined consent prompt including all required permissions configured for the client app, as well as all required permissions configured for each middle tier API listed in the client's required permissions list, and which have identified the client as a known client application.
+Regardless of which API is identified in the authorization request, the consent prompt is a combined prompt that includes all required permissions configured for the client app. It also includes all required permissions configured for each middle-tier API that is listed in the client's required permissions list and that has identified the client as a known client application.
### Pre-authorized applications
-Resources can indicate that a given application always has permission to receive certain scopes. This is primarily useful to make connections between a front-end client and a back-end resource more seamless. A resource can [declare multiple pre-authorized applications](reference-app-manifest.md#preauthorizedapplications-attribute) (`preAuthorizedApplications`) in its manifest - any such application can request these permissions in an OBO flow and receive them without the user providing consent.
+Resources can indicate that a given application always has permission to receive certain scopes. This is useful to make connections between a front-end client and a back-end resource more seamless. A resource can [declare multiple pre-authorized applications](reference-app-manifest.md#preauthorizedapplications-attribute) (`preAuthorizedApplications`) in its manifest. Any such application can request these permissions in an OBO flow and receive them without the user providing consent.
### Admin consent
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
Previously updated : 08/31/2022 Last updated : 10/12/2022
To add B2B collaboration users to the directory, follow these steps:
> Group email addresses aren't supported; enter the email address for an individual. Also, some email providers allow users to add a plus symbol (+) and additional text to their email addresses to help with things like inbox filtering. However, Azure AD doesn't currently support plus symbols in email addresses. To avoid delivery issues, omit the plus symbol and any characters following it up to the @ symbol. 6. Select **Invite** to automatically send the invitation to the guest user.
-After you send the invitation, the user account is automatically added to the directory as a guest.
+After you send the invitation, the user account is automatically added to the directory as a guest.
![Screenshot showing the user list including the new Guest user.](media/add-users-administrator//guest-user-type.png)
+The user is added to your directory with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*, for example, *john_contoso.com#EXT#\@fabrikam.onmicrosoft.com*, where fabrikam.onmicrosoft.com is the organization from which you sent the invitations. ([Learn more about B2B collaboration user properties](user-properties.md).)
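If you'd rather script the invitation than send it from the portal, the following is a minimal sketch using the Microsoft Graph PowerShell SDK (the email address and redirect URL are placeholders):

```powershell
# Minimal sketch: invite a B2B collaboration guest user programmatically.
# Assumes the Microsoft Graph PowerShell SDK; the address and redirect URL are placeholders.
Connect-MgGraph -Scopes 'User.Invite.All'

New-MgInvitation `
    -InvitedUserEmailAddress 'john@contoso.com' `
    -InviteRedirectUrl 'https://myapps.microsoft.com' `
    -SendInvitationMessage
```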
## Add guest users to a group If you need to manually add B2B collaboration users to a group, follow these steps:
active-directory Add Users Information Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-information-worker.md
# How users in your organization can invite guest users to an app
-After a guest user has been added to the directory in Azure AD, an application owner can send the guest user a direct link to the app they want to share. Azure AD admins can also set up self-service management for gallery or SAML-based apps in their Azure AD tenant. This way, application owners can manage their own guest users, even if the guest users haven't been added to the directory yet. When an app is configured for self-service, the application owner uses their Access Panel to invite a guest user to an app or add a guest user to a group that has access to the app. Self-service app management for gallery and SAML-based apps requires some initial setup by an admin. The following is a summary of the setup steps (for more detailed instructions, see [Prerequisites](#prerequisites) later on this page):
+After a guest user has been added to the directory in Azure AD, an application owner can send the guest user a direct link to the app they want to share. Azure AD admins can also set up self-service management for gallery or SAML-based apps in their Azure AD tenant. This way, application owners can manage their own guest users, even if the guest users haven't been added to the directory yet. When an app is configured for self-service, the application owner uses their Access Panel to invite a guest user to an app or add a guest user to a group that has access to the app. Self-service app management for gallery and SAML-based apps requires some initial setup by an admin. Follow the summary of the setup steps (for more detailed instructions, see [Prerequisites](#prerequisites) later on this page):
- Enable self-service group management for your tenant - Create a group to assign to the app and make the user an owner
After a guest user has been added to the directory in Azure AD, an application o
> [!NOTE] > * This article describes how to set up self-service management for gallery and SAML-based apps that you've added to your Azure AD tenant. You can also [set up self-service Microsoft 365 groups](../enterprise-users/groups-self-service-management.md) so your users can manage access to their own Microsoft 365 groups. For more ways users can share Office files and apps with guest users, see [Guest access in Microsoft 365 groups](https://support.office.com/article/guest-access-in-office-365-groups-bfc7a840-868f-4fd6-a390-f347bf51aff6) and [Share SharePoint files or folders](https://support.office.com/article/share-sharepoint-files-or-folders-1fe37332-0f9a-4719-970e-d2578da4941c).
-> * Users are only able to invite guests if they have the **Guest inviter** role.
+> * Users are only able to invite guests if they have the [**Guest inviter**](../roles/permissions-reference.md#guest-inviter) role.
## Invite a guest user to an app from the Access Panel After an app is configured for self-service, application owners can use their own Access Panel to invite a guest user to the app they want to share. The guest user doesn't necessarily need to be added to Azure AD in advance. 1. Open your Access Panel by going to `https://myapps.microsoft.com`. 2. Point to the app, select the ellipses (**...**), and then select **Manage app**.
-
- ![Screenshot showing the Manage app sub-menu for the Salesforce app](media/add-users-iw/access-panel-manage-app.png)
-
-3. At the top of the users list, select **+** on the right-hand side.
+
+3. At the top of the users list, select **+** on the right-hand side.
4. In the **Add members** search box, type the email address for the guest user. Optionally, include a welcome message.
- ![Screenshot showing the Add members window for adding a guest](media/add-users-iw/access-panel-invitation.png)
+ 5. Select **Add** to send an invitation to the guest user. After you send the invitation, the user account is automatically added to the directory as a guest.
After an app is configured for self-service, application owners can invite guest
2. Open your Access Panel by going to `https://myapps.microsoft.com`. 3. Select the **Groups** app.
- ![Screenshot showing the Groups app in the Access Panel](media/add-users-iw/access-panel-groups.png)
4. Under **Groups I own**, select the group that has access to the app you want to share.
- ![Screenshot showing where to select a group under the Groups I own](media/add-users-iw/access-panel-groups-i-own.png)
5. At the top of the group members list, select **+**.
- ![Screenshot showing the plus symbol for adding members to the group](media/add-users-iw/access-panel-groups-add-member.png)
6. In the **Add members** search box, type the email address for the guest user. Optionally, include a welcome message.
- ![Screenshot showing the Add members window for adding a guest](media/add-users-iw/access-panel-invitation.png)
7. Select **Add** to automatically send the invitation to the guest user. After you send the invitation, the user account is automatically added to the directory as a guest.
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
Previously updated : 06/30/2022 Last updated : 10/12/2022
For example, say Contoso (the resource tenant) trusts MFA claims from Fabrikam.
For information about Conditional Access and Teams, see [Overview of security and compliance](/microsoftteams/security-compliance-overview) in the Microsoft Teams documentation.
+## Trust settings for device compliance
+
+In your cross-tenant access settings, you can use **Trust settings** to trust claims from an external user's home tenant about whether the user's device meets their device compliance policies or is hybrid Azure AD joined. When device trust settings are enabled, Azure AD checks a user's authentication session for a device claim. If the session contains a device claim indicating that the policies have already been met in the user's home tenant, the external user is granted seamless sign-on to your shared resource. You can enable device trust settings for all Azure AD organizations or individual organizations. ([Learn more](authentication-conditional-access.md#device-compliance-and-hybrid-azure-ad-joined-device-policies))
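As an illustration, the following is a hedged sketch that enables those device trust settings for all organizations by patching the default cross-tenant access policy through Microsoft Graph (assuming the Microsoft Graph PowerShell SDK and the `inboundTrust` property names of the cross-tenant access policy resource; verify against the current API before use):

```powershell
# Minimal sketch: accept compliant-device and hybrid Azure AD joined claims from all
# external Azure AD organizations by patching the default cross-tenant access policy.
# Assumes the Microsoft Graph PowerShell SDK; property names are per the Graph
# crossTenantAccessPolicy resource and should be verified before use.
Connect-MgGraph -Scopes 'Policy.ReadWrite.CrossTenantAccess'

$body = @{
    inboundTrust = @{
        isCompliantDeviceAccepted           = $true
        isHybridAzureADJoinedDeviceAccepted = $true
    }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method PATCH `
    -Uri 'v1.0/policies/crossTenantAccessPolicy/default' `
    -Body $body -ContentType 'application/json'
```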
+ ## B2B direct connect user experience Currently, B2B direct connect enables the Teams Connect shared channels feature. B2B direct connect users can access an external organization's Teams shared channel without having to switch tenants or sign in with a different account. The B2B direct connect user's access is determined by the shared channel's policies.
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Previously updated : 08/30/2022 Last updated : 10/12/2022
Now, let's see what an Azure AD B2B collaboration user looks like in Azure AD.
### Before invitation redemption
-B2B collaboration user accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesn't have any credentials associated with it because authentication is performed by the guest user's identity provider. The **Issuer** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. In the portal, the **Invitation accepted** property in the invited user's Azure AD portal profile will be set to **No** and querying for `externalUserState` using the Microsoft Graph API will return `Pending Acceptance`.
+B2B collaboration user accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesn't have any credentials associated with it because authentication is performed by the guest user's identity provider. The **Identities** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. In the portal, the invited user's profile will show an **External user state** of **PendingAcceptance**. Querying for `externalUserState` using the Microsoft Graph API will return `PendingAcceptance`.
![Screenshot of user profile before redemption.](media/user-properties/before-redemption.png) ### After invitation redemption
-After the B2B collaboration user accepts the invitation, the **Issuer** property is updated based on the user's identity provider.
+After the B2B collaboration user accepts the invitation, the **Identities** property is updated based on the user's identity provider.
-If the B2B collaboration user is using credentials from another Azure AD organization, the **Issuer** is **External Azure AD**.
+- If the B2B collaboration user is using a Microsoft account or credentials from another external identity provider, **Identities** reflects the identity provider, for example **Microsoft Account**, **google.com**, or **facebook.com**.
-![Screenshot of user profile after redemption.](media/user-properties/after-redemption-state-1.png)
+ ![Screenshot of user profile after redemption.](media/user-properties/after-redemption-state-1.png)
-If the B2B collaboration user is using a Microsoft account or credentials from another external identity provider, the **Issuer** reflects the identity provider, for example **Microsoft Account**, **google.com**, or **facebook.com**.
+- If the B2B collaboration user is using credentials from another Azure AD organization, **Identities** is **External Azure AD**.
-![Screenshot of user profile showing an external identity provider.](media/user-properties/after-redemption-state-2.png)
-
-For external users who are using internal credentials, the **Issuer** property is set to the host's organization domain. The **Directory synced** property is **Yes** if the account is homed in the organization's on-premises Active Directory and synced with Azure AD, or **No** if the account is a cloud-only Azure AD account. The directory sync information is also available via the `onPremisesSyncEnabled` property in Microsoft Graph.
+- For external users who are using internal credentials, the **Identities** property is set to the host's organization domain. The **Directory synced** property is **Yes** if the account is homed in the organization's on-premises Active Directory and synced with Azure AD, or **No** if the account is a cloud-only Azure AD account. The directory sync information is also available via the `onPremisesSyncEnabled` property in Microsoft Graph.
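For example, here is a minimal Microsoft Graph PowerShell sketch that reads this property for a single account (the object ID is a placeholder):

```powershell
# Minimal sketch: check whether an account is synced from on-premises Active Directory.
# Assumes the Microsoft Graph PowerShell SDK; the user object ID is a placeholder.
Connect-MgGraph -Scopes 'User.Read.All'

Get-MgUser -UserId '00000000-0000-0000-0000-000000000000' `
    -Property displayName,onPremisesSyncEnabled |
    Select-Object DisplayName, OnPremisesSyncEnabled
```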
## Key properties of the Azure AD B2B collaboration user
This property indicates the relationship of the user to the host tenancy. This p
> [!NOTE] > The UserType has no relation to how the user signs in, the directory role of the user, and so on. This property simply indicates the user's relationship to the host organization and allows the organization to enforce policies that depend on this property.
-### Issuer
+### Identities
-This property indicates the user's primary identity provider. A user can have several identity providers, which can be viewed by selecting issuer in the user's profile or by querying the `onPremisesSyncEnabled` property via the Microsoft Graph API.
+This property indicates the user's primary identity provider. A user can have several identity providers, which can be viewed by selecting the link next to **Identities** in the user's profile or by querying the `identities` property via the Microsoft Graph API.
> [!NOTE]
-> Issuer and UserType are independent properties. A value of issuer does not imply a particular value for UserType
+> Identities and UserType are independent properties. A value of Identities does not imply a particular value for UserType
-Issuer property value | Sign-in state
+Identities property value | Sign-in state
| - External Azure AD | This user is homed in an external organization and authenticates by using an Azure AD account that belongs to the other organization. Microsoft account | This user is homed in a Microsoft account and authenticates by using a Microsoft account.
google.com | This user has a Gmail account and has signed up by using self-servi
facebook.com | This user has a Facebook account and has signed up by using self-service to the other organization. mail | This user has an email address that doesn't match with verified Azure AD or SAML/WS-Fed domains, and isn't a Gmail address or a Microsoft account. phone | This user has an email address that doesn't match a verified Azure AD domain or a SAML/WS-Fed domain, and isn't a Gmail address or Microsoft account.
-{issuer URI} | This user is homed in an external organization that doesn't use Azure Active Directory as their identity provider, but instead uses a SAML/WS-Fed-based identity provider. The issuer URI is shown when the issuer field is clicked.
+{issuer URI} | This user is homed in an external organization that doesn't use Azure Active Directory as their identity provider, but instead uses a SAML/WS-Fed-based identity provider. The issuer URI is shown when the Identities field is clicked.
### Directory synced
Typically, an Azure AD B2B user and guest user are synonymous. Therefore, an Azu
## Filter for guest users in the directory
+In the **Users** list, you can use **Add filter** to display only the guest users in your directory.
+
+![Screenshot showing how to add a User type filter for guests.](media/user-properties/add-guest-filter.png)
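The same list can also be retrieved programmatically; the following is a minimal sketch using the Microsoft Graph PowerShell SDK (filtering on `userType` is assumed to require the advanced query parameters shown):

```powershell
# Minimal sketch: list guest users in the directory.
# Assumes the Microsoft Graph PowerShell SDK; userType filtering is an advanced query.
Connect-MgGraph -Scopes 'User.Read.All'

Get-MgUser -Filter "userType eq 'Guest'" -All `
    -ConsistencyLevel eventual -CountVariable guestCount |
    Select-Object DisplayName, Mail, UserType
```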
++ ![Screenshot showing the filter for guest users.](media/user-properties/filter-guest-users.png) ## Convert UserType
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
This page is updated monthly, so revisit it regularly.
-## September 2022
+## October 2022
### General Availability - Azure AD certificate-based authentication
For more information on how to use this feature, see: [Dynamic membership rule f
+## September 2022
+ ### General Availability - No more waiting, provision groups on demand into your SaaS applications.
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Group writeback allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can use this feature to manage groups in the cloud, while controlling access to on-premises applications and resources.
->[NOTE]
->The Group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
+> [!NOTE]
+> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities:
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
To configure directory settings to disable automatic writeback of newly created
import-module ADSync
$precedenceValue = Read-Host -Prompt "Enter a unique sync rule precedence value [0-99]"
- New-ADSyncRule `
- -Name 'In from AAD - Group SOAinAAD Delete WriteBackOutOfScope and SoftDelete' `
- -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947' `
- -Description 'Delete AD groups that fall out of scope of Group Writeback or get Soft Deleted in Azure AD' `
- -Direction 'Inbound' `
- -Precedence $precedenceValue `
- -PrecedenceAfter '00000000-0000-0000-0000-000000000000' `
- -PrecedenceBefore '00000000-0000-0000-0000-000000000000' `
- -SourceObjectType 'group' `
- -TargetObjectType 'group' `
- -Connector 'b891884f-051e-4a83-95af-2544101c9083' `
- -LinkType 'Join' `
- -SoftDeleteExpiryInterval 0 `
- -ImmutableTag '' `
- -OutVariable syncRule
-
- Add-ADSyncAttributeFlowMapping `
- -SynchronizationRule $syncRule[0] `
- -Destination 'reasonFiltered' `
- -FlowType 'Expression' `
- -ValueMergeType 'Update' `
- -Expression 'IIF((IsPresent([reasonFiltered]) = True) && (InStr([reasonFiltered], "WriteBackOutOfScope") > 0 || InStr([reasonFiltered], "SoftDelete") > 0), "DeleteThisGroupInAD", [reasonFiltered])' `
- -OutVariable syncRule
-
- New-Object `
- -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeCondition' `
- -ArgumentList 'cloudMastered','true','EQUAL' `
- -OutVariable condition0
-
- Add-ADSyncScopeConditionGroup `
- -SynchronizationRule $syncRule[0] `
- -ScopeConditions @($condition0[0]) `
- -OutVariable syncRule
+ New-ADSyncRule `
+ -Name 'In from AAD - Group SOAinAAD Delete WriteBackOutOfScope and SoftDelete' `
+ -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947' `
+ -Description 'Delete AD groups that fall out of scope of Group Writeback or get Soft Deleted in Azure AD' `
+ -Direction 'Inbound' `
+ -Precedence $precedenceValue `
+ -PrecedenceAfter '00000000-0000-0000-0000-000000000000' `
+ -PrecedenceBefore '00000000-0000-0000-0000-000000000000' `
+ -SourceObjectType 'group' `
+ -TargetObjectType 'group' `
+ -Connector 'b891884f-051e-4a83-95af-2544101c9083' `
+ -LinkType 'Join' `
+ -SoftDeleteExpiryInterval 0 `
+ -ImmutableTag '' `
+ -OutVariable syncRule
+
+ Add-ADSyncAttributeFlowMapping `
+ -SynchronizationRule $syncRule[0] `
+ -Destination 'reasonFiltered' `
+ -FlowType 'Expression' `
+ -ValueMergeType 'Update' `
+ -Expression 'IIF((IsPresent([reasonFiltered]) = True) && (InStr([reasonFiltered], "WriteBackOutOfScope") > 0 || InStr([reasonFiltered], "SoftDelete") > 0), "DeleteThisGroupInAD", [reasonFiltered])' `
+ -OutVariable syncRule
+
+ New-Object `
+ -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeCondition' `
+ -ArgumentList 'cloudMastered','true','EQUAL' `
+ -OutVariable condition0
+
+ Add-ADSyncScopeConditionGroup `
+ -SynchronizationRule $syncRule[0] `
+ -ScopeConditions @($condition0[0]) `
+ -OutVariable syncRule
- New-Object `
- -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.JoinCondition' `
- -ArgumentList 'cloudAnchor','cloudAnchor',$false `
- -OutVariable condition0
-
- Add-ADSyncJoinConditionGroup `
- -SynchronizationRule $syncRule[0] `
- -JoinConditions @($condition0[0]) `
- -OutVariable syncRule
-
- Add-ADSyncRule `
- -SynchronizationRule $syncRule[0]
-
- Get-ADSyncRule `
- -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947'
- ```
+ New-Object `
+ -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.JoinCondition' `
+ -ArgumentList 'cloudAnchor','cloudAnchor',$false `
+ -OutVariable condition0
+
+ Add-ADSyncJoinConditionGroup `
+ -SynchronizationRule $syncRule[0] `
+ -JoinConditions @($condition0[0]) `
+ -OutVariable syncRule
+
+ Add-ADSyncRule `
+ -SynchronizationRule $syncRule[0]
+
+ Get-ADSyncRule `
+ -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947'
+ ```
4. [Enable group writeback](how-to-connect-group-writeback-enable.md).
5. Enable the Azure AD Connect sync scheduler:
active-directory Cato Networks Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cato-networks-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Cato Networks to support provisioning with Azure AD

1. Log in to your account in the [Cato Management Application](https://cc2.catonetworks.com).
-1. From the navigation menu, select **Configuration > Global Settings**, and then expand the **VPN Settings** section.
- ![VPN Settings section](media/cato-networks-provisioning-tutorial/vpn-settings.png)
-1. Expand the **SCIM Provisioning** section and enable SCIM provisioning by clicking on **Enable SCIM Provisioning**.
- ![Enable SCIM Provisioning](media/cato-networks-provisioning-tutorial/scim-settings.png)
-1. Copy the Base URL and Bearer Token from the Cato Management Application to the SCIM app in the Azure portal:
- 1. In the Cato Management Application (from the **SCIM Provisioning** section), copy the Base URL.
- 1. In the Cato Networks SCIM app in the Azure portal, in the **Provisioning** tab, paste the base URL in the **Tenant URL** field.
- ![Copy tenant URL](media/cato-networks-provisioning-tutorial/tenant-url.png)
- 1. In the Cato Management Application (from the **SCIM Provisioning** section), click **Generate Token** and copy the bearer token.
- 1. In the Cato Networks SCIM app in the Azure portal, paste the bearer token in the **Secret Token** field.
- ![Copy secret token](media/cato-networks-provisioning-tutorial/secret-token.png)
-1. In the Cato Management Application (from the **SCIM Provisioning** section), click **Save**. SCIM Provisioning between your Cato account and Azure AD is configured.
- ![Save SCIM Configuration](media/cato-networks-provisioning-tutorial/save-CC.png)
-1. Test the connection between the Azure SCIM app and the Cato Cloud. In the Cato Networks SCIM apps in the Azure portal, in the **Provisioning** tab, click **Test Connection**.
+1. From the navigation menu, select **Access > Directory Services** and click the **SCIM** section tab.
+ ![Screenshot of navigate to SCIM setting.](media/cato-networks-provisioning-tutorial/navigate.png)
+1. Select **Enable SCIM Provisioning** to set your account to connect to the SCIM app, and then click **Save**.
+ ![Screenshot of Enable SCIM Provisioning.](media/cato-networks-provisioning-tutorial/scim-setting.png)
+1. Copy the **Base URL**. Click **Generate Token** and copy the bearer token. The base URL and token will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Cato Networks application in the Azure portal.
## Step 3. Add Cato Networks from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of enterprise applications blade.](common/enterprise-applications.png)
1. In the applications list, select **Cato Networks**.
- ![The Cato Networks link in the Applications list](common/all-applications.png)
+ ![Screenshot of the Cato Networks link in the Applications list.](common/all-applications.png)
1. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot of provisioning tab.](common/provisioning.png)
1. Set the **Provisioning Mode** to **Automatic**.
- ![Provisioning tab automatic](common/provisioning-automatic.png)
+ ![Screenshot of provisioning tab automatic.](common/provisioning-automatic.png)
1. Under the **Admin Credentials** section, input your Cato Networks Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Cato Networks. If the connection fails, ensure your Cato Networks account has Admin permissions and try again.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot of token.](common/provisioning-testconnection-tenanturltoken.png)
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of notification email.](common/provisioning-notification-email.png)
1. Select **Save**.
This section guides you through the steps to configure the Azure AD provisioning
1. To enable the Azure AD provisioning service for Cato Networks, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of provisioning status toggled on.](common/provisioning-toggle-on.png)
1. Define the users and/or groups that you would like to provision to Cato Networks by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of provisioning scope.](common/provisioning-scope.png)
1. When you're ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of saving provisioning configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than next cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
active-directory Code42 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/code42-provisioning-tutorial.md
# Tutorial: Configure Code42 for automatic user provisioning
-This tutorial describes the steps you need to perform in both Code42 and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Code42](https://www.crashplan.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Code42 and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Code42](https://www.code42.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
> To enable the default login form for admin login on the login page when force azure login is enabled, add the query parameter in the browser URL.
> `https://<DOMAIN:PORT>/login.action?force_azure_login=false`
- 1. **Enable Use of Application Proxy** checkbox, if you have configured your on-premise atlassian application in an App Proxy setup. For App proxy setup , follow the steps on the [Azure AD App Proxy Documentation](/articles/active-directory/app-proxy/what-is-application-proxy.md).
+ 1. Select the **Enable Use of Application Proxy** checkbox if you have configured your on-premises Atlassian application in an App Proxy setup. For App Proxy setup, follow the steps in the [Azure AD App Proxy Documentation](../app-proxy/what-is-application-proxy.md).
1. Click **Save** button to save the settings.
active-directory Contentkalender Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contentkalender-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Contentkalender'
+description: Learn how to configure single sign-on between Azure Active Directory and Contentkalender.
+ Last updated: 10/10/2022
+# Tutorial: Azure AD SSO integration with Contentkalender
+
+In this tutorial, you'll learn how to integrate Contentkalender with Azure Active Directory (Azure AD). When you integrate Contentkalender with Azure AD, you can:
+
+* Control in Azure AD who has access to Contentkalender.
+* Enable your users to be automatically signed-in to Contentkalender with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Contentkalender single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Contentkalender supports **SP** initiated SSO.
+* Contentkalender supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Contentkalender from the gallery
+
+To configure the integration of Contentkalender into Azure AD, you need to add Contentkalender from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Contentkalender** in the search box.
+1. Select **Contentkalender** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for Contentkalender
+
+Configure and test Azure AD SSO with Contentkalender using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Contentkalender.
+
+To configure and test Azure AD SSO with Contentkalender, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Contentkalender SSO](#configure-contentkalender-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Contentkalender test user](#create-contentkalender-test-user)** - to have a counterpart of B.Simon in Contentkalender that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Contentkalender** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type one of the following URLs:
+
+ | **Identifier** |
+ ||
+ | `https://login.contentkalender.nl` |
+ | `https://login.decontentkalender.be` |
+ | `https://contentkalender-acc.bettywebblocks.com/` |
+
+ b. In the **Reply URL** text box, type one of the following URLs:
+
+ | **Reply URL** |
+ |--|
+ | `https://login.contentkalender.nl/sso/saml/callback` |
+ | `https://login.decontentkalender.be/sso/saml/callback` |
+ | `https://contentkalender-acc.bettywebblocks.com/sso/saml/callback` |
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://contentkalender-acc.bettywebblocks.com/v2/login`
+
+1. Your Contentkalender application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Contentkalender expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
+
+ ![Screenshot shows the image of attribute mappings.](common/default-attributes.png "Attributes")
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
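If you want to confirm the copied URL before handing it to the support team (see the Configure Contentkalender SSO section below), you can download the federation metadata with curl. This is only an illustrative sketch; the `<tenant-id>` and `<application-id>` placeholders stand in for the exact **App Federation Metadata Url** value you copied from the portal.

```bash
# Sketch: download the app federation metadata document copied above.
# Replace the URL with the exact App Federation Metadata Url value from the portal.
curl -sS "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<application-id>" \
  -o contentkalender-federation-metadata.xml
```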
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Contentkalender.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Contentkalender**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Contentkalender SSO
+
+To configure single sign-on on **Contentkalender** side, you need to send the **App Federation Metadata Url** to [Contentkalender support team](mailto:info@contentkalender.nl). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Contentkalender test user
+
+In this section, a user called B.Simon is created in Contentkalender. Contentkalender supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Contentkalender, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Contentkalender Sign-on URL where you can initiate the login flow.
+
+* Go to Contentkalender Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Contentkalender tile in the My Apps, this will redirect to Contentkalender Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Contentkalender you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Github Enterprise Managed User Oidc Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
5. Under the **Admin Credentials** section, input your GitHub Enterprise Managed User (OIDC) Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to GitHub Enterprise Managed User (OIDC). If the connection fails, ensure your GitHub Enterprise Managed User (OIDC) account has created the secret token as an enterprise owner and try again.
+ For "Tenant URL", type https://api.github.com/scim/v2/enterprises/YOUR_ENTERPRISE, replacing YOUR_ENTERPRISE with the name of your enterprise account.
+
+ For example, if your enterprise account's URL is https://github.com/enterprises/octo-corp, the name of the enterprise account is octo-corp.
+
+ For "Secret token", paste the personal access token with the admin:enterprise scope that you created earlier.
+
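Before pasting these values into the portal, you can optionally sanity-check them against the GitHub SCIM endpoint from a terminal. The following is a minimal sketch; `octo-corp` and `$GITHUB_TOKEN` are placeholders for your enterprise account name and the personal access token created earlier.

```bash
# Sketch: verify the enterprise SCIM endpoint and token before configuring provisioning.
# Replace octo-corp with your enterprise account name and set GITHUB_TOKEN to the
# personal access token (admin:enterprise scope) created earlier.
curl -sS \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/scim+json" \
  "https://api.github.com/scim/v2/enterprises/octo-corp/Users?count=1"
```

A 200 response with a SCIM `ListResponse` body indicates the tenant URL and token are valid.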
![Token](common/provisioning-testconnection-tenanturltoken.png) 6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
k. **Enable Use of Application Proxy** checkbox, if you have configured your on-premise atlassian application in an App Proxy setup.
- * For App proxy setup , follow the steps on the [Azure AD App Proxy Documentation](/articles/active-directory/app-proxy/what-is-application-proxy.md).
+ * For App proxy setup , follow the steps on the [Azure AD App Proxy Documentation](../app-proxy/what-is-application-proxy.md).
l. Click **Save** button to save the settings.
active-directory Lessonly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lessonly-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Lesson.ly | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Lesson.ly.
+ Title: 'Tutorial: Azure AD SSO integration with Lessonly'
+description: Learn how to configure single sign-on between Azure Active Directory and Lessonly.
Previously updated : 04/20/2021 Last updated : 10/13/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Lesson.ly
+# Tutorial: Azure AD SSO integration with Lessonly
-In this tutorial, you'll learn how to integrate Lesson.ly with Azure Active Directory (Azure AD). When you integrate Lesson.ly with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Lessonly with Azure Active Directory (Azure AD). When you integrate Lessonly with Azure AD, you can:
-* Control in Azure AD who has access to Lesson.ly.
-* Enable your users to be automatically signed-in to Lesson.ly with their Azure AD accounts.
+* Control in Azure AD who has access to Lessonly.
+* Enable your users to be automatically signed-in to Lessonly with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Lesson.ly with Azure Active Dire
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Lesson.ly single sign-on (SSO) enabled subscription.
+* Lessonly single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Lesson.ly supports **SP** initiated SSO.
-* Lesson.ly supports **Just In Time** user provisioning.
+* Lessonly supports **SP** initiated SSO.
+* Lessonly supports **Just In Time** user provisioning.
-## Add Lesson.ly from the gallery
+## Add Lessonly from the gallery
-To configure the integration of Lesson.ly into Azure AD, you need to add Lesson.ly from the gallery to your list of managed SaaS apps.
+To configure the integration of Lessonly into Azure AD, you need to add Lessonly from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Lesson.ly** in the search box.
-1. Select **Lesson.ly** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Lessonly** in the search box.
+1. Select **Lessonly** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-## Configure and test Azure AD SSO for Lesson.ly
+## Configure and test Azure AD SSO for Lessonly
-Configure and test Azure AD SSO with Lesson.ly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lesson.ly.
+Configure and test Azure AD SSO with Lessonly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lessonly.
-To configure and test Azure AD SSO with Lesson.ly, perform the following steps:
+To configure and test Azure AD SSO with Lessonly, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Lesson.ly SSO](#configure-lessonly-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Lesson.ly test user](#create-lessonly-test-user)** - to have a counterpart of B.Simon in Lesson.ly that is linked to the Azure AD representation of user.
+1. **[Configure Lessonly SSO](#configure-lessonly-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Lessonly test user](#create-lessonly-test-user)** - to have a counterpart of B.Simon in Lessonly that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Lesson.ly** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Lessonly** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE] > These values are not real. Update these values with the actual Sign on URL, Reply URL, and Identifier. Contact [Lessonly.com Client support team](mailto:support@lessonly.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. Lesson.ly application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. Lessonly application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/default-attributes.png)
-1. In addition to above, Lesson.ly application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. In addition to above, Lessonly application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
| Name | Source Attribute| | | -|
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Lesson.ly** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Lessonly** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lesson.ly.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lessonly.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Lesson.ly**.
+1. In the applications list, select **Lessonly**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Lesson.ly SSO
+## Configure Lessonly SSO
-To configure single sign-on on **Lesson.ly** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Lesson.ly support team](mailto:support@lessonly.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Lessonly** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Lessonly support team](mailto:support@lessonly.com). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create Lesson.ly test user
+### Create Lessonly test user
The objective of this section is to create a user called B.Simon in Lessonly.com. Lessonly.com supports just-in-time provisioning, which is by default enabled.
There is no action item for you in this section. A new user will be created duri
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Lesson.ly Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Lessonly Sign-on URL where you can initiate the login flow.
-* Go to Lesson.ly Sign-on URL directly and initiate the login flow from there.
+* Go to Lessonly Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Lesson.ly tile in the My Apps, this will redirect to Lesson.ly Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Lessonly tile in the My Apps, this will redirect to Lessonly Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Lesson.ly you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Lessonly you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Sap Successfactors Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
Once the SuccessFactors provisioning app configurations have been completed, you
> ![Select Writeback scope](./media/sap-successfactors-inbound-provisioning/select-writeback-scope.png) > [!NOTE]
- > The SuccessFactors Writeback provisioning app does not support "group assignment". Only "user assignment" is supported.
+ > SuccessFactors Writeback provisioning apps created after 12-Oct-2022 support the "group assignment" feature. If you created the app prior to 12-Oct-2022, it will only have "user assignment" support. To use the "group assignment" feature, create a new instance of the SuccessFactors Writeback application and move your existing mapping configurations to this app.
1. Click **Save**.
Refer to the [Writeback scenarios section](../app-provisioning/sap-successfactor
* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) * [Learn how to configure single sign-on between SuccessFactors and Azure Active Directory](successfactors-tutorial.md) * [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md)
-* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
+* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
active-directory Zendesk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-provisioning-tutorial.md
Title: 'Tutorial: Configure Zendesk for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure Azure Active Directory to automatically provision and deprovision user accounts to Zendesk.
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Zendesk.
-
-writer: zhchia
-
+documentationcenter: ''
+
+writer: Thwimmer
+
+ms.assetid: 620f0aa6-42af-4356-85f9-04aa329767f3
+ms.devlang: na
Last updated 08/06/2019

# Tutorial: Configure Zendesk for automatic user provisioning
-This tutorial demonstrates the steps to perform in Zendesk and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and deprovision users and groups to Zendesk.
+This tutorial describes the steps you need to perform in both Zendesk and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Zendesk](http://www.zendesk.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-> [!NOTE]
-> This tutorial describes a connector that's built on top of the Azure AD user provisioning service. For information on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software-as-a-service (SaaS) applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Zendesk.
+> * Remove users in Zendesk when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Zendesk.
+> * Provision groups and group memberships in Zendesk.
+> * [Single sign-on](./zendesk-tutorial.md) to Zendesk (recommended)
## Prerequisites
-The scenario outlined in this tutorial assumes that you have:
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* An Azure AD tenant.
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Zendesk with Admin rights.
* A Zendesk tenant with the Professional plan or better enabled.
-* A user account in Zendesk with admin permissions.
-
-## Add Zendesk from the Azure Marketplace
-
-Before you configure Zendesk for automatic user provisioning with Azure AD, add Zendesk from the Azure Marketplace to your list of managed SaaS applications.
-
-To add Zendesk from the Marketplace, follow these steps.
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Zendesk](../app-provisioning/customize-application-attributes.md).
1. In the [Azure portal](https://portal.azure.com), in the navigation pane on the left, select **Azure Active Directory**.
- ![The Azure Active Directory icon](common/select-azuread.png)
+## Step 2. Configure Zendesk to support provisioning with Azure AD
+
+1. Log in to [Admin Center](https://support.zendesk.com/hc/en-us/articles/4408839227290#topic_hfg_dyz_1hb), click **Apps and integrations** in the sidebar, then select **APIs > Zendesk APIs**.
+1. Click the **Settings** tab, and make sure Token Access is **enabled**.
+1. Click the **Add API token** button to the right of **Active API Tokens**. The token is generated and displayed.
+1. Enter an **API token description**.
+1. **Copy** the token and paste it somewhere secure. Once you close this window, the full token will never be displayed again.
+1. Click **Save** to return to the API page. If you click the token to reopen it, a truncated version of the token is displayed. You can verify the new token with the sketch shown after these steps.
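As referenced in the last step, you can verify the new token from a terminal before entering it in Azure AD. This is a sketch only; replace the subdomain, email address, and `$ZENDESK_API_TOKEN` with your own values.

```bash
# Sketch: confirm the Zendesk API token works for the admin account.
# Zendesk API token authentication uses the form {email_address}/token:{api_token}.
curl -sS "https://my-tenant.zendesk.com/api/v2/users/me.json" \
  -u "admin@contoso.com/token:$ZENDESK_API_TOKEN"
```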
-2. Go to **Enterprise applications**, and then select **All applications**.
+## Step 3. Add Zendesk from the Azure AD application gallery
- ![The Enterprise applications blade](common/enterprise-applications.png)
+Add Zendesk from the Azure AD application gallery to start managing provisioning to Zendesk. If you have previously set up Zendesk for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-3. To add a new application, select **New application** at the top of the dialog box.
+## Step 4. Define who will be in scope for provisioning
- ![The New application button](common/add-new-app.png)
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-4. In the search box, enter **Zendesk** and select **Zendesk** from the result panel. To add the application, select **Add**.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
- ![Zendesk in the results list](common/search-new-app.png)
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Assign users to Zendesk
-Azure Active Directory uses a concept called *assignments* to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users or groups that were assigned to an application in Azure AD are synchronized.
+## Step 5. Configure automatic user provisioning to Zendesk
-Before you configure and enable automatic user provisioning, decide which users or groups in Azure AD need access to Zendesk. To assign these users or groups to Zendesk, follow the instructions in [Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md).
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Zendesk based on user and/or group assignments in Azure AD.
### Important tips for assigning users to Zendesk
Before you configure and enable automatic user provisioning, decide which users
* When you assign a user to Zendesk, select any valid application-specific role, if available, in the assignment dialog box. Users with the **Default Access** role are excluded from provisioning.
-## Configure automatic user provisioning to Zendesk
-
-This section guides you through the steps to configure the Azure AD provisioning service. Use it to create, update, and disable users or groups in Zendesk based on user or group assignments in Azure AD.
-
-> [!TIP]
-> You also can enable SAML-based single sign-on for Zendesk. Follow the instructions in the [Zendesk single sign-on tutorial](zendesk-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
### Configure automatic user provisioning for Zendesk in Azure AD
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications** > **Zendesk**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
-2. In the applications list, select **Zendesk**.
+1. In the applications list, select **Zendesk**.
- ![The Zendesk link in the applications list](common/all-applications.png)
+ ![Screenshot of the Zendesk link in the Applications list.](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
- ![Zendesk Provisioning](./media/zendesk-provisioning-tutorial/ZenDesk16.png)
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
- ![Zendesk Provisioning Mode](./media/zendesk-provisioning-tutorial/ZenDesk1.png)
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the admin username, secret token, and domain of your Zendesk account. Examples of these values are:
+1. Under the **Admin Credentials** section, input the admin username, secret token, and domain of your Zendesk account. Examples of these values are:
* In the **Admin Username** box, fill in the username of the admin account on your Zendesk tenant. An example is admin@contoso.com.
This section guides you through the steps to configure the Azure AD provisioning
* In the **Domain** box, fill in the subdomain of your Zendesk tenant. For example, for an account with a tenant URL of `https://my-tenant.zendesk.com`, your subdomain is **my-tenant**.
-6. The secret token for your Zendesk account is located in **Admin** > **API** > **Settings**. Make sure that **Token Access** is set to **Enabled**.
-
- ![Zendesk admin settings](./media/zendesk-provisioning-tutorial/ZenDesk4.png)
+1. The secret token for your Zendesk account can be generated by following the steps in **Step 2** above.
- ![Zendesk secret token](./media/zendesk-provisioning-tutorial/ZenDesk2.png)
+1. After you fill in the boxes shown in Step 5, select **Test Connection** to make sure that Azure AD can connect to Zendesk. If the connection fails, make sure your Zendesk account has admin permissions and try again.
-7. After you fill in the boxes shown in Step 5, select **Test Connection** to make sure that Azure AD can connect to Zendesk. If the connection fails, make sure your Zendesk account has admin permissions and try again.
-
- ![Zendesk Test Connection](./media/zendesk-provisioning-tutorial/ZenDesk19.png)
+ ![Screenshot of Zendesk Test Connection](./media/zendesk-provisioning-tutorial/ZenDesk19.png)
8. In the **Notification Email** box, enter the email address of the person or group to receive the provisioning error notifications. Select the **Send an email notification when a failure occurs** check box.
- ![Zendesk Notification Email](./media/zendesk-provisioning-tutorial/ZenDesk9.png)
+ ![Screenshot of Zendesk Notification Email](./media/zendesk-provisioning-tutorial/ZenDesk9.png)
9. Select **Save**. 10. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Zendesk**.
- ![Zendesk user synchronization](./media/zendesk-provisioning-tutorial/ZenDesk10.png)
+ ![Screenshot of Zendesk user synchronization](./media/zendesk-provisioning-tutorial/ZenDesk10.png)
11. Review the user attributes that are synchronized from Azure AD to Zendesk in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Zendesk for update operations. To save any changes, select **Save**.
- ![Zendesk matching user attributes](./media/zendesk-provisioning-tutorial/ZenDesk11.png)
+ ![Screenshot of Zendesk matching user attributes](./media/zendesk-provisioning-tutorial/ZenDesk11.png)
12. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Zendesk**.
- ![Zendesk group synchronization](./media/zendesk-provisioning-tutorial/ZenDesk12.png)
+ ![Screenshot of Zendesk group synchronization](./media/zendesk-provisioning-tutorial/ZenDesk12.png)
13. Review the group attributes that are synchronized from Azure AD to Zendesk in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the groups in Zendesk for update operations. To save any changes, select **Save**.
- ![Zendesk matching group attributes](./media/zendesk-provisioning-tutorial/ZenDesk13.png)
+ ![Screenshot of Zendesk matching group attributes](./media/zendesk-provisioning-tutorial/ZenDesk13.png)
14. To configure scoping filters, follow the instructions in the [scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 15. To enable the Azure AD provisioning service for Zendesk, in the **Settings** section, change **Provisioning Status** to **On**.
- ![Zendesk Provisioning Status](./media/zendesk-provisioning-tutorial/ZenDesk14.png)
+ ![Screenshot of Zendesk Provisioning Status](./media/zendesk-provisioning-tutorial/ZenDesk14.png)
16. Define the users or groups that you want to provision to Zendesk. In the **Settings** section, select the values you want in **Scope**.
- ![Zendesk Scope](./media/zendesk-provisioning-tutorial/ZenDesk15.png)
+ ![Screenshot of Zendesk Scope](./media/zendesk-provisioning-tutorial/ZenDesk15.png)
17. When you're ready to provision, select **Save**.
- ![Zendesk Save](./media/zendesk-provisioning-tutorial/ZenDesk18.png)
+ ![Screenshot of Zendesk Save](./media/zendesk-provisioning-tutorial/ZenDesk18.png)
This operation starts the initial synchronization of all users or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than later syncs. They occur approximately every 40 minutes as long as the Azure AD provisioning service runs.
For information on how to read the Azure AD provisioning logs, see [Reporting on
* Import of all roles will fail if any of the custom roles has a display name similar to the built in roles of "agent" or "end-user". To avoid this, ensure that none of the custom roles being imported has the above display names.
-## Additional resources
+## More resources
-* [Manage user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)

## Next steps

* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
-<!--Image references-->
-[1]: ./media/zendesk-tutorial/tutorial_general_01.png
-[2]: ./media/zendesk-tutorial/tutorial_general_02.png
-[3]: ./media/zendesk-tutorial/tutorial_general_03.png
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
# Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver
-The Secrets Store CSI Driver on Azure Kubernetes Service (AKS) provides a variety of methods of identity-based access to your Azure key vault. This article outlines these methods and how to use them to access your key vault and its contents from your AKS cluster. For more information, see [Use the Secrets Store CSI Driver][csi-secrets-store-driver].
+The Secrets Store CSI Driver on Azure Kubernetes Service (AKS) provides a variety of methods of identity-based access to your Azure key vault. This article outlines these methods and how to use them to access your key vault and its contents from your AKS cluster. For more information, see [Use the Secrets Store CSI Driver][csi-secrets-store-driver].
## Use Azure AD workload identity (preview)
-An Azure AD workload identity (preview) is an identity used by an application running on a pod that can authenticate itself against other Azure services that support it, such as Storage or SQL. It integrates with the capabilities native to Kubernetes to federate with external identity providers. In this security model, the AKS cluster acts as token issuer, Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library using the Azure SDK or the Microsoft Authentication Library (MSAL).
+An [Azure AD workload identity][workload-identity] is an identity used by an application running on a pod that can authenticate itself against other Azure services that support it, such as Storage or SQL. It integrates with the capabilities native to Kubernetes to federate with external identity providers. In this security model, the AKS cluster acts as token issuer where Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library using the Azure SDK or the Microsoft Authentication Library (MSAL).
> [!NOTE] > This authentication method replaces pod-managed identity (preview).
An Azure AD workload identity (preview) is an identity used by an application ru
### Prerequisites - Installed the latest version of the `aks-preview` extension, version 0.5.102 or later. To learn more, see [How to install extensions][how-to-install-extensions].
+- An existing Azure Key Vault
+- An existing Azure subscription with the EnableWorkloadIdentityPreview feature enabled
+- An existing AKS cluster with the OIDC issuer and workload identity features enabled (`--enable-oidc-issuer` and `--enable-workload-identity`); a sketch of enabling these follows this list
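A minimal sketch of satisfying the last two prerequisites with the Azure CLI; the resource group and cluster names are placeholders, and the commands assume the `aks-preview` extension listed above is installed.

```azurecli
# Register the workload identity preview feature and wait for it to propagate.
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
az provider register --namespace Microsoft.ContainerService

# Enable the OIDC issuer and workload identity on an existing cluster (placeholder names).
az aks update --resource-group myResourceGroup --name myAKSCluster \
    --enable-oidc-issuer --enable-workload-identity
```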
Azure AD workload identity (preview) is supported on both Windows and Linux clusters. ### Configure workload identity
-1. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription.
+1. Use the Azure CLI `az account set` command to set a specific subscription to be the current active subscription. Then use the `az identity create` command to create a managed identity.
```azurecli
- az account set --subscription "subscriptionID"
+ export subscriptionID=<subscription id>
+ export resourceGroupName=<resource group name>
+ export UAMI=<name for user assigned identity>
+ export KEYVAULT_NAME=<existing keyvault name>
+ export clusterName=<aks cluster name>
+
+ az account set --subscription $subscriptionID
+ az identity create --name $UAMI --resource-group $resourceGroupName
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show -g $resourceGroupName --name $UAMI --query 'clientId' -o tsv)"
+ export IDENTITY_TENANT=$(az aks show --name $clusterName --resource-group $resourceGroupName --query aadProfile.tenantId -o tsv)
```
-2. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity.
+2. You need to set an access policy that grants the workload identity permission to access the Key Vault secrets, keys, and certificates. The rights are assigned using the `az keyvault set-policy` command shown below.
```azurecli
- az account set --subscription "subscriptionID"
+ az keyvault set-policy -n $KEYVAULT_NAME --key-permissions get --spn $USER_ASSIGNED_CLIENT_ID
+ az keyvault set-policy -n $KEYVAULT_NAME --secret-permissions get --spn $USER_ASSIGNED_CLIENT_ID
+ az keyvault set-policy -n $KEYVAULT_NAME --certificate-permissions get --spn $USER_ASSIGNED_CLIENT_ID
```
- ```azurecli
- az identity create --name "userAssignedIdentityName" --resource-group "resourceGroupName" --location "location" --subscription "subscriptionID"
- ```
-
-3. You need to set an access policy that grants the workload identity permission to access the Key Vault secrets, access keys, and certificates. The rights are assigned using the [az keyvault set-policy][az-keyvault-set-policy] command as shown below.
-
- ```azurecli
- az keyvault set-policy -n $KEYVAULT_NAME --key-permissions get --spn $APPLICATION_CLIENT_ID
- az keyvault set-policy -n $KEYVAULT_NAME --secret-permissions get --spn $APPLICATION_CLIENT_ID
- az keyvault set-policy -n $KEYVAULT_NAME --certificate-permissions get --spn $APPLICATION_CLIENT_ID
- ```
+3. Run the [az aks show][az-aks-show] command to get the AKS cluster OIDC issuer URL.
-4. Run the [az aks show][az-aks-show] command to get the AKS cluster OIDC issuer URL, and replace the default value for the cluster name and the resource group name.
-
- ```azurecli
- az aks show --resource-group resourceGroupName --name clusterName --query "oidcIssuerProfile.issuerUrl" -otsv
+ ```bash
+ export AKS_OIDC_ISSUER="$(az aks show --resource-group $resourceGroupName --name $clusterName --query "oidcIssuerProfile.issuerUrl" -o tsv)"
+ echo $AKS_OIDC_ISSUER
``` > [!NOTE] > If the URL is empty, verify you have installed the latest version of the `aks-preview` extension, version 0.5.102 or later. Also verify you've [enabled the OIDC issuer][enable-oidc-issuer] (preview).
-5. Establish a federated identity credential between the Azure AD application and the service account issuer and subject. Get the object ID of the Azure AD application. Update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
+4. Establish a federated identity credential between the Azure AD application and the service account issuer and subject. Get the object ID of the Azure AD application. Update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
```bash
- export APPLICATION_OBJECT_ID="$(az ad app show --id ${APPLICATION_CLIENT_ID} --query id -otsv)"
- export SERVICE_ACCOUNT_NAME=serviceAccountName
- export SERVICE_ACCOUNT_NAMESPACE=serviceAccountNamespace
+ export serviceAccountName="workload-identity-sa" # sample name; can be changed
+ export serviceAccountNamespace="default" # can be changed to namespace of your workload
+
+ cat <<EOF | kubectl apply -f -
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ annotations:
+ azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
+ labels:
+ azure.workload.identity/use: "true"
+ name: ${serviceAccountName}
+ namespace: ${serviceAccountNamespace}
+ EOF
```
- Then add the federated identity credential by first copying and pasting the following multi-line input in the Azure CLI.
+ Next, use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the Managed Identity, the service account issuer, and the subject.
- ```azurecli
- cat <<EOF > body.json
- {
- "name": "kubernetes-federated-credential",
- "issuer": "${SERVICE_ACCOUNT_ISSUER}",
- "subject": "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}",
- "description": "Kubernetes service account federated credential",
- "audiences": [
- "api://AzureADTokenExchange"
- ]
- }
- EOF
+ ```bash
+ export federatedIdentityName="aksfederatedidentity" # can be changed as needed
+ az identity federated-credential create --name $federatedIdentityName --identity-name $UAMI --resource-group $resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${serviceAccountNamespace}:${serviceAccountName}
```
+5. Deploy a `SecretProviderClass` by using the following YAML script. Note that the environment variables you set earlier are interpolated into the manifest:
- Next, use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the Managed Identity, the service account issuer, and the subject. Replace the values `resourceGroupName`, `userAssignedIdentityName`, and `federatedIdentityName`.
-
- ```azurecli
- az identity federated-credential create --name federatedIdentityName --identity-name userAssignedIdentityName --resource-group resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject ${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
+ ```bash
+ cat <<EOF | kubectl apply -f -
+ # This is a SecretProviderClass example using workload identity to access your key vault
+ apiVersion: secrets-store.csi.x-k8s.io/v1
+ kind: SecretProviderClass
+ metadata:
+ name: azure-kvname-workload-identity # needs to be unique per namespace
+ spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "false"
+ useVMManagedIdentity: "false"
+ clientID: "${USER_ASSIGNED_CLIENT_ID}" # Setting this to use workload identity
+ keyvaultName: ${KEYVAULT_NAME} # Set to the name of your key vault
+ cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
+ objects: |
+ array:
+ - |
+ objectName: secret1
+ objectType: secret # object types: secret, key, or cert
+ objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
+ - |
+ objectName: key1
+ objectType: key
+ objectVersion: ""
+ tenantId: "${IDENTITY_TENANT}" # The tenant ID of the key vault
+ EOF
```
-6. Deploy your secretproviderclass and application by setting the `clientID` in the `SecretProviderClass` to the client ID of the Azure AD application.
+6. Deploy a sample pod. Notice the service account reference in the pod definition:
```bash
- clientID: "${APPLICATION_CLIENT_ID}"
+ cat <<EOF | kubectl apply -n $serviceAccountNamespace -f -
+ # This is a sample pod definition for using SecretProviderClass and the user-assigned identity to access your key vault
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: busybox-secrets-store-inline-user-msi
+ spec:
+ serviceAccountName: ${serviceAccountName}
+ containers:
+ - name: busybox
+ image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
+ command:
+ - "/bin/sleep"
+ - "10000"
+ volumeMounts:
+ - name: secrets-store01-inline
+ mountPath: "/mnt/secrets-store"
+ readOnly: true
+ volumes:
+ - name: secrets-store01-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: "azure-kvname-workload-identity"
+ EOF
``` ## Use pod-managed identities
Azure Active Directory (Azure AD) pod-managed identities (preview) use AKS primi
### Usage
-1. Verify that your virtual machine scale set or availability set nodes have their own system-assigned identity:
+1. Verify that your Virtual Machine Scale Set or Availability Set nodes have their own system-assigned identity:
```azurecli-interactive az vmss identity show -g <resource group> -n <vmss scale set name> -o yaml
To validate that the secrets are mounted at the volume path that's specified in
[az-rest]: /cli/azure/reference-index#az-rest [az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create [enable-oidc-issuer]: cluster-configuration.md#oidc-issuer-
+[workload-identity]: ./workload-identity-overview.md
<!-- LINKS EXTERNAL -->
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
- Title: Dapr extension for Azure Kubernetes Service (AKS) overview
-description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications.
--- Previously updated : 07/21/2022---
-# Dapr
-
-Distributed Application Runtime (Dapr) offers APIs that simplify microservice development and implementation. Running as a sidecar process in tandem with your applications, Dapr APIs abstract away common complexities developers regularly encounter when building distributed applications, such as service discovery, message broker integration, encryption, observability, and secret management. Whether your inter-application communication is direct service-to-service, or pub/sub messaging, Dapr helps you write simple, portable, resilient, and secured microservices.
-
-Dapr is incrementally adoptable: the API building blocks can be used as the need arises. Use one, several, or all to develop your application faster.
--
-## Capabilities and features
-
-Dapr provides the following set of capabilities to help with your microservice development on AKS:
-
-* Easy provisioning of Dapr on AKS through [cluster extensions][cluster-extensions].
-* Portability enabled through HTTP and gRPC APIs which abstract underlying technologies choices
-* Reliable, secure, and resilient service-to-service calls through HTTP and gRPC APIs
-* Publish and subscribe messaging made easy with support for CloudEvent filtering and "at-least-once" semantics for message delivery
-* Pluggable observability and monitoring through Open Telemetry API collector
-* Works independent of language, while also offering language specific SDKs
-* Integration with VS Code through the Dapr extension
-* [More APIs for solving distributed application challenges][dapr-blocks]
-
-## Frequently asked questions
-
-### How do Dapr and Service meshes compare?
-
-A: Where a service mesh is defined as a networking service mesh, Dapr is not a service mesh. While Dapr and service meshes do offer some overlapping capabilities, a service mesh is focused on networking concerns, whereas Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, while service meshes are infrastructure-centric.
-
-Some common capabilities that Dapr shares with service meshes include:
-
-* Secure service-to-service communication with mTLS encryption
-* Service-to-service metric collection
-* Service-to-service distributed tracing
-* Resiliency through retries
-
-In addition, Dapr provides other application-level building blocks for state management, pub/sub messaging, actors, and more. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. If your solution would benefit from the traffic splitting a service mesh provides, consider using [Open Service Mesh][osm-docs].
-
-For more information on Dapr and service meshes, and how they can be used together, visit the [Dapr documentation][dapr-docs].
-
-### How does the Dapr secrets API compare to the Secrets Store CSI driver?
-
-Both the Dapr secrets API and the managed Secrets Store CSI driver allow for the integration of secrets held in an external store, abstracting secret store technology from application code. The Secrets Store CSI driver mounts secrets held in Azure Key Vault as a CSI volume for consumption by an application. Dapr exposes secrets via a RESTful API that can be called by application code and can be configured with assorted secret stores. The following table lists the capabilities of each offering:
-
-| | Dapr secrets API | Secrets Store CSI driver |
-| | | |
-| **Supported secrets stores** | Local environment variables (for Development); Local file (for Development); Kubernetes Secrets; AWS Secrets Manager; Azure Key Vault secret store; Azure Key Vault with Managed Identities on Kubernetes; GCP Secret Manager; HashiCorp Vault | Azure Key Vault secret store|
-| **Accessing secrets in application code** | Call the Dapr secrets API | Access the mounted volume or sync mounted content as a Kubernetes secret and set an environment variable |
-| **Secret rotation** | New API calls obtain the updated secrets | Polls for secrets and updates the mount at a configurable interval |
-| **Logging and metrics** | The Dapr sidecar generates logs, which can be configured with collectors such as Azure Monitor, emits metrics via Prometheus, and exposes an HTTP endpoint for health checks | Emits driver and Azure Key Vault provider metrics via Prometheus |
-
-For more information on the secret management in Dapr, see the [secrets management building block overview][dapr-secrets-block].
-
-For more information on the Secrets Store CSI driver and Azure Key Vault provider, see the [Secrets Store CSI driver overview][csi-secrets-store].
-
-### How does the managed Dapr cluster extension compare to the open source Dapr offering?
-
-The managed Dapr cluster extension is the easiest method to provision Dapr on an AKS cluster. With the extension, you're able to offload management of the Dapr runtime version by opting into automatic upgrades. Additionally, the extension installs Dapr with smart defaults (for example, provisioning the Dapr control plane in high availability mode).
-
-When installing Dapr OSS via helm or the Dapr CLI, runtime versions and configuration options are the responsibility of developers and cluster maintainers.
-
-Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features.
-
-[Learn more about migrating from Dapr OSS to the Dapr extension for AKS][dapr-migration].
-
-### How can I switch to using the Dapr extension if I've already installed Dapr via a method, such as Helm?
-
-Recommended guidance is to completely uninstall Dapr from the AKS cluster and reinstall it via the cluster extension.
-
-If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
-
-## Next Steps
-
-After learning about Dapr and some of the challenges it solves, try [installing the dapr extension][dapr-extension].
-
-<!-- Links Internal -->
-[csi-secrets-store]: ./csi-secrets-store-driver.md
-[osm-docs]: ./open-service-mesh-about.md
-[cluster-extensions]: ./cluster-extensions.md
-[dapr-quickstart]: ./quickstart-dapr.md
-[dapr-migration]: ./dapr-migration.md
-[dapr-extension]: ./dapr.md
-
-<!-- Links External -->
-[dapr-docs]: https://docs.dapr.io/
-[dapr-blocks]: https://docs.dapr.io/concepts/building-blocks-concept/
-[dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
+
+ Title: Dapr extension for Azure Kubernetes Service (AKS) overview
+description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications.
+++ Last updated : 10/11/2022+++
+# Dapr
+
+Distributed Application Runtime (Dapr) offers APIs that simplify microservice development and implementation. Running as a sidecar process in tandem with your applications, Dapr APIs abstract away common complexities developers regularly encounter when building distributed applications, such as service discovery, message broker integration, encryption, observability, and secret management. Whether your inter-application communication is direct service-to-service, or pub/sub messaging, Dapr helps you write simple, portable, resilient, and secured microservices.
+
+Dapr is incrementally adoptable: the API building blocks can be used as the need arises. Use one, several, or all to develop your application faster.
++
+## Capabilities and features
+
+Dapr provides the following set of capabilities to help with your microservice development on AKS:
+
+* Easy provisioning of Dapr on AKS through [cluster extensions][cluster-extensions].
+* Portability enabled through HTTP and gRPC APIs which abstract underlying technology choices
+* Reliable, secure, and resilient service-to-service calls through HTTP and gRPC APIs
+* Publish and subscribe messaging made easy with support for CloudEvent filtering and "at-least-once" semantics for message delivery
+* Pluggable observability and monitoring through Open Telemetry API collector
+* Works independently of language, while also offering language-specific SDKs
+* Integration with VS Code through the Dapr extension
+* [More APIs for solving distributed application challenges][dapr-blocks]
+
+## Frequently asked questions
+
+### How do Dapr and Service meshes compare?
+
+A: Where a service mesh is defined as a networking service mesh, Dapr is not a service mesh. While Dapr and service meshes do offer some overlapping capabilities, a service mesh is focused on networking concerns, whereas Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, while service meshes are infrastructure-centric.
+
+Some common capabilities that Dapr shares with service meshes include:
+
+* Secure service-to-service communication with mTLS encryption
+* Service-to-service metric collection
+* Service-to-service distributed tracing
+* Resiliency through retries
+
+In addition, Dapr provides other application-level building blocks for state management, pub/sub messaging, actors, and more. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. If your solution would benefit from the traffic splitting a service mesh provides, consider using [Open Service Mesh][osm-docs].
+
+For more information on Dapr and service meshes, and how they can be used together, visit the [Dapr documentation][dapr-docs].
+
+### How does the Dapr secrets API compare to the Secrets Store CSI driver?
+
+Both the Dapr secrets API and the managed Secrets Store CSI driver allow for the integration of secrets held in an external store, abstracting secret store technology from application code. The Secrets Store CSI driver mounts secrets held in Azure Key Vault as a CSI volume for consumption by an application. Dapr exposes secrets via a RESTful API that can be called by application code and can be configured with assorted secret stores. The following table lists the capabilities of each offering:
+
+| | Dapr secrets API | Secrets Store CSI driver |
+| | | |
+| **Supported secrets stores** | Local environment variables (for Development); Local file (for Development); Kubernetes Secrets; AWS Secrets Manager; Azure Key Vault secret store; Azure Key Vault with Managed Identities on Kubernetes; GCP Secret Manager; HashiCorp Vault | Azure Key Vault secret store|
+| **Accessing secrets in application code** | Call the Dapr secrets API | Access the mounted volume or sync mounted content as a Kubernetes secret and set an environment variable |
+| **Secret rotation** | New API calls obtain the updated secrets | Polls for secrets and updates the mount at a configurable interval |
+| **Logging and metrics** | The Dapr sidecar generates logs, which can be configured with collectors such as Azure Monitor, emits metrics via Prometheus, and exposes an HTTP endpoint for health checks | Emits driver and Azure Key Vault provider metrics via Prometheus |
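For illustration, application code reads a secret through the Dapr secrets API with a plain HTTP call to the sidecar; the store name `mysecretstore` and the secret name `dbPassword` below are hypothetical, and 3500 is the default Dapr sidecar HTTP port.

```bash
# Hypothetical component and secret names; the Dapr sidecar listens on port 3500 by default.
curl http://localhost:3500/v1.0/secrets/mysecretstore/dbPassword
```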
+
+For more information on the secret management in Dapr, see the [secrets management building block overview][dapr-secrets-block].
+
+For more information on the Secrets Store CSI driver and Azure Key Vault provider, see the [Secrets Store CSI driver overview][csi-secrets-store].
+
+### How does the managed Dapr cluster extension compare to the open source Dapr offering?
+
+The managed Dapr cluster extension is the easiest method to provision Dapr on an AKS cluster. With the extension, you're able to offload management of the Dapr runtime version by opting into automatic upgrades. Additionally, the extension installs Dapr with smart defaults (for example, provisioning the Dapr control plane in high availability mode).
+
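As a hedged sketch, provisioning Dapr through the cluster extension from the Azure CLI might look like the following; the cluster and resource group names are placeholders, and the parameters should be verified against the Dapr extension documentation.

```azurecli
# Assumes the k8s-extension Azure CLI extension is installed; names are placeholders.
az k8s-extension create --cluster-type managedClusters \
    --cluster-name myAKSCluster --resource-group myResourceGroup \
    --name dapr --extension-type Microsoft.Dapr
```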
+When installing Dapr OSS via helm or the Dapr CLI, runtime versions and configuration options are the responsibility of developers and cluster maintainers.
+
+Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features.
+
+[Learn more about migrating from Dapr OSS to the Dapr extension for AKS][dapr-migration].
+
+### How can I authenticate Dapr components with Azure AD using managed identities?
+
+- Learn how [Dapr components authenticate with Azure AD][dapr-msi].
+- Learn about [using managed identities with AKS][aks-msi].
+
+### How can I switch to using the Dapr extension if I've already installed Dapr via another method, such as Helm?
+
+Recommended guidance is to completely uninstall Dapr from the AKS cluster and reinstall it via the cluster extension.
+
+If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
+
+## Next Steps
+
+After learning about Dapr and some of the challenges it solves, try [Deploying an application with the Dapr cluster extension][dapr-quickstart].
+
+<!-- Links Internal -->
+[csi-secrets-store]: ./csi-secrets-store-driver.md
+[osm-docs]: ./open-service-mesh-about.md
+[cluster-extensions]: ./cluster-extensions.md
+[dapr-quickstart]: ./quickstart-dapr.md
+[dapr-migration]: ./dapr-migration.md
+[aks-msi]: ./use-managed-identity.md
+
+<!-- Links External -->
+[dapr-docs]: https://docs.dapr.io/
+[dapr-blocks]: https://docs.dapr.io/concepts/building-blocks-concept/
+[dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
+[dapr-msi]: https://docs.dapr.io/developing-applications/integrations/azure/authenticating-azure
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| K8s version | Upstream release | AKS preview | AKS GA | End of life |
|--|--|--|--|--|
-| 1.20 | Dec-08-20 | Jan 2021 | Mar 2021 | 1.23 GA |
| 1.21 | Apr-08-21 | May 2021 | Jul 2021 | 1.24 GA |
| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | 1.25 GA |
| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | 1.26 GA |
| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | 1.27 GA |
| 1.25 | Aug 2022 | Oct 2022 | Nov 2022 | 1.28 GA |
+| 1.26 | Dec 2022 | Jan 2023 | Mar 2023 | 1.29 GA |
## FAQ
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Use the following instructions to migrate your Ubuntu nodes to Mariner nodes.
> [!NOTE] > When adding a new Mariner node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool.+ 2. [Cordon the existing Ubuntu nodes][cordon-and-drain]. 3. [Drain the existing Ubuntu nodes][drain-nodes]. 4. Remove the existing Ubuntu nodes by deleting the node pool with the `az aks nodepool delete` command.
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
This policy can be used in the following policy [sections](./api-management-howt
## <a name="ValidateJWT"></a> Validate JWT
-The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value.
+The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value. The JSON Web Key Set (JWKS) is cached and isn't fetched on each request. Metadata is automatically refreshed once per hour; if a refresh fails, it's retried after five minutes.
> [!IMPORTANT] > The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless the `require-expiration-time` attribute is specified and set to `false`.
api-management Api Management Howto Manage Protocols Ciphers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-manage-protocols-ciphers.md
Title: Manage protocols and ciphers in Azure API Management | Microsoft Docs
-description: Learn how to manage protocols (TLS) and ciphers (DES) in Azure API Management.
+ Title: Manage protocols and ciphers in Azure API Management | Microsoft Learn
+description: Learn how to manage transport layer security (TLS) protocols and cipher suites in Azure API Management.
- -- Previously updated : 09/07/2021+ Last updated : 09/22/2022 # Manage protocols and ciphers in Azure API Management
-Azure API Management supports multiple versions of Transport Layer Security (TLS) protocol for:
+Azure API Management supports multiple versions of Transport Layer Security (TLS) protocol to secure API traffic for:
* Client side * Backend side
-* The 3DES cipher
-This guide shows you how to manage protocols and ciphers configuration for an Azure API Management instance.
+API Management also supports multiple cipher suites used by the API gateway.
-![Manage protocols and ciphers in APIM](./media/api-management-howto-manage-protocols-ciphers/api-management-protocols-ciphers.png)
+By default, API Management enables TLS 1.2 for client and backend connectivity and several supported cipher suites. This guide shows you how to manage protocols and ciphers configuration for an Azure API Management instance.
+++
+> [!NOTE]
+> * If you're using the self-hosted gateway, see [self-hosted gateway security](self-hosted-gateway-overview.md#security) to manage TLS protocols and cipher suites.
+> * The Consumption tier doesn't support changes to the default cipher configuration.
## Prerequisites * An API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
-## How to manage TLS protocols and 3DES cipher
+
+## How to manage TLS protocols and cipher suites
-1. Navigate to your **API Management instance** in the Azure portal.
-1. Scroll to the **Security** section in the side menu.
-1. Under the Security section, select **Protocols + ciphers**.
+1. In the left navigation of your API Management instance, under **Security**, select **Protocols + ciphers**.
1. Enable or disable desired protocols or ciphers.
-1. Click **Save**. Changes will be applied within an hour.
+1. Select **Save**. Changes are applied within an hour.
> [!NOTE]
-> Some protocols or cipher suites (like backend-side TLS 1.2) can't be enabled or disabled from the Azure portal. Instead, you'll need to apply the REST call. Use the `properties.customProperties` structure in the [Create/Update API Management Service REST API](/rest/api/apimanagement/current-ga/api-management-service/create-or-update) article.
+> Some protocols or cipher suites (such as backend-side TLS 1.2) can't be enabled or disabled from the Azure portal. Instead, you'll need to apply the REST API call. Use the `properties.customProperties` structure in the [Create/Update API Management Service](/rest/api/apimanagement/current-ga/api-management-service/create-or-update) REST API.
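For illustration only, a REST call along the following lines could set such a property through `properties.customProperties`; the API version and property key shown are assumptions to verify against the REST API reference, and the resource names are placeholders.

```azurecli
az rest --method patch \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>?api-version=2021-08-01" \
    --body '{"properties":{"customProperties":{"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls11":"false"}}}'
```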
## Next steps
+* For recommendations on securing your API Management instance, see [Azure security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline).
+* Learn about security considerations in the API Management [landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/security).
* Learn more about [TLS](/dotnet/framework/network-programming/tls).
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTS
- To run post-build commands, set the `POST_BUILD_COMMAND` setting to contain either a command, such as `echo Post-build command`, or a path to a script file relative to your project root, such as `scripts/postbuild.sh`. All commands must use relative paths to the project root folder.
-For additional settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
+For other settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
To access the build and deployment logs, see [Access deployment logs](#access-deployment-logs).
Existing web applications can be redeployed to Azure as follows:
1. **Source repository**: Maintain your source code in a suitable repository like GitHub, which enables you to set up continuous deployment later in this process. 1. Your *requirements.txt* file must be at the root of your repository for App Service to automatically install the necessary packages.
-1. **Database**: If your app depends on a database, provision the necessary resources on Azure as well. See [Tutorial: Deploy a Django web app with PostgreSQL - create a database](tutorial-python-postgresql-app.md#3create-the-postgresql-database-in-azure) for an example.
+1. **Database**: If your app depends on a database, create the necessary resources on Azure as well.
-1. **App service resources**: Create a resource group, App Service Plan, and App Service web app to host your application. You can most easily do this by doing an initial deployment of your code through the Azure CLI command [`az webapp up`](/cli/azure/webapp?az-webapp-up). Or, you can create and deploy resources as shown in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md). Replace the names of the resource group, App Service Plan, and the web app to be more suitable for your application.
+1. **App service resources**: Create a resource group, App Service Plan, and App Service web app to host your application. You can do it easily by running the Azure CLI command [`az webapp up`](/cli/azure/webapp?az-webapp-up). Or, you can create and deploy resources as shown in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md). Replace the names of the resource group, App Service Plan, and the web app to be more suitable for your application.
1. **Environment variables**: If your application requires any environment variables, create equivalent [App Service application settings](configure-common.md#configure-app-settings). These App Service settings appear to your code as environment variables, as described on [Access environment variables](#access-app-settings-as-environment-variables).
- - Database connections, for example, are often managed through such settings, as shown in [Tutorial: Deploy a Django web app with PostgreSQL - configure variables to connect the database](tutorial-python-postgresql-app.md#5connect-the-web-app-to-the-database).
+ - Database connections, for example, are often managed through such settings, as shown in [Tutorial: Deploy a Django web app with PostgreSQL - verify connection settings](tutorial-python-postgresql-app.md#2-verify-connection-settings).
- See [Production settings for Django apps](#production-settings-for-django-apps) for specific settings for typical Django apps. 1. **App startup**: Review the section, [Container startup process](#container-startup-process) later in this article to understand how App Service attempts to run your app. App Service uses the Gunicorn web server by default, which must be able to find your app object or *wsgi.py* folder. If needed, you can [Customize the startup command](#customize-startup-command). 1. **Continuous deployment**: Set up continuous deployment, as described on [Continuous deployment to Azure App Service](deploy-continuous-deployment.md) if using Azure Pipelines or Kudu deployment, or [Deploy to App Service using GitHub Actions](./deploy-continuous-deployment.md) if using GitHub Actions.
-1. **Custom actions**: To perform actions within the App Service container that hosts your app, such as Django database migrations, you can [connect to the container through SSH](configure-linux-open-ssh-session.md). For an example of running Django database migrations, see [Tutorial: Deploy a Django web app with PostgreSQL - run database migrations](tutorial-python-postgresql-app.md#7migrate-app-database).
+1. **Custom actions**: To perform actions within the App Service container that hosts your app, such as Django database migrations, you can [connect to the container through SSH](configure-linux-open-ssh-session.md). For an example of running Django database migrations, see [Tutorial: Deploy a Django web app with PostgreSQL - generate database schema](tutorial-python-postgresql-app.md#4-generate-database-schema).
- When using continuous deployment, you can perform those actions using post-build commands as described earlier under [Customize build automation](#customize-build-automation); a minimal SSH sketch follows this list. With these steps completed, you should be able to commit changes to your source repository and have those updates automatically deployed to App Service.
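A minimal sketch of that SSH-based workflow for a Django app, with placeholder resource names:

```azurecli
# Open an SSH session into the container that hosts the app.
az webapp ssh --resource-group <resource-group-name> --name <app-name>

# Then, inside the SSH session, run the Django migrations.
python manage.py migrate
```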
For App Service, you then make the following modifications:
Here, `FRONTEND_DIR` is used to build a path to where a build tool like yarn is run. You can again use an environment variable and App Setting as desired.
-1. Add `whitenoise` to your *requirements.txt* file. [Whitenoise](http://whitenoise.evans.io/en/stable/) (whitenoise.evans.io) is a Python package that makes it simple for a production Django app to serve it's own static files. Whitenoise specifically serves those files that are found in the folder specified by the Django `STATIC_ROOT` variable.
+1. Add `whitenoise` to your *requirements.txt* file. [Whitenoise](http://whitenoise.evans.io/en/stable/) (whitenoise.evans.io) is a Python package that makes it simple for a production Django app to serve its own static files. Whitenoise specifically serves those files that are found in the folder specified by the Django `STATIC_ROOT` variable.
1. In your *settings.py* file, add the following line for Whitenoise:
For App Service, you then make the following modifications:
## Serve static files for Flask apps
-If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.1.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [quickstart sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on Github.
+If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.1.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [quickstart sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on GitHub.
To serve static files directly from a route on your application, you can use the [`send_from_directory`](https://flask.palletsprojects.com/en/2.2.x/api/#flask.send_from_directory) method:
When deployed to App Service, Python apps run within a Linux Docker container th
This container has the following characteristics: -- Apps are run using the [Gunicorn WSGI HTTP Server](https://gunicorn.org/), using the additional arguments `--bind=0.0.0.0 --timeout 600`.
+- Apps are run using the [Gunicorn WSGI HTTP Server](https://gunicorn.org/), using the extra arguments `--bind=0.0.0.0 --timeout 600`.
- You can provide configuration settings for Gunicorn by [customizing the startup command](#customize-startup-command). - To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described on [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html) (docs.gunicorn.org). - By default, the base container image includes only the Flask web framework, but the container supports other frameworks that are WSGI-compliant and compatible with Python 3.6+, such as Django. -- To install additional packages, such as Django, create a [*requirements.txt*](https://pip.pypa.io/en/stable/user_guide/#requirements-files) file in the root of your project that specifies your direct dependencies. App Service then installs those dependencies automatically when you deploy your project.
+- To install other packages, such as Django, create a [*requirements.txt*](https://pip.pypa.io/en/stable/user_guide/#requirements-files) file in the root of your project that specifies your direct dependencies. App Service then installs those dependencies automatically when you deploy your project.
The *requirements.txt* file *must* be in the project root for dependencies to be installed. Otherwise, the build process reports the error: "Could not find setup.py or requirements.txt; Not running pip install." If you encounter this error, check the location of your requirements file.
During startup, the App Service on Linux container runs the following steps:
3. Check for the existence of a [Flask app](#flask-app), and launch Gunicorn for it if detected. 4. If no other app is found, start a default app that's built into the container.
-The following sections provide additional details for each option.
+The following sections provide extra details for each option.
### Django app
For Django apps, App Service looks for a file named `wsgi.py` within your app co
gunicorn --bind=0.0.0.0 --timeout 600 <module>.wsgi ```
-If you want more specific control over the startup command, use a [custom startup command](#customize-startup-command), replace `<module>` with the name of folder that contains *wsgi.py*, and add a `--chdir` argument if that module is not in the project root. For example, if your *wsgi.py* is located under *knboard/backend/config* from your project root, use the arguments `--chdir knboard/backend config.wsgi`.
+If you want more specific control over the startup command, use a [custom startup command](#customize-startup-command), replace `<module>` with the name of folder that contains *wsgi.py*, and add a `--chdir` argument if that module isn't in the project root. For example, if your *wsgi.py* is located under *knboard/backend/config* from your project root, use the arguments `--chdir knboard/backend config.wsgi`.
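For example, a full startup command for the *knboard* layout mentioned above might look like the following sketch:

```bash
gunicorn --bind=0.0.0.0 --timeout 600 --chdir knboard/backend config.wsgi
```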
To enable production logging, add the `--access-logfile` and `--error-logfile` parameters as shown in the examples for [custom startup commands](#customize-startup-command).
gunicorn --bind=0.0.0.0 --timeout 600 application:app
gunicorn --bind=0.0.0.0 --timeout 600 app:app ```
-If your main app module is contained in a different file, use a different name for the app object, or you want to provide additional arguments to Gunicorn, use a [custom startup command](#customize-startup-command).
+If your main app module is contained in a different file, use a different name for the app object, or you want to provide other arguments to Gunicorn, use a [custom startup command](#customize-startup-command).
### Default behavior
To specify a startup command or command file:
Replace `<custom-command>` with either the full text of your startup command or the name of your startup command file.
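As one hedged example, the startup command can also be set from the Azure CLI; the resource names below are placeholders.

```azurecli
az webapp config set --resource-group <resource-group-name> --name <app-name> \
    --startup-file "<custom-command-or-file>"
```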
-App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for additional information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com).
+App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for more information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com).
### Example startup commands
App Service ignores any errors that occur when processing a custom startup comma
gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi ```
- For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). If you are using auto-scale rules to scale your web app up and down, you should also dynamically set the number of gunicorn workers using the `NUM_CORES` environment variable in your startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers)
+ For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). If you're using auto-scale rules to scale your web app up and down, you should also dynamically set the number of gunicorn workers using the `NUM_CORES` environment variable in your startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers)
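    As a sketch, the dynamically scaled variant of the example above would look like this:

    ```bash
    gunicorn --bind=0.0.0.0 --timeout 600 --workers $((($NUM_CORES*2)+1)) --chdir <module_path> <module>.wsgi
    ```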
- **Enable production logging for Django**: Add the `--access-logfile '-'` and `--error-logfile '-'` arguments to the command line:
Use the following steps to access the deployment logs:
1. On the **Logs** tab, select the **Commit ID** for the most recent commit. 1. On the **Log details** page that appears, select the **Show Logs...** link that appears next to "Running oryx build...".
-Build issues such as incorrect dependencies in *requirements.txt* and errors in pre- or post-build scripts will appear in these logs. Errors also appear if your requirements file is not exactly named *requirements.txt* or does not appear in the root folder of your project.
+Build issues such as incorrect dependencies in *requirements.txt* and errors in pre- or post-build scripts will appear in these logs. Errors also appear if your requirements file isn't exactly named *requirements.txt* or doesn't appear in the root folder of your project.
## Open SSH session in browser
In general, the first step in troubleshooting is to use App Service Diagnostics:
Next, examine both the [deployment logs](#access-deployment-logs) and the [app logs](#access-diagnostic-logs) for any error messages. These logs often identify specific issues that can prevent app deployment or app startup. For example, the build can fail if your *requirements.txt* file has the wrong filename or isn't present in your project root folder.
-The following sections provide additional guidance for specific issues.
+The following sections provide guidance for specific issues.
- [App doesn't appear - default app shows](#app-doesnt-appear) - [App doesn't appear - "service unavailable" message](#service-unavailable)
The following sections provide additional guidance for specific issues.
- If your files exist, then App Service wasn't able to identify your specific startup file. Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command). -- <a name="service-unavailable"></a>**You see the message "Service Unavailable" in the browser.** The browser has timed out waiting for a response from App Service, which indicates that App Service started the Gunicorn server, but the app itself did not start. This condition could indicate that the Gunicorn arguments are incorrect, or that there's an error in the app code.
+- <a name="service-unavailable"></a>**You see the message "Service Unavailable" in the browser.** The browser has timed out waiting for a response from App Service, which indicates that App Service started the Gunicorn server, but the app itself didn't start. This condition could indicate that the Gunicorn arguments are incorrect, or that there's an error in the app code.
- Refresh the browser, especially if you're using the lowest pricing tiers in your App Service Plan. The app may take longer to start up when using free tiers, for example, and becomes responsive after you refresh the browser.
The following sections provide additional guidance for specific issues.
#### ModuleNotFoundError when app starts
-If you see an error like `ModuleNotFoundError: No module named 'example'`, this means that Python could not find one or more of your modules when the application started. This most often occurs if you deploy your virtual environment with your code. Virtual environments are not portable, so a virtual environment should not be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This will force Oryx to install your packages whenever you deploy to App Service. For more information, please see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html).
+If you see an error like `ModuleNotFoundError: No module named 'example'`, this means that Python couldn't find one or more of your modules when the application started. This most often occurs if you deploy your virtual environment with your code. Virtual environments aren't portable, so a virtual environment shouldn't be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This will force Oryx to install your packages whenever you deploy to App Service. For more information, please see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html).
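For example, a sketch of creating that app setting from the Azure CLI (resource names are placeholders):

```azurecli
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
    --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1
```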
### Database is locked
When attempting to run database migrations with a Django app, you may see "sqlit
Check the `DATABASES` variable in the app's *settings.py* file to ensure that your app is using a cloud database instead of SQLite.
-If you're encountering this error with the sample in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md), check that you completed the steps in [Configure environment variables to connect the database](tutorial-python-postgresql-app.md#5connect-the-web-app-to-the-database).
+If you're encountering this error with the sample in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md), check that you completed the steps in [Verify connection settings](tutorial-python-postgresql-app.md#2-verify-connection-settings).
#### Other issues -- **Passwords don't appear in the SSH session when typed**: For security reasons, the SSH session keeps your password hidden as you type. The characters are being recorded, however, so type your password as usual and press **Enter** when done.
+- **Passwords don't appear in the SSH session when typed**: For security reasons, the SSH session keeps your password hidden when you type. The characters are being recorded, however, so type your password as usual and press **Enter** when done.
- **Commands in the SSH session appear to be cut off**: The editor may not be word-wrapping commands, but they should still run correctly.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
Create your first web app.
> [Python (on Linux)](quickstart-python.md) > [!div class="nextstepaction"]
-> [HTML (on Windows or Linux)](quickstart-html.md)
+> [HTML](quickstart-html.md)
> [!div class="nextstepaction"] > [Custom container (Windows or Linux)](tutorial-custom-container.md)
app-service Quickstart Golang https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-golang.md
+
+ Title: 'Quickstart: Create a Go web app'
+description: Deploy your first Go (GoLang) Hello World to Azure App Service in minutes.
+ Last updated : 10/13/2022
+ms.devlang: go
++++
+# Deploy a Go web app to Azure App Service
+
+> [!IMPORTANT]
+> Go on App Service on Linux is _experimental_.
+>
+
+In this quickstart, you'll deploy a Go web app to Azure App Service. Azure App Service is a fully managed web hosting service that supports Go 1.18 and higher apps hosted in a Linux server environment.
+
+To complete this quickstart, you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs).
+- [Go 1.18](https://go.dev/dl/) or higher installed locally.
+
+## 1 - Sample application
+
+First, create a folder for your project.
+
+Go to the terminal window, change into the folder you created and run `go mod init <ModuleName>`. The ModuleName could just be the folder name at this point.
+
+The `go mod init` command creates a go.mod file to track your code's dependencies. So far, the file includes only the name of your module and the Go version your code supports. But as you add dependencies, the go.mod file will list the versions your code depends on.
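For example, assuming a hypothetical folder and module name of `go-hello`:

```bash
mkdir go-hello && cd go-hello
go mod init go-hello   # creates go.mod with the module name and Go version
```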
+
+Create a file called main.go. We'll be doing most of our coding here.
+
+```go
+package main
+import (
+ "fmt"
+ "net/http"
+)
+func main() {
+ http.HandleFunc("/", HelloServer)
+ http.ListenAndServe(":8080", nil)
+}
+func HelloServer(w http.ResponseWriter, r *http.Request) {
+ fmt.Fprintf(w, "Hello, %s!", r.URL.Path[1:])
+}
+```
+
+This program uses the `net/http` package to handle all requests to the web root with the HelloServer function. The call to `http.ListenAndServe` tells the server to listen on the TCP network address `:8080`.
+
+Using a terminal, go to your project's directory and run `go run main.go`. Now open a browser window and type the URL `http://localhost:8080/world`. You should see the message `Hello, world!`.
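For example, assuming the file above, a quick local check could look like this:

```bash
go run main.go
# In a second terminal:
curl http://localhost:8080/world   # prints: Hello, world!
```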
+
+## 2 - Create a web app in Azure
+
+To host your application in Azure, you need to create Azure App Service web app in Azure. You can create a web app using the Azure CLI.
+
+Azure CLI commands can be run on a computer with the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+Azure CLI has a command `az webapp up` that will create the necessary resources and deploy your application in a single step.
+
+If necessary, log in to Azure using [az login](/cli/azure/authenticate-azure-cli).
+
+```azurecli
+az login
+```
+
+Create the webapp and other resources, then deploy your code to Azure using [az webapp up](/cli/azure/webapp#az-webapp-up).
+
+```azurecli
+az webapp up --runtime GO:1.18 --sku B1
+```
+
+* The `--runtime` parameter specifies what version of Go your app is running. This example uses Go 1.18. To list all available runtimes, use the command `az webapp list-runtimes --os linux --output table`.
+* The `--sku` parameter defines the size (CPU, memory) and cost of the app service plan. This example uses the B1 (Basic) service plan, which will incur a small cost in your Azure subscription. For a full list of App Service plans, view the [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) page.
+* You can optionally specify a name with the argument `--name <app-name>`. If you don't provide one, then a name will be automatically generated.
+* You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az-appservice-list-locations) command.
+
+The command may take a few minutes to complete. While the command is running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives the message, "You can launch the app at http://&lt;app-name&gt;.azurewebsites.net", which is the app's URL on Azure.
+
+<pre>
+The webapp '&lt;app-name>' doesn't exist
+Creating Resource group '&lt;group-name>' ...
+Resource group creation complete
+Creating AppServicePlan '&lt;app-service-plan-name>' ...
+Creating webapp '&lt;app-name>' ...
+Creating zip with contents of dir /home/tulika/myGoApp ...
+Getting scm site credentials for zip deployment
+Starting zip deployment. This operation can take a while to complete ...
+Deployment endpoint responded with status code 202
+You can launch the app at http://&lt;app-name>.azurewebsites.net
+{
+ "URL": "http://&lt;app-name>.azurewebsites.net",
+ "appserviceplan": "&lt;app-service-plan-name>",
+ "location": "centralus",
+ "name": "&lt;app-name>",
+ "os": "&lt;os-type>",
+ "resourcegroup": "&lt;group-name>",
+ "runtime_version": "go|1.18",
+ "runtime_version_detected": "0.0",
+ "sku": "FREE",
+ "src_path": "&lt;your-folder-location>"
+}
+</pre>
++
+## 3 - Browse to the app
+
+Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. If you see a default app page, wait a minute and refresh the browser.
+
+The Go sample code is running in a Linux container in App Service using a built-in image.
+
+**Congratulations!** You've deployed your Go app to App Service.
+
+## 4 - Clean up resources
+
+When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, and all related resources:
+
+```azurecli-interactive
+az group delete --resource-group <resource-group-name>
+```
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure an App Service app](./configure-common.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Deploy from Azure Container Registry](./tutorial-custom-container.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Map a custom domain name](./app-service-web-tutorial-custom-domain.md)
app-service Quickstart Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html.md
cd html-docs-hello-world
az webapp up --location westeurope --name <app_name> --html ```
+> [!NOTE]
+> If you want to host your static content on a Linux-based App Service instance, configure PHP as your runtime using the `--runtime` and `--os-type` flags:
+>
+> `az webapp up --location westeurope --name <app_name> --runtime "PHP:8.1" --os-type linux`
+>
+> The PHP container includes a web server that is suitable to host static HTML content.
++ The `az webapp up` command does the following actions:
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django or Flask web app with PostgreSQL' description: Create a Python Django or Flask web app with a PostgreSQL database and deploy it to Azure. The tutorial uses either the Django or Flask framework and the app is hosted on Azure App Service on Linux.-- ms.devlang: python Previously updated : 03/09/2022 Last updated : 10/07/2022 # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
-In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. The Python app is hosted in a fully managed **[Azure App Service](./overview.md#app-service-on-linux)** which supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment. You can start with a basic pricing tier that can be scaled up at any later time.
+In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment.
**To complete this tutorial, you'll need:** * An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python). * Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/)
-* [Python 3.7 or higher](https://www.python.org/downloads/) installed locally.
-* [PostgreSQL](https://www.postgresql.org/download/) installed locally.
-## 1 - Sample application
+## Sample application
-Sample Python applications using the Flask and Django framework are provided to help you follow along with this tutorial. Download or clone one of the sample applications to your local workstation.
+Sample Python applications using the Flask and Django frameworks are provided to help you follow along with this tutorial. To deploy them without running them locally, skip this part.
+
+To run the application locally, make sure you have [Python 3.7 or higher](https://www.python.org/downloads/) and [PostgreSQL](https://www.postgresql.org/download/) installed locally. Then, download or clone the app:
### [Flask](#tab/flask)
git clone https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app
git clone https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git ``` --
-To run the application locally, navigate into the application folder:
+--
-### [Flask](#tab/flask)
-
-```bash
-cd msdocs-flask-postgresql-sample-app
-```
-
-### [Django](#tab/django)
-
-```bash
-cd msdocs-django-postgresql-sample-app
-```
---
-Create a virtual environment for the app:
--
-Install the dependencies:
-
-```Console
-pip install -r requirements.txt
-```
-
-> [!NOTE]
-> If you are following along with this tutorial with your own app, look at the *requirements.txt* file description in each project's *README.md* file ([Flask](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md), [Django](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/README.md)) to see what packages you'll need.
-
-This sample application requires an *.env* file describing how to connect to your local PostgreSQL instance. Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. This tutorial assumes the database name is *restaurant*. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
+Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
``` DBNAME=<database name>
DBUSER=<db-user-name>
DBPASS=<db-password> ```
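+
+As an optional sanity check, you can confirm that the values in the *.env* file connect to your local PostgreSQL instance before running the app. The `psql` call below is only a sketch with placeholder values analogous to the ones above:
+
+```bash
+# Replace the placeholders with the values from your .env file
+psql "host=<db-host> dbname=<database-name> user=<db-user-name> password=<db-password>" -c "SELECT version();"
+```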
-For Django, you can use SQLite locally instead of PostgreSQL by following the instructions in the comments of the [*settings.py*](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/azureproject/settings.py) file.
-
-Create the `restaurant` and `review` database tables:
-
-### [Flask](#tab/flask)
-
-```Console
-flask db init
-flask db migrate -m "initial migration"
-```
-
-### [Django](#tab/django)
-
-```Console
-python manage.py migrate
-```
---
-Run the app:
+Run the sample application with the following commands:
### [Flask](#tab/flask)
-```Console
+```bash
+# Clone the sample
+git clone https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app
+cd msdocs-flask-postgresql-sample-app
+# Create and activate a virtual environment
+python3 -m venv .venv # In CMD on Windows, run "py -m venv .venv" instead
+source .venv/bin/activate # In CMD on Windows, run ".venv\Scripts\activate" instead
+# Install dependencies
+pip install -r requirements.txt
+# Run database migration
+flask db upgrade
+# Run the app at http://127.0.0.1:5000
flask run ``` ### [Django](#tab/django)
-```Console
+```bash
+# Clone the sample
+git clone https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git
+cd msdocs-django-postgresql-sample-app
+# Create and activate a virtual environment
+python3 -m venv .venv # In CMD on Windows, run "py -m venv .venv" instead
+source .venv/bin/activate # In CMD on Windows, run ".venv\Scripts\activate" instead
+# Install dependencies
+pip install -r requirements.txt
+# Run database migration
+python manage.py migrate
+# Run the app at http://127.0.0.1:8000
python manage.py runserver ``` --
-### [Flask](#tab/flask)
-
-In a web browser, go to the sample application at `http://127.0.0.1:5000` and add some restaurants and restaurant reviews to see how the app works.
---
-### [Django](#tab/django)
-
-In a web browser, go to the sample application at `http://127.0.0.1:8000` and add some restaurants and restaurant reviews to see how the app works.
----
-> [!TIP]
-> With Django, you can create users with the `python manage.py createsuperuser` command like you would with a typical Django app. For more information, see the documentation for [django django-admin and manage.py](https://docs.djangoproject.com/en/1.8/ref/django-admin/). Use the superuser account to access the `/admin` portion of the web site. For Flask, use an extension such as [Flask-admin](https://github.com/flask-admin/flask-admin) to provide the same functionality.
-
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-
-## 2 - Create a web app in Azure
-
-To host your application in Azure, you need to create Azure App Service web app.
-### [Azure portal](#tab/azure-portal)
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resource.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find App Services in Azure portal." ::: |
-| [!INCLUDE [A screenshot showing the location of the Create button on the App Services page in the Azure portal](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-2.png" alt-text="A screenshot showing the location of the Create button on the App Services page in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing how to fill out the form to create a new App Service in the Azure portal](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-3.png" alt-text="A screenshot showing how to fill out the form to create a new App Service in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing how to select the basic App Service plan in the Azure portal](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-4-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-4.png" alt-text="A screenshot showing how to select the basic App Service plan in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing the location of the Review plus Create button in the Azure portal](<./includes/tutorial-python-postgresql-app/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-azure-portal-5.png" alt-text="A screenshot showing the location of the Review plus Create button in the Azure portal." ::: |
-
-### [VS Code](#tab/vscode-aztools)
-
-To create Azure resources in VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
-
-> [!div class="nextstepaction"]
-> [Download Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-1.png" alt-text="A screenshot showing how to find the VS Code Azure extension in VS Code." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-2.png" alt-text="A screenshot showing how to create a new web app in VS Code." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-3.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to name a new web app." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4a-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4a.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to create a new resource group." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4b-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-4b.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to name a new resource group." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-5.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to set the runtime stack of a web app in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-6.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-6.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to set location for new web app resource in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7a-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7a.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to create a new App Service plan in Azure." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7b-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-7b.png" alt-text="A screenshot showing how to use the search box in the top tool bar in VS Code to name a new App Service plan in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-8.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-8-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-8.png" alt-text="A screenshot showing how to use the search box in the top tool bar in VS Code to select a pricing tier for a web app in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-9.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-9-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-9.png" alt-text="A screenshot showing how to use the search box in the top tool bar of VS Code to skip configuring Application Insights for a web app in Azure." ::: |
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find App Services in Azure](<./includes/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10a-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10a.png" alt-text="A screenshot showing deployment with Visual Studio Code and View Output Button." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10b-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10b.png" alt-text="A screenshot showing deployment with Visual Studio Code and how to view App Service in Azure portal." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10c-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-app-service-visual-studio-code-10c.png" alt-text="A screenshot showing the default App Service web page when no app has been deployed." ::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-----
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-
-## 3 - Create the PostgreSQL database in Azure
-
-You can create a PostgreSQL database in Azure using the [Azure portal](https://portal.azure.com/), Visual Studio Code, or the Azure CLI.
-
-### [Azure portal](#tab/azure-portal)
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure Database for PostgreSQL resource.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [A screenshot showing how to use the search box in the top tool bar to find Postgres Services in Azure](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Postgres Services in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing the location of the Create button on the Azure Database for PostgreSQL servers page in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-2.png" alt-text="A screenshot showing the location of the Create button on the Azure Database for PostgreSQL servers page in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing the location of the Create button on the Azure Database for PostgreSQL Flexible server deployment option page in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3.png" alt-text="A screenshot showing the location of the Create Flexible Server button on the Azure Database for PostgreSQL deployment option page in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing how to fill out the form to create a new Azure Database for PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4.png" alt-text="A screenshot showing how to fill out the form to create a new Azure Database for PostgreSQL in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing how to select and configure the compute and storage for PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5.png" alt-text="A screenshot showing how to select and configure the basic database service plan in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing creating administrator account information for the PostgreSQL Flexible server in in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6.png" alt-text="Creating administrator account information for the PostgreSQL Flexible server in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing adding current IP as a firewall rule for the PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7.png" alt-text="A screenshot showing adding current IP as a firewall rule for the PostgreSQL Flexible server in the Azure portal." ::: |
--
-### [VS Code](#tab/vscode-aztools)
-
-Follow these steps to create your Azure Database for PostgreSQL resource using the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) in Visual Studio Code.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Open Azure Extension - Database in VS Code](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-1.png" alt-text="A screenshot showing how to open Azure Extension for Database in VS Code." ::: |
-| [!INCLUDE [Create database server in VS Code](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-2-240px.png" alt-text="A screenshot showing how create a database server in VSCode." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-2.png"::: |
-| [!INCLUDE [Azure portal - create new resource](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-3-240px.png" alt-text="A screenshot how to create a new resource in VS Code." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-3.png"::: |
-| [!INCLUDE [Azure portal - create new resource](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4a-240px.png" alt-text="A screenshot showing how to create a new resource in the VS Code - server name." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4a.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4b-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - SKU." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4b.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4c-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - admin account name." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4c.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4d-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - admin account password." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4d.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4e-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - resource group." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4e.png"::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4f-240px.png" alt-text="A screenshot showing how to create a new resource in VS Code - location." lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-4f.png":::|
-| [!INCLUDE [Configure access for the database in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5a-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5a.png" alt-text="A screenshot showing how to configure access for a database by configuring a firewall rule in VS Code." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5b-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5b.png" alt-text="A screenshot showing how to select the correct PostgreSQL server to add a firewall rule in VS Code." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5c-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-5c.png" alt-text="A screenshot showing a dialog box asking to add firewall rule for local IP address in VS Code." :::|
-| [!INCLUDE [Create a new Azure resource in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-6.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-visual-studio-code-6.png" alt-text="A screenshot showing how to create a PostgreSQL database server in VS Code." ::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Run `az login` to sign in to and follow these steps to create your Azure Database for PostgreSQL resource.
----
+--
+
+## 1. Create App Service and PostgreSQL
+
+In this step, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for PostgreSQL. For the creation process, you'll specify:
+
+* The **Name** for the web app. It's the name used as part of the DNS name for your web app in the form of `https://<app-name>.azurewebsites.net`.
+* The **Region** to run the app physically in the world.
+* The **Runtime stack** for the app. It's where you select the version of Python to use for your app.
+* The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app.
+* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
+
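+The portal wizard that follows provisions these resources for you with secure defaults. For orientation only, a rough Azure CLI sketch of comparable resources might look like the commands below. The names are placeholders, the runtime string can vary by CLI version, and the sketch omits the virtual network integration and private DNS zone that the wizard configures automatically.
+
+```azurecli
+az group create --name msdocs-python-postgres-tutorial --location eastus
+az appservice plan create --name <plan-name> --resource-group msdocs-python-postgres-tutorial --sku B1 --is-linux
+az webapp create --name msdocs-python-postgres-XYZ --resource-group msdocs-python-postgres-tutorial --plan <plan-name> --runtime "PYTHON:3.9"
+az postgres flexible-server create --name <postgres-server-name> --resource-group msdocs-python-postgres-tutorial
+```
+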
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+
+ :::column span="2":::
+ **Step 1.** In the Azure portal:
+ 1. Enter "web app database" in the search bar at the top of the Azure portal.
+ 1. Select the item labeled **Web App + Database** under the **Marketplace** heading.
+ You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the **Create Web App + Database** page, fill out the form as follows.
+ 1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-python-postgres-tutorial**.
+ 1. *Region* &rarr; Any Azure region near you.
+ 1. *Name* &rarr; **msdocs-python-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
+ 1. *Runtime stack* &rarr; **Python 3.9**.
+ 1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
+ 1. **PostgreSQL - Flexible Server** is selected by default as the database engine.
+ 1. Select **Review + create**.
+ 1. After validation completes, select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ - **Resource group** &rarr; The container for all the created resources.
+ - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
+ - **App Service** &rarr; Represents your app and runs in the App Service plan.
+ - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
+ - **Azure Database for PostgreSQL flexible server** &rarr; Accessible only from within the virtual network. A database and a user are created for you on the server.
+ - **Private DNS zone** &rarr; Enables DNS resolution of the PostgreSQL server in the virtual network.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png":::
+ :::column-end:::
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 4 - Allow web app to access the database
-
-After the Azure Database for PostgreSQL server is created, configure access to the server from the web app by adding a firewall rule. This can be done through the Azure portal or the Azure CLI.
-
-If you're working in VS Code, right-click the database server and select **Open in Portal** to go to the Azure portal. Or, go to the [Azure Cloud Shell](https://shell.azure.com) and run the Azure CLI commands.
-### [Azure portal](#tab/azure-portal-access)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [A screenshot showing the location and adding a firewall rule in the Azure portal](<./includes/tutorial-python-postgresql-app/add-access-to-postgres-from-web-app-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/add-access-to-postgres-from-web-app-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/add-access-to-postgres-from-web-app-portal-1.png" alt-text="A screenshot showing how to add access from other Azure services to a PostgreSQL database in the Azure portal." ::: |
-
-### [Azure CLI](#tab/azure-cli-access)
----
+## 2. Verify connection settings
+
+The creation wizard already generated the connectivity variables for you as [app settings](configure-common.md#configure-app-settings).
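+
+You can also verify the same settings from the command line, as an alternative to the portal steps below. The app and resource group names in this sketch are the values assumed earlier in the tutorial:
+
+```azurecli
+# Lists the app settings, including DBNAME, DBHOST, DBUSER, and DBPASS
+az webapp config appsettings list --name msdocs-python-postgres-XYZ --resource-group msdocs-python-postgres-tutorial --output table
+```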
+
+ :::column span="2":::
+    **Step 1.** In the App Service page, in the left menu, select **Configuration**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png":::
+ :::column-end:::
+ :::column span="2":::
+    **Step 2.** In the **Application settings** tab of the **Configuration** page, verify that the settings `DBNAME`, `DBHOST`, `DBUSER`, and `DBPASS` are present. They'll be injected into the runtime environment as environment variables.
+ App settings are a good way to keep connection secrets out of your code repository.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png":::
+ :::column-end:::
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 5 - Connect the web app to the database
-
-With the web app and PostgreSQL database created, the next step is to connect the web app to the PostgreSQL database in Azure.
-
-The web app code uses database information in four environment variables named `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS` to connect to the PostgresSQL server.
-
-### [Azure portal](#tab/azure-portal)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Azure portal connect app to postgres step 1](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-1.png" alt-text="A screenshot showing how to navigate to App Settings in the Azure portal." ::: |
-| [!INCLUDE [Azure portal connect app to postgres step 2](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-postgres-to-app-azure-portal-2.png" alt-text="A screenshot showing how to configure the App Settings in the Azure portal." ::: |
-
-### [VS Code](#tab/vscode-aztools)
+## 3. Deploy sample code
-To configure environment variables for the web app from VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
+In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action.
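+
+If you prefer to set up the same GitHub Actions deployment from the command line instead of the Deployment Center, recent Azure CLI versions include a command along the lines of the sketch below. Treat the exact flags as an assumption and check `az webapp deployment github-actions add --help` for your CLI version:
+
+```azurecli
+# Sketch only: <github-account> is your GitHub user or organization
+az webapp deployment github-actions add --name msdocs-python-postgres-XYZ --resource-group msdocs-python-postgres-tutorial --repo <github-account>/msdocs-flask-postgresql-sample-app --branch main --login-with-github
+```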
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [VS Code connect app to postgres step 1](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension.png" alt-text="A screenshot showing how to locate the Azure Tools extension in VS Code." ::: |
-| [!INCLUDE [VS Code connect app to postgres step 2](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-create-setting-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-create-setting.png" alt-text="A screenshot showing how to add a setting to the App Service in VS Code." ::: |
-| [!INCLUDE [VS Code connect app to postgres step 3](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-a-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-a.png" alt-text="A screenshot showing adding setting name for app service to connect to PostgreSQL database in VS Code." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-b-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-b.png" alt-text="A screenshot showing adding setting value for app service to connect to PostgreSQL database in VS Code." ::: |
-
-### [Azure CLI](#tab/azure-cli)
-----
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-
-## 6 - Deploy your application code to Azure
-
-Azure App service supports multiple methods to deploy your application code to Azure including support for GitHub Actions and all major CI/CD tools. This article focuses on how to deploy your code from your local workstation to Azure.
-
-### [Deploy using VS Code](#tab/vscode-aztools-deploy)
-
-To deploy a web app from VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [VS Code deploy step 1](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension.png" alt-text="A screenshot showing how to locate the Azure Tools extension in VS Code." ::: |
-| [!INCLUDE [VS Code deploy step 2](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-1.png" alt-text="A screenshot showing how to deploy a web app in VS Code." ::: |
-| [!INCLUDE [VS Code deploy step 3](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-2.png" alt-text="A screenshot showing how to deploy a web app in VS Code: selecting the code to deploy." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-3.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box to confirm deployment." ::: |
-| [!INCLUDE [VS Code deploy step 4](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-4-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-4.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box to choose to always deploy to the app service." ::: |
-| [!INCLUDE [VS Code deploy step 5](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-5.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box with choice to browse to website." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-6.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box with choice to view deployment details." ::: |
-
-### [Deploy using Local Git](#tab/local-git-deploy)
-
+### [Flask](#tab/flask)
-### [Deploy using a ZIP file](#tab/zip-deploy)
+ :::column span="2":::
+ **Step 1.** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app).
+ 1. Select **Fork**.
+ 1. Select **Create fork**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer.
+ See the environment variables being used in the production environment, including the app settings that you saw in the configuration page.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-3.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5.** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+ 1. In **Repository**, select **msdocs-flask-postgresql-sample-app**.
+ 1. In **Branch**, select **main**.
+ 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-5.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6.** In the Deployment Center page:
+ 1. Select **Logs**. A deployment run is already started.
+ 1. In the log item for the deployment run, select **Build/Deploy Logs**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-6.png" alt-text="A screenshot showing how to open deployment logs in the deployment center (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-6.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 7.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-7.png" alt-text="A screenshot showing a GitHub run in progress (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-7.png":::
+ :::column-end:::
+### [Django](#tab/django)
--
+ :::column span="2":::
+ **Step 1.** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app).
+ 1. Select **Fork**.
+ 1. Select **Create fork**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer.
+ See the environment variables being used in the production environment, including the app settings that you saw in the configuration page.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-3.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5.** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+ 1. In **Repository**, select **msdocs-django-postgresql-sample-app**.
+ 1. In **Branch**, select **main**.
+ 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-5.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6.** In the Deployment Center page:
+ 1. Select **Logs**. A deployment run is already started.
+ 1. In the log item for the deployment run, select **Build/Deploy Logs**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-6.png" alt-text="A screenshot showing how to open deployment logs in the deployment center (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-6.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 7.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-7.png" alt-text="A screenshot showing a GitHub run in progress (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-7.png":::
+ :::column-end:::
+
+--
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 7 - Migrate app database
-
-With the code deployed and the database in place, the app is almost ready to use. First, you need to establish the necessary schema in the database itself. You do this by "migrating" the data models in the Django app to the database.
-
-**Step 1.** Create SSH session and connect to web app server.
-
-### [Azure portal](#tab/azure-portal)
-
-Navigate to page for the App Service instance in the Azure portal.
-
-1. Select **SSH**, under **Development Tools** on the left side
-2. Then **Go** to open an SSH console on the web app server. (It may take a minute to connect for the first time as the web app container needs to start.)
-
-### [VS Code](#tab/vscode-aztools)
-
-In VS Code, you can use the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), which must be installed and be signed into Azure from VS Code.
-
-In the **App Service** section of the Azure Tools extension:
-
-1. Locate your web app and right-click to bring up the context menu.
-2. Select **SSH into Web App** to open an SSH terminal window.
-
-### [Azure CLI](#tab/azure-cli)
----
-> [!NOTE]
-> If you cannot connect to the SSH session, then the app itself has failed to start. **Check the diagnostic logs** for details. For example, if you haven't created the necessary app settings in the previous section, the logs will indicate `KeyError: 'DBNAME'`.
-
-**Step 2.** In the SSH session, run the following command to migrate the models into the database schema (you can paste commands using **Ctrl**+**Shift**+**V**):
+## 4. Generate database schema
### [Flask](#tab/flask)
-When you deploy the Flask sample app to Azure App Service, the database tables are created automatically in Azure PostgreSQL. If the tables aren't created, try the following command:
-
-```bash
-# Create database tables
-flask db init
-```
+With the PostgreSQL database protected by the virtual network, the easiest way to run [Flask database migrations](https://flask-migrate.readthedocs.io/en/latest/) is in an SSH session with the App Service container.
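+
+You can open that SSH session from the portal as shown in the steps below, or from the Azure CLI. The CLI sketch here assumes the app and resource group names used earlier in the tutorial:
+
+```azurecli
+# Opens a remote shell inside the running App Service container
+az webapp ssh --name msdocs-python-postgres-XYZ --resource-group msdocs-python-postgres-tutorial
+# Then, inside the container, run: flask db upgrade
+```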
+
+ :::column span="2":::
+    **Step 1.** Back in the App Service page, in the left menu, select **SSH**, and then select **Go** to open an SSH session in the browser.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the SSH terminal, run `flask db upgrade`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
+ Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-2.png":::
+ :::column-end:::
### [Django](#tab/django)
-```bash
-# Create database tables
-python manage.py migrate
-```
---
-If you encounter any errors related to connecting to the database, check the values of the application settings of the App Service created in the previous section, namely `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`. Without those settings, the migrate command can't communicate with the database.
+With the PostgreSQL database protected by the virtual network, the easiest way to run [Django database migrations](https://docs.djangoproject.com/en/4.1/topics/migrations/) is in an SSH session with the App Service container.
+
+ :::column span="2":::
+    **Step 1.** Back in the App Service page, in the left menu, select **SSH**, and then select **Go** to open an SSH session in the browser.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the SSH terminal, run `python manage.py migrate`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
+ Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-2.png":::
+ :::column-end:::
> [!TIP] > In an SSH session, for Django you can also create users with the `python manage.py createsuperuser` command like you would with a typical Django app. For more information, see the documentation for [django django-admin and manage.py](https://docs.djangoproject.com/en/1.8/ref/django-admin/). Use the superuser account to access the `/admin` portion of the web site. For Flask, use an extension such as [Flask-admin](https://github.com/flask-admin/flask-admin) to provide the same functionality.
+--
+ Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 8 - Browse to the app
+## 5. Browse to the app
+
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **Overview**.
+ 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png":::
+ :::column-end:::
+ :::column span="2":::
+    **Step 2.** Add a few restaurants and reviews to the list.
+    Congratulations, you're running a secure data-driven web app in Azure App Service, with connectivity to Azure Database for PostgreSQL.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Flask web app with PostgreSQL running in Azure showing restaurants and restaurant reviews." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png":::
+ :::column-end:::
-Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. It can take a minute or two for the app to start, so if you see a default app page, wait a minute and refresh the browser.
+Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-When you see your sample web app, it's running in a Linux container in App Service using a built-in image **Congratulations!** You've deployed your Python app to App Service.
+## 6. Stream diagnostic logs
+
+Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample app includes `print()` statements to demonstrate this capability as shown below.
### [Flask](#tab/flask)

### [Django](#tab/django)
+--
+
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **App Service logs**.
+ 1. Under **Application logging**, select **File System**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-2.png":::
+ :::column-end:::
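+As an alternative to the portal steps above, you can stream the same logs from a local terminal with the Azure CLI. The following is a sketch that assumes placeholder app and resource group names:
+
+```bash
+# Enable filesystem application logging (the equivalent of Step 1 above).
+az webapp log config --name <app-name> --resource-group <resource-group-name> --docker-container-logging filesystem
+
+# Stream the log output, including platform logs and logs from inside the container.
+az webapp log tail --name <app-name> --resource-group <resource-group-name>
+```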
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 9 - Stream diagnostic logs
+## 7. Clean up resources
+
+When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
+
+ :::column span="2":::
+ **Step 1.** In the search bar at the top of the Azure portal:
+ 1. Enter the resource group name.
+ 1. Select the resource group.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the resource group page, select **Delete resource group**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.**
+ 1. Enter the resource group name to confirm your deletion.
+ 1. Select **Delete**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-3.png":::
+ :::column-end:::
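+If you prefer the command line, deleting the resource group with the Azure CLI achieves the same result. A minimal sketch with a placeholder name:
+
+```bash
+# Delete the resource group and every resource it contains (prompts for confirmation).
+az group delete --name <resource-group-name>
+```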
-Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample app includes `print()` statements to demonstrate this capability as shown below.
-
-### [Flask](#tab/flask)
-
+Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-### [Django](#tab/django)
+## Frequently asked questions
+- [How much does this setup cost?](#how-much-does-this-setup-cost)
+- [How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-postgresql-server-thats-secured-behind-the-virtual-network-with-other-tools)
+- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions)
+- [How is the Django sample configured to run on Azure App Service?](#how-is-the-django-sample-configured-to-run-on-azure-app-service)
+- [I can't connect to the SSH session](#i-cant-connect-to-the-ssh-session)
+- [I get an error when running database migrations](#i-get-an-error-when-running-database-migrations)
-
+#### How much does this setup cost?
-You can access the console logs generated from inside the container that hosts the app on Azure.
+Pricing for the created resources is as follows:
-### [Azure portal](#tab/azure-portal)
+- The App Service plan is created in the **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
+- The PostgreSQL flexible server is created in the lowest burstable tier **Standard_B1ms**, with the minimum storage size, which can be scaled up or down. See [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
+- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
+- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Stream logs from Azure portal 1](<./includes/tutorial-python-postgresql-app/stream-logs-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/stream-logs-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/stream-logs-azure-portal-1.png" alt-text="A screenshot showing how to set application logging in the Azure portal." ::: |
-| [!INCLUDE [Stream logs from Azure portal 2](<./includes/tutorial-python-postgresql-app/stream-logs-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/stream-logs-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/stream-logs-azure-portal-2.png" alt-text="A screenshot showing how to stream logs in the Azure portal." ::: |
+#### How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?
-### [VS Code](#tab/vscode-aztools)
+- For basic access from a command-line tool, you can run `psql` from the app's SSH terminal (see the sketch after this list).
+- To connect from a desktop tool, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
+- You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network.
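+The following is a minimal sketch of connecting with `psql` from the app's SSH terminal, assuming the `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS` app settings are exposed as environment variables in the container:
+
+```bash
+# Run from the App Service SSH session; reuses the same connection settings as the app.
+PGPASSWORD="$DBPASS" psql --host "$DBHOST" --username "$DBUSER" --dbname "$DBNAME"
+```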
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Stream logs from VS Code 1](<./includes/tutorial-python-postgresql-app/stream-logs-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/stream-logs-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/stream-logs-visual-studio-code-1.png" alt-text="A screenshot showing how to set application logging in VS Code." ::: |
-| [!INCLUDE [Stream logs from VS Code 2](<./includes/tutorial-python-postgresql-app/stream-logs-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/stream-logs-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/stream-logs-visual-studio-code-2.png" alt-text="A screenshot showing VS Code output window." ::: |
+#### How does local app development work with GitHub Actions?
-### [Azure CLI](#tab/azure-cli)
+Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push them to GitHub. For example:
+```terminal
+git add .
+git commit -m "<some-message>"
+git push origin main
+```
--
+#### How is the Django sample configured to run on Azure App Service?
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
+> [!NOTE]
+> If you're following this tutorial with your own app, look at the *requirements.txt* file description in each project's *README.md* file ([Flask](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md), [Django](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/README.md)) to see what packages you'll need.
-## Clean up resources
+The [Django sample application](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app) configures settings in the *azureproject/production.py* file so that it can run in Azure App Service. These changes are common to deploying Django to production, and not specific to App Service.
-You can leave the app and database running as long as you want for further development work and skip ahead to [Next steps](#next-steps).
+- Django validates the HTTP_HOST header in incoming requests. The sample code uses the [`WEBSITE_HOSTNAME` environment variable in App Service](reference-app-settings.md#app-environment) to add the app's domain name to Django's [ALLOWED_HOSTS](https://docs.djangoproject.com/en/4.1/ref/settings/#allowed-hosts) setting.
-However, when you're finished with the sample app, you can remove all of the resources for the app from Azure to ensure you don't incur other charges and keep your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="6" highlight="3":::
-### [Azure portal](#tab/azure-portal)
+- Django doesn't support [serving static files in production](https://docs.djangoproject.com/en/4.1/howto/static-files/deployment/). For this tutorial, you use [WhiteNoise](https://whitenoise.evans.io/) to enable serving the files. The WhiteNoise package was already installed with requirements.txt, and its middleware is added to the list.
-Follow these steps while signed-in to the Azure portal to delete a resource group.
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="11-14" highlight="14":::
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Remove resource group Azure portal 1](<./includes/tutorial-python-postgresql-app/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-1.png" alt-text="A screenshot showing how to find resource group in the Azure portal." ::: |
-| [!INCLUDE [Remove resource group Azure portal 2](<./includes/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2.png" alt-text="A screenshot showing how to delete a resource group in the Azure portal." ::: |
-| [!INCLUDE [Remove resource group Azure portal 3](<./includes/tutorial-python-postgresql-app/remove-resource-group-azure-portal-3.md>)] | |
+ Then the static file settings are configured according to the Django documentation.
-### [VS Code](#tab/vscode-aztools)
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="23-24":::
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Remove resource group VS Code 1](<./includes/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-1.png" alt-text="A screenshot showing how to delete a resource group in VS Code." ::: |
-| [!INCLUDE [Remove resource group VS Code 2](<./includes/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-visual-studio-code-2.png" alt-text="A screenshot showing how to finish deleting a resource in VS Code." ::: |
+For more information, see [Production settings for Django apps](configure-language-python.md#production-settings-for-django-apps).
-### [Azure CLI](#tab/azure-cli)
+#### I can't connect to the SSH session
+If you can't connect to the SSH session, then the app itself has failed to start. Check the [diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you see an error like `KeyError: 'DBNAME'`, it may mean that the environment variable is missing (you may have removed the app setting).
-
+#### I get an error when running database migrations
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
+If you encounter any errors related to connecting to the database, check if the app settings (`DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`) have been changed. Without those settings, the migrate command can't communicate with the database.
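+A quick way to verify that those settings are still present is to list them with the Azure CLI, as in this sketch with placeholder names:
+
+```bash
+# List the app settings configured on the App Service app.
+az webapp config appsettings list --name <app-name> --resource-group <resource-group-name> --output table
+```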
## Next steps
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
The Private link configuration defines the infrastructure used by Application Ga
- **Frontend IP Configuration**: The frontend IP address that private link should forward traffic to on Application Gateway. - **Private IP address settings**: specify at least one IP address 1. Select **Add**.
-1. Within your **Application Gateways** properties blade, obtain and make a note of the **Resource ID**, you will require this if setting up a Private Endpoint within a diffrerent Azure AD tenant
+1. Within your **Application Gateway** properties blade, obtain and make a note of the **Resource ID**; you will require it if setting up a Private Endpoint within a different Azure AD tenant.
**Configure Private Endpoint**
A private endpoint is a network interface that uses a private IP address from th
> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respected frontend IP configuration. Frontend IP configurations without an associated listener will not be shown as a _Target sub-resource_. > [!Note]
-> If you are setting up the **Private Endpoint** from within another tenant, you will need to utilise the Azure Application Gateway Resource ID, along with sub-resource as either _appGwPublicFrontendIp_ or _appGwPrivateFrontendIp_, depending upon your Azure Application Gateway Private Link Frontend IP Configuration.
+> If you are provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID, along with the sub-resource name that corresponds to your frontend configuration. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the resource ID would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
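+If you want to retrieve the Resource ID from the command line rather than the portal, one option is the Azure CLI. A sketch with placeholder names:
+
+```bash
+# Return the resource ID of the Application Gateway.
+az network application-gateway show --name <appgw-name> --resource-group <resource-group-name> --query id --output tsv
+```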
# [Azure PowerShell](#tab/powershell)
applied-ai-services Overview Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview-experiment.md
- Title: "Overview: What is Azure Form Recognizer?"-
-description: Azure Form Recognizer service that analyzes and extracts text, table and data, maps field relationships as key-value pairs, and returns a structured JSON output from your forms and documents.
----- Previously updated : 10/10/2022-
-recommendations: false
---
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-# Overview: What is Azure Form Recognizer?
-
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* Concepts articles:
-
-| Model type | Model name |
-||--|
-|**Document analysis models**| &#9679; [**Read model**](concept-read.md)</br> &#9679; [**General document model**](concept-general-document.md)</br> &#9679; [**Layout model**](concept-layout.md) </br> |
-| **Prebuilt models** | &#9679; [**W-2 form model**](concept-w2.md) </br>&#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
-| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)|
-
-## Which Form Recognizer model should I use?
-
-This section will help you decide which Form Recognizer v3.0 supported model you should use for your application:
-
-| Type of document | Data to extract |Document format | Your best solution |
-| --|-| -|-|
-|**A text-based document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|</li></ul>The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read model**](concept-read.md)|
-|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
-|**A structured or semi-structured document that includes content formatted as fields and values**, like a credit application or survey form.|You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| The form or document is a standardized format commonly used in your business or industry and printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).|[**General document model**](concept-general-document.md)
-|**U.S. W-2 form**|You want to extract key information such as salary, wages, and taxes withheld from US W2 tax forms.</li></ul> |The W-2 document is in United States English (en-US) text.|[**W-2 model**](concept-w2.md)
-|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md)
- |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
-|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
-|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.</li></ul>|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)|
-|**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)|
-
->[!Tip]
->
-> * If you're still unsure which model to use, try the General Document model.
-> * The General Document model is powered by the Read OCR model to detect lines, words, locations, and languages.
-> * General document extracts all the same fields as Layout model (pages, tables, styles) and also extracts key-value pairs.
-
-## Form Recognizer models and development options
--
-> [!NOTE]
-> The following models and development options are supported by the Form Recognizer service v3.0. You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
-
-| Model | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[ **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul> |
-|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|
-|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul> |
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
---
- >[!TIP]
- >
- > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
- > * The v3.0 Studio supports any model trained with v2.1 labeled data.
- > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-
-The following models are supported by Form Recognizer v2.1. Use the links in the table to learn more about each model and browse the API references.
-
-| Model| Description | Development options |
-|-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
--
-## How to use Form Recognizer documentation
-
-This documentation contains the following article types:
-
-* [**Concepts**](concept-layout.md) provide in-depth explanations of the service functionality and features.
-* [**Quickstarts**](quickstarts/try-sdk-rest-api.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to-guides/try-sdk-rest-api.md) contain instructions for using the service in more specific or customized ways.
-* [**Tutorials**](tutorial-ai-builder.md) are longer guides that show you how to use the service as a component in broader business solutions.
-
-## Data privacy and security
-
- As with all the cognitive services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
-
-## Next steps
--
-> [!div class="checklist"]
->
-> * Try our [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more.
-> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.
---
-> [!div class="checklist"]
->
-> * Try our [**Sample Labeling online tool**](https://aka.ms/fott-2.1-ga/)
-> * Follow our [**client library / REST API quickstart**](./quickstarts/try-sdk-rest-api.md) to get started extracting data from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) to learn more.
-> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.
-
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Title: What is Azure Form Recognizer?
+ Title: "Overview: What is Azure Form Recognizer?"
-description: The Azure Form Recognizer service allows you to identify and extract key/value pairs and table data from your form documents, as well as extract major information from sales receipts and business cards.
+description: Azure Form Recognizer service that analyzes and extracts text, table and data, maps field relationships as key-value pairs, and returns a structured JSON output from your forms and documents.
Previously updated : 10/06/2022 Last updated : 10/12/2022 recommendations: false
-adobe-target: true
-adobe-target-activity: DocsExp–463504–A/B–Docs/FormRecognizer–DecisionTree–FY23Q1
-adobe-target-experience: Experience B
-adobe-target-content: ./overview-experiment
-#Customer intent: As a developer of form-processing software, I want to learn what the Form Recognizer service does so I can determine if I should use it.
+ <!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 --> # What is Azure Form Recognizer?
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* Concepts articles:
++
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the Concepts articles:
| Model type | Model name | ||--|
Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-
## Which Form Recognizer model should I use?
-This section will help you decide which Form Recognizer v3.0 supported model you should use for your application:
+This section will help you decide which **Form Recognizer v3.0** supported model you should use for your application:
| Type of document | Data to extract |Document format | Your best solution | | --|-| -|-|
This section will help you decide which Form Recognizer v3.0 supported model you
## Form Recognizer models and development options - > [!NOTE]
-> The following models and development options are supported by the Form Recognizer service v3.0. You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
+>The following models and development options are supported by the Form Recognizer service v3.0.
+
+You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
| Model | Description |Automation use cases | Development options | |-|--|-|--|
-|[ **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul> |
-|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|
-|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul> |
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
+|[**Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
+|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
+|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
+|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> |
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
::: moniker-end ::: moniker range="form-recog-2.1.0"++
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the Concepts articles:
+
+| Model type | Model name |
+||--|
+|**Document analysis model**| &#9679; [**Layout model**](concept-layout.md) </br> |
+| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>|
+| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)|
+
+## Which Form Recognizer model should I use?
+
+This section will help you decide which Form Recognizer v2.1 supported model you should use for your application:
+
+| Type of document | Data to extract |Document format | Your best solution |
+| --|-| -|-|
+|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
+|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices. |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md)
 |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt. |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
+|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
+|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)|
+|**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)|
+
+## Form Recognizer models and development options
>[!TIP] >
- > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+ > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-The following models are supported by Form Recognizer v2.1. Use the links in the table to learn more about each model and browse the API references.
+> [!NOTE]
+> The following models and development options are supported by the Form Recognizer service v2.1.
+
+Use the links in the table to learn more about each model and browse the API references:
| Model| Description | Development options | |-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
-## How to use Form Recognizer documentation
-
-This documentation contains the following article types:
-
-* [**Concepts**](concept-layout.md) provide in-depth explanations of the service functionality and features.
-* [**Quickstarts**](quickstarts/try-sdk-rest-api.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to-guides/try-sdk-rest-api.md) contain instructions for using the service in more specific or customized ways.
-* [**Tutorials**](tutorial-ai-builder.md) are longer guides that show you how to use the service as a component in broader business solutions.
- ## Data privacy and security As with all the cognitive services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
Previously updated : 04/06/2022 Last updated : 06/13/2022 # Azure Arc-enabled SQL Managed Instance - disaster recovery
-To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up failover groups.
+To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up Azure failover groups.
## Background
-The distributed availability groups used in Azure Arc-enabled SQL Managed Instance is the same technology that is in SQL Server. Because Azure Arc-enabled SQL Managed Instance runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups).
+Azure failover groups use the same distributed availability groups technology that is in SQL Server. Because Azure Arc-enabled SQL Managed Instance runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups).
> [!NOTE] > - The Azure Arc-enabled SQL Managed Instances in both geo-primary and geo-secondary sites need to be identical in terms of their compute & capacity, as well as the service tiers they are deployed in. > - Distributed availability groups can be set up for either General Purpose or Business Critical service tiers.
-To configure disaster recovery:
+To configure an Azure failover group:
1. Create custom resource for distributed availability group at the primary site 1. Create custom resource for distributed availability group at the secondary site
-1. Copy the mirroring certificates
+1. Copy the binary data from the mirroring certificates
1. Set up the distributed availability group between the primary and secondary sites The following image shows a properly configured distributed availability group: ![A properly configured distributed availability group](.\media\business-continuity\dag.png)
-### Configure distributed availability groups
+### Configure Azure failover group
1. Provision the managed instance in the primary site.
The following image shows a properly configured distributed availability group:
az sql mi-arc create --name <primaryinstance> --tier bc --replicas 3 --k8s-namespace <namespace> --use-k8s ```
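   For example, a concrete invocation might look like the following; the instance and namespace names are illustrative:

   ```azurecli
   az sql mi-arc create --name sqlprimary --tier bc --replicas 3 --k8s-namespace my-namespace --use-k8s
   ```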
-2. Provision the managed instance in the secondary site and configure as a disaster recovery instance. At this point, the system databases are not part of the contained availability group.
+2. Switch context to the secondary cluster by running ```kubectl config use-context <secondarycluster>``` and provision the managed instance in the secondary site that will be the disaster recovery instance. At this point, the system databases are not part of the contained availability group.
> [!NOTE] > - It is important to specify `--license-type DisasterRecovery` **during** the Azure Arc SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
The following image shows a properly configured distributed availability group:
az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s ```
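   As a sketch, with illustrative cluster-context, instance, and namespace names, the secondary-site steps might look like:

   ```console
   kubectl config use-context secondarycluster
   az sql mi-arc create --name sqlsecondary --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace my-namespace --use-k8s
   ```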
-3. Copy the mirroring certificates from each site to a location that's accessible to both the geo-primary and geo-secondary instances.
+3. Mirroring certificates - The binary data inside the Mirroring Certificate property of the Arc SQL MI is needed for the Instance Failover Group CR (Custom Resource) creation.
- ```azurecli
- az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file $HOME/sqlcerts/<name>.pemΓÇï --k8s-namespace <namespace> --use-k8s
- az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file $HOME/sqlcerts/<name>.pem --k8s-namespace <namespace> --use-k8s
- ```
+ This can be achieved in a few ways:
- Example:
+ (a) If using ```az``` CLI, generate the mirroring certificate file first, and then point to that file while configuring the Instance Failover Group so the binary data is read from the file and copied over into the CR. The cert files are not needed post FOG creation.
- ```azurecli
- az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pemΓÇï --k8s-namespace my-namespace --use-k8s
- az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
- ```
+ (b) If using ```kubectl```, directly copy and paste the binary data from the Arc SQL MI CR into the yaml file that will be used to create the Instance Failover Group.
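   For approach (b), a minimal sketch of inspecting the custom resource to locate the mirroring certificate data follows; the `sqlmi` resource short name is an assumption to verify against the CRDs installed in your cluster:

   ```console
   # Dump the Arc SQL MI custom resource and locate the mirroring certificate property in the output
   kubectl get sqlmi sqlprimary -n my-namespace -o yaml
   ```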
++
+ Using (a) above:
+
+ Create the mirroring certificate file for primary instance:
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+ Connect to the secondary cluster and create the mirroring certificate file for secondary instance:
+
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+ Once the mirroring certificate files are created, copy the certificate from the secondary instance to a shared/local path on the primary instance cluster and vice-versa.
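   For example, if the cluster hosts are reachable over SSH, the certificate files could be exchanged with a standard copy tool; the host names and paths below are illustrative:

   ```console
   # Copy each instance's certificate file to the other cluster's host
   scp $HOME/sqlcerts/sqlsecondary.pem user@primary-cluster-host:~/sqlcerts/
   scp $HOME/sqlcerts/sqlprimary.pem user@secondary-cluster-host:~/sqlcerts/
   ```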
4. Create the failover group resource on both sites.
The following image shows a properly configured distributed availability group:
```azurecli az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary DAG resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+ ```azurecli
+ az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
+ ```
+   On the secondary instance, run the following command to set up the FOG CR. The ```--partner-mirroring-cert-file``` in this case should point to a path that has the mirroring certificate file generated from the primary instance as described in 3(a) above.
+ ```azurecli
az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary DAG resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s ``` Example:- ```azurecli
- az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
- az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s ```
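   For the secondary side, an equivalent example invocation (instance names, IP address, and certificate path are illustrative):

   ```azurecli
   az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s
   ```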
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a [CSI volume](https://kubernetes-csi.github.io/docs/). For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets.
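As an illustration, the extension can typically be installed on a connected cluster with the `az k8s-extension` CLI; the extension type and resource names below are assumptions to verify against the current extension documentation:

```azurecli
az k8s-extension create --cluster-name <arc-cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider
```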
-Benefits of the Azure Key Vault Secrets Provider extension include the folllowing:
+Benefits of the Azure Key Vault Secrets Provider extension include the following:
- Mounts secrets/keys/certs to pod using a CSI Inline volume - Supports pod portability with the SecretProviderClass CRD
Benefits of the Azure Key Vault Secrets Provider extension include the folllowin
- A cluster with a supported Kubernetes distribution that has already been [connected to Azure Arc](quickstart-connect-cluster.md). The following Kubernetes distributions are currently supported for this scenario: - Cluster API Azure
+ - Azure Kubernetes Service (AKS) clusters on Azure Stack HCI
- AKS hybrid clusters provisioned from Azure - Google Kubernetes Engine - OpenShift Kubernetes Distribution
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure
- Only one instance of Open Service Mesh can be deployed on an Azure Arc-connected Kubernetes cluster. - Support is available for the two most recently released minor versions of Arc-enabled Open Service Mesh. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases. - The following Kubernetes distributions are currently supported:
- - AKS Engine
+ - AKS (Azure Kubernetes Service) Engine
+ - AKS clusters on Azure Stack HCI
- AKS hybrid clusters provisioned from Azure - Cluster API Azure - Google Kubernetes Engine
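As a sketch of the Azure CLI deployment path mentioned above, installing the extension on a connected cluster might look like the following; the extension type and names are assumptions to verify against the current documentation:

```azurecli
az k8s-extension create --cluster-name <arc-cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --name osm
```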
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
CD pipeline manipulates PRs in the GitOps repository. It needs a Service Connect
--set orchestratorPAT=<Azure Repos PAT token> ``` > [!NOTE]
-> `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Read` permissions.
+> `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Full` permissions.
3. Configure Flux to send notifications to GitOps connector: ```console
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 08/24/2022 Last updated : 09/26/2022
$HOME\.KVA\.ssh\logkey
To run the `az arcappliance logs` command, the path to the kubeconfig must be provided. The kubeconfig is generated after successful completion of the `az arcappliance deploy` command and is placed in the same directory as the CLI command in ./kubeconfig or as specified in `--outfile` (if the parameter was passed).
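For example, a hedged invocation that points the logs command at the generated kubeconfig (verify the flag names with `az arcappliance logs --help`):

```azurecli
az arcappliance logs hci --kubeconfig ./kubeconfig --out-dir ./logs
```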
-If `az arcappliance deploy` was not completed, then the kubeconfig file may exist but may be empty or missing data, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs instead. The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane Endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 10.97.176.27", the command to use for logs collection would be:
+If `az arcappliance deploy` was not completed, then the kubeconfig file may exist but may be empty or missing data, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs instead. The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane Endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 10.97.176.27", the command to use for logs collection would be:
```azurecli az arcappliance logs hci --out-dir c:\logs --ip 10.97.176.27
When the appliance is deployed to a host resource pool, there is no high availab
### Restricted outbound connectivity
-If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs listed below are not blocked.
+Make sure the URLs listed below are added to your allowlist.
-URLS:
+#### Proxy URLs used by appliance agents and services
-| Agent resource | Description |
-|||
-|`https://mcr.microsoft.com`|Microsoft container registry|
-|`https://*.his.arc.azure.com`|Azure Arc Identity service|
-|`https://*.dp.kubernetesconfiguration.azure.com`|Azure Arc configuration service|
-|`https://*.servicebus.windows.net`|Cluster connect|
-|`https://guestnotificationservice.azure.com` |Guest notification service|
-|`https://*.dp.prod.appliances.azure.com`|Resource bridge data plane service|
-|`https://ecpacr.azurecr.io` |Resource bridge container image download |
-|`.blob.core.windows.net`<br> `*.dl.delivery.mp.microsoft.com`<br> `*.do.dsp.mp.microsoft.com` |Resource bridge image download |
-|`https://azurearcfork8sdev.azurecr.io` |Azure Arc for Kubernetes container image download |
-|`adhs.events.data.microsoft.com ` |Required diagnostic data sent to Microsoft from control plane nodes|
-|`v20.events.data.microsoft.com` |Required diagnostic data sent to Microsoft from the Azure Stack HCI or Windows Server host|
+|**Service**|**Port**|**URL**|**Direction**|**Notes**|
+|--|--|--|--|--|
+|Microsoft container registry | 443 | `https://mcr.microsoft.com`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images for installation. |
+|Azure Arc Identity service | 443 | `https://*.his.arc.azure.com` | Appliance VM IP and Control Plane IP need outbound connection. | Manages identity and access control for Azure resources |
+|Azure Arc configuration service | 443 | `https://*.dp.kubernetesconfiguration.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used for Kubernetes cluster configuration.|
+|Cluster connect service | 443 | `https://*.servicebus.windows.net` | Appliance VM IP and Control Plane IP need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
+|Guest Notification service| 443 | `https://guestnotificationservice.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used to connect on-prem resources to Azure.|
+|SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Host machine, Appliance VM IP and Control Plane IP need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
+|Resource bridge (appliance) Dataplane service| 443 | `https://*.dp.prod.appliances.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Communicate with resource provider in Azure.|
+|Resource bridge (appliance) container image download| 443 | `*.blob.core.windows.net, https://ecpacr.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. |
+|Resource bridge (appliance) image download| 80 | `*.dl.delivery.mp.microsoft.com`| Host machine, Appliance VM IP and Control Plane IP need outbound connection. | Download the Arc Resource Bridge OS images. |
+|Azure Arc for Kubernetes container image download| 443 | `https://azurearcfork8sdev.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. |
+|ADHS telemetry service | 443 | adhs.events.data.microsoft.com| Appliance VM IP and Control Plane IP need outbound connection. | Runs inside the appliance/mariner OS. Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any Kubernetes control plane. |
+|Microsoft events data service | 443 |v20.events.data.microsoft.com| Appliance VM IP and Control Plane IP need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
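A quick way to spot-check that one of these endpoints is reachable through your proxy from the appliance VM or host; substitute your own proxy address and port:

```console
curl -v --proxy http://<proxy-address>:<port> https://mcr.microsoft.com
```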
-URLs used by other Arc agents:
+#### Used by other Arc agents
-|Agent resource | Description |
-|||
-|`https://management.azure.com` |Azure Resource Manager|
-|`https://login.microsoftonline.com` |Azure Active Directory|
+|**Service**|**URL**|
+|--|--|
+|Azure Resource Manager| `https://management.azure.com`|
+|Azure Active Directory| `https://login.microsoftonline.com`|
### Azure Arc resource bridge is unreachable
When deploying the resource bridge on VMware Vcenter, you may get an error sayin
If you don't see your problem here or you can't resolve your issue, try one of the following channels for support:
-* Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
+- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
-* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
+- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
Title: Troubleshoot Azure Arc-enabled servers agent connection issues description: This article tells how to troubleshoot and resolve issues with the Connected Machine agent that arise with Azure Arc-enabled servers when trying to connect to the service. Previously updated : 07/16/2021 Last updated : 10/13/2022
Use the following table to identify and resolve issues when configuring the Azur
| Error code | Probable cause | Suggested remediation | ||-|--| | AZCM0000 | The action was successful | N/A |
-| AZCM0001 | An unknown error occurred | Contact Microsoft Support for further assistance |
-| AZCM0011 | The user canceled the action (CTRL+C) | Retry the previous command |
-| AZCM0012 | The access token provided is invalid | Obtain a new access token and try again |
-| AZCM0013 | The tags provided are invalid | Check that the tags are enclosed in double quotes, separated by commas, and that any names or values with spaces are enclosed in single quotes: `--tags "SingleName='Value with spaces',Location=Redmond"`
-| AZCM0014 | The cloud is invalid | Specify a supported cloud: `AzureCloud` or `AzureUSGovernment` |
-| AZCM0015 | The correlation ID specified isn't a valid GUID | Provide a valid GUID for `--correlation-id` |
-| AZCM0016 | Missing a mandatory parameter | Review the output to identify which parameters are missing |
-| AZCM0017 | The resource name is invalid | Specify a name that only uses alphanumeric characters, hyphens and/or underscores. The name cannot end with a hyphen or underscore. |
-| AZCM0018 | The command was executed without administrative privileges | Retry the command with administrator or root privileges in an elevated command prompt or console session. |
-| AZCM0041 | The credentials supplied are invalid | For device logins, verify the user account specified has access to the tenant and subscription where the server resource will be created. For service principal logins, check the client ID and secret for correctness, the expiration date of the secret, and that the service principal is from the same tenant where the server resource will be created. |
-| AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has access to create Azure Arc-enabled server resources in the specified resource group. |
-| AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has access to delete Azure Arc-enabled server resources in the specified resource group. If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. |
+| AZCM0001 | An unknown error occurred | Contact Microsoft Support for assistance. |
+| AZCM0011 | The user canceled the action (CTRL+C) | Retry the previous command. |
+| AZCM0012 | The access token is invalid | If authenticating via access token, obtain a new token and try again. If authenticating via service principal or device logins, contact Microsoft Support for assistance. |
+| AZCM0016 | Missing a mandatory parameter | Review the error message in the output to identify which parameters are missing. For the complete syntax of the command, run `azcmagent <command> --help`. |
+| AZCM0018 | The command was executed without administrative privileges | Retry the command in an elevated user context (administrator/root). |
+| AZCM0019 | The path to the configuration file is incorrect | Ensure the path to the configuration file is correct and try again. |
+| AZCM0023 | The value provided for a parameter (argument) is invalid | Review the error message for more specific information. Refer to the syntax of the command (`azcmagent <command> --help`) for valid values or expected format for the arguments. |
+| AZCM0026 | There is an error in network configuration or some critical services are temporarily unavailable | Check if the required endpoints are reachable (for example, hostnames are resolvable, endpoints are not blocked). If the network is configured for Private Link Scope, a Private Link Scope resource ID must be provided for onboarding using the `--private-link-scope` parameter. |
+| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created<sup>[1](#footnote3)</sup>.<br> For service principal logins, check the client ID and secret for correctness, the expiration date of the secret<sup>[2](#footnote4)</sup>, and that the service principal is from the same tenant where the server resource will be created<sup>[1](#footnote3)</sup>.<br> <a name="footnote3"></a><sup>1</sup>See [How to find your Azure Active Directory tenant ID](/azure/active-directory/fundamentals/active-directory-how-to-find-tenant).<br> <a name="footnote4"></a><sup>2</sup>In Azure portal, open Azure Active Directory and select the App registrations blade. Select the application to be used and the Certificates and secrets within it. Check whether the expiration date has passed. If it has, create new credentials with sufficient roles and try again. See [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). |
+| AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Review the error message in the output to identify the cause of the failure to create resource and the suggested remediation. For permission issues, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions) for more information. |
+| AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has permissions to delete Azure Arc-enabled server resources in the specified resource group; see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions).<br> If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. |
| AZCM0044 | A resource with the same name already exists | Specify a different name for the `--resource-name` parameter or delete the existing Azure Arc-enabled server in Azure and try again. |
-| AZCM0061 | Unable to reach the agent service | Verify you're running the command in an elevated user context (administrator/root) and that the HIMDS service is running on your server. |
-| AZCM0062 | An error occurred while connecting the server | Review other error codes in the output for more specific information. If the error occurred after the Azure resource was created, you need to delete the Arc server from your resource group before retrying. |
-| AZCM0063 | An error occurred while disconnecting the server | Review other error codes in the output for more specific information. If you continue to encounter this error, you can delete the resource in Azure, and then run `azcmagent disconnect --force-local-only` on the server to disconnect the agent. |
-| AZCM0064 | The agent service is not responding | Check the status of the `himds` service to ensure it is running. Start the service if it is not running. If it is running, wait a minute then try again. |
-| AZCM0065 | An internal agent communication error occurred | Contact Microsoft Support for assistance |
-| AZCM0066 | The agent web service is not responding or unavailable | Contact Microsoft Support for assistance |
-| AZCM0067 | The agent is already connected to Azure | Run `azcmagent disconnect` to remove the current connection, then try again. |
-| AZCM0068 | An internal error occurred while disconnecting the server from Azure | Contact Microsoft Support for assistance |
-| AZCM0070 | Unable to obtain local config | The Hybrid Instance Metadata service (HIMDS) might not be running. Check the status of your HIMDS service (for Windows) or the HIMDS daemon (for Linux). |
+| AZCM0062 | An error occurred while connecting the server | Review the error message in the output for more specific information. If the error occurred after the Azure resource was created, delete this resource before retrying. |
+| AZCM0063 | An error occurred while disconnecting the server | Review the error message in the output for more specific information. If this error persists, delete the resource in Azure, and then run `azcmagent disconnect --force-local-only` on the server. |
+| AZCM0067 | The machine is already connected to Azure | Run `azcmagent disconnect` to remove the current connection, then try again. |
+| AZCM0068 | Subscription name was provided, and an error occurred while looking up the corresponding subscription GUID. | Retry the command with the subscription GUID instead of subscription name. |
+| AZCM0061<br>AZCM0064<br>AZCM0065<br>AZCM0066<br>AZCM0070<br> | The agent service is not responding or unavailable | Verify the command is run in an elevated user context (administrator/root). Ensure that the HIMDS service is running (start or restart HIMDS as needed; see the example after this table), then try the command again. |
| AZCM0081 | An error occurred while downloading the Azure Active Directory managed identity certificate | If this message is encountered while attempting to connect the server to Azure, the agent won't be able to communicate with the Azure Arc service. Delete the resource in Azure and try connecting again. |
-| AZCM0101 | The command was not parsed successfully | Run `azcmagent <command> --help` to review the correct command syntax |
-| AZCM0102 | Unable to retrieve the computer hostname | Run `hostname` to check for any system-level error messages, then contact Microsoft Support. |
-| AZCM0103 | An error occurred while generating RSA keys | Contact Microsoft Support for assistance |
-| AZCM0104 | Failed to read system information | Verify the identity used to run `azcmagent` has administrator/root privileges on the system and try again. |
+| AZCM0101 | The command was not parsed successfully | Run `azcmagent <command> --help` to review the command syntax. |
+| AZCM0102 | An error occurred while retrieving the computer hostname | Retry the command and specify a resource name (with the `--resource-name` or `-n` parameter). Use only alphanumeric characters, hyphens and/or underscores; note that the resource name cannot end with a hyphen or underscore. |
+| AZCM0103 | An error occurred while generating RSA keys | Contact Microsoft Support for assistance. |
+| AZCM0105 | An error occurred while downloading the Azure Active Directory managed identity certificate | Delete the resource created in Azure and try again. |
+| AZCM0147-<br>AZCM0152 | An error occurred while installing Azcmagent on Windows | Review the error message in the output for more specific information. |
+| AZCM0127-<br>AZCM0146 | An error occurred while installing Azcmagent on Linux | Review the error message in the output for more specific information. |
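For the agent-service error codes above, a hedged way to check and restart the HIMDS service is shown below; the Windows service name `himds` and the Linux daemon name `himdsd` are assumptions to verify on your agent version:

```console
# Windows (elevated PowerShell): check and restart the Hybrid Instance Metadata service
Get-Service himds
Restart-Service himds

# Linux (root shell): check and restart the HIMDS daemon
sudo systemctl status himdsd
sudo systemctl restart himdsd
```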
## Agent verbose log
Before following the troubleshooting steps described later in this article, the
### Windows
-The following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an interactive installation.
+Following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an interactive installation.
```console & "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "resourceGroupName" --tenant-id "tenantID" --location "regionName" --subscription-id "subscriptionID" --verbose ```
-The following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an at-scale installation using a service principal.
+Following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an at-scale installation using a service principal.
```console & "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
The following is an example of the command to enable verbose logging with the Co
### Linux
-The following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an interactive installation.
+Following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an interactive installation.
>[!NOTE] >You must have *root* access permissions on Linux machines to run **azcmagent**.
The following is an example of the command to enable verbose logging with the Co
azcmagent connect --resource-group "resourceGroupName" --tenant-id "tenantID" --location "regionName" --subscription-id "subscriptionID" --verbose ```
-The following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an at-scale installation using a service principal.
+Following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an at-scale installation using a service principal.
```bash azcmagent connect \
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
First, the script deploys a virtual appliance called [Azure Arc resource bridge
### Azure Arc Resource Bridge -- Azure Arc Resource Bridge IP needs access to the URLs listed [here](../vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md#resource-bridge-networking-requirements).
+- Azure Arc resource bridge IP needs access to the URLs listed [here](../vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md#resource-bridge-networking-requirements).
### vCenter Server
First, the script deploys a virtual appliance called [Azure Arc resource bridge
- A virtual network that can provide internet access, directly or through a proxy. It must also be possible for VMs on this network to communicate with the vCenter server on TCP port (usually 443). -- At least one free IP address on the above network that isn't in the DHCP range. At least three free IP addresses if there's no DHCP server on the network.
+- At least three free static IP addresses on the above network. If you have a DHCP server on the network, the IP addresses must be outside the DHCP range.
- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs.
A typical onboarding that uses the script takes 30 to 60 minutes. During the pro
| **vCenter FQDN/Address** | Enter the fully qualified domain name for the vCenter Server instance (or an IP address). For example: **10.160.0.1** or **nyc-vcenter.contoso.com**. | | **vCenter Username** | Enter the username for the vSphere account. The required permissions for the account are listed in the [prerequisites](#prerequisites). | | **vCenter password** | Enter the password for the vSphere account. |
-| **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge's VM should be deployed. |
-| **Network selection** | Select the name of the virtual network or segment to which the VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
-| **Static IP / DHCP** | If you have DHCP server in your network and want to use it, enter **y**. Otherwise, enter **n**. If you are using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc Resource Bridge VM for DNS resolution. VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br> 6. **VLAN ID** (optional) |
-| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge's VM will be deployed. |
-| **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge's VM. |
+| **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge VM should be deployed. |
+| **Network selection** | Select the name of the virtual network or segment to which the Azure Arc resource bridge VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
+| **Static IP / DHCP** | For deploying Azure Arc resource bridge, the preferred configuration is to use Static IP. Enter **n** to select static IP configuration. While not recommended, if you have DHCP server in your network and want to use it instead, enter **y**. If you are using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). If you use DHCP, the cluster configuration IP address still needs to be a static IP address. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br> 6. **VLAN ID** (optional) |
+| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge VM will be deployed. |
+| **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge VM. |
| **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. | | **VM template Name** | Provide a name for the VM template that will be created in your vCenter Server instance based on the downloaded OVA file. For example: **arc-appliance-template**. | | **Control Plane IP** address | Provide a static IP address that's outside the DHCP range but still available on the network. Ensure that this IP address isn't assigned to any other machine on the network. Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. Control Plane IP must have internet access. |
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
The following firewall URL exceptions are needed for the Azure Arc resource brid
| Azure Arc for K8s container image download | 443 | https://azurearcfork8sdev.azurecr.io | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. | | ADHS telemetry service | 443 | adhs.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any K8s control plane. | | Microsoft events data service | 443 | v20.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
+| vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the vCenter server to communicate with the Appliance VM and the control plane.|
## Azure permissions required
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
recommendations: false
# Guide for running C# Azure Functions in an isolated process
-This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in Azure Functions. Running out-of-process lets you decouple your function code from the Azure Functions runtime. Isolated process C# functions run on .NET 6.0, .NET 7.0, and .NET Framework 4.8 (preview support). [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 7.0.
+This article is an introduction to using C# to develop .NET isolated process functions, which run in an isolated worker process in Azure Functions. Running in an isolated process lets you decouple your function code from the Azure Functions runtime. For the .NET versions you can target, see the [supported versions](#supported-versions) for Azure Functions in an isolated process. [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 7.0.
| Getting started | Concepts| Samples | |--|--|--|
A [HostBuilder] is used to build and return a fully initialized [IHost] instance
### Configuration
-The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run out-of-process, which includes the following functionality:
+The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run in an isolated process, which includes the following functionality:
+ Default set of converters. + Set the default [JsonSerializerOptions] to ignore casing on property names.
azure-maps Webgl Custom Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md
description: How to add a custom WebGL layer to a map using the Azure Maps Web SDK. Previously updated : 09/23/2022 Last updated : 10/17/2022
map.layers.add(new atlas.layer.WebGLLayer("layerId",
This sample renders a triangle on the map using a WebGL layer.
-<!-- Insert example here -->
- ![A screenshot showing a triangle rendered on a map, using a WebGL layer.](./media/how-to-webgl-custom-layer/triangle.png)
+For a fully functional sample with source code, see [Simple 2D WebGL layer][Simple 2D WebGL layer] in the Azure Maps Samples.
+ The map's camera matrix is used to project spherical Mercator point to gl coordinates. Mercator point \[0, 0\] represents the top left corner of the Mercator world and \[1, 1\] represents the bottom right corner.
to load a [glTF][glTF] file and render it on the map using [three.js][threejs].
You need to add the following script files. ```html
-<script src="https://unpkg.com/three@0.102.0/build/three.min.js"></script>
-
-<script src="https://unpkg.com/three@0.102.0/examples/js/loaders/GLTFLoader.js"></script>
+<script src="https://unpkg.com/three@latest/build/three.min.js"></script>
+<script src="https://unpkg.com/three@latest/examples/js/loaders/GLTFLoader.js"></script>
``` This sample renders an animated 3D parrot on the map.
-<!-- Insert example here -->
- ![A screenshot showing an animated 3D parrot on the map.](./media/how-to-webgl-custom-layer/3d-parrot.gif)
+For a fully functional sample with source code, see [Three custom WebGL layer][Three custom WebGL layer] in the Azure Maps Samples.
+ The `onAdd` function loads a `.glb` file into memory and instantiates three.js objects such as Camera, Scene, Light, and a `THREE.WebGLRenderer`.
a single frame by calling `map.triggerRepaint()` in the `render` function.
> - To enable anti-aliasing simply set `antialias` to `true` as one of the style options while creating the map.
+## Render a 3D model using babylon.js
+
+[Babylon.js][babylonjs] is one of the world's leading WebGL-based graphics engines. The following example shows how to load a GLTF file and render it on the map using babylon.js.
+
+You need to add the following script files.
+
+```html
+<script src="https://cdn.babylonjs.com/babylon.js"></script>
+<script src="https://cdn.babylonjs.com/loaders/babylonjs.loaders.min.js"></script>
+```
+
+This sample renders a satellite tower on the map.
+
+The `onAdd` function instantiates a BABYLON engine and a scene. It then loads a `.gltf` file using BABYLON.SceneLoader.
+
+The `render` function calculates the projection matrix of the camera and renders the model to the scene.
+
+![A screenshot showing an example of rendering a 3D model using babylon.js.](./media/how-to-webgl-custom-layer/render-3d-model.png)
+
+For a fully functional sample with source code, see [Babylon custom WebGL layer][Babylon custom WebGL layer] in the Azure Maps Samples.
+ ## Render a deck.gl layer A WebGL layer can be used to render layers from the [deck.gl][deckgl]
within a certain time range.
You need to add the following script file. ```html
-<script src="https://unpkg.com/deck.gl@8.8.9/dist.min.js"></script>
+<script src="https://unpkg.com/deck.gl@latest/dist.min.js"></script>
``` Define a layer class that extends `atlas.layer.WebGLLayer`.
class DeckGLLayer extends atlas.layer.WebGLLayer {
} ```
-This sample renders an arc-layer from the [deck.gl][deckgl] library.
+
+This sample renders an arc-layer from the [deck.gl][deckgl] library.
![A screenshot showing an arc-layer from the Deck G L library.](./media/how-to-webgl-custom-layer/arc-layer.png)
+For a fully functional sample with source code, see [Deck GL custom WebGL layer][Deck GL custom WebGL layer] in the Azure Maps Samples.
+ ## Next steps Learn more about the classes and methods used in this article:
Learn more about the classes and methods used in this article:
[deckgl]: https://deck.gl/ [glTF]: https://www.khronos.org/gltf/ [OpenGL ES]: https://www.khronos.org/opengles/
+[babylonjs]: https://www.babylonjs.com/
[WebGLLayer]: /javascript/api/azure-maps-control/atlas.layer.webgllayer [WebGLLayerOptions]: /javascript/api/azure-maps-control/atlas.webgllayeroptions [WebGLRenderer interface]: /javascript/api/azure-maps-control/atlas.webglrenderer [MercatorPoint]: /javascript/api/azure-maps-control/atlas.data.mercatorpoint
+[Simple 2D WebGL layer]: https://samples.azuremaps.com/?sample=simple-2d-webgl-layer
+[Deck GL custom WebGL layer]: https://samples.azuremaps.com/?sample=deck-gl-custom-webgl-layer
+[Three custom WebGL layer]: https://samples.azuremaps.com/?sample=three-custom-webgl-layer
+[Babylon custom WebGL layer]: https://samples.azuremaps.com/?sample=babylon-custom-webgl-layer
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that Azure Monitor Agent and the
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>| |:|::|::|::|::
-| AlmaLinux 8.5 | X<sup>3</sup> | | |
-| AlmaLinux 8 | X | X | |
+| AlmaLinux 8 | X<sup>3</sup> | X | |
| Amazon Linux 2017.09 | | X | | | Amazon Linux 2 | | X | | | CentOS Linux 8 | X | X | | | CentOS Linux 7 | X<sup>3</sup> | X | X | | CentOS Linux 6 | | X | |
-| CentOS Linux 6.5+ | | X | X |
-| CBL-Mariner 2.0 | X | | |
+| CBL-Mariner 2.0 | X<sup>3</sup> | | |
| Debian 11 | X<sup>3</sup> | | | | Debian 10 | X | X | | | Debian 9 | X | X | X | | Debian 8 | | X | |
-| Debian 7 | | | X |
| OpenSUSE 15 | X | | |
-| OpenSUSE 13.1+ | | | X |
| Oracle Linux 8 | X | X | | | Oracle Linux 7 | X | X | X | | Oracle Linux 6 | | X | |
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
# Migrate to workspace-based Application Insights resources
-This guide will walk you through migrating a classic Application Insights resource to a workspace-based resource. Workspace-based resources support full integration between Application Insights and Log Analytics. Workspace-based resources send Application Insights telemetry to a common Log Analytics workspace. This behavior allows you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
+This article walks you through migrating a classic Application Insights resource to a workspace-based resource. Workspace-based resources support full integration between Application Insights and Log Analytics. Workspace-based resources send Application Insights telemetry to a common Log Analytics workspace. This behavior allows you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
-Workspace-based resources enable common Azure role-based access control (Azure RBAC) across your resources, and eliminate the need for cross-app/workspace queries.
+Workspace-based resources enable common Azure role-based access control across your resources and eliminate the need for cross-app/workspace queries.
-**Workspace-based resources are currently available in all commercial regions and Azure US Government.**
+Workspace-based resources are currently available in all commercial regions and Azure US Government.
## New capabilities
-Workspace-based Application Insights allows you to take advantage of all the latest capabilities of Azure Monitor and Log Analytics, including:
+Workspace-based Application Insights allows you to take advantage of the latest capabilities of Azure Monitor and Log Analytics:
-* [Customer-Managed Keys (CMK)](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys that only you have access to.
-* [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints.
-* [Bring Your Own Storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over:
- - Encryption-at-rest policy
- - Lifetime management policy
- - Network access for all data associated with Application Insights Profiler and Snapshot Debugger
-* [Commitment Tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the Pay-As-You-Go price. Otherwise, Pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
-* Faster data ingestion via Log Analytics streaming ingestion.
+* [Customer-managed keys](../logs/customer-managed-keys.md) provide encryption at rest for your data with encryption keys that only you have access to.
+* [Azure Private Link](../logs/private-link-security.md) allows you to securely link the Azure platform as a service (PaaS) to your virtual network by using private endpoints.
+* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over:
+ - Encryption-at-rest policy.
+ - Lifetime management policy.
+ - Network access for all data associated with Application Insights Profiler and Snapshot Debugger.
+* [Commitment tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the pay-as-you-go price. Otherwise, pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
+* Data is ingested faster via Log Analytics streaming ingestion.
> [!NOTE]
-> After migrating to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources may be stored in a common Log Analytics workspace. You will still be able to pull data from a specific Application Insights resource, as described under [Understanding log queries](#understanding-log-queries).
+> After you migrate to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources might be stored in a common Log Analytics workspace. You'll still be able to pull data from a specific Application Insights resource, as described in the section [Understand log queries](#understand-log-queries).
## Migration process When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate will change the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data. Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table).
-The migration process is **permanent, and cannot be reversed**. Once you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. However, once you migrate you're able to change the target workspace as often as needed.
-If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource, use the [workspace-based resource creation guide](create-workspace-resource.md).
+*The migration process is permanent and can't be reversed*. After you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. After you migrate, you can change the target workspace as often as needed.
-## Pre-requisites
+If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource, see the [Workspace-based resource creation guide](create-workspace-resource.md).
-- A Log Analytics workspace with the access control mode set to the **`use resource or workspace permissions`** setting.
+## Prerequisites
- - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **`workspace based permissions`** setting. To learn more about Log Analytics workspace access control, consult the [access control mode guidance](../logs/manage-access.md#access-control-mode)
+- A Log Analytics workspace with the access control mode set to the **Use resource or workspace permissions** setting:
- - If you don't already have an existing Log Analytics Workspace, [consult the Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
+ - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **workspace-based permissions** setting. To learn more about Log Analytics workspace access control, see the [Access control mode guidance](../logs/manage-access.md#access-control-mode).
+
+ - If you don't already have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
-- **Continuous export is not supported for workspace-based resources** and must be disabled.
-Once the migration is complete, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs.
+- **Continuous export** isn't supported for workspace-based resources and must be disabled. After the migration is finished, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs.
> [!CAUTION]
- > * Diagnostics settings uses a different export format/schema than continuous export, migrating will break any existing integrations with Stream Analytics.
- > * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export))
+ > * Diagnostic settings use a different export format/schema than continuous export. Migrating will break any existing integrations with Azure Stream Analytics.
+ > * Diagnostic settings export might increase costs. For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export).
-- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource.
+- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored after you migrate your Application Insights resource.
> [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period after migration, you will need to adjust your [workspace retention settings](/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
- > - If you've selected data retention greater than 90 days on data ingested into the Classic Application Insights resource prior to migration, data retention will continue to be billed to through that Application Insights resource until that data exceeds the retention period.
- > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, then use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
+ > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until the data exceeds the retention period.
+ > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
-- Understand [Workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
+- Understand [workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
## Migrate your resource
-This section walks through migrating a classic Application Insights resource to a workspace-based resource.
+To migrate a classic Application Insights resource to a workspace-based resource:
-1. From your Application Insights resource, select **Properties** under the **Configure** heading in the left-hand menu bar.
+1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left.
-![Properties highlighted in red box](./media/convert-classic-resource/properties.png)
+ ![Screenshot that shows Properties under the Configure heading.](./media/convert-classic-resource/properties.png)
-2. Select **`Migrate to Workspace-based`**.
-
-![Migrate resource button](./media/convert-classic-resource/migrate.png)
+1. Select **Migrate to Workspace-based**.
-3. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription, or in a different subscription that shares the same Azure AD tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource.
+ ![Screenshot that shows the Migrate to Workspace-based resource button.](./media/convert-classic-resource/migrate.png)
-> [!NOTE]
-> Migrating to a workspace-based resource can take up to 24 hours, but is usually faster than that. Please rely on accessing data through your Application Insights resource while waiting for the migration process to complete. Once completed, you will start seeing new data stored in the Log Analytics workspace tables.
+1. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription or a different subscription that shares the same Azure Active Directory tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource.
-![Migration wizard UI with option to select targe workspace](./media/convert-classic-resource/migration.png)
-
-Once your resource is migrated, you'll see the corresponding workspace info in the **Overview** pane:
+ > [!NOTE]
+ > Migrating to a workspace-based resource can take up to 24 hours, but the process is usually faster than that. Rely on accessing data through your Application Insights resource while you wait for the migration process to finish. After it's finished, you'll see new data stored in the Log Analytics workspace tables.
+
+ ![Screenshot that shows the Migration wizard UI with the option to select target workspace.](./media/convert-classic-resource/migration.png)
+
+ After your resource is migrated, you'll see the corresponding workspace information in the **Overview** pane:
-![Workspace Name](./media/create-workspace-resource/workspace-name.png)
+ ![Screenshot that shows the Workspace Name](./media/create-workspace-resource/workspace-name.png)
-Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
+ Selecting the blue link text takes you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
> [!TIP]
-> After migrating to a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
+> After you migrate to a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
-## Understanding log queries
+## Understand log queries
-We still provide full backwards compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience.
+We still provide full backward compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience.
-To write queries against the [new workspace-based table structure/schema](#workspace-based-resource-changes), you must first navigate to your Log Analytics workspace.
+To write queries against the [new workspace-based table structure/schema](#workspace-based-resource-changes), you must first go to your Log Analytics workspace.
To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](#appmetrics).
-If you have multiple Application Insights resources store telemetry in one Log Analytics workspace, but you only want to query data from one specific Application Insights resource, you have two options:
+If you have multiple Application Insights resources that store telemetry in one Log Analytics workspace, but you want to query data from one specific Application Insights resource, you have two options:
-- Option 1: Go to the desired Application Insights resource and open the **Logs** tab. All queries from this tab will automatically pull data from the selected Application Insights resource.-- Option 2: Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and open the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in ```_ResourceId``` property that is available in all application specific tables.
+- **Option 1:** Go to the desired Application Insights resource and select the **Logs** tab. All queries from this tab will automatically pull data from the selected Application Insights resource.
+- **Option 2:** Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and select the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in `_ResourceId` property that's available in all application-specific tables.
-Notice that if you query directly from the Log Analytics workspace, you'll only see data that is ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource.
+Notice that if you query directly from the Log Analytics workspace, you'll only see data that's ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource.
> [!NOTE]
-> If you rename your Application Insights resource after migrating to workspace-based model, the Application Insights Logs tab will no longer show the telemetry collected before renaming. You will be able to see all data (old and new) on the Logs tab of the associated Log Analytics resource.
+> If you rename your Application Insights resource after you migrate to the workspace-based model, the Application Insights **Logs** tab will no longer show the telemetry collected before renaming. You can see all old and new data on the **Logs** tab of the associated Log Analytics resource.
## Programmatic resource migration
+This section shows you how to migrate your resources programmatically by using the Azure CLI, Azure PowerShell, or Azure Resource Manager templates.
+ ### Azure CLI To access the preview Application Insights Azure CLI commands, you first need to run:
To access the preview Application Insights Azure CLI commands, you first need to
az extension add -n application-insights ```
-If you don't run the `az extension add` command, you'll see an error message that states: `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
+If you don't run the `az extension add` command, you'll see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
-Now you can run the following to create your Application Insights resource:
+Now you can run the following code to create your Application Insights resource:
```azurecli az monitor app-insights component update --app
az monitor app-insights component update --app
az monitor app-insights component update --app your-app-insights-resource-name -g your_resource_group --workspace "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/test1234/providers/microsoft.operationalinsights/workspaces/test1234555" ```
-For the full Azure CLI documentation for this command, consult the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-update).
+For the full Azure CLI documentation for this command, see the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-update).
### Azure PowerShell
-The `Update-AzApplicationInsights` PowerShell command doesn't currently support migrating a classic Application Insights resource to workspace-based. To create a workspace-based resource with PowerShell, you can use the Azure Resource Manager templates below and deploy with PowerShell.
+The `Update-AzApplicationInsights` PowerShell command doesn't currently support migrating a classic Application Insights resource to workspace based. To create a workspace-based resource with PowerShell, you can use the following Azure Resource Manager templates and deploy with PowerShell.
### Azure Resource Manager templates
+This section provides Azure Resource Manager templates that you can use to create a workspace-based Application Insights resource.
+ #### Template file ```json
The `Update-AzApplicationInsights` PowerShell command doesn't currently support
```
-## Modifying the associated workspace
+## Modify the associated workspace
-Once a workspace-based Application Insights resource has been created, you can modify the associated Log Analytics Workspace.
+After a workspace-based Application Insights resource has been created, you can modify the associated Log Analytics workspace.
From within the Application Insights resource pane, select **Properties** > **Change Workspace** > **Log Analytics Workspaces**. ## Frequently asked questions
+This section provides answers to common questions.
+ ### Is there any implication on the cost from migration?
-There's usually no difference, with a couple of exceptions.
+There's usually no difference, with a couple of exceptions:
+ - Migrated Application Insights resources can use [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers) to reduce cost if the data volumes in the workspace are high enough.
- Grandfathered Application Insights resources will no longer get 1 GB per month free from the original Application Insights pricing model. ### How will telemetry capping work? You can set a [daily cap on the Log Analytics workspace](../logs/daily-cap.md#application-insights).
-There's no strict (billing-wise) capping available.
+No strict cap on billing is available.
### How will ingestion-based sampling work?
There are no changes to ingestion-based sampling.
No. We merge data during query time.
-### Will my old logs queries continue to work?
+### Will my old log queries continue to work?
Yes, they'll continue to work.
Yes, they'll continue to work.
Yes, they'll continue to work.
-### Will migration impact AppInsights API accessing data?
+### Will migration affect data access through the Application Insights API?
-No, migration won't impact existing API access to data. After migration, you'll be able to access data directly from a workspace using a [slightly different schema](#workspace-based-resource-changes).
+No. Migration won't affect existing API access to data. After migration, you'll be able to access data directly from a workspace by using a [slightly different schema](#workspace-based-resource-changes).
### Will there be any impact on Live Metrics or other monitoring experiences?
-No, there's no impact to [Live Metrics](live-stream.md#live-metrics-monitor--diagnose-with-1-second-latency) or other monitoring experiences.
+No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor--diagnose-with-1-second-latency) or other monitoring experiences.
-### What happens with Continuous export after migration?
+### What happens with continuous export after migration?
Continuous export doesn't support workspace-based resources.
-You'll need to switch to [Diagnostic Settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
+You'll need to switch to [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
## Troubleshooting
+This section offers troubleshooting tips for common issues.
+ ### Access mode
-**Error message:** *The selected workspace is configured with workspace-based access mode. Some APM features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI.*
+**Error message:** "The selected workspace is configured with workspace-based access mode. Some APM features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI."
-In order for your workspace-based Application Insights resource to operate properly you need to change the access control mode of your target Log Analytics workspace to the **resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For detailed instructions, consult the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience will remain blocked.
+For your workspace-based Application Insights resource to operate properly, you need to change the access control mode of your target Log Analytics workspace to the **Resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For instructions, see the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience will remain blocked.
-If you can't change the access control mode for security reasons for your current target workspace, we recommend creating a new Log Analytics workspace to use for the migration.
+If you can't change the access control mode for security reasons for your current target workspace, create a new Log Analytics workspace to use for the migration.
### Continuous export
-**Error message:** *Continuous Export needs to be disabled before continuing. After migration, use Diagnostic Settings for export.*
+**Error message:** "Continuous Export needs to be disabled before continuing. After migration, use Diagnostic Settings for export."
-The legacy continuous export functionality isn't supported for workspace-based resources. Prior to migrating you need to disable continuous export.
+The legacy **Continuous export** functionality isn't supported for workspace-based resources. Prior to migrating, you need to disable continuous export.
-1. From your Application Insights resource view, under the **Configure** heading select **Continuous Export**.
+1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**.
- ![Continuous export menu item](./media/convert-classic-resource/continuous-export.png)
+ ![Screenshot that shows the Continuous export menu item.](./media/convert-classic-resource/continuous-export.png)
-2. Select **Disable**.
+1. Select **Disable**.
- ![Continuous export disable button](./media/convert-classic-resource/disable.png)
+ ![Screenshot that shows the Continuous export Disable button.](./media/convert-classic-resource/disable.png)
-- Once you have selected disable, you can navigate back to the migration UI. If the edit continuous export page prompts you that your settings won't be saved, you can select ok for this prompt as it doesn't pertain to disabling/enabling continuous export.
+ - After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings won't be saved, select **OK** for this prompt because it doesn't pertain to disabling or enabling continuous export.
-- Once you've successfully migrated your Application Insights resource to workspace-based, you can use Diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostic settings** > **add diagnostic setting** from within your Application Insights resource. You can select all tables, or a subset of tables to archive to a storage account, or to stream to Azure Event Hubs. For detailed guidance on diagnostic settings, refer to the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md).
+ - After you've successfully migrated your Application Insights resource to workspace based, you can use diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostics settings** > **Add diagnostic setting** from within your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md).
### Retention settings
-**Warning Message:** *Your customized Application Insights retention settings won't apply to data sent to the workspace. You'll need to reconfigure these separately.*
+**Warning message:** "Your customized Application Insights retention settings won't apply to data sent to the workspace. You'll need to reconfigure these separately."
-You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you may want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data.
+You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you might want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data.
-You can check your current retention settings for Log Analytics under **General** > **Usage and estimated costs** > **Data Retention** from within the Log Analytics UI. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource.
+You can check your current retention settings for Log Analytics under **General** > **Usage and estimated costs** > **Data Retention** from within the Log Analytics UI. This setting will affect how long any new ingested data is stored after you migrate your Application Insights resource.
## Workspace-based resource changes
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separate from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This simplifies your configuration by allowing you to analyze data across multiple solutions more easily, and to use the capabilities of workspaces.
+Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This arrangement simplifies your configuration by allowing you to analyze data across multiple solutions more easily, and to use the capabilities of workspaces.
### Classic data structure
-The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data isn't stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
+
+The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data isn't stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
> [!NOTE] > The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
The structure of a Log Analytics workspace is described in [Log Analytics worksp
|:|:|:| | availabilityResults | AppAvailabilityResults | Summary data from availability tests.| | browserTimings | AppBrowserTimings | Data about client performance, such as the time taken to process the incoming data.|
-| dependencies | AppDependencies | Calls from the application to other components (including external components) recorded via TrackDependency() – for example, calls to REST API, database or a file system. |
+| dependencies | AppDependencies | Calls from the application to other components (including external components) recorded via `TrackDependency()`. Examples are calls to the REST API or database or a file system. |
| customEvents | AppEvents | Custom events created by your application. | | customMetrics | AppMetrics | Custom metrics created by your application. | | pageViews | AppPageViews| Data about each website view with browser information. |
-| performanceCounters | AppPerformanceCounters | Performance measurements from the compute resources supporting the application, for example, Windows performance counters. |
+| performanceCounters | AppPerformanceCounters | Performance measurements from the compute resources that support the application. An example is Windows performance counters. |
| requests | AppRequests | Requests received by your application. For example, a separate request record is logged for each HTTP request that your web app receives. |
-| exceptions | AppExceptions | Exceptions thrown by the application runtime, captures both server side and client-side (browsers) exceptions. |
-| traces | AppTraces | Detailed logs (traces) emitted through application code/logging frameworks recorded via TrackTrace(). |
+| exceptions | AppExceptions | Exceptions thrown by the application runtime. Captures both server side and client-side (browsers) exceptions. |
+| traces | AppTraces | Detailed logs (traces) emitted through application code/logging frameworks recorded via `TrackTrace()`. |
> [!CAUTION]
-> Do not take a production dependency on the Log Analytics tables, until you see new telemetry records show up directly in Log Analytics. This might take up to 24 hours after the migration process started.
+> Don't take a production dependency on the Log Analytics tables until you see new telemetry records show up directly in Log Analytics. It might take up to 24 hours after the migration process started for records to appear.
### Table schemas
-The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
+The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
-Most of the columns have the same name with different capitalization. Since KQL is case-sensitive, you'll need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it's a workspace-based resource. The new property names are required for when querying from within the context of the Log Analytics workspace experience.
+Most of the columns have the same name with different capitalization. Since KQL is case sensitive, you'll need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it's a workspace-based resource. The new property names are required when you query from within the context of the Log Analytics workspace experience.
#### AppAvailabilityResults
Legacy table: customMetrics
|valueSum|real|ValueSum|real| > [!NOTE]
-> Older versions of Application Insights SDKs used to report standard deviation (valueStdDev) in the metrics pre-aggregation. Due to little adoption in metrics analysis, the field was removed and is no longer aggregated by the SDKs. If the value is received by the Application Insights data collection end point, it gets dropped during ingestion and is not sent to the Log Analytics workspace. If you are interested in using standard deviation in your analysis, we recommend using queries against Application Insights raw events.
+> Older versions of Application Insights SDKs used to report standard deviation (`valueStdDev`) in the metrics pre-aggregation. Because adoption in metrics analysis was light, the field was removed and is no longer aggregated by the SDKs. If the value is received by the Application Insights data collection endpoint, it gets dropped during ingestion and isn't sent to the Log Analytics workspace. If you're interested in using standard deviation in your analysis, we recommend using queries against Application Insights raw events.
#### AppPageViews
Legacy table: traces
## Next steps * [Explore metrics](../essentials/metrics-charts.md)
-* [Write Analytics queries](../logs/log-query-overview.md)
+* [Write Log Analytics queries](../logs/log-query-overview.md)
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs
-description: Provides instructions to wire up OpenCensus Python with Azure Monitor
+description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor.
Last updated 8/19/2022 ms.devlang: python
Azure Monitor supports distributed tracing, metric collection, and logging of Python applications.
-Microsoft's supported solution for tracking and exporting data for your Python applications is through the [Opencensus Python SDK](#introducing-opencensus-python-sdk) via the [Azure Monitor exporters](#instrument-with-opencensus-python-sdk-with-azure-monitor-exporters).
+Microsoft's supported solution for tracking and exporting data for your Python applications is through the [OpenCensus Python SDK](#introducing-opencensus-python-sdk) via the [Azure Monitor exporters](#instrument-with-opencensus-python-sdk-with-azure-monitor-exporters).
-Any other telemetry SDKs for Python are UNSUPPORTED and are NOT recommended by Microsoft to use as a telemetry solution.
+Any other telemetry SDKs for Python *are unsupported and not recommended* by Microsoft as a telemetry solution.
-You may have noted that OpenCensus is converging into [OpenTelemetry](https://opentelemetry.io/). However, we continue to recommend OpenCensus while OpenTelemetry gradually matures.
+OpenCensus is converging into [OpenTelemetry](https://opentelemetry.io/). We continue to recommend OpenCensus while OpenTelemetry gradually matures.
> [!NOTE]
-> A preview [OpenTelemetry-based Python offering](opentelemetry-enable.md?tabs=python) is available. [Learn more](opentelemetry-overview.md).
+> A preview [OpenTelemetry-based Python offering](opentelemetry-enable.md?tabs=python) is available. To learn more, see the [OpenTelemetry overview](opentelemetry-overview.md).
## Prerequisites -- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+You need an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-## Introducing Opencensus Python SDK
+## Introducing OpenCensus Python SDK
-[OpenCensus](https://opencensus.io) is a set of open source libraries to allow collection of distributed tracing, metrics and logging telemetry. Through the use of [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you will be able to send this collected telemetry to Application insights. This article walks you through the process of setting up OpenCensus and Azure Monitor Exporters for Python to send your monitoring data to Azure Monitor.
+[OpenCensus](https://opencensus.io) is a set of open-source libraries to allow collection of distributed tracing, metrics, and logging telemetry. By using [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you can send this collected telemetry to Application Insights. This article walks you through the process of setting up OpenCensus and Azure Monitor exporters for Python to send your monitoring data to Azure Monitor.
## Instrument with OpenCensus Python SDK with Azure Monitor exporters
Install the OpenCensus Azure Monitor exporters:
python -m pip install opencensus-ext-azure ```
-The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They are `trace`, `metrics`, and `logs`. For more information on these telemetry types, see [the data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
+The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They're `trace`, `metrics`, and `logs`. For more information on these telemetry types, see the [Data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
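To make that concrete, here's a minimal, illustrative sketch that wires up all three exporters in one place; the connection string is a placeholder, and the variable names are only for illustration:

```python
import logging

from opencensus.ext.azure import metrics_exporter
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

# Placeholder; use your own Application Insights connection string.
CONNECTION_STRING = "InstrumentationKey=00000000-0000-0000-0000-000000000000"

# logs exporter: log records appear as traces (and exceptions) in Azure Monitor.
logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(connection_string=CONNECTION_STRING))

# trace exporter: spans appear as requests and dependencies in Azure Monitor.
tracer = Tracer(
    exporter=AzureExporter(connection_string=CONNECTION_STRING),
    sampler=ProbabilitySampler(1.0),
)

# metrics exporter: recorded measurements appear as customMetrics in Azure Monitor.
exporter = metrics_exporter.new_metrics_exporter(
    connection_string=CONNECTION_STRING)
```

Each exporter can also be used on its own, as the following sections show.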
## Telemetry type mappings
-Here are the exporters that OpenCensus provides mapped to the types of telemetry that you see in Azure Monitor.
+OpenCensus maps the following exporters to the types of telemetry that you see in Azure Monitor.
| Pillar of observability | Telemetry type in Azure Monitor | Explanation | |-||--|
Here are the exporters that OpenCensus provides mapped to the types of telemetry
90 ```
-1. Although entering values is helpful for demonstration purposes, ultimately we want to emit the log data to Azure Monitor. Pass your connection string directly into the exporter. Or, you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
+1. Entering values is helpful for demonstration purposes, but we want to emit the log data to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
```python import logging
Here are the exporters that OpenCensus provides mapped to the types of telemetry
1. The exporter sends log data to Azure Monitor. You can find the data under `traces`.
- > [!NOTE]
- > In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you'll see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md).
+ In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you'll see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md).
> [!NOTE]
- > The root logger is configured with the level of WARNING. That means any logs that you send that have less of a severity are ignored, and in turn, won't be sent to Azure Monitor. For more information, see [documentation](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel).
+ > The root logger is configured with the level of `warning`. That means any logs that you send that have less severity are ignored, and in turn, won't be sent to Azure Monitor. For more information, see [Logging documentation](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel).
-1. You can also add custom properties to your log messages in the *extra* keyword argument by using the custom_dimensions field. These properties appear as key-value pairs in `customDimensions` in Azure Monitor.
+1. You can also add custom properties to your log messages in the `extra` keyword argument by using the `custom_dimensions` field. These properties appear as key-value pairs in `customDimensions` in Azure Monitor.
> [!NOTE]
- > For this feature to work, you need to pass a dictionary to the custom_dimensions field. If you pass arguments of any other type, the logger ignores them.
+ > For this feature to work, you need to pass a dictionary to the `custom_dimensions` field. If you pass arguments of any other type, the logger ignores them.
```python import logging
Here are the exporters that OpenCensus provides mapped to the types of telemetry
``` > [!NOTE]
-> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. [Learn More](./statsbeat.md).
+> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. To learn more, see [Statsbeat in Application Insights](./statsbeat.md).
#### Configure logging for Django applications
-You can configure logging explicitly in your application code like above for your Django applications, or you can specify it in Django's logging configuration. This code can go into whatever file you use for Django settings configuration. For how to configure Django settings, see [Django settings](https://docs.djangoproject.com/en/4.0/topics/settings/). For more information on configuring logging, see [Django logging](https://docs.djangoproject.com/en/4.0/topics/logging/).
+You can configure logging explicitly in your application code like the preceding for your Django applications, or you can specify it in Django's logging configuration. This code can go into whatever file you use for Django settings configuration. For information on how to configure Django settings, see [Django settings](https://docs.djangoproject.com/en/4.0/topics/settings/). For more information on how to configure logging, see [Django logging](https://docs.djangoproject.com/en/4.0/topics/logging/).
```json LOGGING = {
logger.warning("this will be tracked")
#### Send exceptions
-OpenCensus Python doesn't automatically track and send `exception` telemetry. They're sent through `AzureLogHandler` by using exceptions through the Python logging library. You can add custom properties just like with normal logging.
+OpenCensus Python doesn't automatically track and send `exception` telemetry. It's sent through `AzureLogHandler` by using exceptions through the Python logging library. You can add custom properties like you do with normal logging.
```python import logging
try:
except Exception: logger.exception('Captured an exception.', extra=properties) ```
-Because you must log exceptions explicitly, it's up to the user how they want to log unhandled exceptions. OpenCensus doesn't place restrictions on how a user wants to do this, as long as they explicitly log an exception telemetry.
+
+Because you must log exceptions explicitly, it's up to you how to log unhandled exceptions. OpenCensus doesn't place restrictions on how to do this logging, but you must explicitly log exception telemetry.
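As one possible approach (a sketch, not the only option), you could install a global `sys.excepthook` that forwards uncaught exceptions to `AzureLogHandler`; the hook name and connection string below are placeholders:

```python
import logging
import sys

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
# Placeholder; use your own Application Insights connection string.
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"))

def log_unhandled_exception(exc_type, exc_value, exc_traceback):
    # Passing exc_info makes the handler record this entry as exception telemetry.
    logger.error("Unhandled exception", exc_info=(exc_type, exc_value, exc_traceback))

# Route uncaught exceptions through the logger before the process exits.
sys.excepthook = log_unhandled_exception
```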
#### Send events
-You can send `customEvent` telemetry in exactly the same way that you send `trace` telemetry except by using `AzureEventHandler` instead.
+You can send `customEvent` telemetry in exactly the same way that you send `trace` telemetry, except by using `AzureEventHandler` instead.
```python import logging
logger.info('Hello, World!')
#### Sampling
-For information on sampling in OpenCensus, take a look at [sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
+For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
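As a quick, hedged illustration of the fixed-rate sampling described there, the sampler is passed to the tracer alongside the exporter; a rate of `0.1` sends roughly 10% of traces, and the connection string is a placeholder:

```python
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

tracer = Tracer(
    # Placeholder; use your own Application Insights connection string.
    exporter=AzureExporter(
        connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"),
    # Sample roughly 10% of traces; 1.0 would send everything.
    sampler=ProbabilitySampler(rate=0.1),
)
```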
#### Log correlation
-For details on how to enrich your logs with trace context data, see OpenCensus Python [logs integration](./correlation.md#log-correlation).
+For information on how to enrich your logs with trace context data, see OpenCensus Python [logs integration](./correlation.md#log-correlation).
#### Modify telemetry
-For details on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
-
+For information on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
### Metrics
-OpenCensus.stats supports 4 aggregation methods but provides partial support for Azure Monitor:
+OpenCensus.stats supports four aggregation methods, but the Azure Monitor exporter supports only three of them (see the sketch after this list):
-- **Count:** The count of the number of measurement points. The value is cumulative, can only increase and resets to 0 on restart. -- **Sum:** A sum up of the measurement points. The value is cumulative, can only increase and resets to 0 on restart. -- **LastValue:** Keeps the last recorded value, drops everything else.-- **Distribution:** Histogram distribution of the measurement points. This method is **NOT supported by the Azure Exporter**.
+- **Count**: The count of the number of measurement points. The value is cumulative, can only increase, and resets to 0 on restart.
+- **Sum**: A sum up of the measurement points. The value is cumulative, can only increase, and resets to 0 on restart.
+- **LastValue**: Keeps the last recorded value and drops everything else.
+- **Distribution**: Histogram distribution of the measurement points. *This method is not supported by the Azure exporter*.
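The following sketch, which uses placeholder names such as `prompt_measure`, shows how each method maps to an aggregation class in `opencensus.stats` when you define a view:

```python
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import view as view_module

# Placeholder measure; any measure can be paired with any aggregation method.
prompt_measure = measure_module.MeasureInt("prompts", "number of prompts", "prompts")

count_aggregation = aggregation_module.CountAggregation()                # Count
sum_aggregation = aggregation_module.SumAggregation()                    # Sum
last_value_aggregation = aggregation_module.LastValueAggregation()       # LastValue
distribution_aggregation = aggregation_module.DistributionAggregation()  # Distribution (not supported by the Azure exporter)

# A view ties one measure to one aggregation method.
prompt_view = view_module.View(
    "prompt view", "number of prompts", [], prompt_measure, count_aggregation)
```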
-### Count Aggregation example
+### Count aggregation example
-1. First, let's generate some local metric data. We'll create a simple metric to track the number of times the user selects the **Enter** key.
+1. First, let's generate some local metric data. We'll create a metric to track the number of times the user selects the **Enter** key.
```python from datetime import datetime
OpenCensus.stats supports 4 aggregation methods but provides partial support for
Point(value=ValueLong(7), timestamp=2019-10-09 20:58:07.138614) ```
-1. Although entering values is helpful for demonstration purposes, ultimately we want to emit the metric data to Azure Monitor. Pass your connection string directly into the exporter. Or, you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
+1. Entering values is helpful for demonstration purposes, but we want to emit the metric data to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
```python from datetime import datetime
OpenCensus.stats supports 4 aggregation methods but provides partial support for
main() ```
-1. The exporter sends metric data to Azure Monitor at a fixed interval. The default is every 15 seconds. To modify the export interval, pass in `export_interval` as a parameter in seconds to `new_metrics_exporter()`. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The value is cumulative, can only increase and resets to 0 on restart. You can find the data under `customMetrics`, but `customMetrics` properties valueCount, valueSum, valueMin, valueMax, and valueStdDev are not effectively used.
+1. The exporter sends metric data to Azure Monitor at a fixed interval. The default is every 15 seconds. To modify the export interval, pass in `export_interval` as a parameter in seconds to `new_metrics_exporter()`. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The value is cumulative, can only increase, and resets to 0 on restart.
-### Setting custom dimensions in metrics
+ You can find the data under `customMetrics`, but the `customMetrics` properties `valueCount`, `valueSum`, `valueMin`, `valueMax`, and `valueStdDev` aren't effectively used.
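For example, the following sketch passes `export_interval` when the exporter is created; the connection string is a placeholder, and 60 seconds is an arbitrary value chosen for illustration:

```python
from opencensus.ext.azure import metrics_exporter

exporter = metrics_exporter.new_metrics_exporter(
    # Placeholder; use your own Application Insights connection string.
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000",
    # Send metrics every 60 seconds instead of the default 15 seconds.
    export_interval=60,
)
```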
-Opencensus Python SDK allows adding custom dimensions to your metrics telemetry by the way of `tags`, which are essentially a dictionary of key/value pairs.
+### Set custom dimensions in metrics
+
+The OpenCensus Python SDK allows you to add custom dimensions to your metrics telemetry by using `tags`, which are like a dictionary of key-value pairs.
1. Insert the tags that you want to use into the tag map. The tag map acts like a sort of "pool" of all available tags you can use.
Opencensus Python SDK allows adding custom dimensions to your metrics telemetry
... ```
-1. For a specific `View`, specify the tags you want to use when recording metrics with that view via the tag key.
+1. For a specific `View`, specify the tags you want to use when you're recording metrics with that view via the tag key.
```python ...
Opencensus Python SDK allows adding custom dimensions to your metrics telemetry
... ```
-1. Be sure to use the tag map when recording in the measurement map. The tag keys that are specified in the `View` must be found in the tag map used to record.
+1. Be sure to use the tag map when you're recording in the measurement map. The tag keys that are specified in the `View` must be found in the tag map used to record.
```python ...
Opencensus Python SDK allows adding custom dimensions to your metrics telemetry
... ```
-1. Under the `customMetrics` table, all metrics records emitted using the `prompt_view` will have custom dimensions `{"url":"http://example.com"}`.
+1. Under the `customMetrics` table, all metrics records emitted by using `prompt_view` will have custom dimensions `{"url":"http://example.com"}`.
-1. To produce tags with different values using the same keys, create new tag maps for them.
+1. To produce tags with different values by using the same keys, create new tag maps for them.
```python ...
Opencensus Python SDK allows adding custom dimensions to your metrics telemetry
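
Putting the preceding steps together, the following minimal sketch records a counter with a custom `url` dimension. The measure and view names mirror the samples above and are illustrative only.

```python
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module
from opencensus.tags import tag_key as tag_key_module
from opencensus.tags import tag_map as tag_map_module
from opencensus.tags import tag_value as tag_value_module

stats = stats_module.stats
view_manager = stats.view_manager
stats_recorder = stats.stats_recorder

prompt_measure = measure_module.MeasureInt("prompts", "number of prompts", "prompts")
url_key = tag_key_module.TagKey("url")

# The view lists "url" as a column, so recorded points are broken down by that tag.
prompt_view = view_module.View("prompt view", "number of prompts", [url_key],
                               prompt_measure, aggregation_module.CountAggregation())
view_manager.register_view(prompt_view)

# The tag map acts as the "pool" of tags available when recording.
tag_map = tag_map_module.TagMap()
tag_map.insert(url_key, tag_value_module.TagValue("http://example.com"))

measurement_map = stats_recorder.new_measurement_map()
measurement_map.measure_int_put(prompt_measure, 1)
measurement_map.record(tag_map)  # the point carries the custom dimension {"url": "http://example.com"}
```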
#### Performance counters
-By default, the metrics exporter sends a set of performance counters to Azure Monitor. You can disable this by setting the `enable_standard_metrics` flag to `False` in the constructor of the metrics exporter.
+By default, the metrics exporter sends a set of performance counters to Azure Monitor. You can disable this capability by setting the `enable_standard_metrics` flag to `False` in the constructor of the metrics exporter.
```python ...
exporter = metrics_exporter.new_metrics_exporter(
... ```
-These performance counters are currently sent:
+The following performance counters are currently sent:
- Available Memory (bytes) - CPU Processor Time (percentage)
These performance counters are currently sent:
- Process CPU Usage (percentage) - Process Private Bytes (bytes)
-You should be able to see these metrics in `performanceCounters`. For more information, see [performance counters](./performance-counters.md).
+You should be able to see these metrics in `performanceCounters`. For more information, see [Performance counters](./performance-counters.md).
#### Modify telemetry
For information on how to modify tracked telemetry before it's sent to Azure Mon
### Tracing > [!NOTE]
-> In OpenCensus, `tracing` refers to [distributed tracing](./distributed-tracing.md). The `AzureExporter` sends `requests` and `dependency` telemetry to Azure Monitor.
+> In OpenCensus, `tracing` refers to [distributed tracing](./distributed-tracing.md). The `AzureExporter` class sends `requests` and `dependency` telemetry to Azure Monitor.
1. First, let's generate some trace data locally. In Python IDLE, or your editor of choice, enter the following code:
For information on how to modify tracked telemetry before it's sent to Azure Mon
main() ```
-1. Running the code repeatedly prompts you to enter a value. With each entry, the value is printed to the shell. The OpenCensus Python Module generates a corresponding piece of `SpanData`. The OpenCensus project defines a [trace as a tree of spans](https://opencensus.io/core-concepts/tracing/).
+1. Running the code repeatedly prompts you to enter a value. With each entry, the value is printed to the shell. The OpenCensus Python module generates a corresponding piece of `SpanData`. The OpenCensus project defines a [trace as a tree of spans](https://opencensus.io/core-concepts/tracing/).
```output Enter a value: 4
For information on how to modify tracked telemetry before it's sent to Azure Mon
[SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='f3f9f9ee6db4740a', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:46.157732Z', end_time='2019-06-27T18:21:47.269583Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)] ```
-1. Although entering values is helpful for demonstration purposes, ultimately we want to emit `SpanData` to Azure Monitor. Pass your connection string directly into the exporter. Or, you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
+1. Entering values is helpful for demonstration purposes, but we want to emit `SpanData` to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
```python from opencensus.ext.azure.trace_exporter import AzureExporter
For information on how to modify tracked telemetry before it's sent to Azure Mon
main() ```
-1. Now when you run the Python script, you should still be prompted to enter values, but only the value is being printed in the shell. The created `SpanData` is sent to Azure Monitor. You can find the emitted span data under `dependencies`. For more information about outgoing requests, see OpenCensus Python [dependencies](./opencensus-python-dependency.md).
-For more information on incoming requests, see OpenCensus Python [requests](./opencensus-python-request.md).
+1. Now when you run the Python script, you should still be prompted to enter values, but only the value is being printed in the shell. The created `SpanData` is sent to Azure Monitor. You can find the emitted span data under `dependencies`.
+
+ For more information about outgoing requests, see OpenCensus Python [dependencies](./opencensus-python-dependency.md). For more information on incoming requests, see OpenCensus Python [requests](./opencensus-python-request.md).
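
   As a minimal sketch, spans created inside nested `tracer.span` blocks are exported to Azure Monitor and appear under `dependencies`. The connection string is a placeholder; prefer the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable.

   ```python
   from opencensus.ext.azure.trace_exporter import AzureExporter
   from opencensus.trace.samplers import ProbabilitySampler
   from opencensus.trace.tracer import Tracer

   tracer = Tracer(
       exporter=AzureExporter(connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"),  # placeholder
       sampler=ProbabilitySampler(1.0),
   )

   with tracer.span(name="parent"):
       with tracer.span(name="child"):
           print("doing some work")  # each span becomes SpanData that's sent to Azure Monitor
   ```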
#### Sampling
-For information on sampling in OpenCensus, take a look at [sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
+For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
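
For a quick illustration, a fixed-rate sampler can be created and passed to the tracer's `sampler` argument. The rate shown here is only an example.

```python
from opencensus.trace.samplers import ProbabilitySampler

# Sample roughly 25% of traces; unsampled spans aren't exported.
sampler = ProbabilitySampler(rate=0.25)
```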
#### Trace correlation
-For more information on telemetry correlation in your trace data, take a look at OpenCensus Python [telemetry correlation](./correlation.md#telemetry-correlation-in-opencensus-python).
+For more information on telemetry correlation in your trace data, see OpenCensus Python [telemetry correlation](./correlation.md#telemetry-correlation-in-opencensus-python).
#### Modify telemetry
For more information on how to modify tracked telemetry before it's sent to Azur
## Configure Azure Monitor exporters
-As shown, there are three different Azure Monitor exporters that support OpenCensus. Each one sends different types of telemetry to Azure Monitor. To see what types of telemetry each exporter sends, see the following list.
+As shown, there are three different Azure Monitor exporters that support OpenCensus. Each one sends different types of telemetry to Azure Monitor. To see what types of telemetry each exporter sends, see the following table.
-Each exporter accepts the same arguments for configuration, passed through the constructors. You can see details about each one here:
+Each exporter accepts the same arguments for configuration, passed through the constructors. You can see information about each one here:
-- `connection_string`: The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.
-- `credential`: Credential class used by AAD authentication. See `Authentication` section below.
-- `enable_standard_metrics`: Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.
-- `export_interval`: Used to specify the frequency in seconds of exporting. Defaults to 15s.
-- `grace_period`: Used to specify the timeout for shutdown of exporters in seconds. Defaults to 5s.
-- `instrumentation_key`: The instrumentation key used to connect to your Azure Monitor resource.
-- `logging_sampling_rate`: Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to 1.0.
-- `max_batch_size`: Specifies the maximum size of telemetry that's exported at once.
-- `proxies`: Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).
-- `storage_path`: A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is $USER + `.opencensus` + `.azure` + `python-file-name`.
-- `timeout`: Specifies the networking timeout to send telemetry to the ingestion service in seconds. Defaults to 10s.
+|Configuration parameter|Description|
+|:--|:--|
+|`connection_string`| The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.|
+|`credential`| Credential class used by Azure Active Directory authentication. See the "Authentication" section that follows.|
+|`enable_standard_metrics`| Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.|
+|`export_interval`| Used to specify the frequency in seconds of exporting. Defaults to 15 seconds.|
+|`grace_period`| Used to specify the timeout for shutdown of exporters in seconds. Defaults to 5 seconds.|
+|`instrumentation_key`| The instrumentation key used to connect to your Azure Monitor resource.|
+|`logging_sampling_rate`| Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to `1.0`.|
+|`max_batch_size`| Specifies the maximum size of telemetry that's exported at once.|
+|`proxies`| Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).|
+|`storage_path`| A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is `$USER` + `.opencensus` + `.azure` + `python-file-name`.|
+|`timeout`| Specifies the networking timeout to send telemetry to the ingestion service in seconds. Defaults to 10 seconds.|
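
As a sketch of how several of these arguments are passed, the same keyword style works across the exporters. The values below are placeholders chosen for illustration.

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000",  # placeholder
    export_interval=30,          # seconds between exports; the default is 15
    logging_sampling_rate=0.5,   # export roughly half of the log records
))
```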
## Integrate with Azure Functions
-Users who want to capture custom telemetry in Azure Functions environments are encouraged to used the OpenCensus Python Azure Functions [extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure/tree/main/extensions/functions#opencensus-python-azure-functions-extension). More details can be found in this [document](../../azure-functions/functions-reference-python.md#log-custom-telemetry).
+To capture custom telemetry in Azure Functions environments, use the OpenCensus Python Azure Functions [extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure/tree/main/extensions/functions#opencensus-python-azure-functions-extension). For more information, see the [Azure Functions Python developer guide](../../azure-functions/functions-reference-python.md#log-custom-telemetry).
## Authentication (preview)+ > [!NOTE]
-> Authentication feature is available starting from `opencensus-ext-azure` v1.1b0
+> The authentication feature is available starting from `opencensus-ext-azure` v1.1b0.
-Each of the Azure Monitor exporters supports configuration of securely sending telemetry payloads via OAuth authentication with Azure Active Directory (AAD).
-For more information, check out the [Authentication](./azure-ad-authentication.md) documentation.
+Each of the Azure Monitor exporters supports configuration of securely sending telemetry payloads via OAuth authentication with Azure Active Directory. For more information, see the [Authentication documentation](./azure-ad-authentication.md).
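
For example, here's a minimal sketch that passes a `credential` to the trace exporter by using the `azure-identity` package, assuming a system-assigned managed identity is available to the application.

```python
from azure.identity import ManagedIdentityCredential
from opencensus.ext.azure.trace_exporter import AzureExporter

exporter = AzureExporter(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000",  # placeholder
    credential=ManagedIdentityCredential(),  # token-based (Azure AD) authentication
)
```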
## View your data with queries You can view the telemetry data that was sent from your application through the **Logs (Analytics)** tab.
-![Screenshot of the overview pane with "Logs (Analytics)" selected in a red box](./media/opencensus-python/0010-logs-query.png)
+![Screenshot of the Overview pane with the Logs (Analytics) tab selected.](./media/opencensus-python/0010-logs-query.png)
In the list under **Active**:
In the list under **Active**:
- For telemetry sent with the Azure Monitor metrics exporter, sent metrics appear under `customMetrics`. - For telemetry sent with the Azure Monitor logs exporter, logs appear under `traces`. Exceptions appear under `exceptions`.
-For more detailed information about how to use queries and logs, see [Logs in Azure Monitor](../logs/data-platform-logs.md).
+For more information about how to use queries and logs, see [Logs in Azure Monitor](../logs/data-platform-logs.md).
## Learn more about OpenCensus for Python * [OpenCensus Python on GitHub](https://github.com/census-instrumentation/opencensus-python) * [Customization](https://github.com/census-instrumentation/opencensus-python/blob/master/README.rst#customization)
-* [Azure Monitor Exporters on GitHub](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)
-* [OpenCensus Integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
-* [Azure Monitor Sample Applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
+* [Azure Monitor exporters on GitHub](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)
+* [OpenCensus integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
+* [Azure Monitor sample applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
## Troubleshooting
For more detailed information about how to use queries and logs, see [Logs in Az
## Next steps * [Tracking incoming requests](./opencensus-python-dependency.md)
-* [Tracking out-going requests](./opencensus-python-request.md)
+* [Tracking outgoing requests](./opencensus-python-request.md)
* [Application map](./app-map.md) * [End-to-end performance monitoring](../app/tutorial-performance.md)
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: OpenTelemetry with Azure Monitor overview
-description: Provides an overview of how to use OpenTelemetry with Azure Monitor.
+description: This article provides an overview of how to use OpenTelemetry with Azure Monitor.
Last updated 10/11/2021
# OpenTelemetry overview
-Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, have asked for vendor-neutral instrumentation, and we're delighted to partner with the OpenTelemetry community to create consistent APIs/SDKs across languages.
+Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, have asked for vendor-neutral instrumentation, and we're pleased to partner with the OpenTelemetry community to create consistent APIs and SDKs across languages.
-Microsoft worked together with project stakeholders from two previously popular open-source telemetry projects, [OpenCensus](https://opencensus.io/) and [OpenTracing](https://opentracing.io/), to help create a single project--OpenTelemetry. OpenTelemetry includes contributions from all major cloud and Application Performance Management (APM) vendors and lives within the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) of which Microsoft is a Platinum Member.
+Microsoft worked with project stakeholders from two previously popular open-source telemetry projects, [OpenCensus](https://opencensus.io/) and [OpenTracing](https://opentracing.io/). Together, we helped to create a single project, OpenTelemetry. OpenTelemetry includes contributions from all major cloud and Application Performance Management (APM) vendors and lives within the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/). Microsoft is a Platinum Member of the CNCF.
## Concepts Telemetry, the data collected to observe your application, can be broken into three types or "pillars":
-1. Distributed Tracing
-2. Metrics
-3. Logs
-Initially the OpenTelemetry community took on Distributed Tracing. Metrics and Logs are still in progress. A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter **preview** offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) **only include Distributed Tracing**.
+- Distributed Tracing
+- Metrics
+- Logs
-There are several sources that explain the three pillars in detail including the [OpenTelemetry community website](https://opentelemetry.io/docs/concepts/data-collection/), [OpenTelemetry Specifications](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md), and [Distributed Systems Observability](https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html) by Cindy Sridharan.
+Initially, the OpenTelemetry community took on Distributed Tracing. Metrics and Logs are still in progress. A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include Distributed Tracing.
+
+The following sources explain the three pillars:
+
+- [OpenTelemetry community website](https://opentelemetry.io/docs/concepts/data-collection/)
+- [OpenTelemetry specifications](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md)
+- [Distributed Systems Observability](https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html) by Cindy Sridharan
In the following sections, we'll cover some telemetry collection basics.
-### Instrumenting your application
+### Instrument your application
-At a basic level, "instrumenting" is simply enabling an application to capture telemetry.
+At a basic level, "instrumenting" is simply enabling an application to capture telemetry.
There are two methods to instrument your application:
-1. Manual Instrumentation
-2. Automatic Instrumentation (Auto-Instrumentation)
-Manual instrumentation is coding against the OpenTelemetry API. In the context of an end user, it typically refers to installing a language-specific SDK in an application. Manual instrumentation packages consist of our [Azure Monitor OpenTelemetry-based exporter **preview** offerings for .NET, Python, and JavaScript](opentelemetry-enable.md).
+- Manual instrumentation
+- Automatic instrumentation (auto-instrumentation)
+
+Manual instrumentation is coding against the OpenTelemetry API. In the context of a user, it typically refers to installing a language-specific SDK in an application. Manual instrumentation packages consist of [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md).
> [!IMPORTANT]
-> "Manual" does **NOT** mean you'll be required to write complex code to define spans for distributed traces (though it remains an option). A rich and growing set of instrumentation libraries maintained by OpenTelemetry contributors will enable you to effortlessly capture telemetry signals across common frameworks and libraries. A subset of OpenTelemetry Instrumentation Libraries will be supported by Azure Monitor, informed by customer feedback. Additionally, we are working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/).
+> "Manual" doesn't mean you'll be required to write complex code to define spans for distributed traces, although it remains an option. A rich and growing set of instrumentation libraries maintained by OpenTelemetry contributors will enable you to effortlessly capture telemetry signals across common frameworks and libraries.
+>
+> A subset of OpenTelemetry instrumentation libraries will be supported by Azure Monitor, informed by customer feedback. We're also working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/).
+
+Auto-instrumentation enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. The Azure Monitor OpenTelemetry-based auto-instrumentation offering consists of the [Java 3.X OpenTelemetry-based GA offering](java-in-process-agent.md). We continue to invest in it informed by customer feedback. The OpenTelemetry community is also experimenting with C# and Python auto-instrumentation, but Azure Monitor is focused on creating a simple and effective manual instrumentation story in the near term.
-On the other hand, auto-instrumentation is enabling telemetry collection through configuration without touching the application's code. While more convenient, it tends to be less configurable and it's not available in all languages. Azure Monitor's OpenTelemetry-based auto-instrumentation offering consists of the [Java 3.X OpenTelemetry-based GA offering](java-in-process-agent.md), and we continue to invest in it informed by customer feedback. The OpenTelemetry community is also experimenting with C# and Python auto-instrumentation, but Azure Monitor is focused on creating a simple and effective manual instrumentation story in the near-term.
+### Send your telemetry
-### Sending your telemetry
+There are two ways to send your data to Azure Monitor (or any vendor):
-There are also two ways to send your data to Azure Monitor (or any vendor).
-1. Direct Exporter
-2. Via an Agent
+- Via a direct exporter
+- Via an agent
-A direct exporter sends telemetry in-process (from the application's code) directly to Azure Monitor's ingestion endpoint. The main advantage of this approach is onboarding simplicity.
+A direct exporter sends telemetry in-process (from the application's code) directly to the Azure Monitor ingestion endpoint. The main advantage of this approach is onboarding simplicity.
-**All Azure Monitor's currently supported OpenTelemetry-based offerings use a direct exporter**.
+*All currently supported OpenTelemetry-based offerings in Azure Monitor use a direct exporter*.
-Alternatively, sending telemetry via an agent will provide a path for any OpenTelemetry supported language to send to Azure Monitor via [OTLP](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md). Receiving OTLP will enable customers to observe applications written in languages beyond our [supported languages](platforms.md).
+Alternatively, sending telemetry via an agent will provide a path for any OpenTelemetry-supported language to send to Azure Monitor via the [OpenTelemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md). Receiving OTLP will enable customers to observe applications written in languages beyond our [supported languages](platforms.md).
> [!NOTE]
-> Some customers have begun to use the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md) as an agent alternative even though Microsoft doesn't officially support "Via an Agent" approach for application monitoring yet. In the meantime, the open source community has contributed an OpenTelemetry-Collector Azure Monitor Exporter that some customers are using to send data to Azure Monitor Application Insights.
+> Some customers have begun to use the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md) as an agent alternative even though Microsoft doesn't officially support the "via an agent" approach for application monitoring yet. In the meantime, the open-source community has contributed an OpenTelemetry-Collector Azure Monitor exporter that some customers are using to send data to Azure Monitor Application Insights.
## Terms
-See [glossary](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/glossary.md) in the OpenTelemetry specifications.
+For terminology, see the [glossary](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/glossary.md) in the OpenTelemetry specifications.
-Some legacy terms in Application Insights are confusing given the industry convergence on OpenTelemetry. The table below highlights these differences. Eventually Application Insights terms will be replaced by OpenTelemetry terms.
+Some legacy terms in Application Insights are confusing because of the industry convergence on OpenTelemetry. The following table highlights these differences. Eventually, Application Insights terms will be replaced by OpenTelemetry terms.
Application Insights | OpenTelemetry |
-Auto-Collectors | Instrumentation Libraries
-Channel | Exporter
-Codeless / Agent-based | Auto-Instrumentation
+Auto-collectors | Instrumentation libraries
+Channel | Exporter
+Codeless / Agent-based | Auto-instrumentation
Traces | Logs
+## Next steps
-## Next step
+The following websites consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. The available functionality and limitations of each offering are explained so that you can determine whether OpenTelemetry is right for your project.
-The following pages consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project.
-- [.NET](opentelemetry-enable.md)
+- [.NET](opentelemetry-enable.md)
- [Java](java-in-process-agent.md) - [JavaScript](opentelemetry-enable.md) - [Python](opentelemetry-enable.md)
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
Title: Azure Application Insights Overview Dashboard | Microsoft Docs
-description: Monitor applications with Azure Application Insights and Overview Dashboard functionality.
+ Title: Application Insights Overview dashboard | Microsoft Docs
+description: Monitor applications with Application Insights and Overview dashboard functionality.
Last updated 06/03/2019 # Application Insights Overview dashboard
-Application Insights has always provided a summary overview pane to allow quick, at-a-glance assessment of your application's health and performance. The new overview dashboard provides a faster more flexible experience.
+Application Insights has always provided a summary overview pane to allow quick, at-a-glance assessment of your application's health and performance. The new **Overview** dashboard provides a faster, more flexible experience.
## How do I test out the new experience?
-The new overview dashboard now launches by default:
+The new **Overview** dashboard now launches by default.
-![Overview Preview Pane](./media/overview-dashboard/overview.png)
+![Screenshot that shows the Overview preview pane.](./media/overview-dashboard/overview.png)
## Better performance Time range selection has been simplified to a simple one-click interface.
-![Time range](./media/overview-dashboard/app-insights-overview-dashboard-03.png)
+![Screenshot that shows the time range.](./media/overview-dashboard/app-insights-overview-dashboard-03.png)
-Overall performance has been greatly increased. You have one-click access to popular features like **Search** and **Analytics**. Each default dynamically updating KPI tile provides insight into corresponding Application Insights features. To learn more about failed requests select **Failures** under the **Investigate** header:
+Overall performance has been greatly increased. You have one-click access to popular features like **Search** and **Analytics**. Each default dynamically updating KPI tile provides insight into corresponding Application Insights features. To learn more about failed requests, under **Investigate**, select **Failures**.
-![Failures](./media/overview-dashboard/app-insights-overview-dashboard-04.png)
+![Screenshot that shows failures.](./media/overview-dashboard/app-insights-overview-dashboard-04.png)
## Application dashboard
-Application dashboard leverages the existing dashboard technology within Azure to provide a fully customizable single pane view of your application health and performance.
+The application dashboard uses the existing dashboard technology within Azure to provide a fully customizable single pane view of your application health and performance.
-To access the default dashboard select _Application Dashboard_ in the upper left corner.
+To access the default dashboard, select **Application Dashboard** in the upper-left corner.
-![Screenshot shows the Application Dashboard button highlighted.](./media/overview-dashboard/app-insights-overview-dashboard-05.png)
+![Screenshot that shows the Application Dashboard button.](./media/overview-dashboard/app-insights-overview-dashboard-05.png)
-If this is your first time accessing the dashboard, it will launch a default view:
+If this is your first time accessing the dashboard, it opens a default view.
-![Dashboard view](./media/overview-dashboard/0001-dashboard.png)
+![Screenshot that shows the Dashboard view.](./media/overview-dashboard/0001-dashboard.png)
-You can keep the default view if you like it. Or you can also add, and delete from the dashboard to best fit the needs of your team.
+You can keep the default view if you like it. Or you can also add and delete from the dashboard to best fit the needs of your team.
> [!NOTE]
-> All users with access to the Application Insights resource share the same Application dashboard experience. Changes made by one user will modify the view for all users.
+> All users with access to the Application Insights resource share the same **Application Dashboard** experience. Changes made by one user will modify the view for all users.
-To navigate back to the overview experience just select:
+To go back to the overview experience, select the **Overview** button.
-![Overview Button](./media/overview-dashboard/app-insights-overview-dashboard-07.png)
+![Screenshot that shows the Overview button.](./media/overview-dashboard/app-insights-overview-dashboard-07.png)
## Troubleshooting
-There is currently a limit of 30 days of data for data displayed in a dashboard.If you select a time filter beyond 30 days, or if you select **Configure tile settings** and set a custom time range in excess of 30 days your dashboard will not display beyond 30 days of data, even with the default data retention of 90 days. There is currently no workaround for this behavior.
+Currently, there's a limit of 30 days of data displayed in a dashboard. If you select a time filter beyond 30 days, or if you select **Configure tile settings** and set a custom time range in excess of 30 days, your dashboard won't display beyond 30 days of data. This is the case even with the default data retention of 90 days. There's currently no workaround for this behavior.
-The default Application Dashboard is created during Application Insights resource creation. If you move or rename your Application Insights instance, then queries on the dashboard will fail with Resource not found errors as the dashboard queries rely on the original resource URI. Delete the default dashboard, then from the Application Insights Overview resource menu select Application Dashboard again and the default dashboard will be re-created with the new resource name. Make other custom edits to the dashboard as needed.
+The default **Application Dashboard** is created during Application Insights resource creation. If you move or rename your Application Insights instance, queries on the dashboard will fail with "Resource not found" errors because the dashboard queries rely on the original resource URI. Delete the default dashboard. On the Application Insights **Overview** resource menu, select **Application Dashboard** again. The default dashboard will be re-created with the new resource name. Make other custom edits to the dashboard as needed.
## Next steps - [Funnels](./usage-funnels.md) - [Retention](./usage-retention.md)-- [User Flows](./usage-flows.md)-
+- [User flows](./usage-flows.md)
azure-monitor Tutorial Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-alert.md
Title: Send alerts from Azure Application Insights | Microsoft Docs
-description: Tutorial to send alerts in response to errors in your application using Azure Application Insights.
+description: Tutorial shows how to send alerts in response to errors in your application by using Application Insights.
Last updated 04/10/2019
-# Monitor and alert on application health with Azure Application Insights
+# Monitor and alert on application health with Application Insights
-Azure Application Insights allows you to monitor your application and send you alerts when it is either unavailable, experiencing failures, or suffering from performance issues. This tutorial takes you through the process of creating tests to continuously check the availability of your application.
+Application Insights allows you to monitor your application and sends you alerts when it's unavailable, experiencing failures, or suffering from performance issues. This tutorial takes you through the process of creating tests to continuously check the availability of your application.
-You learn how to:
+You'll learn how to:
> [!div class="checklist"]
-> * Create availability test to continuously check the response of the application
-> * Send mail to administrators when a problem occurs
+> * Create availability tests to continuously check the response of the application.
+> * Send mail to administrators when a problem occurs.
## Prerequisites
-To complete this tutorial:
-
-Create an [Application Insights resource](../app/create-new-resource.md).
+To complete this tutorial, create an [Application Insights resource](../app/create-new-resource.md).
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create availability test
-Availability tests in Application Insights allow you to automatically test your application from various locations around the world. In this tutorial, you will perform a url test to ensure that your web application is available. You could also create a complete walkthrough to test its detailed operation.
+Availability tests in Application Insights allow you to automatically test your application from various locations around the world. In this tutorial, you'll perform a URL test to ensure that your web application is available. You could also create a complete walkthrough to test its detailed operation.
+
+1. Select **Application Insights** and then select your subscription.
-1. Select **Application Insights** and then select your subscription.
+1. Under the **Investigate** menu, select **Availability**. Then select **Create test**.
-2. Select **Availability** under the **Investigate** menu and then click **Create test**.
+ ![Screenshot that shows adding an availability test.](media/tutorial-alert/add-test-001.png)
- ![Add availability test](media/tutorial-alert/add-test-001.png)
+1. Enter a name for the test and leave the other defaults. This selection will trigger requests for the application URL every 5 minutes from five different geographic locations.
-3. Type in a name for the test and leave the other defaults. This selection will trigger requests for the application url every 5 minutes from five different geographic locations.
+1. Select **Alerts** to open the **Alerts** dropdown where you can define details for how to respond if the test fails. Choose **Near-realtime** and set the status to **Enabled.**
-4. Select **Alerts** to open the **Alerts** dropdown where you can define details for how to respond if the test fails. Choose **Near-realtime** and set the status to **Enabled.**
+ Enter an email address to send alerts to when the alert criteria are met. Optionally, you can enter the address of a webhook to call when the alert criteria are met.
- Type in an email address to send when the alert criteria is met. You could optionally type in the address of a webhook to call when the alert criteria is met.
+ ![Screenshot that shows creating a test.](media/tutorial-alert/create-test-001.png)
- ![Create test](media/tutorial-alert/create-test-001.png)
+1. Return to the test panel, select the ellipses, and edit the alert to enter the configuration for your near-realtime alert.
-5. Return to the test panel, select the ellipses and edit alert to enter the configuration for your near-realtime alert.
+ ![Screenshot that shows editing an alert.](media/tutorial-alert/edit-alert-001.png)
- ![Edit alert](media/tutorial-alert/edit-alert-001.png)
+1. Set failed locations to greater than or equal to 3. Create an [action group](../alerts/action-groups.md) to configure who gets notified when your alert threshold is breached.
-6. Set failed locations to greater than or equal to 3. Create an [action group](../alerts/action-groups.md) to configure who gets notified when your alert threshold is breached.
+ ![Screenshot that shows saving alert UI.](media/tutorial-alert/save-alert-001.png)
- ![Save alert UI](media/tutorial-alert/save-alert-001.png)
+1. After you've configured your alert, select the test name to view details from each location. Tests can be viewed in both line graph and scatter plot format to visualize the successes and failures for a given time range.
-7. Once you have configured your alert, click on the test name to view details from each location. Tests can be viewed in both line graph and scatter plot format to visualize the success/failures for a given time range.
+ ![Screenshot that shows test details.](media/tutorial-alert/test-details-001.png)
- ![Test details](media/tutorial-alert/test-details-001.png)
+1. To see the details of any test, select its dot in the scatter chart to open the **End-to-end transaction details** screen. The following example shows the details for a failed request.
-8. You can drill down into the details of any test by clicking on its dot in the scatter chart. This will launch the end-to-end transaction details view. The example below shows the details for a failed request.
+ ![Screenshot that shows test results.](media/tutorial-alert/test-result-001.png)
- ![Test result](media/tutorial-alert/test-result-001.png)
-
## Next steps Now that you've learned how to alert on issues, advance to the next tutorial to learn how to analyze how users are interacting with your application. > [!div class="nextstepaction"] > [Understand users](./tutorial-users.md)-
azure-monitor Container Insights Prometheus Metrics Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-metrics-addon.md
This article describes how to configure Container insights to send Prometheus me
## Prerequisites -- The cluster must be [onboarded to Container insights](container-insights-enable-aks.md). - The cluster must use [managed identity authentication](container-insights-enable-aks.md#migrate-to-managed-identity-authentication). - The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor Workspace. - Microsoft.ContainerService
Use any of the following methods to install the metrics addon on your cluster an
Managed Prometheus can be enabled in the Azure portal through either Container insights or an Azure Monitor workspace.
+### Prerequisites
+
+- The cluster must be [onboarded to Container insights](container-insights-enable-aks.md).
+ #### Enable from Container insights 1. Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster.
Use the following procedure to install the Azure Monitor agent and the metrics a
#### Prerequisites - Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.-- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](https://learn.microsoft.com/cli/azure/azure-cli-extensions-overview).
+- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
- Azure CLI version 2.41.0 or higher is required for this feature. #### Install metrics addon
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
For more functionality, create a diagnostic setting to send the activity log to
For details on how to create a diagnostic setting, see [Create diagnostic settings to send platform logs and metrics to different destinations](./diagnostic-settings.md). > [!NOTE]
-> Entries in the activity log are system generated and can't be changed or deleted.
+> * Entries in the Activity Log are system generated and can't be changed or deleted.
+> * Entries in the Activity Log represent control plane changes, like a virtual machine restart. Any entries that don't represent control plane changes should be written to [Azure resource logs](https://learn.microsoft.com/azure/azure-monitor/essentials/resource-logs).
## Retention period
Log profiles are the legacy method for sending the activity log to storage or ev
#### [PowerShell](#tab/powershell)
-If a log profile already exists, you first must remove the existing log profile and then create a new one.
+If a log profile already exists, you first must remove the existing log profile, and then create a new one.
1. Use `Get-AzLogProfile` to identify if a log profile exists. If a log profile exists, note the `Name` property.
This sample PowerShell script creates a log profile that writes the activity log
#### [CLI](#tab/cli)
-If a log profile already exists, you first must remove the existing log profile and then create a log profile.
+If a log profile already exists, you first must remove the existing log profile, and then create a log profile.
1. Use `az monitor log-profiles list` to identify if a log profile exists. 1. Use `az monitor log-profiles delete --name "<log profile name>` to remove the log profile by using the value from the `name` property.
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
description: Overview of Azure Monitor workspace, which is a unique environment
Previously updated : 05/09/2022 Last updated : 10/05/2022 # Azure Monitor workspace (preview)
The following table lists the contents of Azure Monitor workspaces. This table w
| Prometheus metrics | Native platform metrics<br>Native custom metrics<br>Prometheus metrics | - ## Workspace design A single Azure Monitor workspace can collect data from multiple sources, but there may be circumstances where you require multiple workspaces to address your particular business requirements. Azure Monitor workspace design is similar to [Log Analytics workspace design](../logs/workspace-design.md). There are several reasons that you may consider creating additional workspaces including the following. -- If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant.-- Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations.-- You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies.
+- Azure tenants. If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant.
+- Azure regions. Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations.
+- Data ownership. You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies.
+- Workspace limits. See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for current capacity limits related to Azure Monitor workspaces.
+- Multiple environments. You may have Azure Monitor workspaces supporting different environments such as test, pre-production, and production.
+
+> [!NOTE]
+> You cannot currently query across multiple Azure Monitor workspaces.
+
+## Workspace limits
+These are currently only related to Prometheus metrics, since this is the only data currently stored in Azure Monitor workspaces.
Many customers will choose an Azure Monitor workspace design to match their Log Analytics workspace design. Since Azure Monitor workspaces currently only contain Prometheus metrics, and metric data is typically not as sensitive as log data, you may choose to further consolidate your Azure Monitor workspaces for simplicity. ++ ## Create an Azure Monitor workspace In addition to the methods below, you may be given the option to create a new Azure Monitor workspace in the Azure portal as part of a configuration that requires one. For example, when you configure Azure Monitor managed service for Prometheus, you can select an existing Azure Monitor workspace or create a new one.
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
+
+ Title: Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana
+description: Details on how to configure Azure Monitor managed service for Prometheus (preview) as data source for both Azure Managed Grafana and self-hosted Grafana in an Azure virtual machine.
++ Last updated : 09/28/2022++
+# Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana using managed system identity
+
+[Azure Monitor managed service for Prometheus (preview)](prometheus-metrics-overview.md) allows you to collect and analyze metrics at scale using a [Prometheus](https://aka.ms/azureprometheus-promio)-compatible monitoring solution. The most common way to analyze and present Prometheus data is with a Grafana dashboard. This article explains how to configure Prometheus as a data source for both [Azure Managed Grafana](../../managed-grafan) and [self-hosted Grafana](https://grafana.com/) running in an Azure virtual machine using managed system identity authentication.
++
+## Azure Managed Grafana
+The following sections describe how to configure Azure Monitor managed service for Prometheus (preview) as a data source for Azure Managed Grafana.
+
+> [!IMPORTANT]
+> This section describes the manual process for adding an Azure Monitor managed service for Prometheus data source to Azure Managed Grafana. You can achieve the same functionality by linking the Azure Monitor workspace and Grafana workspace as described in [Link a Grafana workspace](azure-monitor-workspace-overview.md#link-a-grafana-workspace).
+
+### Configure system identity
+Your Grafana workspace requires the following:
+
+- System managed identity enabled
+- *Monitoring Data Reader* role for the Azure Monitor workspace
+
+Both of these settings are configured by default when you created your Grafana workspace. Verify these settings on the **Identity** page for your Grafana workspace.
+++
+**Configure from Grafana workspace**<br>
+Use the following steps to allow access to all Azure Monitor workspaces in a resource group or subscription:
+
+1. Open the **Identity** page for your Grafana workspace in the Azure portal.
+2. If **Status** is **No**, change it to **Yes**.
+3. Click **Azure role assignments** to review the existing access in your subscription.
+4. If **Monitoring Data Reader** is not listed for your subscription or resource group:
+ 1. Click **+ Add role assignment**.
+ 2. For **Scope**, select either **Subscription** or **Resource group**.
+ 3. For **Role**, select **Monitoring Data Reader**.
+ 4. Click **Save**.
++
+**Configure from Azure Monitor workspace**<br>
+Use the following steps to allow access to only a specific Azure Monitor workspace:
+
+1. Open the **Access Control (IAM)** page for your Azure Monitor workspace in the Azure portal.
+2. Click **Add role assignment**.
+3. Select **Monitoring Data Reader** and click **Next**.
+4. For **Assign access to**, select **Managed identity**.
+5. Click **+ Select members**.
+6. For **Managed identity**, select **Azure Managed Grafana**.
+7. Select your Grafana workspace and then click **Select**.
+8. Click **Review + assign** to save the configuration.
+
+### Create Prometheus data source
+
+Azure Managed Grafana supports Azure authentication by default.
+
+1. Open the **Overview** page for your Azure Monitor workspace in the Azure portal.
+2. Copy the **Query endpoint**, which you'll need in a step below.
+3. Open your Azure Managed Grafana workspace in the Azure portal.
+4. Click on the **Endpoint** to view the Grafana workspace.
+5. Select **Configuration** and then **Data source**.
+6. Click **Add data source** and then **Prometheus**.
+7. For **URL**, paste in the query endpoint for your Azure Monitor workspace.
+8. Select **Azure Authentication** to turn it on.
+9. For **Authentication** under **Azure Authentication**, select **Managed Identity**.
+10. Scroll to the bottom of the page and click **Save & test**.
+++
+## Self-managed Grafana
+The following sections describe how to configure Azure Monitor managed service for Prometheus (preview) as a data source for self-managed Grafana on an Azure virtual machine.
+### Configure system identity
+Azure virtual machines support both system-assigned and user-assigned identities. The following steps configure a system-assigned identity.
+
+**Configure from Azure virtual machine**<br>
+Use the following steps to allow access to all Azure Monitor workspaces in a resource group or subscription:
+
+1. Open the **Identity** page for your virtual machine in the Azure portal.
+2. If **Status** is **No**, change it to **Yes**.
+3. Click **Azure role assignments** to review the existing access in your subscription.
+4. If **Monitoring Data Reader** is not listed for your subscription or resource group:
+ 1. Click **+ Add role assignment**.
+ 2. For **Scope**, select either **Subscription** or **Resource group**.
+ 3. For **Role**, select **Monitoring Data Reader**.
+ 4. Click **Save**.
+
+**Configure from Azure Monitor workspace**<br>
+Use the following steps to allow access to only a specific Azure Monitor workspace:
+
+1. Open the **Access Control (IAM)** page for your Azure Monitor workspace in the Azure portal.
+2. Click **Add role assignment**.
+3. Select **Monitoring Data Reader** and click **Next**.
+4. For **Assign access to**, select **Managed identity**.
+5. Click **+ Select members**.
+6. For **Managed identity**, select **Virtual machine**.
+7. Select your virtual machine and then click **Select**.
+8. Click **Review + assign** to save the configuration.
++++
+### Create Prometheus data source
+
+Versions 9.x and greater of Grafana support Azure Authentication, but it's not enabled by default. To enable this feature, you need to update your Grafana configuration. To determine where your *Grafana.ini* file is and how to edit your Grafana configuration, review the documentation from Grafana Labs. After you know where the configuration file lives on your VM, make the following update:
++
+1. Locate and open the *Grafana.ini* file on your virtual machine.
+2. Under the `[auth]` section of the configuration file, change the `azure_auth_enabled` setting to `true`.
+3. Open the **Overview** page for your Azure Monitor workspace in the Azure portal.
+4. Copy the **Query endpoint**, which you'll need in a step below.
+5. Open your self-hosted Grafana instance in a browser.
+6. Sign in to Grafana.
+7. Select **Configuration** and then **Data source**.
+8. Click **Add data source** and then **Prometheus**.
+9. For **URL**, paste in the query endpoint for your Azure Monitor workspace.
+10. Select **Azure Authentication** to turn it on.
+11. For **Authentication** under **Azure Authentication**, select **Managed Identity**.
+12. Scroll to the bottom of the page and click **Save & test**.
++++
+## Next steps
+
+- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
+- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Azure Monitor managed service for Prometheus can currently collect data from any
## Grafana integration
-The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan). Connect your Azure Monitor workspace to a Grafana workspace so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
+The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace). Connect your Azure Monitor workspace to a Grafana workspace so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
## Alerts Azure Monitor managed service for Prometheus adds a new Prometheus alert type for creating alerts using PromQL queries. You can view fired and resolved Prometheus alerts in the Azure portal along with other alert types. Prometheus alerts are configured with the same [alert rules](https://aka.ms/azureprometheus-promio-alertrules) used by Prometheus. For your AKS cluster, you can use a [set of predefined Prometheus alert rules]
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
There are two approaches to investigating the amount of data collected for Appli
> [!NOTE]
-> Queries against Application Insights table except `SystemEvents` will work for both a workspace-based and classic Application Insights resource, since [backwards compatibility](../app/convert-classic-resource.md#understanding-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
+> Queries against Application Insights tables, except `SystemEvents`, will work for both workspace-based and classic Application Insights resources, since [backwards compatibility](../app/convert-classic-resource.md#understand-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
**Dependency operations generate the most data volume in the last 30 days (workspace-based or classic)**
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-p
By default, all tables in your Log Analytics are Analytics tables, and available for query and alerts. You can currently configure the following tables for Basic Logs: -- All tables created with or converted to the [Data Collection Rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md)
+- All custom tables created with or migrated to the [Data Collection Rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md)
- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) - Used in [Container Insights](../containers/container-insights-overview.md) and includes verbose text-based log records. - [AppTraces](/azure/azure-monitor/reference/tables/apptraces) - Freeform Application Insights traces. - [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/ContainerAppConsoleLogs) - Logs generated by Container Apps, within a Container App environment.
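If you script your workspace configuration, switching a supported table to the Basic Logs plan is a per-table update. The following Azure CLI sketch assumes the `az monitor log-analytics workspace table update` command with the `--plan` parameter is available in your CLI version; the resource group, workspace, and table names are placeholders.

```bash
# Switch one supported table to the Basic Logs plan (placeholders throughout).
az monitor log-analytics workspace table update \
  --resource-group MyResourceGroup \
  --workspace-name MyWorkspace \
  --name ContainerLogV2 \
  --plan Basic
```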
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 08/08/2022 Last updated : 10/13/2022 - # What's new in Azure Monitor documentation
This article lists significant changes to Azure Monitor documentation.
## September 2022 +
+### Agents
+
+| Article | Description |
+|||
+|[Azure Monitor Agent overview](https://docs.microsoft.com/azure/azure-monitor/agents/agents-overview)|Added Azure Monitor Agent support for ARM64-based virtual machines for a number of distributions. <br><br>Azure Monitor Agent and legacy agents don't support machines and appliances that run heavily customized or stripped-down versions of operating system distributions. <br><br>Azure Monitor Agent versions 1.15.2 and higher now support syslog RFC formats, including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).|
+
+### Alerts
+
+| Article | Description |
+|||
+|[Convert ITSM actions that send events to ServiceNow to secure webhook actions](https://docs.microsoft.com/azure/azure-monitor/alerts/itsm-convert-servicenow-to-webhook)|As of September 2022, we're starting the 3-year process of deprecating support for using ITSM actions to send events to ServiceNow. Learn how to convert ITSM actions that send events to ServiceNow to secure webhook actions.|
+|[Create a new alert rule](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-create-new-alert-rule)|Added descriptions of all available monitoring services to the create a new alert rule and alert processing rules pages. <br><br>Added support for regional processing for metric alert rules that monitor a custom metric with the scope defined as one of the supported regions. <br><br> Clarified that selecting the **Automatically resolve alerts** setting makes log alerts stateful.|
+|[Types of Azure Monitor alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-types)|Azure Database for PostgreSQL - Flexible Servers is supported for monitoring multiple resources.|
+|[Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-log-api-switch)|The process of moving legacy log alert rules management from the legacy API to the current API is now supported by the government cloud.|
+
+### Application insights
+
+| Article | Description |
+|||
+|[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent)|New OpenTelemetry `@WithSpan` annotation guidance.|
+|[Capture Application Insights custom metrics with .NET and .NET Core](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-asp-net-custom-metrics)|Tutorial steps and images have been updated.|
+|[Configuration options - Azure Monitor Application Insights for Java](https://learn.microsoft.com/azure/azure-monitor/app/java-in-process-agent)|Connection string guidance updated.|
+|[Enable Application Insights for ASP.NET Core applications](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-asp-net-core)|Tutorial steps and images have been updated.|
+|[Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](https://docs.microsoft.com/azure/azure-monitor/app/opentelemetry-enable)|Our product feedback link at the bottom of each document has been fixed.|
+|[Filter and preprocess telemetry in the Application Insights SDK](https://docs.microsoft.com/azure/azure-monitor/app/api-filtering-sampling)|Added sample initializer to control which client IP gets used as part of geo-location mapping.|
+|[Java Profiler for Azure Monitor Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-profiler)|Our new Java Profiler was announced at Ignite. Read all about it!|
+|[Release notes for Azure Web App extension for Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/web-app-extension-release-notes)|Added release notes for 2.8.44 and 2.8.43.|
+|[Resource Manager template samples for creating Application Insights resources](https://docs.microsoft.com/azure/azure-monitor/app/resource-manager-app-resource)|Fixed inaccurate tagging of workspace-based resources as still in Preview.|
+|[Unified cross-component transaction diagnostics](https://docs.microsoft.com/azure/azure-monitor/app/transaction-diagnostics)|A complete FAQ section is added to help troubleshoot Azure portal errors, such as "error retrieving data".|
+|[Upgrading from Application Insights Java 2.x SDK](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-upgrade-from-2x)|Additional upgrade guidance added. Java 2.x has been deprecated.|
+|[Using Azure Monitor Application Insights with Spring Boot](https://docs.microsoft.com/azure/azure-monitor/app/java-spring-boot)|Configuration options have been updated.|
+
+### Autoscale
+| Article | Description |
+|||
+|[Autoscale with multiple profiles](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-multiprofile)|New article: Using multiple profiles in autoscale with the CLI, PowerShell, and templates.|
+|[Flapping in Autoscale](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-flapping)|New article: Flapping in autoscale.|
+|[Understand Autoscale settings](https://docs.microsoft.com/azure/azure-monitor/autoscale/autoscale-understanding-settings)|Clarified how often autoscale runs.|
+
+### Change analysis
+| Article | Description |
+|||
+|[Troubleshoot Azure Monitor's Change Analysis](https://docs.microsoft.com/azure/azure-monitor/change/change-analysis-troubleshoot)|Added a section about partial data and how to mitigate it to the troubleshooting guide.|
+
+### Essentials
+| Article | Description |
+|||
+|[Structure of transformation in Azure Monitor (preview)](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-transformations-structure)|New KQL functions supported.|
+
+### Virtual Machines
+| Article | Description |
+|||
+|[Migrate from Service Map to Azure Monitor VM insights](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-migrate-from-service-map)|Added a new article with guidance for migrating from the Service Map solution to Azure Monitor VM insights.|
+ ### Network Insights | Article | Description |
This article lists significant changes to Azure Monitor documentation.
|[Network Insights](../network-watcher/network-insights-overview.md)| Onboarded the new topology experience to Network Insights in Azure Monitor.|
-## August 2022
+### Visualizations
+| Article | Description |
+|||
+|[Access deprecated Troubleshooting guides in Azure Workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-access-troubleshooting-guide)|New article: Access deprecated Troubleshooting guides in Azure Workbooks.|
+## August 2022
+ ### Agents | Article | Description |
This article lists significant changes to Azure Monitor documentation.
|:|:| |[Create Azure Monitor alert rules](alerts/alerts-create-new-alert-rule.md)|Added support for data processing in a specified region, for action groups and for metric alert rules that monitor a custom metric.|
-### Application-insights
+### Application insights
| Article | Description | |||
This article lists significant changes to Azure Monitor documentation.
|[Autoscale in Microsoft Azure](autoscale/autoscale-overview.md)|Updated conceptual diagrams| |[Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)](autoscale/autoscale-predictive.md)|Predictive autoscale (preview) is now available in all regions|
-### Change-analysis
+### Change analysis
| Article | Description | |||
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-05-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-05-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-05-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-05-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-05-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-05-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
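Each `list*` entry in these tables generally maps to a POST operation on the resource, so you can also call it outside a template, for example with `az rest`. A minimal sketch for the Machine Learning workspace `listKeys` operation shown above; the subscription ID, resource group, and workspace name are placeholders.

```bash
# Calls the same listKeys operation referenced in the table above.
# Subscription ID, resource group, and workspace name are placeholders.
az rest --method post \
  --url "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>/listKeys?api-version=2022-10-01"
```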
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
vSAN datastores use data-at-rest encryption by default using keys stored in Azur
## Datastore capacity expansion options
-The vSAN datastore capacity can be expanded by connecting Azure storage resources such as [Azure NetApp Files volumes as datastores](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts). Virtual machines can be migrated between vSAN and Azure NetApp Files datastores using storage vMotion. Azure NetApp Files datastores can be replicated to other regions using storage based [Cross-region replication](/azure/azure-netapp-files/cross-region-replication-introduction) for testing, development and failover purposes.
+The vSAN datastore capacity can be expanded by connecting Azure storage resources such as [Azure NetApp Files volumes as datastores](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts). Virtual machines can be migrated between vSAN and Azure NetApp Files datastores using storage vMotion.
Azure NetApp Files is available in [Ultra, Premium and Standard performance tiers](/azure/azure-netapp-files/azure-netapp-files-service-levels) to allow for adjusting performance and cost to the requirements of the workloads. ## Azure storage integration
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
VMware HCX Connector deploys a subset of virtual appliances (automated) that req
- vMotion - Replication - Uplink
+
+ > [!NOTE]
+ > * Azure VMware Solution connected via VPN should set the Uplink Network Profile MTU to 1350 to account for IPsec overhead.
+ > * Azure VMware Solution defaults to an MTU of 1500, which is sufficient for most ExpressRoute implementations.
+ > * If your ExpressRoute provider doesn't support jumbo frames, the MTU may need to be lowered in ExpressRoute setups as well.
+ > * Changes to MTU should be performed on both HCX Connector (on-premises) and HCX Cloud Manager (Azure VMware Solution) network profiles.
1. Under **Infrastructure**, select **Interconnect** > **Multi-Site Service Mesh** > **Network Profiles** > **Create Network Profile**.
backup Backup Azure Enhanced Soft Delete About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md
+
+ Title: Overview of enhanced soft delete for Azure Backup (preview)
+description: This article gives an overview of enhanced soft delete for Azure Backup.
++ Last updated : 10/13/2022+++++
+# About Enhanced soft delete for Azure Backup (preview)
+
+[Soft delete](backup-azure-security-feature-cloud.md) for Azure Backup enables you to recover your backup data even after it's deleted. This is useful when:
+
+- You've accidentally deleted backup data and you need it back.
+- Backup data is maliciously deleted by ransomware or bad actors.
+
+*Basic soft delete* has been available for Recovery Services vaults for some time; *enhanced soft delete* now provides additional data protection capabilities.
+
+In this article, you'll learn about:
+
+>[!div class="checklist"]
+>- What's soft delete?
+>- What's enhanced soft delete?
+>- States of soft delete setting
+>- Soft delete retention
+>- Soft deleted items reregistration
+>- Pricing
+>- Supported scenarios
+
+## What's soft delete?
+
+[Soft delete](backup-azure-security-feature-cloud.md) primarily delays permanent deletion of backup data and gives you an opportunity to recover data after deletion. This deleted data is retained for a specified duration (*14*-*180* days) called the soft delete retention period.
+
+After deletion (while the data is in the soft deleted state), if you need the deleted data, you can undelete it. This returns the data to the *stop protection with retain data* state. You can then use it to perform restore operations, or you can resume backups for this instance.
+
+The following diagram shows the flow of a backup item (or a backup instance) that gets deleted:
++
+## What's enhanced soft delete?
+
+The key benefits of enhanced soft delete are:
+
+- **Always-on soft delete**: You can now opt to make soft delete always-on (irreversible). Once you opt in, you can't disable the soft delete settings for the vault. [Learn more](#states-of-soft-delete-settings).
+- **Configurable soft delete retention**: You can now specify the retention duration for deleted backup data, ranging from *14* to *180* days. By default, the retention duration is set to *14* days (as per basic soft delete) for the vault, and you can extend it as required.
+
+ >[!Note]
+ >Soft delete doesn't cost you anything for the first 14 days of retention; however, you're charged for the period beyond 14 days. [Learn more](#pricing).
+- **Re-registration of soft deleted items**: You can now register items in the soft deleted state with another vault. However, you can't register the same item with two vaults for active backups.
+- **Soft delete and reregistration of backup containers**: You can now unregister backup containers (which you can soft delete) if you've deleted all backup items in the container. You can then register such soft deleted containers to other vaults. This applies to the relevant workloads only, including SQL in Azure VM backup, SAP HANA in Azure VM backup, and backup of on-premises servers.
+- **Soft delete across workloads**: Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, operational backup for blobs, and disk and VM snapshot backups.
+
+## States of soft delete settings
+
+The following table lists the soft delete properties for vaults:
+
+| State | Description |
+| | |
+| **Disabled** | Deleted items aren't retained in the soft deleted state, and are permanently deleted. |
+| **Enabled** | This is the default state for a new vault. <br><br> Deleted items are retained for the specified soft delete retention period, and are permanently deleted after the expiry of the soft delete retention duration. <br><br> Disabling soft delete immediately purges deleted data. |
+| **Enabled and always-on** | Deleted items are retained for the specified soft delete retention period, and are permanently deleted after the expiry of the soft delete retention duration. <br><br> Once you opt for this state, soft delete can't be disabled. |
+
+## Soft delete retention
+
+Soft delete retention is the retention period (in days) of a deleted item in the soft deleted state. Once the soft delete retention period elapses (from the date of deletion), the item is permanently deleted and you can't undelete it. You can choose a soft delete retention period between *14* and *180* days. Longer durations allow you to recover data from threats that may take time to identify (for example, Advanced Persistent Threats).
+
+>[!Note]
+>Soft delete retention for *14* days involves no cost. However, [regular backup charges apply for additional retention days](#pricing).
+>
+>By default, soft delete retention is set to *14* days and you can change it any time. However, the *soft delete retention period* that is active at the time of the deletion governs retention of the item in soft deleted state.
+
+## Soft deleted items reregistration
+
+If a backup item/container is in soft deleted state, you can register it to a vault different from the original one where the soft deleted data belongs.
+
+>[!Note]
+>You can't actively protect one item to two vaults simultaneously. So, if you start protecting a backup container using another vault, you can no longer re-protect the same backup container to the previous vault.
+
+## Pricing
+
+Soft deleted data involves no retention cost for the default duration of *14* days. Retention of soft deleted data beyond the default period incurs regular backup charges.
+
+For example, you've deleted backups for one of the instances in a vault that has a soft delete retention of *60* days. If you want to recover the soft deleted data *50* days after deletion, the pricing is:
+
+- Standard rates (similar rates apply when the instance is in the *stop protection with retain data* state) are applicable for *36* days (*50* days of data retained in the soft deleted state minus the first *14* days of default soft delete retention).
+
+- No charges for the first *14* days of soft delete retention.
+
+## Supported scenarios
+
+- Enhanced soft delete is currently available in the following regions: West Central US, Australia East, North Europe.
+- It's supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults.
+- All existing Recovery Services vaults in the preview regions are upgraded with an option to use enhanced soft delete.
+
+## Next steps
+
+[Configure and manage enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-configure-manage.md).
backup Backup Azure Enhanced Soft Delete Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-configure-manage.md
+
+ Title: Configure and manage enhanced soft delete for Azure Backup (preview)
+description: This article describes how to configure and manage enhanced soft delete for Azure Backup.
+ Last updated : 10/13/2022+++++
+# Configure and manage enhanced soft delete in Azure Backup (preview)
+
+This article describes how to configure and use enhanced soft delete to protect your data and recover backups, if they're deleted.
+
+In this article, you'll learn about:
+
+>[!div class="checklist"]
+>- Before you start
+>- Enable soft delete with always-on state
+>- Delete a backup item
+>- Recover a soft-deleted backup item
+>- Unregister containers
+>- Disable soft delete
+
+## Before you start
+
+- Enhanced soft delete is supported for Recovery Services vaults and Backup vaults.
+- It's supported for new and existing vaults.
+- All existing Recovery Services vaults in the [preview regions](backup-azure-enhanced-soft-delete-about.md#supported-scenarios) are upgraded with an option to use enhanced soft delete.
++
+## Enable soft delete with always-on state
+
+Soft delete is enabled by default for all new vaults you create. To make enabled settings irreversible, select **Enable Always-on Soft Delete**.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to **Recovery Services vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-inline.png" alt-text="Screenshot showing you how to open Soft Delete blade." lightbox="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-expanded.png":::
+
+ The soft delete settings for cloud and hybrid workloads are already enabled, unless you've explicitly disabled them earlier.
+
+1. If soft delete settings are disabled for any workload type in the **Soft Delete** blade, select the respective checkboxes to enable them.
+
+ >[!Note]
+ >Enabling soft delete for hybrid workloads also enables other security settings, such as multi-factor authentication and alert notifications for backup of workloads running on on-premises servers.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >- There is no cost for soft delete for *14* days. However, deleted instances in soft delete state are charged if the soft delete retention period is *>14* days. Learn about [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+ >- Once configured, the soft delete retention period applies to all soft deleted instances of cloud and hybrid workloads in the vault.
+
+1. Select the **Enable Always-on Soft delete** checkbox to enable soft delete and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete.png" alt-text="Screenshot showing you how to enable a;ways-on state of soft delete.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to **Backup vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties.png" alt-text="Screenshot showing you how to open soft delete blade for Backup vault.":::
+
+ Soft delete is enabled by default with the checkboxes selected.
+
+1. If you've explicitly disabled soft delete for any workload type in the **Soft Delete** blade earlier, select the checkboxes to enable them.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >There is no cost for enabling soft delete for *14* days. However, you're charged for the soft delete instances if soft delete retention period is *>14* days. Learn about the [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+
+1. Select the **Enable Always-on Soft Delete** checkbox to enable soft delete always-on and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete-backup-vault.png" alt-text="Screenshot showing you how to enable always-on state for Backup vault.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+++
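If you prefer scripting, the underlying soft delete state on a Recovery Services vault can also be set from the Azure CLI. This is a sketch only: the vault and resource group names are placeholders, and the enhanced capabilities described above (the custom retention period and the always-on lock) are preview features that this command may not expose, so use the portal steps for those.

```bash
# Enable the soft delete feature state on a Recovery Services vault.
# Vault and resource group names are placeholders.
az backup vault backup-properties set \
  --name MyRecoveryServicesVault \
  --resource-group MyResourceGroup \
  --soft-delete-feature-state Enable
```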
+## Delete a backup item
+
+You can delete backup items/instances even if the soft delete settings are enabled. However, if soft delete is enabled, the deleted items aren't permanently deleted immediately and stay in the soft deleted state as per the [configured retention period](#enable-soft-delete-with-always-on-state). Soft delete delays permanent deletion of backup data by retaining deleted data for *14*-*180* days.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the *backup item* that you want to delete.
+1. Select **Stop backup**.
+1. On the **Stop Backup** page, select **Delete Backup Data** from the drop-down list to delete all backups for the instance.
+1. Provide the applicable information, and then select **Stop backup** to delete all backups for the instance.
+
+ Once the *delete* operation completes, the backup item is moved to soft deleted state. In **Backup items**, the soft deleted item is marked in *Red*, and the last backup status shows that backups are disabled for the item.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/soft-deleted-backup-items-marked-red-inline.png" alt-text="Screenshot showing the soft deleted backup items marked red." lightbox="./media/backup-azure-enhanced-soft-delete/soft-deleted-backup-items-marked-red-expanded.png":::
+
+ In the item details, the soft deleted item shows no recovery point. Also, a notification appears to mention the state of the item, and the number of days left before the item is permanently deleted. You can select **Undelete** to recover the soft deleted items.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/soft-deleted-item-shows-no-recovery-point-inline.png" alt-text="Screenshot showing the soft deleted backup item that shows no recovery point." lightbox="./media/backup-azure-enhanced-soft-delete/soft-deleted-item-shows-no-recovery-point-expanded.png":::
+
+>[!Note]
+>When the item is in the soft deleted state, no recovery points are cleaned up on expiry as per the backup policy.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. In the **Backup center**, go to the *backup instance* that you want to delete.
+
+1. Select **Stop backup**.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/stop-backup-for-backup-vault-items-inline.png" alt-text="Screenshot showing how to initiate the stop backup process for backup items in Backup vault." lightbox="./media/backup-azure-enhanced-soft-delete/stop-backup-for-backup-vault-items-expanded.png":::
+
+ You can also select **Delete** in the instance view to delete backups.
+
+1. On the **Stop Backup** page, select **Delete Backup Data** from the drop-down list to delete all backups for the instance.
+
+1. Provide the applicable information, and then select **Stop backup** to initiate the deletion of the backup instance.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-stop-backup-process.png" alt-text="Screenshot showing how to stop the backup process.":::
+
+ Once deletion completes, the instance appears as *Soft deleted*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/deleted-backup-items-marked-soft-deleted-inline.png" alt-text="Screenshot showing the deleted backup items marked as Soft Deleted." lightbox="./media/backup-azure-enhanced-soft-delete/deleted-backup-items-marked-soft-deleted-expanded.png":::
+++
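For the Recovery Services vault flow, the same stop-protection-and-delete operation is also available from the Azure CLI for some workloads. A minimal sketch for an Azure VM backup item follows; all names are placeholders, and with soft delete enabled the item moves to the soft deleted state rather than being purged immediately.

```bash
# Stop protection and delete backup data for an Azure VM backup item.
# With soft delete enabled, the item moves to the soft deleted state.
# All names below are placeholders.
az backup protection disable \
  --resource-group MyResourceGroup \
  --vault-name MyRecoveryServicesVault \
  --backup-management-type AzureIaasVM \
  --container-name MyVMContainer \
  --item-name MyVM \
  --delete-backup-data true \
  --yes
```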
+## Recover a soft-deleted backup item
+
+If a backup item/instance is soft deleted, you can recover it before it's permanently deleted.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the *backup item* that you want to retrieve from the *soft deleted* state.
+
+ You can also use the **Backup center** to go to the item by applying the filter **Protection status == Soft deleted** in the *Backup instances*.
+
+1. Select **Undelete** corresponding to the *soft deleted item*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-recover-backup-items-inline.png" alt-text="Screenshot showing how to start recovering backup items from soft delete state." lightbox="./media/backup-azure-enhanced-soft-delete/start-recover-backup-items-expanded.png":::
+
+1. In the **Undelete** *backup item* blade, select **Undelete** to recover the deleted item.
+
+ All recovery points now appear and the backup item changes to *Stop protection with retain data* state. However, backups don't resume automatically. To continue taking backups for this item, select **Resume backup**.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to the *deleted backup instance* that you want to recover.
+
+ You can also use the **Backup center** to go to the *instance* by applying the filter **Protection status == Soft deleted** in the *Backup instances*.
+
+1. Select **Undelete** corresponding to the *soft deleted instance*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-recover-deleted-backup-vault-items-inline.png" alt-text="Screenshot showing how to start recovering deleted backup vault items from soft delete state." lightbox="./media/backup-azure-enhanced-soft-delete/start-recover-deleted-backup-vault-items-expanded.png":::
+
+1. In the **Undelete** *backup instance* blade, select **Undelete** to recover the item.
+
+ All recovery points appear and the backup item changes to *Stop protection with retain data* state. However, backups don't resume automatically. To continue taking backups for this instance, select **Resume backup**.
+
+>[!Note]
+>Undeleting a soft deleted item reinstates the backup item into Stop backup with retain data state and doesn't automatically restart scheduled backups. You need to explicitly [resume backups](backup-azure-manage-vms.md#resume-protection-of-a-vm) if you want to continue taking new backups. Resuming backup will also clean up expired recovery points, if any.
+++
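The undelete operation for the Recovery Services vault flow is also exposed through the Azure CLI for some workloads. The sketch below assumes an Azure VM backup item and that `az backup protection undelete` is available in your CLI version; all names are placeholders. As noted above, backups don't resume automatically afterwards.

```bash
# Move a soft deleted Azure VM backup item back to the
# "stop protection with retain data" state. All names are placeholders.
az backup protection undelete \
  --resource-group MyResourceGroup \
  --vault-name MyRecoveryServicesVault \
  --backup-management-type AzureIaasVM \
  --workload-type VM \
  --container-name MyVMContainer \
  --item-name MyVM
```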
+## Unregister containers
+
+In the case of workloads that group multiple backup items into a container, you can unregister a container if all its backup items are either deleted or soft deleted.
+
+Here are some points to note:
+
+- You can unregister a container only if it has no protected items, that is, all backup items inside it are either deleted or soft deleted.
+
+- Unregistering a container while its backup items are soft deleted (not permanently deleted) will change the state of the container to Soft deleted.
+
+- You can re-register containers that are in soft deleted state to another vault. However, in such scenarios, the existing backups (that are soft deleted) will continue to be in the original vault and will be permanently deleted when the soft delete retention period expires.
+
+- You can also *undelete* the container. Once undeleted, it's re-registered to the original vault.
+
+ You can undelete a container only if it's not registered to another vault. If it's registered, then you need to unregister it with the vault before performing the *undelete* operation.
+
+## Disable soft delete
+
+Follow these steps:
+
+1. Go to your *vault* > **Properties**.
+
+1. On the **Properties** page, under **Soft delete**, select **Update**.
+1. In the **Soft Delete settings** blade, clear the **Enable soft delete** checkbox to disable soft delete.
+
+>[!Note]
+>You can't disable soft delete if **Enable Always-on Soft Delete** is enabled for this vault.
+
+## Next steps
+
+[About Enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-about.md).
backup Backup Azure Immutable Vault Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-concept.md
+
+ Title: Concept of Immutable vault for Azure Backup (preview)
+description: This article explains the concept of Immutable vault for Azure Backup and how it helps protect data from malicious actors.
+++ Last updated : 09/15/2022++++
+# Immutable vault for Azure Backup (preview)
+
+Immutable vault can help you protect your backup data by blocking any operations that could lead to loss of recovery points. Further, you can lock the Immutable vault setting to make it irreversible to prevent any malicious actors from disabling immutability and deleting backups.
+
+## Before you start
+
+- Immutable vault is currently in preview and is available in the following regions: East US, West US, West US 2, West Central US, North Europe, Brazil South, Japan East.
+- Immutable vault is currently supported for Recovery Services vaults only.
+- Enabling Immutable vault blocks you from performing specific operations on the vault and its protected items. See the [restricted operations](#restricted-operations).
+- Enabling immutability for the vault is a reversible operation. However, you can choose to make it irreversible to prevent any malicious actors from disabling it (after disabling it, they can perform destructive operations). Learn about [making Immutable vault irreversible](#making-immutability-irreversible).
+- Immutable vault applies to all the data in the vault. Therefore, all instances that are protected in the vault have immutability applied to them.
+- Immutability doesn't apply to operational backups, such as operational backup of blobs, files, and disks.
+
+## How does immutability work?
+
+While Azure Backup stores data in isolation from production workloads, it allows performing management operations to help you manage your backups, including those operations that allow you to delete recovery points. However, in certain scenarios, you may want to make the backup data immutable by preventing any such operations that, if used by malicious actors, could lead to the loss of backups. The Immutable vault setting on your vault enables you to block such operations to ensure that your backup data is protected, even if any malicious actors try to delete them to affect the recoverability of data.
+
+## Making immutability irreversible
+
+The immutability of a vault is a reversible setting that allows you to disable immutability (which would allow deletion of backup data) if needed. However, once you're satisfied with the impact of immutability, we recommend that you lock the vault to make the Immutable vault setting irreversible, so that bad actors can't disable it. Therefore, the Immutable vault setting accepts the following three states.
+
+| State of Immutable vault setting | Description |
+| | |
+| **Disabled** | The vault doesn't have immutability enabled and no operations are blocked. |
+| **Enabled** | The vault has immutability enabled and doesn't allow operations that could result in loss of backups. <br><br> However, the setting can be disabled. |
+| **Enabled and locked** | The vault has immutability enabled and doesn't allow operations that could result in loss of backups. <br><br> As the Immutable vault setting is now locked, it can't be disabled. <br><br> Note that immutability locking is irreversible, so make a well-informed decision when opting to lock. |
+
+## Restricted operations
+
+Immutable vault prevents you from performing the following operations on the vault that could lead to loss of data:
+
+| Operation type | Description |
+| | |
+| **Stop protection with delete data** | A protected item can't have its recovery points deleted before their respective expiry date. However, you can still stop the protection of the instances while retaining data forever or until their expiry. |
+| **Modify backup policy to reduce retention** | Any actions that reduce the retention period in a backup policy are disallowed on Immutable vault. However, you can make policy changes that result in the increase of retention. You can also make changes to the schedule of a backup policy. |
+| **Change backup policy to reduce retention** | Any attempt to replace a backup policy associated with a backup item with another policy with retention lower than the existing one is blocked. However, you can replace a policy with the one that has higher retention. |
+
+## Next steps
+
+- Learn [how to manage operations of Azure Backup vault immutability (preview)](backup-azure-immutable-vault-how-to-manage.md).
+
backup Backup Azure Immutable Vault How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-how-to-manage.md
+
+ Title: How to manage Azure Backup Immutable vault operations (preview)
+description: This article explains how to manage Azure Backup Immutable vault operations.
++ Last updated : 09/15/2022++++
+# Manage Azure Backup Immutable vault operations (preview)
+
+[Immutable vault](backup-azure-immutable-vault-concept.md) can help you protect your backup data by blocking any operations that could lead to loss of recovery points. Further, you can lock the Immutable vault setting to make it irreversible to prevent any malicious actors from disabling immutability and deleting backups.
+
+In this article, you'll learn how to:
+
+> [!div class="checklist"]
+>
+> - Enable Immutable vault
+> - Perform operations on Immutable vault
+> - Disable immutability
+
+## Enable Immutable vault
+
+You can enable immutability for a vault through its properties. Follow these steps:
+
+1. Go to the Recovery Services vault for which you want to enable immutability.
+
+1. In the vault, go to **Properties** -> **Immutable vault**, and then select **Settings**.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/enable-immutable-vault-settings.png" alt-text="Screenshot showing how to open the Immutable vault settings.":::
+
+1. On **Immutable vault**, select the checkbox for **Enable vault immutability** to enable immutability for the vault.
+
+ At this point, immutability of the vault is reversible, and it can be disabled, if needed.
+
+1. Once you enable immutability, the option to lock the immutability for the vault appears.
+
+ This makes the immutability setting for the vault irreversible. While this helps secure the backup data in the vault, we recommend that you make a well-informed decision when opting to lock. You can also test and validate how the current settings of the vault, backup policies, and so on, meet your requirements, and lock the immutability setting later.
+
+1. Select **Apply** to save the changes.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/backup-azure-enable-immutability.png" alt-text="Screenshot showing how to enable the Immutable vault settings.":::
+
+## Perform operations on Immutable vault
+
+As per the [Restricted operations](backup-azure-immutable-vault-concept.md#restricted-operations), certain operations are restricted on Immutable vault. However, other operations on the vault or the items it contains remain unaffected.
+
+### Perform restricted operations
+
+[Restricted operations](backup-azure-immutable-vault-concept.md#restricted-operations) are disallowed on the vault. Consider the following example when trying to modify a policy to reduce its retention in a vault with immutability enabled.
+
+Consider a policy with a daily backup point retention of *35 days* and weekly backup point retention of *two weeks*, as shown in the following screenshot.
++
+Now, let's try to reduce the retention of daily backup points to *30 days*, reducing by *5 days*, and save the policy.
+
+You'll see that the operation fails with a message stating that the vault has immutability enabled and, therefore, any changes that could reduce the retention of recovery points are disallowed.
++
+Now, let's try to increase the retention of daily backup points to *40 days*, increasing by *5 days*, and save the policy.
+
+This time, the operation succeeds because no recovery points can be deleted as part of this update.
++
+## Disable immutability
+
+You can disable immutability only for vaults that have immutability enabled, but not locked. To disable immutability for such vaults, follow these steps:
+
+1. Go to the Recovery Services vault for which you want to disable immutability.
+
+1. In the vault, go to **Properties** -> **Immutable vault**, and then select **Settings**.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/disable-immutable-vault-settings.png" alt-text="Screenshot showing how to open the Immutable vault settings to disable.":::
+
+1. In the **Immutable vault** blade, clear the checkbox for **Enable vault Immutability**.
+
+1. Select **Apply** to save the changes.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/backup-azure-disable-immutability.png" alt-text="Screenshot showing how to disable the Immutable vault settings.":::
+
+## Next steps
+
+- Learn [about Immutable vault for Azure Backup (preview)](backup-azure-immutable-vault-concept.md).
backup Enable Multi User Authorization Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/enable-multi-user-authorization-quickstart.md
Follow these steps:
## Next steps -- [Protect against unauthorized (protected) operations](multi-user-authorization.md#protect-against-unauthorized-protected-operations)
+- [Protected operations using MUA](multi-user-authorization.md?pivots=vaults-recovery-services-vault#protected-operations-using-mua)
- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management) - [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval) - [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
Title: Multi-user authorization using Resource Guard description: An overview of Multi-user authorization using Resource Guard. Previously updated : 06/08/2022 Last updated : 09/15/2022 # Multi-user authorization using Resource Guard
-Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization.
+Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults and Backup vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization.
+
+>[!Note]
+>Multi-user authorization using Resource Guard for Backup vault is in preview.
## How does MUA for Backup work?
-Azure Backup uses the Resource Guard as an authorization service for a Recovery Services vault. Therefore, to perform a critical operation (described below) successfully, you must have sufficient permissions on the associated Resource Guard as well.
+Azure Backup uses the Resource Guard as an additional authorization mechanism for a Recovery Services vault or a Backup vault. Therefore, to perform a critical operation (described below) successfully, you must have sufficient permissions on the associated Resource Guard as well.
> [!Important]
-> To function as intended, the Resource Guard must be owned by a different user, and the vault admin must not have Contributor permissions. You can place Resource Guard in a subscription or tenant different from the one containing the Recovery Services vault to provide better protection.
+> To function as intended, the Resource Guard must be owned by a different user, and the vault admin must not have Contributor permissions. You can place Resource Guard in a subscription or tenant different from the one containing the vaults to provide better protection.
## Critical operations
-The following table lists the operations defined as critical operations and can be protected by a Resource Guard. You can choose to exclude certain operations from being protected using the Resource Guard when associating vaults with it. Note that operations denoted as Mandatory cannot be excluded from being protected using the Resource Guard for vaults associated with it. Also, the excluded critical operations would apply to all vaults associated with a Resource Guard.
+The following table lists the operations defined as critical operations that can be protected by a Resource Guard. You can choose to exclude certain operations from being protected using the Resource Guard when associating vaults with it.
+
+>[!Note]
+>You can't exclude the operations denoted as Mandatory from being protected using the Resource Guard for vaults associated with it. Also, the excluded critical operations apply to all vaults associated with the Resource Guard.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
-**Operation** | **Mandatory/Optional**
+**Operation** | **Mandatory/ Optional**
| Disable soft delete | Mandatory Disable MUA protection | Mandatory
-Modify backup policy (reduced retention) | Optional: Can be excluded
-Modify protection (reduced retention) | Optional: Can be excluded
-Stop protection with delete data | Optional: Can be excluded
-Change MARS security PIN | Optional: Can be excluded
+Modify backup policy (reduced retention) | Optional
+Modify protection (reduced retention) | Optional
+Stop protection with delete data | Optional
+Change MARS security PIN | Optional
+
+# [Backup vault (preview)](#tab/backup-vault)
+
+**Operation** | **Mandatory/ Optional**
+ |
+Disable MUA protection | Mandatory
+Delete backup instance | Optional
++ ### Concepts and process
-The concepts and the processes involved when using MUA for Backup are explained below.
+
+The concepts and the processes involved when using MUA for Azure Backup are explained below.
Let's consider the following two users for a clear understanding of the process and responsibilities. These two roles are referenced throughout this article.
-**Backup admin**: Owner of the Recovery Services vault and performs management operations on the vault. To begin with, the Backup admin must not have any permissions on the Resource Guard.
+**Backup admin**: Owner of the Recovery Services vault or the Backup vault who performs management operations on the vault. To begin with, the Backup admin must not have any permissions on the Resource Guard.
**Security admin**: Owner of the Resource Guard and serves as the gatekeeper of critical operations on the vault. Hence, the Security admin controls permissions that the Backup admin needs to perform critical operations on the vault. Following is a diagrammatic representation for performing a critical operation on a vault that has MUA configured using a Resource Guard.
-Here is the flow of events in a typical scenario:
+Here's the flow of events in a typical scenario:
-1. The Backup admin creates the Recovery Services vault.
-1. The Security admin creates the Resource Guard. The Resource Guard can be in a different subscription or a different tenant with respect to the Recovery Services vault. It must be ensured that the Backup admin does not have Contributor permissions on the Resource Guard.
+1. The Backup admin creates the Recovery Services vault or the Backup vault.
+1. The Security admin creates the Resource Guard. The Resource Guard can be in a different subscription or a different tenant with respect to the vault. It must be ensured that the Backup admin doesn't have Contributor permissions on the Resource Guard.
1. The Security admin grants the **Reader** role to the Backup admin for the Resource Guard (or a relevant scope). The Backup admin requires the **Reader** role to enable MUA on the vault (a sample role assignment command appears later in this section).
-1. The Backup admin now configures the Recovery Services vault to be protected by MUA via the Resource Guard.
+1. The Backup admin now configures the vault to be protected by MUA via the Resource Guard.
1. Now, if the Backup admin wants to perform a critical operation on the vault, they need to request access to the Resource Guard. The Backup admin can contact the Security admin for details on gaining access to perform such operations. They can do this using Privileged Identity Management (PIM) or other processes as mandated by the organization. 1. The Security admin temporarily grants the **Contributor** role on the Resource Guard to the Backup admin to perform critical operations. 1. Now, the Backup admin initiates the critical operation. 1. The Azure Resource Manager checks if the Backup admin has sufficient permissions or not. Since the Backup admin now has Contributor role on the Resource Guard, the request is completed.
- - If the Backup admin did not have the required permissions/roles, the request would have failed.
+
+ If the Backup admin didn't have the required permissions/roles, the request would have failed.
+ 1. The security admin ensures that the privileges to perform critical operations are revoked after authorized actions are performed or after a defined duration. Using JIT tools [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) may be useful in ensuring this. >[!NOTE]
->- MUA provides protection on the above listed operations performed on the Recovery Services vaults only. Any operations performed directly on the data source (i.e., the Azure resource/workload that is protected) are beyond the scope of the Resource Guard.
->- This feature is currently available via the Azure portal only.
->- This feature is currently supported for Recovery Services vaults only and not available for Backup vaults.
+>MUA provides protection on the above listed operations performed on the vaulted backups only. Any operations performed directly on the data source (that is, the Azure resource/workload that is protected) are beyond the scope of the Resource Guard.
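Steps 3 and 6 in the flow above are standard Azure role assignments scoped to the Resource Guard. A minimal Azure CLI sketch follows; the object ID and Resource Guard resource ID are placeholders, and in practice the Contributor grant in step 6 is typically a time-bound PIM activation rather than a standing assignment.

```bash
# Step 3: the Security admin grants the Backup admin Reader on the Resource Guard.
az role assignment create \
  --assignee "<backup-admin-object-id>" \
  --role "Reader" \
  --scope "<resource-guard-resource-id>"

# Step 6: a temporary Contributor grant for the duration of the critical operation
# (prefer a time-bound PIM activation over a standing assignment).
az role assignment create \
  --assignee "<backup-admin-object-id>" \
  --role "Contributor" \
  --scope "<resource-guard-resource-id>"
```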
## Usage scenarios
-The following table depicts scenarios for creating your Resource Guard and Recovery Services vault (RS vault), along with the relative protection offered by each.
+The following table lists the scenarios for creating your Resource Guard and vaults (Recovery Services vault and Backup vault), along with the relative protection offered by each.
>[!Important] > The Backup admin must not have Contributor permissions to the Resource Guard in any scenario. **Usage scenario** | **Protection due to MUA** | **Ease of implementation** | **Notes** | | | |
-RS vault and Resource Guard are **in the same subscription.** </br> The Backup admin does not have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Resource level permissions/ roles need to be ensured are correctly assigned.
-RS vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin does not have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that that permissions/ roles are correctly assigned for the resource or the subscription.
-RS vault and Resource Guard are **in different tenants.** </br> The Backup admin does not have access to the Resource Guard, the corresponding subscription, or the corresponding tenant.| Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since requires two tenants or directories to test. | Ensure that permissions/ roles are correctly assigned for the resource, the subscription or the directory.
-
- >[!NOTE]
- > For this article, we will demonstrate creation of the Resource Guard in a different tenant that offers maximum protection. In terms of requesting and approving requests for performing critical operations, this article demonstrates the same using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
+Vault and Resource Guard are **in the same subscription.** </br> The Backup admin doesn't have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Ensure that resource-level permissions/ roles are correctly assigned.
+Vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin doesn't have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that permissions/ roles are correctly assigned for the resource or the subscription.
+Vault and Resource Guard are **in different tenants.** </br> The Backup admin doesn't have access to the Resource Guard, the corresponding subscription, or the corresponding tenant.| Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since it requires two tenants or directories to test. | Ensure that permissions/ roles are correctly assigned for the resource, the subscription, or the directory.
## Next steps
-[Configure Multi-user authorization using Resource Guard](multi-user-authorization.md)
+[Configure Multi-user authorization using Resource Guard](multi-user-authorization.md).
backup Multi User Authorization Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-tutorial.md
Follow these steps:
Follow these steps: 1. In the Resource Guard created above, go to **Properties**.
-2. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard. Note that the operations **Disable soft delete** and **Remove MUA protection** cannot be disabled.
+1. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard.
+
+ >[!Note]
+ >The operations **Disable soft delete** and **Remove MUA protection** cannot be disabled.
1. Optionally, you can also update the description for the Resource Guard using this blade.
-1. Click **Save**.
+1. Select **Save**.
## Assign permissions to the Backup admin on the Resource Guard to enable MUA
Follow these steps:
## Next steps -- [Protect against unauthorized (protected) operations](multi-user-authorization.md#protect-against-unauthorized-protected-operations)
+- [Protected operations using MUA](multi-user-authorization.md?pivots=vaults-recovery-services-vault#protected-operations-using-mua)
- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management) - [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval) - [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
Title: Configure Multi-user authorization using Resource Guard description: This article explains how to configure Multi-user authorization using Resource Guard. Previously updated : 05/05/2022
+zone_pivot_groups: backup-vaults-recovery-services-vault-backup-vault
Last updated : 09/15/2022 # Configure Multi-user authorization using Resource Guard in Azure Backup
-This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Recovery Services vaults
-This document includes the following:
+++
+This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Recovery Services vaults.
+
+This article demonstrates Resource Guard creation in a different tenant that offers maximum protection. It also demonstrates how to request and approve requests for performing critical operations using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
+
+This document includes the following sections:
>[!div class="checklist"] >- Before you start >- Testing scenarios >- Create a Resource Guard >- Enable MUA on a Recovery Services vault
->- Protect against unauthorized operations on a vault
+>- Protected operations on a vault using MUA
>- Authorize critical operations on a vault >- Disable MUA on a Recovery Services vault
This document includes the following:
- Ensure the Resource Guard and the Recovery Services vault are in the same Azure region.
- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
-- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the providers - **Microsoft.RecoveryServices** and **Microsoft.DataProtection**. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the providers **Microsoft.RecoveryServices** and **Microsoft.DataProtection**. For more information, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
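The provider registration called out in the last bullet can also be scripted. The following is a minimal sketch, not from this article, that uses the azure-identity and azure-mgmt-resource Python packages; the subscription IDs are placeholders that you replace with your own.

```python
# Minimal sketch (assumption: azure-identity and azure-mgmt-resource are installed
# and the signed-in identity can register resource providers on both subscriptions).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()

# Placeholder subscription IDs for the vault and the Resource Guard.
for subscription_id in ["<vault-subscription-id>", "<resource-guard-subscription-id>"]:
    client = ResourceManagementClient(credential, subscription_id)
    for namespace in ["Microsoft.RecoveryServices", "Microsoft.DataProtection"]:
        client.providers.register(namespace)
        state = client.providers.get(namespace).registration_state
        print(f"{subscription_id} {namespace}: {state}")
```

Registration can take a few minutes; rerun the status check until it reports `Registered`.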
-Learn about various [MUA usage scenarios](multi-user-authorization-concept.md#usage-scenarios).
+Learn about various [MUA usage scenarios](./multi-user-authorization-concept.md?tabs=recovery-services-vault#usage-scenarios).
## Create a Resource Guard

The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** than the vault. However, it should be in the **same region** as the vault. The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it. For the following example, create the Resource Guard in a tenant different from the vault tenant.
-1. In the Azure portal, go to the directory under which you wish to create the Resource Guard.
+1. In the Azure portal, go to the directory under which you want to create the Resource Guard.
:::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings.":::
-1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down.
+1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down list.
:::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
- - Click **Create** to start creating a Resource Guard.
+ - Select **Create** to start creating a Resource Guard.
   - In the create blade, fill in the required details for this Resource Guard.
   - Make sure the Resource Guard is in the same Azure region as the Recovery Services vault.
- - Also, it is helpful to add a description of how to get or request access to perform actions on associated vaults when needed. This description would also appear in the associated vaults to guide the backup admin on getting the required permissions. You can edit the description later if needed, but having a well-defined description at all times is encouraged.
+ - Also, it's helpful to add a description of how to get or request access to perform actions on associated vaults when needed. This description would also appear in the associated vaults to guide the backup admin on getting the required permissions. You can edit the description later if needed, but having a well-defined description at all times is encouraged.
1. On the **Protected operations** tab, select the operations you need to protect using this resource guard.
- You can also [select the operations to be protected after creating the resource guard](#select-operations-to-protect-using-resource-guard).
+ You can also [select the operations for protection after creating the resource guard](?pivots=vaults-recovery-services-vault#select-operations-to-protect-using-resource-guard).
1. Optionally, add any tags to the Resource Guard as per the requirements.
-1. Click **Review + Create**.
-1. Follow notifications for status and successful creation of the Resource Guard.
+1. Select **Review + Create**.
+
+ Follow notifications for status and successful creation of the Resource Guard.
### Select operations to protect using Resource Guard

Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you can exempt certain operations from falling under the purview of MUA using Resource Guard. The Security admin can perform the following steps:

1. In the Resource Guard created above, go to **Properties**.
-2. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard. Note that the operations **Disable soft delete** and **Remove MUA protection** cannot be disabled.
-3. Optionally, you can also update the description for the Resource Guard using this blade.
-4. Click **Save**.
+2. Select **Disable** for operations that you want to exclude from being authorized using the Resource Guard.
+
+ >[!Note]
+ > You can't disable the protected operations - **Disable soft delete** and **Remove MUA protection**.
+1. Optionally, you can also update the description for the Resource Guard using this blade.
+1. Select **Save**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-properties.png" alt-text="Screenshot showing demo resource guard properties.":::
Choose the operations you want to protect using the Resource Guard out of all su
To enable MUA on a vault, the admin of the vault must have **Reader** role on the Resource Guard or subscription containing the Resource Guard. To assign the **Reader** role on the Resource Guard:
-1. In the Resource Guard created above, go to the Access Control (IAM) blade, and then go to **Add role assignment**.
+1. In the Resource Guard created above, go to the **Access Control (IAM)** blade, and then go to **Add role assignment**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control.":::
-1. Select **Reader** from the list of built-in roles and click **Next** on the bottom of the screen.
+1. Select **Reader** from the list of built-in roles and select **Next** on the bottom of the screen.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-inline.png" alt-text="Screenshot showing demo resource guard-add role assignment." lightbox="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-expanded.png":::
-1. Click **Select members** and add the Backup adminΓÇÖs email ID to add them as the **Reader**. Since the Backup admin is in another tenant in this case, they will be added as guests to the tenant containing the Resource Guard.
+1. Click **Select members** and add the Backup admin's email ID to add them as the **Reader**. As the Backup admin is in another tenant in this case, they'll be added as guests to the tenant containing the Resource Guard.
1. Click **Select** and then proceed to **Review + assign** to complete the role assignment.
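The Reader role assignment from the steps above can also be scripted. The following is a minimal, hedged sketch using the azure-mgmt-authorization Python package (assuming a recent version where `RoleAssignmentCreateParameters` exposes flat `role_definition_id` and `principal_id` fields); the scope, resource names, and the Backup admin's object ID are placeholders, while the Reader role GUID is the built-in role definition.

```python
# Minimal sketch (assumption: azure-identity and a recent azure-mgmt-authorization
# are installed, and the signed-in identity can create role assignments on the scope).
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<resource-guard-subscription-id>"  # placeholder
resource_guard_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.DataProtection/resourceGuards/<resource-guard-name>"
)
# acdd72a7-3385-48ef-bd42-f606fba81ae7 is the built-in Reader role definition ID.
reader_role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope=resource_guard_scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=reader_role_definition_id,
        principal_id="<backup-admin-object-id>",  # placeholder object ID
    ),
)
```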
Now that the Backup admin has the Reader role on the Resource Guard, they can ea
:::image type="content" source="./media/multi-user-authorization/test-vault-properties.png" alt-text="Screenshot showing the Recovery services vault properties.":::
-1. Now you are presented with the option to enable MUA and choose a Resource Guard using one of the following ways:
+1. Now, you're presented with the option to enable MUA and choose a Resource Guard using one of the following ways:
    1. You can either specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard that you have **Reader** access to and that it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** screen: :::image type="content" source="./media/multi-user-authorization/resource-guard-rg-inline.png" alt-text="Screenshot showing the Resource Guard." lightbox="./media/multi-user-authorization/resource-guard-rg-expanded.png":::
- 1. Or you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+ 1. Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
 1. Click **Select Resource Guard**.
 1. Click on the dropdown and select the directory the Resource Guard is in.
Now that the Backup admin has the Reader role on the Resource Guard, they can ea
:::image type="content" source="./media/multi-user-authorization/testvault1-multi-user-authorization-inline.png" alt-text="Screenshot showing multi-user authorization." lightbox="./media/multi-user-authorization/testvault1-multi-user-authorization-expanded.png" :::
-1. Click **Save** once done to enable MUA
+1. Select **Save** once done to enable MUA.
:::image type="content" source="./media/multi-user-authorization/testvault1-enable-mua.png" alt-text="Screenshot showing how to enable Multi-user authentication.":::
-## Protect against unauthorized (protected) operations
+## Protected operations using MUA
-Once you have enabled MUA, the operations in scope will be restricted on the vault, if the Backup admin tries to perform them without having the required role (i.e., Contributor role) on the Resource Guard.
+Once you have enabled MUA, the operations in scope will be restricted on the vault if the Backup admin tries to perform them without having the required role (that is, the Contributor role) on the Resource Guard.
>[!NOTE]
- >It is highly recommended that you test your setup after enabling MUA to ensure that protected operations are blocked as expected and to ensure that MUA is correctly configured.
+ >We highly recommend that you test your setup after enabling MUA to ensure that protected operations are blocked as expected and to ensure that MUA is correctly configured.
The following example shows what happens when the Backup admin tries to perform such a protected operation (disabling soft delete is shown here; other protected operations have a similar experience). These steps are performed by a Backup admin without the required permissions.
-1. To disable soft delete, go to the Recovery Services Vault > Properties > Security Settings and click **Update**, which brings up the Security Settings.
-1. Disable the soft delete using the slider. You are informed that this is a protected operation, and you need to verify their access to the Resource Guard.
+1. To disable soft delete, go to the Recovery Services vault > **Properties** > **Security Settings** and select **Update**, which brings up the Security Settings.
+1. Disable the soft delete using the slider. You're informed that this is a protected operation, and you need to verify your access to the Resource Guard.
1. Select the directory containing the Resource Guard and Authenticate yourself. This step may not be required if the Resource Guard is in the same directory as the vault.
-1. Proceed to click **Save**. The request fails with an error informing them about not having sufficient permissions on the Resource Guard to let you perform this operation.
+1. Proceed to select **Save**. The request fails with an error informing you that you don't have sufficient permissions on the Resource Guard to perform this operation.
   :::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the Test Vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::

## Authorize critical (protected) operations using Azure AD Privileged Identity Management
-The following sub-sections discuss authorizing these requests using PIM. There are cases where you may need to perform critical operations on your backups and MUA can help you ensure that these are performed only when the right approvals or permissions exist. As discussed earlier, the Backup admin needs to have a Contributor role on the Resource Guard to perform critical operations that are in the Resource Guard scope. One of the ways to allow just-in-time for such operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+The following sections discuss authorizing these requests using PIM. There are cases where you may need to perform critical operations on your backups, and MUA can help you ensure that these are performed only when the right approvals or permissions exist. As discussed earlier, the Backup admin needs to have a Contributor role on the Resource Guard to perform critical operations that are in the Resource Guard scope. One of the ways to allow just-in-time access for such operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
>[!NOTE]
-> Though using Azure AD PIM is the recommended approach, you can use manual or custom methods to manage access for the Backup admin on the Resource Guard. For managing access to the Resource Guard manually, use the ΓÇÿAccess control (IAM)ΓÇÖ setting on the left navigation bar of the Resource Guard and grant the **Contributor** role to the Backup admin.
+>Though using Azure AD PIM is the recommended approach, you can use manual or custom methods to manage access for the Backup admin on the Resource Guard. For managing access to the Resource Guard manually, use the *Access control (IAM)* setting on the left navigation bar of the Resource Guard and grant the **Contributor** role to the Backup admin.
### Create an eligible assignment for the Backup admin (if using Azure AD Privileged Identity Management)
The Security admin can use PIM to create an eligible assignment for the Backup a
1. In the security tenant (which contains the Resource Guard), go to **Privileged Identity Management** (search for this in the search bar in the Azure portal) and then go to **Azure Resources** (under **Manage** on the left menu). 1. Select the resource (the Resource Guard or the containing subscription/RG) to which you want to assign the **Contributor** role.
- 1. If you donΓÇÖt see the corresponding resource in the list of resources, ensure you add the containing subscription to be managed by PIM.
+ If you don't see the corresponding resource in the list of resources, ensure you add the containing subscription to be managed by PIM.
1. In the selected resource, go to **Assignments** (under **Manage** on the left menu) and go to **Add assignments**. :::image type="content" source="./media/multi-user-authorization/add-assignments.png" alt-text="Screenshot showing how to add assignments.":::
-1. In the Add assignments
+1. In the Add assignments:
1. Select the role as Contributor.
- 1. Go to Select members and add the username (or email IDs) of the Backup admin
- 1. Click Next
+ 1. Go to **Select members** and add the username (or email IDs) of the Backup admin.
+ 1. Select **Next**.
:::image type="content" source="./media/multi-user-authorization/add-assignments-membership.png" alt-text="Screenshot showing how to add assignments-membership.":::
-1. In the next screen
+1. In the next screen:
1. Under assignment type, choose **Eligible**. 1. Specify the duration for which the eligible permission is valid.
- 1. Click **Assign** to finish creating the eligible assignment.
+ 1. Select **Assign** to finish creating the eligible assignment.
:::image type="content" source="./media/multi-user-authorization/add-assignments-setting.png" alt-text="Screenshot showing how to add assignments-setting."::: ### Set up approvers for activating Contributor role By default, the setup above may not have an approver (and an approval flow requirement) configured in PIM. To ensure that approvers are required for allowing only authorized requests to go through, the security admin must perform the following steps.
-Note if this is not configured, any requests will be automatically approved without going through the security admins or a designated approverΓÇÖs review. More details on this can be found [here](../active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md)
+
+> [!Note]
+> If this isn't configured, any requests will be automatically approved without going through the security admins or a designated approver's review. More details on this can be found [here](../active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md).
1. In Azure AD PIM, select **Azure Resources** on the left navigation bar and select your Resource Guard.
Note if this is not configured, any requests will be automatically approved with
:::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add contributor.":::
-1. If the setting named **Approvers** shows None or displays incorrect approvers, click **Edit** to add the reviewers who would need to review and approve the activation request for the Contributor role.
+1. If the setting named **Approvers** shows *None* or displays incorrect approvers, select **Edit** to add the reviewers who would need to review and approve the activation request for the Contributor role.
-1. In the **Activation** tab, select **Require approval to activate** and add the approver(s) who need to approve each request. You can also select other security options like using MFA and mandating ticket options to activate the Contributor role. Optionally, select relevant settings in the **Assignment** and **Notification** tabs as per your requirements.
+1. On the **Activation** tab, select **Require approval to activate** and add the approver(s) who need to approve each request. You can also select other security options like using MFA and mandating ticket options to activate the Contributor role. Optionally, select relevant settings on the **Assignment** and **Notification** tabs as per your requirements.
:::image type="content" source="./media/multi-user-authorization/edit-role-settings.png" alt-text="Screenshot showing how to edit role setting.":::
-1. Click **Update** once done.
+1. Select **Update** once done.
### Request activation of an eligible assignment to perform critical operations After the security admin creates an eligible assignment, the Backup admin needs to activate the assignment for the Contributor role to be able to perform protected actions. The following actions are performed by the **Backup admin** to activate the role assignment. 1. Go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md). If the Resource Guard is in another directory, switch to that directory and then go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
-1. Go to My roles > Azure resources on the left menu.
-1. The Backup admin can see an Eligible assignment for the contributor role. Click **Activate** to activate it.
+1. Go to **My roles** > **Azure resources** on the left menu.
+1. The Backup admin can see an Eligible assignment for the contributor role. Select **Activate** to activate it.
1. The Backup admin is informed via portal notification that the request is sent for approval. :::image type="content" source="./media/multi-user-authorization/identity-management-myroles-inline.png" alt-text="Screenshot showing to activate eligible assignments." lightbox="./media/multi-user-authorization/identity-management-myroles-expanded.png":::
Once the Backup admin raises a request for activating the Contributor role, the
1. In the security tenant, go to [Azure AD Privileged Identity Management.](../active-directory/privileged-identity-management/pim-configure.md) 1. Go to **Approve Requests**. 1. Under **Azure resources**, the request raised by the Backup admin requesting activation as a **Contributor** can be seen.
-1. Review the request. If genuine, select the request and click **Approve** to approve it.
+1. Review the request. If genuine, select the request and select **Approve** to approve it.
1. The Backup admin is informed by email (or other organizational alerting mechanisms) that their request is now approved. 1. Once approved, the Backup admin can perform protected operations for the requested period.
The following screenshot shows an example of disabling soft delete for an MUA-en
Disabling MUA is a protected operation, and hence, is protected using MUA. This means that the Backup admin must have the required Contributor role in the Resource Guard. Details on obtaining this role are described here. Following is a summary of steps to disable MUA on a vault. 1. The Backup admin requests the Security admin for **Contributor** role on the Resource Guard. They can request this to use the methods approved by the organization such as JIT procedures, like [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md), or other internal tools and procedures. 1. The Security admin approves the request (if they find it worthy of being approved) and informs the Backup admin. Now the Backup admin has the ΓÇÿContributorΓÇÖ role on the Resource Guard.
-1. The Backup admin goes to the vault -> **Properties** -> **Multi-user Authorization**.
-1. Click **Update**
- 1. Uncheck the Protect with Resource Guard check box
+1. The Backup admin goes to the vault > **Properties** > **Multi-user Authorization**.
+1. Select **Update**.
+ 1. Clear the **Protect with Resource Guard** checkbox.
1. Choose the Directory that contains the Resource Guard and verify access using the Authenticate button (if applicable).
- 1. After **authentication**, click **Save**. With the right access, the request should be successfully completed.
+ 1. After **authentication**, select **Save**. With the right access, the request should be successfully completed.
:::image type="content" source="./media/multi-user-authorization/disable-mua.png" alt-text="Screenshot showing to disable multi-user authentication.":::++++
+This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Backup vault (preview).
+
+>[!Note]
+>Multi-user authorization using Resource Guard for Backup vault is in preview.
+
+This article demonstrates Resource Guard creation in a different tenant that offers maximum protection. It also demonstrates how to request and approve requests for performing critical operations using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
+
+This document includes the following sections:
+
+>[!div class="checklist"]
+>- Before you start
+>- Testing scenarios
+>- Create a Resource Guard
+>- Enable MUA on a Backup vault
+>- Protected operations on a vault using MUA
+>- Authorize critical operations on a vault
+>- Disable MUA on a Backup vault
+
+>[!NOTE]
+>Multi-user authorization for Azure Backup is available in all public Azure regions.
+
+## Before you start
+
+- Ensure the Resource Guard and the Backup vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure that your subscriptions containing the Backup vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the provider **Microsoft.DataProtection**. For more information, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+
+Learn about various [MUA usage scenarios](./multi-user-authorization-concept.md?tabs=backup-vault#usage-scenarios).
+
+## Create a Resource Guard
+
+The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** than the vault. However, it should be in the **same region** as the vault.
+
+The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
+
+To create the Resource Guard in a tenant different from the vault tenant as a Security admin, follow these steps:
+
+1. In the Azure portal, go to the directory under which you want to create the Resource Guard.
+
+ :::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings to configure for Backup vault.":::
+
+1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down list.
+
+ :::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards for Backup vault." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
+
+ 1. Select **Create** to create a Resource Guard.
+ 1. In the Create blade, fill in the required details for this Resource Guard.
+ - Ensure that the Resource Guard is in the same Azure region as the Backup vault.
+ - Add a description on how to request access to perform actions on associated vaults when needed. This description appears in the associated vaults to guide the Backup admin on how to get the required permissions.
+
+1. On the **Protected operations** tab, select the operations you need to protect using this resource guard under the **Backup vault** tab.
+
+ Currently, the **Protected operations** tab includes only the *Delete backup instance* option to disable.
+
+ You can also [select the operations for protection after creating the resource guard](?pivots=vaults-backup-vault#select-operations-to-protect-using-resource-guard).
+
+ :::image type="content" source="./media/multi-user-authorization/backup-vault-select-operations-for-protection.png" alt-text="Screenshot showing how to select operations for protecting using Resource Guard.":::
+
+1. Optionally, add any tags to the Resource Guard as per the requirements.
+1. Select **Review + Create** and then follow the notifications to monitor the status and a successful creation of the Resource Guard.
+
+### Select operations to protect using Resource Guard
+
+After vault creation, the Security admin can also choose the operations for protection using the Resource Guard among all supported critical operations. By default, all supported critical operations are enabled. However, the Security admin can exempt certain operations from falling under the purview of MUA using Resource Guard.
+
+To select the operations for protection, follow these steps:
+
+1. In the Resource Guard that you've created, go to **Properties** > **Backup vault** tab.
+1. Select **Disable** for the operations that you want to exclude from being authorized.
+
+ You can't disable the **Remove MUA protection** operation.
+
+1. Optionally, in the **Backup vaults** tab, update the description for the Resource Guard.
+1. Select **Save**.
+
+ :::image type="content" source="./media/multi-user-authorization/demo-resource-guard-properties-backup-vault-inline.png" alt-text="Screenshot showing demo resource guard properties for Backup vault." lightbox="./media/multi-user-authorization/demo-resource-guard-properties-backup-vault-expanded.png":::
+
+## Assign permissions to the Backup admin on the Resource Guard to enable MUA
+
+The Backup admin must have **Reader** role on the Resource Guard or subscription that contains the Resource Guard to enable MUA on a vault. The Security admin needs to assign this role to the Backup admin.
+
+To assign the **Reader** role on the Resource Guard, follow these steps:
+
+1. In the Resource Guard created above, go to the **Access Control (IAM)** blade, and then go to **Add role assignment**.
+
+ :::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control for Backup vault.":::
+
+1. Select **Reader** from the list of built-in roles and select **Next** on the bottom of the screen.
+
+ :::image type="content" source="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-inline.png" alt-text="Screenshot showing demo resource guard-add role assignment for Backup vault." lightbox="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-expanded.png":::
+
+1. Click **Select members** and add the Backup admin's email ID to assign the **Reader** role.
+
+ As the Backup admins are in another tenant, they'll be added as guests to the tenant that contains the Resource Guard.
+
+1. Click **Select** > **Review + assign** to complete the role assignment.
+
+ :::image type="content" source="./media/multi-user-authorization/demo-resource-guard-select-members-inline.png" alt-text="Screenshot showing demo resource guard-select members to protect the backup items in Backup vault." lightbox="./media/multi-user-authorization/demo-resource-guard-select-members-expanded.png":::
+
+## Enable MUA on a Backup vault
+
+Once the Backup admin has the Reader role on the Resource Guard, they can enable multi-user authorization on the vaults they manage by following these steps:
+
+1. Go to the Backup vault for which you want to configure MUA.
+1. On the left panel, select **Properties**.
+1. Go to **Multi-User Authorization** and select **Update**.
+
+ :::image type="content" source="./media/multi-user-authorization/test-backup-vault-properties.png" alt-text="Screenshot showing the Backup vault properties.":::
+
+1. To enable MUA and choose a Resource Guard, perform one of the following actions:
+
+ - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and that it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
+
+ :::image type="content" source="./media/multi-user-authorization/resource-guard-rg-inline.png" alt-text="Screenshot showing the Resource Guard for Backup vault protection." lightbox="./media/multi-user-authorization/resource-guard-rg-expanded.png":::
+
+ - Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+
+ 1. Click **Select Resource Guard**.
+ 1. Select the drop-down and select the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
+ 1. After authentication, choose the **Resource Guard** from the list displayed.
+
+ :::image type="content" source="./media/multi-user-authorization/test-backup-vault-1-multi-user-authorization.png" alt-text="Screenshot showing multi-user authorization enabled on Backup vault.":::
+
+1. Select **Save** to enable MUA.
+
+ :::image type="content" source="./media/multi-user-authorization/testvault1-enable-mua.png" alt-text="Screenshot showing how to enable Multi-user authentication.":::
+
+## Protected operations using MUA
+
+Once the Backup admin enables MUA, the operations in scope will be restricted on the vault, and the operations fail if the Backup admin tries to perform them without having the **Contributor** role on the Resource Guard.
+
+>[!NOTE]
+>We highly recommend that you test your setup after enabling MUA to ensure that:
+>- Protected operations are blocked as expected.
+>- MUA is correctly configured.
+
+To perform a protected operation (disabling MUA), follow these steps:
+
+1. Go to the vault > **Properties** in the left pane.
+1. Clear the checkbox to disable MUA.
+
+ You'll receive a notification that it's a protected operation, and you need to have access to the Resource Guard.
+
+1. Select the directory containing the Resource Guard and authenticate yourself.
+
+ This step may not be required if the Resource Guard is in the same directory as the vault.
+
+1. Select **Save**.
+
+ The request fails with an error that you don't have sufficient permissions on the Resource Guard to perform this operation.
+
+ :::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the test Backup vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::
+
+## Authorize critical (protected) operations using Azure AD Privileged Identity Management
+
+There are scenarios where you may need to perform critical operations on your backups, and with MUA you can perform them only when the right approvals or permissions exist. The following sections explain how to authorize critical operation requests using Privileged Identity Management (PIM).
+
+The Backup admin must have a Contributor role on the Resource Guard to perform critical operations in the Resource Guard scope. One of the ways to allow just-in-time (JIT) operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+
+>[!NOTE]
+>We recommend using Azure AD PIM. However, you can also use manual or custom methods to manage access for the Backup admin on the Resource Guard. To manually manage access to the Resource Guard, use the *Access control (IAM)* setting on the left pane of the Resource Guard and grant the **Contributor** role to the Backup admin.
+
+### Create an eligible assignment for the Backup admin using Azure AD Privileged Identity Management
+
+The **Security admin** can use PIM to create an eligible assignment for the Backup admin as a Contributor to the Resource Guard. This enables the Backup admin to raise a request (for the Contributor role) when they need to perform a protected operation.
+
+To create an eligible assignment, follow these steps:
++
+1. Sign into the [Azure portal](https://portal.azure.com).
+1. Go to the security tenant of the Resource Guard, and in the search box, enter **Privileged Identity Management**.
+1. In the left pane, under **Manage**, select **Azure resources**.
+1. Select the resource (the Resource Guard or the containing subscription/RG) to which you want to assign the Contributor role.
+
+ If you don't find the corresponding resource in the list, add the containing subscription to be managed by PIM.
+
+1. Select the resource and go to **Manage** > **Assignments** > **Add assignments**.
+
+ :::image type="content" source="./media/multi-user-authorization/add-assignments.png" alt-text="Screenshot showing how to add assignments to protect a Backup vault.":::
+
+1. In the Add assignments:
+ 1. Select the role as Contributor.
+ 1. Go to **Select members** and add the username (or email IDs) of the Backup admin.
+ 1. Select **Next**.
+
+ :::image type="content" source="./media/multi-user-authorization/add-assignments-membership.png" alt-text="Screenshot showing how to add assignments-membership to protect a Backup vault.":::
+
+1. Under the assignment type, select **Eligible** and specify the duration for which the eligible permission is valid.
+1. Select **Assign** to complete creating the eligible assignment.
+
+ :::image type="content" source="./media/multi-user-authorization/add-assignments-setting.png" alt-text="Screenshot showing how to add assignments-setting to protect a Backup vault.":::
+
+### Set up approvers for activating Contributor role
+
+By default, the above setup may not have an approver (and an approval flow requirement) configured in PIM. To ensure that only authorized requests go through an approver's review, the Security admin must set up approvers for the **Contributor** role by following these steps:
+
+>[!Note]
+>If the approver setup isn't configured, the requests are automatically approved without going through the Security admins or a designated approverΓÇÖs review. [Learn more](../active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md).
+
+1. In Azure AD PIM, select **Azure Resources** on the left pane and select your Resource Guard.
+
+1. Go to **Settings** > **Contributor** role.
+
+ :::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add a contributor.":::
+
+1. If the **Approvers** setting shows *None* or displays incorrect approvers, select **Edit** to add the reviewers who must review and approve the activation request for the *Contributor* role.
+
+1. On the **Activation** tab, select **Require approval to activate** to add the approver(s) who must approve each request.
+1. Optionally, select other security options, such as requiring multifactor authentication (MFA) or mandating a ticket, to activate the *Contributor* role.
+1. Select the appropriate options on the **Assignment** and **Notification** tabs as per your requirements.
+
+ :::image type="content" source="./media/multi-user-authorization/edit-role-settings.png" alt-text="Screenshot showing how to edit the role setting.":::
+
+1. Select **Update** to complete the setup of approvers for activating the *Contributor* role.
+
+### Request activation of an eligible assignment to perform critical operations
+
+After the Security admin creates an eligible assignment, the Backup admin needs to activate the role assignment for the Contributor role to perform protected actions.
+
+To activate the role assignment, follow these steps:
+
+1. Go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md). If the Resource Guard is in another directory, switch to that directory and then go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+1. Go to **My roles** > **Azure resources** in the left pane.
+1. Select **Activate** to activate the eligible assignment for the *Contributor* role.
+
+ A portal notification informs the Backup admin that the request is sent for approval.
+
+ :::image type="content" source="./media/multi-user-authorization/identity-management-myroles-inline.png" alt-text="Screenshot showing how to activate eligible assignments." lightbox="./media/multi-user-authorization/identity-management-myroles-expanded.png":::
+
+### Approve activation requests to perform critical operations
+
+Once the Backup admin raises a request for activating the Contributor role, the **Security admin** must review and approve the request.
+
+To review and approve the request, follow these steps:
+
+1. In the security tenant, go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+1. Go to **Approve Requests**.
+1. Under **Azure resources**, you can see the request awaiting approval.
+
+ Review the request, and if it's genuine, select **Approve**.
+
+After the approval, the Backup admin receives a notification, via email or other internal alerting options, that the request is approved. Now, the Backup admin can perform the protected operations for the requested period.
+
+## Perform a protected operation after approval
+
+Once the Security admin approves the Backup admin's request for the Contributor role on the Resource Guard, the Backup admin can perform protected operations on the associated vault. If the Resource Guard is in another directory, the Backup admin must authenticate themselves.
+
+>[!NOTE]
+>If the access was assigned using a JIT mechanism, the Contributor role is retracted at the end of the approved period. Otherwise, the Security admin manually removes the **Contributor** role that was assigned to the Backup admin for performing the critical operation.
+
+The following screenshot shows an example of [disabling soft delete](backup-azure-security-feature-cloud.md#disabling-soft-delete-using-azure-portal) for an MUA-enabled vault.
++
+## Disable MUA on a Backup vault
+
+Disabling MUA is a protected operation that must be done by the Backup admin only. To do this, the Backup admin must have the required *Contributor* role in the Resource Guard. To obtain this permission, the Backup admin must first request the Security admin for the Contributor role on the Resource Guard by using a just-in-time (JIT) procedure, such as [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md), or internal tools.
+
+The Security admin then approves the request if it's genuine and informs the Backup admin, who now has the Contributor role on the Resource Guard. Learn more about [how to get this role](?pivots=vaults-backup-vault#assign-permissions-to-the-backup-admin-on-the-resource-guard-to-enable-mua).
+
+To disable MUA, the Backup admin must follow these steps:
+
+1. Go to vault > **Properties** > **Multi-user Authorization**.
+1. Select **Update** and clear the **Protect with Resource Guard** checkbox.
+1. Select **Authenticate** (if applicable) to choose the Directory that contains the Resource Guard and verify access.
+1. Select **Save** to complete the process of disabling the MUA.
+
+ :::image type="content" source="./media/multi-user-authorization/disable-mua.png" alt-text="Screenshot showing how to disable multi-user authorization.":::
+++
+## Next steps
+
+[Learn more about Multi-user authorization using Resource Guard](multi-user-authorization-concept.md).
+
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 09/14/2022 Last updated : 10/14/2022
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary - October 2022
+ - [Multi-user authorization using Resource Guard for Backup vault (in preview)](#multi-user-authorization-using-resource-guard-for-backup-vault-in-preview)
+ - [Enhanced soft delete for Azure Backup (preview)](#enhanced-soft-delete-for-azure-backup-preview)
+ - [Immutable vault for Azure Backup (in preview)](#immutable-vault-for-azure-backup-in-preview)
- [SAP HANA instance snapshot backup support (preview)](#sap-hana-instance-snapshot-backup-support-preview) - [SAP HANA System Replication database backup support (preview)](#sap-hana-system-replication-database-backup-support-preview) - September 2022 - [Built-in Azure Monitor alerting for Azure Backup is now generally available](#built-in-azure-monitor-alerting-for-azure-backup-is-now-generally-available) - June 2022
- - [Multi-user authorization using Resource Guard is now generally available](#multi-user-authorization-using-resource-guard-is-now-generally-available)
+ - [Multi-user authorization using Resource Guard for Recovery Services vault is now generally available](#multi-user-authorization-using-resource-guard-for-recovery-services-vault-is-now-generally-available)
- May 2022 - [Archive tier support for Azure Virtual Machines is now generally available](#archive-tier-support-for-azure-virtual-machines-is-now-generally-available) - February 2022
You can learn more about the new releases by bookmarking this page or by [subscr
- [Back up Azure Database for PostgreSQL is now generally available](#back-up-azure-database-for-postgresql-is-now-generally-available) - October 2021 - [Archive Tier support for SQL Server/ SAP HANA in Azure VM from Azure portal](#archive-tier-support-for-sql-server-sap-hana-in-azure-vm-from-azure-portal)
- - [Multi-user authorization using Resource Guard (in preview)](#multi-user-authorization-using-resource-guard-in-preview)
+ - [Multi-user authorization using Resource Guard for Recovery Services vault (in preview)](#multi-user-authorization-using-resource-guard-for-recovery-services-vault-in-preview)
- [Multiple backups per day for Azure Files (in preview)](#multiple-backups-per-day-for-azure-files-in-preview) - [Azure Backup Metrics and Metrics Alerts (in preview)](#azure-backup-metrics-and-metrics-alerts-in-preview) - July 2021
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Multi-user authorization using Resource Guard for Backup vault (in preview)
+
+Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Backup vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
+
+For more information, see [MUA for Backup vault](multi-user-authorization-concept.md?tabs=backup-vault).
+
+## Enhanced soft delete for Azure Backup (preview)
+
+Enhanced soft delete provides improvements to the existing [soft delete](backup-azure-security-feature-cloud.md) feature. With enhanced soft delete, you can now make soft delete irreversible to prevent malicious actors from disabling it and deleting backups.
+
+You can also customize soft delete retention period (for which soft deleted data must be retained). Enhanced soft delete is available for Recovery Services vaults and Backup vaults.
+
+For more information, see [Enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
+
+## Immutable vault for Azure Backup (in preview)
+
+Azure Backup now supports immutable vaults that help you ensure that recovery points, once created, can't be deleted before their expiry (which is set, as per the backup policy, at the time the recovery point is created). You can also choose to make the immutability irreversible to offer maximum protection to your backup data, thus helping you protect your data better against various threats, including ransomware attacks and malicious actors.
+
+For more information, see the [concept of Immutable vault for Azure Backup (preview)](backup-azure-immutable-vault-concept.md).
+
## SAP HANA instance snapshot backup support (preview)

Azure Backup now supports SAP HANA instance snapshot backup that provides a cost-effective backup solution using Managed disk incremental snapshots. Because instant backup uses snapshots, the effect on the database is minimal.
If you're currently using the [classic alerts solution](backup-azure-monitoring-
For more information, see [Switch to Azure Monitor based alerts for Azure Backup](move-to-azure-monitor-alerts.md). -
-## Multi-user authorization using Resource Guard is now generally available
+## Multi-user authorization using Resource Guard for Recovery Services vault is now generally available
Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
Also, the support is extended via Azure CLI for the above workloads, along with
For more information, see [Archive Tier support in Azure Backup](archive-tier-support.md).
-## Multi-user authorization using Resource Guard (in preview)
+## Multi-user authorization using Resource Guard for Recovery Services vault (in preview)
Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
batch Batch Certificate Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-certificate-migration-guide.md
Previously updated : 10/07/2022 Last updated : 10/12/2022 # Migrate Batch account certificates to Azure Key Vault
Certificates are often required in various scenarios such as decrypting a secret
After the certificates feature in Azure Batch is retired on February 29, 2024, a certificate in Batch won't work as expected. After that date, you'll no longer be able to add certificates to a Batch account or link these certificates to Batch pools. Pools that continue to use this feature after this date may not behave as expected such as updating certificate references or the ability to install existing certificate references.
-## Alternative: Use Azure Key Vault VM Extension with Pool User-assigned Managed Identity
+## Alternative: Use Azure Key Vault VM extension with pool user-assigned managed identity
Azure Key Vault is a fully managed Azure service that provides controlled access to store and manage secrets, certificates, tokens, and keys. Key Vault provides security at the transport layer by ensuring that any data flow from the key vault to the client application is encrypted. Azure Key Vault gives you a secure way to store essential access information and to set fine-grained access control. You can manage all secrets from one dashboard. Choose to store a key in either software-protected or hardware-protected hardware security modules (HSMs). You also can set Key Vault to auto-renew certificates.
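As a rough illustration of this alternative, the following Python sketch (not from this article) reads certificate material from Key Vault at runtime by using a pool's user-assigned managed identity; the vault URL, identity client ID, and certificate name are placeholders.

```python
# Minimal sketch (assumption: azure-identity and azure-keyvault-secrets are available
# on the node, and the pool's user-assigned identity has "get" permission on secrets).
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")
client = SecretClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net",
    credential=credential,
)

# A certificate stored in Key Vault exposes its exportable key material as a secret
# with the same name (PFX or PEM), which a Batch task can load at runtime.
certificate_material = client.get_secret("<certificate-name>").value
print(f"Retrieved {len(certificate_material)} characters of certificate material.")
```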
For a complete guide on how to enable Azure Key Vault VM Extension with Pool Use
- Do `CloudServiceConfiguration` pools support Azure Key Vault VM extension and managed identity on pools?
- No. `CloudServiceConfiguration` pools will be [retired](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/) on the same date as Azure Batch account certificate retirement on February 29, 2024. We recommend that you migrate to `VirtualMachinceConfiguration` pools before that date where you'll be able to use these solutions.
+ No. `CloudServiceConfiguration` pools will be [retired](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/) on the same date as the Azure Batch account certificate retirement on February 29, 2024. We recommend that you migrate to `VirtualMachineConfiguration` pools before that date, where you'll be able to use these solutions.
- Do user subscription pool allocation Batch accounts support Azure Key Vault?
batch Batch Tls 101 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-tls-101-migration-guide.md
Previously updated : 08/16/2022 Last updated : 10/12/2022 # Migrate client code to TLS 1.2 in Batch
-To support security best practices and remain in compliance with industry standards, Azure Batch will retire Transport Layer Security (TLS) 1.0 and TLS 1.1 in Azure Batch on *March 31, 2023*. Learn how to migrate to TLS 1.2 in the client code you manage by using Batch.
+To support security best practices and remain in compliance with industry standards, Azure Batch will retire Transport Layer Security (TLS) 1.0 and TLS 1.1 in Azure Batch on *March 31, 2023*. Learn how to migrate to TLS 1.2 in your Batch service client code.
## End of support for TLS 1.0 and TLS 1.1 in Batch TLS versions 1.0 and TLS 1.1 are known to be susceptible to BEAST and POODLE attacks and to have other Common Vulnerabilities and Exposures (CVE) weaknesses. TLS 1.0 and TLS 1.1 don't support the modern encryption methods and cipher suites that the Payment Card Industry (PCI) compliance standards recommends. Microsoft is participating in an industry-wide push toward the exclusive use of TLS version 1.2 or later.
-Most customers have already migrated to TLS 1.2. Customers who continue to use TLS 1.0 or TLS 1.1 can be identified via existing BatchOperation data. If you're using TLS 1.0 or TLS 1.1, to avoid disruption to your Batch workflows, update existing workflows to use TLS 1.2.
+If you've already migrated to use TLS 1.2 in your Batch client applications, then this retirement doesn't apply to you. Only API requests that go directly to the Batch service via the data plane API (not management plane) are impacted. API requests at the management plane layer are routed through ARM and are subject to ARM TLS minimum version requirements. We recommend that you migrate to TLS 1.2 across Batch data plane or management plane API calls for security best practices, if possible.
## Alternative: Use TLS 1.2
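For Python client code, the following is a minimal sketch (not from this article) that refuses TLS 1.0/1.1 for outbound HTTPS calls by using only the standard library; the endpoint URL is a placeholder.

```python
# Minimal sketch (Python 3.7+): enforce TLS 1.2 or later for an HTTPS request.
import ssl
import urllib.request

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and TLS 1.1

# Placeholder endpoint; substitute the HTTPS endpoint your client calls.
with urllib.request.urlopen("https://learn.microsoft.com", context=context) as response:
    print(response.status)
```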
For more information, see [TLS best practices for the .NET Framework](/dotnet/fr
- Why do I need to upgrade to TLS 1.2?
- TLS 1.0 and TLS 1.1 have security issues that are fixed in TLS 1.2. TLS 1.2 has been available since 2008. TLS 1.2 is the current default version in most development frameworks.
+ TLS 1.0 and TLS 1.1 are considered insecure and have security issues that are addressed in TLS 1.2. TLS 1.2 has been available since 2008. TLS 1.2 is widely adopted as the minimum version for securing communication channels using TLS.
-- What happens if I donΓÇÖt upgrade?
+- What happens if I don't upgrade?
- After the feature retirement from Azure Batch, your client application won't work until you upgrade the code to use TLS 1.2.
+ After the feature retirement from Azure Batch, your client application won't be able to communicate with Batch data plane API services unless you upgrade to TLS 1.2.
-- Will upgrading to TLS 1.2 affect the performance of my application?
+- Does upgrading to TLS 1.2 affect the performance of my application?
- Upgrading to TLS 1.2 won't affect your application's performance.
+ Upgrading to TLS 1.2 generally shouldn't affect your application's performance.
- How do I know if I’m using TLS 1.0 or TLS 1.1?
- To determine the TLS version you're using, check the audit log for your Batch deployment.
+ To determine the TLS version you're using, check your client application logs and the audit log for your Batch deployment.
## Next steps
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Specialized or made up words might have unique pronunciations. These words can b
You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md?tabs=stt-tts). > [!NOTE]
-> You can either use a pronunciation data file on its own, or you can add pronunciation within a structured text data file. The Speech service doesn't support training a model where you select both of those datasets as input.
+> You can use a pronunciation file alongside any other training dataset except structured text training data. To use pronunciation data with structured text, it must be within a structured text file.
The spoken form is the phonetic sequence spelled out. It can be composed of letters, words, syllables, or a combination of all three. This table includes some examples:
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
## What's new?
-* Speech SDK 1.23.0 and Speech CLI 1.23.0 were released in July 2022. See details below.
+* Speech SDK 1.24.0 and Speech CLI 1.24.0 were released in October 2022. See details below.
* Custom speech-to-text container v3.1.0 released in March 2022, with support to get display models.
* TTS Service August 2022, five new voices in public preview were released.
* TTS Service September 2022, all the prebuilt neural voices have been upgraded to high-fidelity voices with 48kHz sample rate.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported style.
### Style degree
-The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with the `styledegree` attribute to make the speech more expressive or subdued.
+The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with the `styledegree` attribute to make the speech more expressive or subdued.
For a list of neural voices that support speaking style degree, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
This SSML snippet illustrates how the `styledegree` attribute is used to change
### Role
-Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed.
+Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed.
For a list of supported roles per neural voice, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
To define how multiple entities are read, you can create a custom lexicon, which
The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text that describes the [orthography](https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography). The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text that describes how the `lexeme` is pronounced. When the `alias` and `phoneme` elements are provided with the same `grapheme` element, `alias` has higher priority. > [!IMPORTANT]
-> The `lexeme` element is case sensitive in the custom lexicon. For example, if you only provide a phoneme for the `lexeme` "Hello," it won't work for the `lexeme` "hello."
+> The `lexeme` element is case sensitive in the custom lexicon. For example, if you only provide a phoneme for the `lexeme` "Hello," it won't work for the `lexeme` "hello."
Lexicon contains the necessary `xml:lang` attribute to indicate which locale it should be applied for. One custom lexicon is limited to one locale by design, so if you apply it for a different locale, it won't work.
The `say-as` element is optional. It indicates the content type, such as number
| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional |

The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if `format` column isn't empty in the table below.

| interpret-as | format | Interpretation |
| -- | -- | -- |
| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." |
Only one background audio file is allowed per SSML document. You can intersperse
> [!NOTE] > The `mstts:backgroundaudio` element should be put in front of all `voice` elements, i.e., the first child of the `speak` element.
->
+>
> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](long-audio-api.md). **Syntax**
A viseme is the visual description of a phoneme in spoken language. It defines t
| `type` | Specifies the type of viseme output.<ul><li>`redlips_front` – lip-sync with viseme ID and audio offset output </li><li>`FacialExpression` – blend shapes output</li></ul> | Required |

> [!NOTE]
-> Currently, `redlips_front` only supports neural voices in `en-US` locale, and `FacialExpression` supports neural voices in `en-US` and `zh-CN` locales.
+> Currently, `redlips_front` only supports neural voices in `en-US` locale, and `FacialExpression` supports neural voices in `en-US` and `zh-CN` locales.
**Example**
cognitive-services What Is Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/what-is-dictionary.md
Previously updated : 12/06/2021 Last updated : 10/11/2022
You can train a model using only dictionary data. To do so, select only the dict
## Recommendations

-- Dictionaries aren't a substitute for training a model using training data. We recommended letting the system learn from your training data for better results. However, when sentences or compound nouns must be rendered as-is, use a dictionary.
-- The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context within that sentence is lost or limited for translating the rest of the sentence. The result is that while the phrase or word within the sentence will translate according to the provided dictionary, the overall translation quality of the sentence will often suffer.
-- The phrase dictionary works well for compound nouns like product names ("Microsoft SQL Server"), proper names ("City of Hamburg"), or features of the product ("pivot table"). It doesn't work equally well for verbs or adjectives because those words are typically highly inflected in the source or in the target language. Best practice is to avoid phrase dictionary entries for anything but compound nouns.
-- If you're using a phrase dictionary, capitalization and punctuation are important. Dictionary entries will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation as specified in the source dictionary file. Also the translations will reflect the capitalization and punctuation provided in the target dictionary file. For example, if you trained an English to Spanish system that uses a phrase dictionary that specifies "US" in the source file, and "EE.UU." in the target file. When you request translation of a sentence that includes the word "us" (not capitalized), it will NOT return a match from the dictionary. However, if you request translation of a sentence that contains the word "US" (capitalized), it will match the dictionary and the translation will contain "EE.UU." The capitalization and punctuation in the translation may be different than specified in the dictionary target file, and may be different from the capitalization and punctuation in the source. It follows the rules of the target language.
-- If you're using a sentence dictionary, the end of sentence punctuation is ignored. For example, if your source dictionary contains "this sentence ends with punctuation!", then any translation requests containing "this sentence ends with punctuation" would match.
-- If a word appears more than once in a dictionary file, the system will always use the last entry provided. Thus, your dictionary shouldn't contain multiple translations of the same word.
+- Dictionaries aren't a substitute for training a model using training data. For better results, we recommended letting the system learn from your training data. However, when sentences or compound nouns must be translated verbatim, use a dictionary.
+
+- The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context of that sentence is lost or limited for translating the rest of the sentence. The result is that, while the phrase or word within the sentence will translate according to the provided dictionary, the overall translation quality of the sentence often suffers.
+
+- The phrase dictionary works well for compound nouns like product names ("_Microsoft SQL Server_"), proper names ("_City of Hamburg_"), or product features ("_pivot table_"). It doesn't work as well for verbs or adjectives because those words are typically highly contextual within the source or target language. The best practice is to avoid phrase dictionary entries for anything but compound nouns.
+
+- If you're using a phrase dictionary, capitalization and punctuation are important. Dictionary entries are case- and punctuation-sensitive. Custom Translator will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation marks as specified in the source dictionary file. Also, translations will reflect the capitalization and punctuation provided in the target dictionary file.
+
+ **Example**
+
+ - Suppose you're training an English-to-Spanish system that uses a phrase dictionary, and you specify "_SQL server_" in the source file and "_Microsoft SQL Server_" in the target file. When you request the translation of a sentence that contains the phrase "_SQL server_", Custom Translator will match the dictionary entry, and the translation will contain "_Microsoft SQL Server_."
+ - When you request translation of a sentence that includes the same phrase but **doesn't** match what is in your source file, such as "_sql server_", "_sql Server_" or "_SQL Server_", it **won't** return a match from your dictionary.
+ - The translation follows the rules of the target language as specified in your phrase dictionary.
+
+- If you're using a sentence dictionary, end-of-sentence punctuation is ignored.
+
+ **Example**
+
+ - If your source dictionary contains "_This sentence ends with punctuation!_", then any translation requests containing "_This sentence ends with punctuation_" will match.
+
+- Your dictionary should contain unique source lines. If a source line (a word, phrase, or sentence) appears more than once in a dictionary file, the system will always use the **last entry** provided and return the target when a match is found.
+
+- Avoid adding phrases that consist of only numbers or are two- or three-letter words, such as acronyms, in the source dictionary file.
## Next steps
cognitive-services Multi Region Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/custom-features/multi-region-deployment.md
+
+ Title: Deploy custom language projects to multiple regions in Azure Cognitive Service for Language
+
+description: Learn about deploying your language projects to multiple regions.
++++++ Last updated : 10/11/2022++++
+# Deploy custom language projects to multiple regions
+
+> [!NOTE]
+> This article applies to the following custom features in Azure Cognitive Service for Language:
+> * [Conversational language understanding](../../conversational-language-understanding/overview.md)
+> * [Custom text classification](../../custom-text-classification/overview.md)
+> * [Custom NER](../../custom-named-entity-recognition/overview.md)
+> * [Orchestration workflow](../../orchestration-workflow/overview.md)
+
+Custom Language service features enable you to deploy your project to more than one region, making it much easier to access your project globally while managing only one instance of your project in one place.
+
+Before you deploy a project, you can assign **deployment resources** in other regions. Each deployment resource is a different Language resource from the one you use to author your project. You deploy to those resources and can then target your prediction requests to the resource in each respective region, so your queries are served directly from that region.
+
+When creating a deployment, you can select which of your assigned deployment resources and their corresponding regions you would like to deploy to. The model you deploy is then replicated to each region and accessible with its own endpoint dependent on the deployment resource's custom subdomain.
+
+## Example
+
+Suppose you want to make sure your project, which is used as part of a customer support chatbot, is accessible by customers across the US and India. You would author a project with the name **ContosoSupport** using a _West US 2_ Language resource named **MyWestUS2**. Before deployment, you would assign two deployment resources to your project - **MyEastUS** and **MyCentralIndia** in _East US_ and _Central India_, respectively.
+
+When deploying your project, you would select all three regions for deployment: the original _West US 2_ region and the assigned _East US_ and _Central India_ regions.
+
+You would now have three different endpoint URLs to access your project in all three regions:
+* West US 2: `https://mywestus2.cognitiveservices.azure.com/language/:analyze-conversations`
+* East US: `https://myeastus.cognitiveservices.azure.com/language/:analyze-conversations`
+* Central India: `https://mycentralindia.cognitiveservices.azure.com/language/:analyze-conversations`
+
+Sending the same request body to each of those URLs returns the exact same response, served directly from the respective region.
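As a rough illustration (not part of the original article), the sketch below sends one request body to each regional endpoint; the API version, request shape, deployment name, and keys are assumptions or placeholders, so check them against the conversations runtime API reference before use.

```python
import requests

# Placeholder keys for each regional Language resource.
endpoints = {
    "https://mywestus2.cognitiveservices.azure.com": "<west-us-2-key>",
    "https://myeastus.cognitiveservices.azure.com": "<east-us-key>",
    "https://mycentralindia.cognitiveservices.azure.com": "<central-india-key>",
}

# Assumed request shape for a conversation analysis call.
body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {"id": "1", "participantId": "1", "text": "Cancel my subscription"}
    },
    "parameters": {"projectName": "ContosoSupport", "deploymentName": "production"},
}

for endpoint, key in endpoints.items():
    response = requests.post(
        f"{endpoint}/language/:analyze-conversations",
        params={"api-version": "2022-10-01-preview"},  # assumed API version
        headers={"Ocp-Apim-Subscription-Key": key},
        json=body,
    )
    # Each region serves the same prediction for the same request body.
    print(endpoint, response.status_code)
```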
+
+## Validations and requirements
+
+Assigning deployment resources requires Microsoft Azure Active Directory (Azure AD) authentication. Azure AD is used to confirm you have access to the resources you are interested in assigning to your project for multi-region deployment. In the Language Studio, you can automatically [enable Azure AD authentication](https://aka.ms/rbac-language) by assigning yourself the _Cognitive Services Language Owner_ role on your original resource. To programmatically use Azure AD authentication, learn more from the [Cognitive Services documentation](/azure/cognitive-services/authentication?tabs=powershell&tryIt=true&source=docs#authenticate-with-azure-active-directory).
+
+Your project name and resource are used as its main identifiers. Therefore, a Language resource can be associated with only one project of a given name; any other project with the same name can't be deployed to that resource.
+
+For example, if a project **ContosoSupport** was created by resource **MyWestUS2** in _West US 2_ and deployed to resource **MyEastUS** in _East US_, the resource **MyEastUS** cannot create a different project called **ContosoSupport** and deploy a project to that region. Similarly, your collaborators cannot then create a project **ContosoSupport** with resource **MyCentralIndia** in _Central India_ and deploy it to either **MyWestUS2** or **MyEastUS**.
+
+You can only swap deployments that are deployed to the exact same regions; otherwise, swapping will fail.
+
+If you remove an assigned resource from your project, all of the project deployments to that resource will then be deleted.
+
+> [!NOTE]
+> Orchestration workflow only:
+>
+> You **cannot** assign deployment resources to orchestration workflow projects with custom question answering or LUIS connections. You subsequently cannot add custom question answering or LUIS connections to projects that have assigned resources.
+>
+> For multi-region deployment to work as expected, the connected CLU projects **must also be deployed** to the same regional resources you've deployed the orchestration workflow project to. Otherwise, the orchestration workflow project will attempt to route a request to a deployment that doesn't exist in its region.
+
+Some regions are only available for deployment and not for authoring projects.
+
+## Next steps
+
+Learn how to deploy models for:
+* [Conversational language understanding](../../conversational-language-understanding/how-to/deploy-model.md)
+* [Custom text classification](../../custom-text-classification/how-to/deploy-model.md)
+* [Custom NER](../../custom-named-entity-recognition/how-to/deploy-model.md)
+* [Orchestration workflow](../../orchestration-workflow/how-to/deploy-model.md)
cognitive-services Project Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/custom-features/project-versioning.md
+
+ Title: Conversational Language Understanding Project Versioning
+
+description: Learn how versioning works in conversational language understanding
++++++ Last updated : 10/10/2022+++++
+# Project versioning
+
+> [!NOTE]
+> This article applies to the following custom features in Azure Cognitive Service for Language:
+> * Conversational language understanding
+> * Custom text classification
+> * Custom NER
+> * Orchestration workflow
+
+Building your project typically happens in increments. You may add, remove, or edit intents, entities, labels and data at each stage. Every time you train, a snapshot of your current project state is taken to produce a model. That model saves the snapshot to be loaded back at any time. Every model acts as its own version of the project.
+
+For example, if your project has 10 intents and/or entities, with 50 training documents or utterances, it can be trained to create a model named **v1**. Afterwards, you might make changes to the project to alter the amount of training data. The project can be trained again to create a new model named **v2**. If you don't like the changes you've made in **v2** and would like to continue from where you left off in model **v1**, then you would just need to load the model data from **v1** back into the project. Loading a model's data is possible through both the Language Studio and API. Once complete, the project will have the original amount and types of training data.
+
+If the project data is not saved in a trained model, it can be lost. For example, if you loaded model **v1**, your project now has the data that was used to train it. If you then made changes, didn't train, and loaded model **v2**, you would lose those changes as they weren't saved to any specific snapshot.
+
+If you overwrite a model with a new snapshot of data, you won't be able to revert back to any previous state of that model.
+
+You always have the option to locally export the data for every model.
+
+## Data location
+
+The data for your model versions will be saved in different locations, depending on the custom feature you're using.
+
+# [Custom NER](#tab/custom-ner)
+
+In custom named entity recognition, the data being saved to the snapshot is the labels file.
+
+# [Custom text classification](#tab/custom-text-classification)
+
+In custom text classification, the data being saved to the snapshot is the labels file.
+
+# [Orchestration workflow](#tab/orchestration-workflow)
+
+In orchestration workflow, you do not version or store the assets of the connected intents as part of the orchestration snapshot - those are managed separately. The only snapshot being taken is of the connection itself and the intents and utterances that do not have connections, including all the test data.
+
+# [Conversational language understanding](#tab/clu)
+
+In conversational language understanding, the data being saved to the snapshot are the intents and utterances included in the project.
++++
+## Next steps
+Learn how to load or export model data for:
+* [Conversational language understanding](../../conversational-language-understanding/how-to/view-model-evaluation.md#export-model-data)
+* [Custom text classification](../../custom-text-classification/how-to/view-model-evaluation.md)
+* [Custom NER](../../custom-named-entity-recognition/how-to/view-model-evaluation.md)
+* [Orchestration workflow](../../orchestration-workflow/how-to/view-model-evaluation.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/language-support.md
Previously updated : 10/06/2022 Last updated : 10/13/2022
See the following service-level language support articles for information on mod
* [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support)
* [Text Analytics for health](../text-analytics-for-health/language-support.md)
* [Summarization](../summarization/language-support.md?tabs=document-summarization)
-* [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization)|
+* [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization)
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
# Model lifecycle
-Language service features utilize AI models that are versioned. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they are retired. Use this article for information on that process, and what you can expect for your applications.
+Language service features utilize AI models that are versioned. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they're retired. Use this article for information on that process, and what you can expect for your applications.
## Prebuilt features
Language service features utilize AI models that are versioned. We update the la
Our standard (not customized) language service features are built upon AI models that we call pre-trained models. We update the language service with new model versions every few months to improve model accuracy, support, and quality.
-As new models and functionalities become available, older less accurate models are deprecated. To ensure you are using the latest model version and avoid interruptions to your applications, we highly recommend using the default model-version parameter (`latest`) in your API calls. After their deprecation date, pre-built model versions will no longer be functional and your implementation may be broken.
+As new models and functionalities become available, older less accurate models are deprecated. To ensure you're using the latest model version and avoid interruptions to your applications, we highly recommend using the default model-version parameter (`latest`) in your API calls. After their deprecation date, pre-built model versions will no longer be functional, and your implementation may be broken.
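For example, a hedged Python sketch of a synchronous analyze-text request that relies on the default model version might look like the following; the endpoint, key, and body shape are placeholders or assumptions, so verify them against the current API reference.

```python
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

body = {
    "kind": "EntityRecognition",
    "parameters": {"modelVersion": "latest"},  # use the most recent model version
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "I had a wonderful trip to Seattle last week."}
        ]
    },
}

response = requests.post(
    f"{endpoint}/language/:analyze-text",
    params={"api-version": "2022-05-01"},  # assumed API version
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
print(response.json())
```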
Stable (not preview) model versions are deprecated six months after the release of another stable model version. Features in preview don't maintain a minimum retirement period and may be deprecated at any time.
Use the table below to find which model versions are supported by each feature:
| Sentiment Analysis and opinion mining | `2021-10-01`, `2022-06-01*` | `2019-10-01`, `2020-04-01` |
| Language Detection | `2021-11-20*` | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` |
| Entity Linking | `2021-06-01*` | `2019-10-01`, `2020-02-01` |
-| Named Entity Recognition (NER) | `2021-06-01*` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2021-01-15` |
+| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2021-01-15` |
| Personally Identifiable Information (PII) detection | `2020-07-01`, `2021-01-15*` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2020-07-01` |
| PII detection for conversations (Preview) | `2022-05-15-preview**` | |
| Question answering | `2021-10-01*` | |
Use the table below to find which model versions are supported by each feature:
As new training configs and new functionality become available; older and less accurate configs are retired, see the following timelines for configs expiration:
-New configs are being released every few months. So, training configs expiration of any publicly available config is **six months** after its release. If you have assigned a trained model to a deployment, this deployment expires after **twelve months** from the training config expiration.
+New configs are being released every few months. So, training configs expiration of any publicly available config is **six months** after its release. If you have assigned a trained model to a deployment, this deployment expires after **twelve months** from the training config expiration. If your models are about to expire, you can retrain and redeploy your models with the latest training configuration version.
-After training config version expires, API calls will return an error when called or used if called with an expired config version. By default, training requests will use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want.
+After a training config version expires, API calls that use the expired config version will return an error. By default, training requests will use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want.
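As a sketch under assumptions (the authoring route and body fields shown here follow the pattern used by custom text classification and may differ for other features), a training job that pins an explicit config version could be submitted like this:

```python
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder
project = "<your-project-name>"  # placeholder

# Pin the training config version explicitly instead of relying on the default.
body = {
    "modelLabel": "myModel",
    "trainingConfigVersion": "2022-05-01",
}

response = requests.post(
    f"{endpoint}/language/authoring/analyze-text/projects/{project}/:train",
    params={"api-version": "2022-05-01"},  # assumed API version
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
# Training is asynchronous; the job status URL is typically returned in this header.
print(response.headers.get("operation-location"))
```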
> [!Tip] > It's recommended to use the latest supported config version
Use the table below to find which model versions are supported by each feature:
| Feature | Supported Training config versions | Training config expiration | Deployment expiration |
|--|--|--|--|
-| Custom text classification | `2022-05-01` | `10/28/2022` | `10/28/2023` |
+| Custom text classification | `2022-05-01` | `04/10/2023` | `04/28/2024` |
| Conversational language understanding | `2022-05-01` | `10/28/2022` | `10/28/2023` |
-| Custom named entity recognition | `2022-05-01` | `10/28/2022` | `10/28/2023` |
+| Conversational language understanding | `2022-09-01` | `04/10/2023` | `04/28/2024` |
+| Custom named entity recognition | `2022-05-01` | `04/10/2023` | `04/28/2024` |
| Orchestration workflow | `2022-05-01` | `10/28/2022` | `10/28/2023` |
Use the table below to find which model versions are supported by each feature:
When you're making API calls to the following features, you need to specify the `API-VERSION` you want to use to complete your request. It's recommended to use the latest available API versions.
-If you are using the [Language Studio](https://aka.ms/languageStudio) for building your project you will be using the latest API version available. If you need to use another API version this is only available directly through APIs.
+If you're using the [Language Studio](https://aka.ms/languageStudio) to build your project, you'll be using the latest available API version. If you need to use another API version, it's only available directly through the APIs.
Use the table below to find which API versions are supported by each feature:

| Feature | Supported versions | Latest Generally Available version | Latest preview version |
|--|--|--|--|
-| Custom text classification | `2022-05-01` | `2022-05-01` | |
-| Conversational language understanding | `2022-05-01` | `2022-05-01` | |
-| Custom named entity recognition | `2022-05-01` | `2022-05-01` | |
-| Orchestration workflow | `2022-05-01` | `2022-05-01` | |
-
+| Custom text classification | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
+| Conversational language understanding | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
+| Custom named entity recognition | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
+| Orchestration workflow | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
## Next steps
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/best-practices.md
+
+ Title: Conversational language understanding best practices
+
+description: Apply best practices when using conversational language understanding
++++++ Last updated : 10/11/2022++++
+# Best practices for conversational language understanding
+
+Use the following guidelines to create the best possible projects in conversational language understanding.
+
+## Choose a consistent schema
+
+Schema is the definition of your intents and entities. There are different approaches you could take when defining what you should create as an intent versus an entity. There are some questions you need to ask yourself:
+
+- What actions or queries am I trying to capture from my user?
+- What pieces of information are relevant in each action?
+
+You can typically think of actions and queries as _intents_, and the information required to fulfill those queries as _entities_.
+
+For example, assume you want your customers to cancel subscriptions for various products that you offer through your chatbot. You can create a _Cancel_ intent with various examples like _"Cancel the Contoso service"_, or _"stop charging me for the Fabrikam subscription"_. The user's intent here is to _cancel_; the _Contoso service_ or _Fabrikam subscription_ are the subscriptions they would like to cancel. Therefore, you can create an entity for _subscriptions_. You can then model your entire project to capture actions as intents and use entities to fill in those actions. This allows you to cancel anything you define as an entity, such as other products. You can then have intents for signing up, renewing, upgrading, etc. that all make use of the _subscriptions_ and other entities.
+
+The above schema design makes it easy for you to extend existing capabilities (canceling, upgrading, signing up) to new targets by creating a new entity.
+
+Another approach is to model the _information_ as intents and _actions_ as entities. Let's take the same example, allowing your customers to cancel subscriptions through your chatbot. You can create an intent for each subscription available, such as _Contoso_ with utterances like _"cancel Contoso"_, _"stop charging me for contoso services"_, _"Cancel the Contoso subscription"_. You would then create an entity to capture the action, _cancel_. You can define different entities for each action or consolidate actions as one entity with a list component to differentiate between actions with different keys.
+
+This schema design makes it easy for you to extend new actions to existing targets by adding new action entities or entity components.
+
+Avoid funneling all the concepts into intents alone. For example, don't try to create a _Cancel Contoso_ intent that only has the purpose of that one specific action. Intents and entities should work together to capture all the required information from the customer.
+
+You also want to avoid mixing different schema designs. Don't build half of your application with actions as intents and the other half with information as intents. Ensure your schema is consistent to get the best possible results.
++++++++
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/entity-components.md
Previously updated : 05/13/2022 Last updated : 10/11/2022
In Conversational Language Understanding, entities are relevant pieces of inform
## Component types
-An entity component determines a way you can extract the entity. An entity can simply contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
### Learned component
The prebuilt component allows you to select from a library of common types such
:::image type="content" source="../media/prebuilt-component.png" alt-text="A screenshot showing an example of prebuilt components for entities." lightbox="../media/prebuilt-component.png":::
+### Regex component
+
+The regex component matches regular expressions to capture consistent patterns. When added, any text that matches the regular expression will be extracted. You can have multiple regular expressions within the same entity, each with a different key identifier. A matched expression will return the key as part of the prediction response.
+
+In multilingual projects, you can specify a different expression for each language. While using the prediction API, you can specify the language in the input request, which will only match the regular expression associated to that language.
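Conceptually, a regex component behaves like a keyed set of patterns, as in the illustrative sketch below (this is not the service's implementation, and the entity and keys are made up): each expression has its own key, and any match is returned along with the key that produced it.

```python
import re

# Illustrative only: a "flight number" entity with two keyed expressions.
regex_component = {
    "iata_flight": r"\b[A-Z]{2}\d{3,4}\b",
    "icao_flight": r"\b[A-Z]{3}\d{3,4}\b",
}

def extract(text):
    """Return every regex match along with the key of the expression that produced it."""
    matches = []
    for key, pattern in regex_component.items():
        for match in re.finditer(pattern, text):
            matches.append({"text": match.group(), "key": key, "offset": match.start()})
    return matches

print(extract("Book me on AA123 or on flight BAW4567."))
```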
++ ## Entity options
When you do not combine components, the entity will return twice:
:::image type="content" source="../media/separated-overlap-example-1-part-2.svg" alt-text="A screenshot showing the entity returned twice." lightbox="../media/separated-overlap-example-1-part-2.svg":::
+### Required components
+
+An entity can sometimes be defined by multiple components but requires one or more of them to be present. Every component can be set as **required**, which means the entity will **not** be returned if that component wasn't present. For example, if you have an entity with a list component and a required learned component, it is guaranteed that any returned entity includes a learned component; if it doesn't, the entity will not be returned.
+
+Required components are most frequently used with learned components, as they can restrict the other component types to a specific context, which is commonly associated with **roles**. You can also require all components to make sure that every component is present for an entity.
+
+In the Language Studio, every component in an entity has a toggle next to it that allows you to set it as required.
+
+#### Example
+
+Suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for flights, for utterances such as _"Book **two** tickets tomorrow to Cairo"_.
+
+Typically, you would add a prebuilt component for _Quantity.Number_ that already extracts all numbers. However, if your entity was only defined with the prebuilt component, it would also extract other numbers as part of the **Ticket Quantity** entity, such as _"Book **two** tickets tomorrow to Cairo at **3** PM"_.
+
+To resolve this, you would label a learned component in your training data for all the numbers that are meant to be **Ticket Quantity**. The entity now has two components: the prebuilt component that recognizes all numbers, and the learned component that predicts where the Ticket Quantity appears in a sentence. If you require the learned component, you make sure that Ticket Quantity only returns when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned Ticket Quantity entity is both a number and in the correct position.
-> [!NOTE]
-> During public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
## How to use components and options
A common practice is to extend a prebuilt component with a list of values that t
Other times you may be interested in extracting an entity through context such as a **Product** in a retail project. You would label the learned component of the product to learn _where_ a product appears based on its position within the sentence. You may also have a list of products that you already know beforehand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
-When you do not combine components, you simply allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list to the ones extracted through the learned or prebuilt components to handle and treat them differently.
+When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list to the ones extracted through the learned or prebuilt components to handle and treat them differently.
+
+> [!NOTE]
+> Previously during the public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
## Next steps
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-model.md
Previously updated : 04/26/2022 Last updated : 10/12/2022
This can be used to swap your `production` and `staging` deployments when you wa
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++ ## Next steps * Use [prediction API to query your model](call-api.md)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/view-model-evaluation.md
In the **view model details** page, you'll be able to see all your models, with
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++ ## Delete model ### [Language studio](#tab/Language-studio)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
Previously updated : 05/12/2022 Last updated : 10/12/2022
Use this article to learn about the data and service limits when using conversat
|Tier|Description|Limit|
|--|--|--|
- |F0|Free tier|You are only allowed **One** Language resource **per subscription**.|
+ |F0|Free tier|You are only allowed **one** F0 Language resource **per subscription**.|
|S |Paid tier|You can have up to 100 Language resources in the S tier per region.|
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
### Regional availability
-Conversational language understanding is only available in some Azure regions. To use conversational language understanding, you must choose a Language resource in one of following regions:
-* Australia East
-* Central India
-* East US
-* East US 2
-* North Europe
-* South Central US
-* Switzerland North
-* UK South
-* West Europe
-* West US 2
-* West US 3
--
+Conversational language understanding is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
## API limits
The following limits are observed for the conversational language understanding.
| Item | Limits |
|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` ,symbols `_ . -`,with no spaces. Maximum allowed length is 50 characters. |
+| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` , symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
| Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/deploy-model.md
Previously updated : 05/09/2022 Last updated : 10/12/2022
After you are done testing a model assigned to one deployment and you want to as
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++ ## Next steps After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
Previously updated : 05/24/2022 Last updated : 10/12/2022
See the [project development lifecycle](../overview.md#project-development-lifec
[!INCLUDE [Model evaluation](../includes/rest-api/model-evaluation.md)] +
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++ ## Delete model ### [Language studio](#tab/language-studio)
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/deploy-model.md
Previously updated : 05/04/2022 Last updated : 10/12/2022
You can swap deployments after you've tested a model assigned to one deployment,
[!INCLUDE [Delete deployment](../includes/rest-api/delete-deployment.md)] +
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When you unassign or remove a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* Use [prediction API to query your model](call-api.md)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/view-model-evaluation.md
Previously updated : 08/09/2022 Last updated : 10/12/2022
See the [project development lifecycle](../overview.md#project-development-lifec
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++ ## Delete model ### [Language studio](#tab/language-studio)
cognitive-services Entity Resolutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/entity-resolutions.md
+
+ Title: Entity resolutions provided by Named Entity Recognition
+
+description: Learn about entity resolutions in the NER feature.
++++++ Last updated : 10/12/2022++++
+# Resolve entities to standard formats
+
+A resolution is a standard format for an entity. Entities can be expressed in various forms and resolutions provide standard predictable formats for common quantifiable types. For example, "eighty" and "80" should both resolve to the integer `80`.
+
+You can use NER resolutions to implement actions or retrieve further information. For example, your service can extract datetime entities and pass the resolved dates and times to a meeting scheduling system.
+
+This article documents the resolution objects returned for each entity category or subcategory.
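As a minimal sketch of consuming these objects, the snippet below assumes you already have an extracted entity shaped like the examples in this article (an entity carrying a `resolutions` array); the surrounding response envelope and the entity fields other than `resolutions` are assumptions.

```python
# Hypothetical extracted entity carrying a resolution like the ones shown in this article.
entity = {
    "text": "tomorrow at 6 PM",
    "category": "DateTime",
    "resolutions": [
        {
            "resolutionKind": "DateTimeResolution",
            "dateTimeSubKind": "DateTime",
            "timex": "2022-10-07T18",
            "value": "2022-10-07 18:00:00",
        }
    ],
}

# Pick out resolved datetime values, for example to hand off to a meeting scheduling system.
for resolution in entity.get("resolutions", []):
    if resolution["resolutionKind"] == "DateTimeResolution":
        print(resolution["value"])  # "2022-10-07 18:00:00"
```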
+
+## Age
+
+Examples: "10 years old", "23 months old", "sixty Y.O."
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "AgeResolution",
+ "unit": "Year",
+ "value": 10
+ }
+ ]
+```
+
+Possible values for "unit":
+- Year
+- Month
+- Week
+- Day
++
+## Currency
+
+Examples: "30 Egyptian pounds", "77 USD"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "CurrencyResolution",
+ "unit": "Egyptian pound",
+ "ISO4217": "EGP",
+ "value": 30
+ }
+ ]
+```
+
+Possible values for "unit" and "ISO4217":
+- [ISO 4217 reference](https://docs.1010data.com/1010dataReferenceManual/DataTypesAndFormats/currencyUnitCodes.html).
+
+## Datetime
+
+Datetime includes several different subtypes that return different response objects.
+
+### Date
+
+Specific days.
+
+Examples: "January 1 1995", "12 april", "7th of October 2022", "tomorrow"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "1995-01-01",
+ "value": "1995-01-01"
+ }
+ ]
+```
+
+Whenever an ambiguous date is provided, you're offered different options for your resolution. For example, "12 April" could refer to any year. Resolution provides this year and the next as options. The `timex` value `XXXX` indicates no year was specified in the query.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-04-12",
+ "value": "2022-04-12"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-04-12",
+ "value": "2023-04-12"
+ }
+ ]
+```
+
+Ambiguity can occur even for a given day of the week. For example, saying "Monday" could refer to last Monday or this Monday. Once again the `timex` value indicates no year or month was specified, and uses a day of the week identifier (W) to indicate the first day of the week.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-WXX-1",
+ "value": "2022-10-03"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-WXX-1",
+ "value": "2022-10-10"
+ }
+ ]
+```
++
+### Time
+
+Specific times.
+
+Examples: "9:39:33 AM", "seven AM", "20:03"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Time",
+ "timex": "T09:39:33",
+ "value": "09:39:33"
+ }
+ ]
+```
+
+### Datetime
+
+Specific date and time combinations.
+
+Examples: "6 PM tomorrow", "8 PM on January 3rd", "Nov 1 19:30"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "2022-10-07T18",
+ "value": "2022-10-07 18:00:00"
+ }
+ ]
+```
+
+Similar to dates, you can have ambiguous datetime entities. For example, "May 3rd noon" could refer to any year. Resolution provides this year and the next as options. The `timex` value **XXXX** indicates no year was specified.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "XXXX-05-03T12",
+ "value": "2022-05-03 12:00:00"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "XXXX-05-03T12",
+ "value": "2023-05-03 12:00:00"
+ }
+ ]
+```
+
+### Datetime ranges
+
+A datetime range is a period with a beginning and end date, time, or datetime.
+
+Examples: "from january 3rd 6 AM to april 25th 8 PM 2022", "between Monday to Thursday", "June", "the weekend"
+
+The "duration" parameter indicates the time passed in seconds (S), minutes (M), hours (H), or days (D). This parameter is only returned when an explicit start and end datetime are in the query. "Next week" would only return with "begin" and "end" parameters for the week.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "TemporalSpanResolution",
+ "duration": "PT2702H",
+ "begin": "2022-01-03 06:00:00",
+ "end": "2022-04-25 20:00:00"
+ }
+ ]
+```
+
+### Set
+
+A set is a recurring datetime period. Sets don't resolve to exact values, as they don't indicate an exact datetime.
+
+Examples: "every Monday at 6 PM", "every Thursday", "every weekend"
+
+For "every Monday at 6 PM", the `timex` value indicates no specified year with the starting **XXXX**, then every Monday through **WXX-1** to determine first day of every week, and finally **T18** to indicate 6 PM.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Set",
+ "timex": "XXXX-WXX-1T18",
+ "value": "not resolved"
+ }
+ ]
+```
+
+## Dimensions
+
+Examples: "24 km/hr", "44 square meters", "sixty six kilobytes"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "SpeedResolution",
+ "unit": "KilometersPerHour",
+ "value": 24
+ }
+ ]
+```
+
+Possible values for "resolutionKind" and their "unit" values:
+
+- **AreaResolution**:
+ - SquareKilometer
+ - SquareHectometer
+ - SquareDecameter
+ - SquareMeter
+ - SquareDecimeter
+ - SquareCentimeter
+ - SquareMillimeter
+ - SquareInch
+ - SquareFoot
+ - SquareMile
+ - SquareYard
+ - Acre
+
+- **InformationResolution**:
+ - Bit
+ - Kilobit
+ - Megabit
+ - Gigabit
+ - Terabit
+ - Petabit
+ - Byte
+ - Kilobyte
+ - Megabyte
+ - Gigabyte
+ - Terabyte
+ - Petabyte
+
+- **LengthResolution**:
+ - Kilometer
+ - Hectometer
+ - Decameter
+ - Meter
+ - Decimeter
+ - Centimeter
+ - Millimeter
+ - Micrometer
+ - Nanometer
+ - Picometer
+ - Mile
+ - Yard
+ - Inch
+ - Foot
+ - Light year
+ - Pt
+
+- **SpeedResolution**:
+ - MetersPerSecond
+ - KilometersPerHour
+ - KilometersPerMinute
+ - KilometersPerSecond
+ - MilesPerHour
+ - Knot
+ - FootPerSecond
+ - FootPerMinute
+ - YardsPerMinute
+ - YardsPerSecond
+ - MetersPerMillisecond
+ - CentimetersPerMillisecond
+ - KilometersPerMillisecond
+
+- **VolumeResolution**:
+ - CubicMeter
+ - CubicCentimeter
+ - CubicMillimiter
+ - Hectoliter
+ - Decaliter
+ - Liter
+ - Deciliter
+ - Centiliter
+ - Milliliter
+ - CubicYard
+ - CubicInch
+ - CubicFoot
+ - CubicMile
+ - FluidOunce
+ - Teaspoon
+ - Tablespoon
+ - Pint
+ - Quart
+ - Cup
+ - Gill
+ - Pinch
+ - FluidDram
+ - Barrel
+ - Minim
+ - Cord
+ - Peck
+ - Bushel
+ - Hogshead
+
+- **WeightResolution**:
+ - Kilogram
+ - Gram
+ - Milligram
+ - Microgram
+ - Gallon
+ - MetricTon
+ - Ton
+ - Pound
+ - Ounce
+ - Grain
+ - Pennyweight
+ - LongTonBritish
+ - ShortTonUS
+ - ShortHundredweightUS
+ - Stone
+ - Dram
++
+## Number
+
+Examples: "27", "one hundred and three", "38.5", "2/3", "33%"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "NumberResolution",
+ "numberKind": "Integer",
+ "value": 27
+ }
+ ]
+```
+
+Possible values for "numberKind":
+- Integer
+- Decimal
+- Fraction
+- Power
+- Percent
++
+## Ordinal
+
+Examples: "3rd", "first", "last"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "OrdinalResolution",
+ "offset": "3",
+ "relativeTo": "Start",
+ "value": "3"
+ }
+ ]
+```
+
+Possible values for "relativeTo":
+- Start
+- End
+
+## Temperature
+
+Examples: "88 deg fahrenheit", "twenty three degrees celsius"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "TemperatureResolution",
+ "unit": "Fahrenheit",
+ "value": 88
+ }
+ ]
+```
+
+Possible values for "unit":
+- Celsius
+- Fahrenheit
+- Kelvin
+- Rankine
++++
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Use this article to learn which natural languages are supported by the NER featu
## NER language support
-| Language | Language code | Starting with model version: | Notes |
-|:-|:-:|:-:|::|
-| Arabic* | `ar` | 2019-10-01 | |
-| Chinese-Simplified | `zh-hans` | 2021-01-15 | `zh` also accepted |
-| Chinese-Traditional* | `zh-hant` | 2019-10-01 | |
-| Czech* | `cs` | 2019-10-01 | |
-| Danish* | `da` | 2019-10-01 | |
-| Dutch* | `nl` | 2019-10-01 | |
-| English | `en` | 2019-10-01 | |
-| Finnish* | `fi` | 2019-10-01 | |
-| French | `fr` | 2021-01-15 | |
-| German | `de` | 2021-01-15 | |
-| Hebrew | `he` | 2022-10-01 | |
-| Hindi | `hi` | 2022-10-01 | |
-| Hungarian* | `hu` | 2019-10-01 | |
-| Italian | `it` | 2021-01-15 | |
-| Japanese | `ja` | 2021-01-15 | |
-| Korean | `ko` | 2021-01-15 | |
-| Norwegian (Bokmål)* | `no` | 2019-10-01 | `nb` also accepted |
-| Polish* | `pl` | 2019-10-01 | |
-| Portuguese (Brazil) | `pt-BR` | 2021-01-15 | |
-| Portuguese (Portugal) | `pt-PT` | 2021-01-15 | `pt` also accepted |
-| Russian* | `ru` | 2019-10-01 | |
-| Spanish | `es` | 2020-04-01 | |
-| Swedish* | `sv` | 2019-10-01 | |
-| Turkish* | `tr` | 2019-10-01 | |
+| Language | Language code | Starting with model version: | Supports entity resolution | Notes |
+|:-|:-:|:-:|:--:|::|
+| Arabic* | `ar` | 2019-10-01 | | |
+| Chinese-Simplified | `zh-hans` | 2021-01-15 | ✓ | `zh` also accepted |
+| Chinese-Traditional* | `zh-hant` | 2019-10-01 | | |
+| Czech* | `cs` | 2019-10-01 | | |
+| Danish* | `da` | 2019-10-01 | | |
+| Dutch* | `nl` | 2019-10-01 | ✓ | |
+| English | `en` | 2019-10-01 | ✓ | |
+| Finnish* | `fi` | 2019-10-01 | | |
+| French | `fr` | 2021-01-15 | ✓ | |
+| German | `de` | 2021-01-15 | ✓ | |
+| Hebrew | `he` | 2022-10-01 | | |
+| Hindi | `hi` | 2022-10-01 | ✓ | |
+| Hungarian* | `hu` | 2019-10-01 | | |
+| Italian | `it` | 2021-01-15 | ✓ | |
+| Japanese | `ja` | 2021-01-15 | ✓ | |
+| Korean | `ko` | 2021-01-15 | | |
+| Norwegian (Bokmål)* | `no` | 2019-10-01 | | `nb` also accepted |
+| Polish* | `pl` | 2019-10-01 | | |
+| Portuguese (Brazil) | `pt-BR` | 2021-01-15 | ✓ | |
+| Portuguese (Portugal) | `pt-PT` | 2021-01-15 | | `pt` also accepted |
+| Russian* | `ru` | 2019-10-01 | | |
+| Spanish | `es` | 2020-04-01 | ✓ | |
+| Swedish* | `sv` | 2019-10-01 | | |
+| Turkish* | `tr` | 2019-10-01 | ✓ | |
## Next steps
-[PII feature overview](overview.md)
+[NER feature overview](overview.md)
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/deploy-model.md
Previously updated : 05/20/2022 Last updated : 10/12/2022
This can be used to swap your `production` and `staging` deployments when you wa
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++ ## Next steps Use [prediction API to query your model](call-api.md)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/view-model-evaluation.md
Previously updated : 04/26/2022 Last updated : 10/12/2022
In the **view model details** page, you'll be able to see all your models, with
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++ ## Delete model ### [Language studio](#tab/Language-studio)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/service-limits.md
Previously updated : 05/18/2022 Last updated : 10/12/2022
Use this article to learn about the data and service limits when using orchestra
## Language resource limits
-* Your Language resource has to be created in one of the [supported regions](#regional-support).
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
* Pricing tiers
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
* Project names have to be unique within the same resource across all custom features.
-## Regional support
-
-Orchestration workflow is only available in some Azure regions. To use orchestration workflow, you must choose a Language resource in one of following regions:
-
-* West US 2
-* East US
-* East US 2
-* West US 3
-* South Central US
-* West Europe
-* North Europe
-* UK south
-* Australia East
+## Regional availability
+
+Orchestration workflow is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
## API limits
The following limits are observed for orchestration workflow.
| Attribute | Limits | |--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` ,symbols `_ . -`,with no spaces. Maximum allowed length is 50 characters. |
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. | | Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. | | Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* [Sentiment analysis](./sentiment-opinion-mining/language-support.md) * [Key phrase extraction](./key-phrase-extraction/language-support.md) * [Named entity recognition](./key-phrase-extraction/language-support.md)
+* [Multi-region deployment](./concepts/custom-features/multi-region-deployment.md) and [project asset versioning](./concepts/custom-features/project-versioning.md) for:
+ * [Conversational language understanding](./conversational-language-understanding/overview.md)
+ * [Orchestration workflow](./orchestration-workflow/overview.md)
+ * [Custom text classification](./custom-text-classification/overview.md)
+ * [Custom named entity recognition](./custom-named-entity-recognition/overview.md).
+* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an additional ability to influence entity predictions.
+* [Entity resolution](./named-entity-recognition/concepts/entity-resolutions.md) in named entity recognition
## September 2022
communication-services Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights.md
# Communications Services Insights Preview ## Overview
-Within your Communications Resource, we have provided an **Insights Preview** feature that displays a number of data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights is made possible via [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](enable-logging.md), and to enable Workbooks, you will need to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination.
+Within your Communications Resource, we have provided an **Insights Preview** feature that displays a number of data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights are made possible via [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](enable-logging.md), and to enable Workbooks, you will need to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination.
:::image type="content" source="media\workbooks\insights-overview-2.png" alt-text="Communication Services Insights":::
The **SMS** tab displays the operations and results for SMS usage through an Azu
:::image type="content" source="media\workbooks\sms.png" alt-text="SMS tab":::
+The **Email** tab displays delivery status, email size, and email count:
+[Screenshot displaying email count, email size, and email delivery status, illustrating email insights]
+ ## Editing dashboards The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
Editing these dashboards does not modify the **Insights** tab, but rather create
:::image type="content" source="media\workbooks\workbooks-tab.png" alt-text="Workbooks tab":::
-For an in-depth description of workbooks, please refer to the [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md) documentation.
+For an in-depth description of workbooks, please refer to the [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Azure Ad Api Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/azure-ad-api-permissions.md
# Azure AD permissions for communication as Teams user
-In this article, you will learn about Azure AD permissions available for communication as a Teams user in Azure Communication Services.
+In this article, you will learn about Azure AD permissions available for communication as a Teams user in Azure Communication Services. The Azure AD application for Azure Communication Services provides delegated permissions for chat and calling. Both permissions are required to exchange an Azure AD access token for a Communication Services access token for Teams users.
## Delegated permissions
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Call recording currently supports mixed audio+video MP4 and mixed audio MP3/WAV
| Channel type | Content format | Output | Scenario | Release Stage | ||--|||-|
-| Mixed audio+video | Mp4 | Single file, single channel | Keeping records and meeting notes Coaching and Training | Public Preview |
-| Mixed audio | Mp3 (lossy)/ wav (lossless) | Single file, single channel | Compliance & Adherence Coaching and Training | Public Preview |
-| **Unmixed audio** | wav | Single file, up to 5 wav channels | Quality Assurance Analytics | **Private Preview** |
+| Mixed audio+video | Mp4 | Single file, single channel | keeping records and meeting notes, coaching and training | Public Preview |
+| Mixed audio | Mp3 (lossy)/ wav (lossless) | Single file, single channel | compliance & adherence, coaching and training | Public Preview |
+| **Unmixed audio** | wav | Single file, up to 5 wav channels | quality assurance, advanced analytics | **Private Preview** |
## Run-time Control APIs
-Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation. Also, recordings can be triggered by a user action that tells the server application to start recording. Call Recording APIs are [Out-of-Call APIs](./call-automation-apis.md#out-of-call-apis), using the `serverCallId` to initiate recording. Once a call is created, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. See our [Call Recording Quickstart Sample](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn about retrieving the `serverCallId` from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
+Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation. Also, recordings can be triggered by a user action that tells the server application to start recording. Call Recording APIs use the `serverCallId` to initiate recording. Once a call is created, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. Learn how to [Get `serverCallId`](../../quickstarts/voice-video-calling/get-server-call-id.md) from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
| Operation | Operates On | Comments | | :-- | : | :-- |
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
```typescript { "id": string, // Unique guid for event
- "topic": string, // Azure Communication Services resource id
- "subject": string, // /recording/call/{call-id}
+ "topic": string, // /subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}
+ "subject": string, // /recording/call/{call-id}/serverCallId/{serverCallId}
"data": { "recordingStorageInfo": { "recordingChunks": [
container-apps Communicate Between Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/communicate-between-microservices.md
Output from the `az acr build` command shows the upload progress of the source c
# [Bash](#tab/bash) ```azurecli
- docker push $ACR_NAME.azurecr.io/albumapp-ui
+
+ docker push "$ACR_NAME.azurecr.io/albumapp-ui"
``` # [PowerShell](#tab/powershell) ```powershell
- docker push $ACR_NAME.azurecr.io/albumapp-ui
+
+ docker push "$ACR_NAME.azurecr.io/albumapp-ui"
```
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
There's no forced tunneling in Container Apps routes.
## Managed resources
-When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you will be billed for the following:
-- Three standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/) if using an internal environment, or four standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/) if using an external environment.
+When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you are billed for the following:
+
+- Two standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/), one for ingress and one for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
+- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
container-registry Container Registry Enable Conditional Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-enable-conditional-access-policy.md
+
+ Title: Configure conditional access to your Azure Container Registry
+description: Learn how to configure conditional access to your registry by using Azure CLI and Azure portal.
+ Last updated : 09/13/2021++++
+# Azure Container Registry (ACR) introduces the Conditional Access policy
+
+Azure Container Registry (ACR) gives you the option to create and configure the *Conditional Access policy*.
+
+The [Conditional Access policy](/azure/active-directory/conditional-access/overview) is designed to enforce strong authentication. Authentication is based on location, trusted and compliant devices, user-assigned roles, authorization method, and client applications. The policy helps organizations meet compliance requirements and keep data and user accounts safe.
+
+Learn more about the [Conditional Access policy](/azure/active-directory/conditional-access/overview) and the [conditions](/azure/active-directory/conditional-access/overview#common-signals) to take into consideration when making [policy decisions](/azure/active-directory/conditional-access/overview#common-decisions).
+
+The Conditional Access policy applies after the first-factor authentication to the Azure Container Registry is complete. Conditional Access for ACR applies to user authentication only. The policy lets you choose the controls that block or grant access based on the policy decisions.
+
+The following steps will help you create a Conditional Access policy for Azure Container Registry (ACR).
+
+1. Disable authentication-as-arm in ACR - Azure CLI.
+2. Disable authentication-as-arm in ACR - Azure portal.
+3. Create and configure a Conditional Access policy for Azure Container Registry.
+
+## Prerequisites
+
+>* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) version 2.40.0 or later. To find the version, run `az --version`.
+>* Sign in to the [Azure portal](https://portal.azure.com).
+
+## Disable authentication-as-arm in ACR - Azure CLI
+
+Disabling `azureADAuthenticationAsArmPolicy` forces the registry to use ACR audience tokens. Use Azure CLI version 2.40.0 or later; run `az --version` to find the version.
+
+1. Run the following command to show the current configuration of the registry's policy for authentication using ARM tokens. If the status is `enabled`, both ACR and ARM audience tokens can be used for authentication. If the status is `disabled`, only ACR audience tokens can be used for authentication.
+
+ ```azurecli-interactive
+ az acr config authentication-as-arm show -r <registry>
+ ```
+
+1. Run the command to update the status of the registry's policy.
+
+ ```azurecli-interactive
+ az acr config authentication-as-arm update -r <registry> --status [enabled/disabled]
+ ```
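+
+For example, assuming a hypothetical registry named `myregistry`, the following command would disable ARM audience token authentication so that only ACR audience tokens are accepted:
+
+```azurecli-interactive
+az acr config authentication-as-arm update -r myregistry --status disabled
+```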
+
+## Disable authentication-as-arm in the ACR - Azure portal
+
+Disabling the `authentication-as-arm` property by assigning a built-in policy automatically disables the registry property for current and future registries. This automatic behavior applies to registries created within the policy scope. The possible policy scopes include either resource group level scope or subscription ID level scope within the tenant.
+
+You can disable authentication-as-arm in ACR by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Refer to the ACR's built-in policy definitions in the [azure-container-registry-built-in-policy definitions](policy-reference.md).
+3. Assign a built-in policy to disable authentication-as-arm definition - Azure portal.
+
+### Assign a built-in policy definition to disable ARM audience token authentication - Azure portal
+
+You can enable the registry's Conditional Access policy in the [Azure portal](https://portal.azure.com).
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your **Azure Container Registry** > **Resource Group** > **Settings** > **Policies**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/01-azure-policies.png" alt-text="Screenshot showing how to navigate Azure policies.":::
+
+1. Navigate to **Azure Policy**. On **Assignments**, select **Assign policy**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/02-Assign-policy.png" alt-text="Screenshot showing how to assign a policy.":::
+
+1. Under **Assign policy**, use filters to search for and select the **Scope**, **Policy definition**, and **Assignment name**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/03-Assign-policy-tab.png" alt-text="Screenshot of the assign policy tab.":::
+
+1. Select **Scope** to filter and search for the **Subscription** and **ResourceGroup** and choose **Select**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/04-select-scope.png" alt-text="Screenshot of the Scope tab.":::
+
+1. Select **Policy definition** to filter and search the built-in policy definitions for the Conditional Access policy.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/05-built-in-policy-definitions.png" alt-text="Screenshot of built-in-policy-definitions.":::
+
+Azure Container Registry has two built-in policy definitions to disable authentication-as-arm, as below:
+
+>* `Container registries should have ARM audience token authentication disabled.` - This policy reports and blocks any non-compliant resources, and also sends a request to update non-compliant resources to compliant.
+>* `Configure container registries to disable ARM audience token authentication.` - This policy offers remediation and updates non-compliant resources to compliant.
+
+1. Use filters to select and confirm **Scope**, **Policy definition**, and **Assignment name**.
+
+1. Use the filters to limit compliance states or to search for policies.
+
+1. Confirm your settings and set policy enforcement as **enabled**.
+
+1. Select **Review+Create**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/06-enable-policy.png" alt-text="Screenshot showing how to activate a Conditional Access policy.":::
++
+## Create and configure a Conditional Access policy - Azure portal
+
+ACR supports Conditional Access policy for Active Directory users only. It currently doesn't support Conditional Access policy for service principals. To configure a Conditional Access policy for the registry, you must disable `authentication-as-arm` for all the registries within the desired tenant. In this tutorial, we'll create a basic Conditional Access policy for the Azure Container Registry from the Azure portal.
+
+Create a Conditional Access policy and assign your test group of users as follows:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) by using an account with *global administrator* permissions.
+
+1. Search for and select **Azure Active Directory**. Then select **Security** from the menu on the left-hand side.
+
+1. Select **Conditional Access**, select **+ New policy**, and then select **Create new policy**.
+
+ :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select 'New policy' and then select 'Create new policy'." source="media/container-registry-enable-conditional-policy/01-create-conditional-access.png":::
+
+1. Enter a name for the policy, such as *demo*.
+
+1. Under **Assignments**, select the current value under **Users or workload identities**.
+
+ :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select the current value under 'Users or workload identities'." source="media/container-registry-enable-conditional-policy/02-conditional-access-users-and-groups.png":::
+
+1. Under **What does this policy apply to?**, verify and select **Users and groups**.
+
+1. Under **Include**, choose **Select users and groups**, and then select **All users**.
+
+ :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify users." source="media/container-registry-enable-conditional-policy/03-conditional-access-users-groups-select-users.png":::
+
+1. Under **Exclude**, choose **Select users and groups** to exclude any users or groups as needed.
+
+1. Under **Cloud apps or actions**, choose **Cloud apps**.
+
+1. Under **Include**, choose **Select apps**.
+
+ :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify cloud apps." source="media/container-registry-enable-conditional-policy/04-select-cloud-apps-select-apps.png":::
+
+1. Browse for and select apps to apply Conditional Access, in this case *Azure Container Registry*, then choose **Select**.
+
+ :::image type="content" alt-text="A screenshot of the list of apps, with results filtered, and 'Azure Container Registry' selected." source="media/container-registry-enable-conditional-policy/05-select-azure-container-registry-app.png":::
+
+1. Under **Conditions**, configure the access control level with options such as *User risk level*, *Sign-in risk level*, *Sign-in risk detections (Preview)*, *Device platforms*, *Locations*, *Client apps*, *Time (Preview)*, and *Filter for devices*.
+
+1. Under **Grant**, filter and choose from options to grant or block access during a sign-in event to the Azure portal. In this case, grant access with *Require multifactor authentication*, and then choose **Select**.
+
+ >[!TIP]
+ > To configure and grant multi-factor authentication, see [configure and conditions for multi-factor authentication.](/azure/active-directory/authentication/tutorial-enable-azure-mfa#configure-the-conditions-for-multi-factor-authentication)
+
+1. Under **Session**, filter and choose from options to enable any control on session level experience of the cloud apps.
+
+1. After selecting and confirming, under **Enable policy**, select **On**.
+
+1. To apply and activate the policy, select **Create**.
+
+ :::image type="content" alt-text="A screenshot showing how to activate the Conditional Access policy." source="media/container-registry-enable-conditional-policy/06-enable-conditional-access-policy.png":::
+
+We have now completed creating the Conditional Access policy for the Azure Container Registry.
+
+## Next steps
+
+* Learn more about [Azure Policy definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md).
+* Learn more about [common access concerns that Conditional Access policies can help with](/azure/active-directory/conditional-access/concept-conditional-access-policy-common).
+* Learn more about [Conditional Access policy components](/azure/active-directory/conditional-access/concept-conditional-access-policies).
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views.md
You'll be able to add a column to the base table, but you won't be able to remov
### Can we create MV on existing base table?
-No. Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. Create new table after account is onboarded on which materialized views can be defined. MV on existing table is planned for the future.
+No. Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. You would need to create a new table with materialized views defined and move the existing data using [container copy jobs](../intra-account-container-copy.md). Support for MV on existing tables is planned for the future.
### What are the conditions on which records won't make it to MV and how to identify such records?
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
For more information, see [Azure Cosmos DB emulator](./local-emulator.md).
* Support for automation platforms (Azure PowerShell, Azure CLI) is planned and not yet available. * In the Data Explorer in the portal, you currently can't view documents in a container with hierarchical partition keys. You can read or edit these documents with the supported .NET v3 or Java v4 SDK version\[s\]. * You can only specify hierarchical partition keys up to three layers in depth.
-* Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later.
+* Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later. To use hierarchical partitions on existing containers, you should create a new container with the hierarchical partition keys set and move the data using [container copy jobs](intra-account-container-copy.md).
* Hierarchical partition keys are currently supported only for API for NoSQL accounts (API for MongoDB and Cassandra aren't currently supported). ## Next steps
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/import-data.md
The Data Migration tool is an open-source solution that imports data to Azure Co
While the import tool includes a graphical user interface (dtui.exe), it can also be driven from the command-line (dt.exe). In fact, there's an option to output the associated command after setting up an import through the UI. You can transform tabular source data, such as SQL Server or CSV files, to create hierarchical relationships (subdocuments) during import. Keep reading to learn more about source options, sample commands to import from each source, target options, and viewing import results. > [!NOTE]
+> We recommend using [container copy jobs](intra-account-container-copy.md) for copying data within the same Azure Cosmos DB account.
+>
> You should only use the Azure Cosmos DB migration tool for small migrations. For large migrations, view our [guide for ingesting data](migration-choices.md). ## <a id="Install"></a>Installation
cosmos-db Migration Choices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migration-choices.md
Last updated 04/02/2022
You can load data from various data sources to Azure Cosmos DB. Since Azure Cosmos DB supports multiple APIs, the targets can be any of the existing APIs. The following are some scenarios where you migrate data to Azure Cosmos DB:
-* Move data from one Azure Cosmos DB container to another container in the same database or a different databases.
-* Moving data between dedicated containers to shared database containers.
-* Move data from an Azure Cosmos DB account located in region1 to another Azure Cosmos DB account in the same or a different region.
+* Move data from one Azure Cosmos DB container to another container within the Azure Cosmos DB account (could be in the same database or a different database).
+* Move data from one Azure Cosmos DB account to another Azure Cosmos DB account (could be in the same region or a different region, same subscription or a different one).
* Move data from a source such as Azure blob storage, a JSON file, Oracle database, Couchbase, DynamoDB to Azure Cosmos DB. In order to support migration paths from the various sources to the different Azure Cosmos DB APIs, there are multiple solutions that provide specialized handling for each migration path. This document lists the available solutions and describes their advantages and limitations.
If you need help with capacity planning, consider reading our [guide to estimati
|Migration type|Solution|Supported sources|Supported targets|Considerations| ||||||
+|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for NoSQL|Azure Cosmos DB for NoSQL|&bull; CLI-based; no setup needed. <br/>&bull; Supports large datasets.|
|Offline|[Data Migration Tool](import-data.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;AWS DynamoDB<br/>&bull;Azure Blob Storage|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB Tables API<br/>&bull;JSON Files |&bull; Easy to set up and supports multiple sources. <br/>&bull; Not suitable for large datasets.| |Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;MongoDB <br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |&bull; Easy to set up and supports multiple sources.<br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>&bull; Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.| |Offline|[Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)|Azure Cosmos DB for NoSQL. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
If you need help with capacity planning, consider reading our [guide to estimati
|Migration type|Solution|Supported sources|Supported targets|Considerations| ||||||
+|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB API for Cassandra | Azure Cosmos DB API for Cassandra| &bull; CLI-based; no setup needed. <br/>&bull; Supports large datasets.|
|Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB API for Cassandra| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.| |Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/> | Azure Cosmos DB API for Cassandra | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.| |Online|[Dual-write proxy + Spark](cassandr)| &bull;Apache Cassandra<br/>|&bull;Azure Cosmos DB API for Cassandra <br/>| &bull; Supports larger datasets, but careful attention required for setup and validation. <br/>&bull; Open-source tools, no purchase required.|
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
Title: Change log for Azure CosmosDB API for MongoDB
+ Title: Change log for Azure Cosmos DB API for MongoDB
description: Notifies our customers of any minor/medium updates that were pushed--- Previously updated : 06/22/2022 ++++ Last updated : 10/12/2022+ # Change log for Azure Cosmos DB for MongoDB+ The Change log for the API for MongoDB is meant to inform you about our feature updates. This document covers more granular updates and complements [Azure Updates](https://azure.microsoft.com/updates/).
-## Azure Cosmos DB's API for MongoDB updates
+## Azure Cosmos DB for MongoDB updates
+
+### Role-based access control (RBAC) (GA)
+
+Azure Cosmos DB for MongoDB now offers built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using role-based access control (RBAC) gives you more options for control, security, and auditability of your database account data.
+
+[Learn more](./how-to-setup-rbac.md)
+
+### 16-MB limit per document in Cosmos DB for MongoDB (GA)
+
+The 16-MB document limit in Azure Cosmos DB for MongoDB provides developers the flexibility to store more data per document. This ease-of-use feature will speed up your development process and provide you with more flexibility in certain new application and migration cases.
+
+[Learn more](./feature-support-42.md#data-types)
### Azure Data Studio MongoDB extension for Azure Cosmos DB (Preview)
-You can now use the free and lightweight tool feature to manage and query your MongoDB resources using mongo shell. Azure Data Studio MongoDB extension for Azure Cosmos DB allows you to manage multiple accounts all in one view by
-1. Connecting your Mongo resources
-2. Configuring the database settings
-3. Performing create, read, update, and delete (CRUD) across Windows, macOS, and Linux.
+
+You can now use this free and lightweight tool to manage and query your MongoDB resources using mongo shell. The Azure Data Studio MongoDB extension for Azure Cosmos DB allows you to manage multiple accounts all in one view by:
+
+1. Connecting your Mongo resources
+1. Configuring the database settings
+1. Performing create, read, update, and delete (CRUD) across Windows, macOS, and Linux
[Learn more](https://aka.ms/cosmosdb-ads)
+### Linux emulator with Azure Cosmos DB for MongoDB
-### Linux emulator with Azure Cosmos DB for MongoDB
-The Azure Cosmos DB Linux emulator with API for MongoDB support provides a local environment that emulates the Azure Cosmos DB service for development purposes on Linux and macOS. Using the emulator, you can develop and test your MongoDB applications locally, without creating an Azure subscription or incurring any costs.
+The Azure Cosmos DB Linux emulator with API for MongoDB support provides a local environment that emulates the Azure Cosmos DB service for development purposes on Linux and macOS. Using the emulator, you can develop and test your MongoDB applications locally, without creating an Azure subscription or incurring any costs.
[Learn more](https://aka.ms/linux-emulator-mongo)
+### 16-MB limit per document in Azure Cosmos DB for MongoDB (Preview)
-### 16-MB limit per document in API for MongoDB (Preview)
-The 16-MB document limit in the Azure Cosmos DB for MongoDB provides developers the flexibility to store more data per document. This ease-of-use feature will speed up your development process in these cases.
+The 16-MB document limit in the Azure Cosmos DB for MongoDB provides developers the flexibility to store more data per document. This ease-of-use feature will speed up your development process in these cases.
[Learn more](./introduction.md) - ### Azure Cosmos DB for MongoDB data plane Role-Based Access Control (RBAC) (Preview)
-The API for MongoDB now offers a built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using this role-based access control (RBAC) allows you access with more options for control, security, and auditability of your database account data.
+
+Azure Cosmos DB for MongoDB now offers built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using role-based access control (RBAC) gives you more options for control, security, and auditability of your database account data.
[Learn more](./how-to-setup-rbac.md)
The Azure Cosmos DB for MongoDB version 4.2 includes new aggregation functionali
[Learn more](./feature-support-42.md) ### Support $expr in Mongo 3.6+
-`$expr` allows the use of [aggregation expressions](https://www.mongodb.com/docs/manual/meta/aggregation-quick-reference/#std-label-aggregation-expressions) within the query language.
+
+`$expr` allows the use of [aggregation expressions](https://www.mongodb.com/docs/manual/meta/aggregation-quick-reference/#std-label-aggregation-expressions) within the query language.
`$expr` can build query expressions that compare fields from the same document in a `$match` stage. [Learn more](https://www.mongodb.com/docs/manual/reference/operator/query/expr/)
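
For example, a typical `$expr` filter compares two fields of the same document; the collection and field names below are hypothetical and shown only to illustrate the operator, and the same expression can be used inside a `$match` stage:

```javascript
// Match documents where the amount spent exceeds the allotted budget
db.orders.find( { $expr: { $gt: [ "$spent", "$budget" ] } } )
```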
+### Role-Based Access Control for $merge stage
-### Role-Based Access Control for $merge stage
-* Added Role-Based Access Control(RBAC) for `$merge` stage.
-* `$merge` writes the results of aggregation pipeline to specified collection. The `$merge` operator must be the last stage in the pipeline
+- Added Role-Based Access Control (RBAC) for the `$merge` stage.
+- `$merge` writes the results of the aggregation pipeline to the specified collection. The `$merge` operator must be the last stage in the pipeline.
[Learn more](https://www.mongodb.com/docs/manual/reference/operator/aggregation/merge/) - ## Next steps - Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB. - Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB. - Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Feature Support 32 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-32.md
Title: Azure Cosmos DB's API for MongoDB (3.2 version) supported features and syntax
-description: Learn about Azure Cosmos DB's API for MongoDB (3.2 version) supported features and syntax.
+ Title: Azure Cosmos DB for MongoDB (3.2 version) supported features and syntax
+description: Learn about Azure Cosmos DB for MongoDB (3.2 version) supported features and syntax.
+++ -- Previously updated : 10/16/2019--+ Last updated : 10/12/2022
-# Azure Cosmos DB's API for MongoDB (3.2 version): supported features and syntax
+# Azure Cosmos DB for MongoDB (3.2 version): supported features and syntax
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, automatic indexing of every field, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, automatic indexing of every field, encryption at rest, backups, and much more.
> [!NOTE] > Version 3.2 of the Azure Cosmos DB for MongoDB has no current plans for end-of-life (EOL). The minimum notice for a future EOL is three years. ## Protocol Support
-All new accounts for Azure Cosmos DB's API for MongoDB are compatible with MongoDB server version **3.6**. This article covers MongoDB version 3.2. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB.
+All new accounts for Azure Cosmos DB for MongoDB are compatible with MongoDB server version **3.6**. This article covers MongoDB version 3.2. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB.
-Azure Cosmos DB's API for MongoDB also offers a seamless upgrade experience for qualifying accounts. Learn more on the [MongoDB version upgrade guide](upgrade-version.md).
+Azure Cosmos DB for MongoDB also offers a seamless upgrade experience for qualifying accounts. Learn more on the [MongoDB version upgrade guide](upgrade-version.md).
## Query language support
-Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
+Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
## Database commands
-Azure Cosmos DB's API for MongoDB supports the following database commands:
+Azure Cosmos DB for MongoDB supports the following database commands:
> [!NOTE]
-> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB.
+> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB for MongoDB.
### Query and write operation commands -- delete-- find-- findAndModify-- getLastError-- getMore-- insert-- update
+- `delete`
+- `find`
+- `findAndModify`
+- `getLastError`
+- `getMore`
+- `insert`
+- `update`
### Authentication commands -- logout-- authenticate-- getnonce
+- `logout`
+- `authenticate`
+- `getnonce`
### Administration commands -- dropDatabase-- listCollections-- drop-- create-- filemd5-- createIndexes-- listIndexes-- dropIndexes-- connectionStatus-- reIndex
+- `dropDatabase`
+- `listCollections`
+- `drop`
+- `create`
+- `filemd5`
+- `createIndexes`
+- `listIndexes`
+- `dropIndexes`
+- `connectionStatus`
+- `reIndex`
### Diagnostics commands -- buildInfo-- collStats-- dbStats-- hostInfo-- listDatabases-- whatsmyuri
+- `buildInfo`
+- `collStats`
+- `dbStats`
+- `hostInfo`
+- `listDatabases`
+- `whatsmyuri`
<a name="aggregation-pipeline"></a>
Azure Cosmos DB's API for MongoDB supports the following database commands:
### Aggregation commands -- aggregate-- count-- distinct
+- `aggregate`
+- `count`
+- `distinct`
### Aggregation stages -- $project-- $match-- $limit-- $skip-- $unwind-- $group-- $sample-- $sort-- $lookup-- $out-- $count-- $addFields
+- `$project`
+- `$match`
+- `$limit`
+- `$skip`
+- `$unwind`
+- `$group`
+- `$sample`
+- `$sort`
+- `$lookup`
+- `$out`
+- `$count`
+- `$addFields`
### Aggregation expressions #### Boolean expressions -- $and-- $or-- $not
+- `$and`
+- `$or`
+- `$not`
#### Set expressions -- $setEquals-- $setIntersection-- $setUnion-- $setDifference-- $setIsSubset-- $anyElementTrue-- $allElementsTrue
+- `$setEquals`
+- `$setIntersection`
+- `$setUnion`
+- `$setDifference`
+- `$setIsSubset`
+- `$anyElementTrue`
+- `$allElementsTrue`
#### Comparison expressions -- $cmp-- $eq-- $gt-- $gte-- $lt-- $lte-- $ne
+- `$cmp`
+- `$eq`
+- `$gt`
+- `$gte`
+- `$lt`
+- `$lte`
+- `$ne`
#### Arithmetic expressions -- $abs-- $add-- $ceil-- $divide-- $exp-- $floor-- $ln-- $log-- $log10-- $mod-- $multiply-- $pow-- $sqrt-- $subtract-- $trunc
+- `$abs`
+- `$add`
+- `$ceil`
+- `$divide`
+- `$exp`
+- `$floor`
+- `$ln`
+- `$log`
+- `$log10`
+- `$mod`
+- `$multiply`
+- `$pow`
+- `$sqrt`
+- `$subtract`
+- `$trunc`
#### String expressions -- $concat-- $indexOfBytes-- $indexOfCP-- $split-- $strLenBytes-- $strLenCP-- $strcasecmp-- $substr-- $substrBytes-- $substrCP-- $toLower-- $toUpper
+- `$concat`
+- `$indexOfBytes`
+- `$indexOfCP`
+- `$split`
+- `$strLenBytes`
+- `$strLenCP`
+- `$strcasecmp`
+- `$substr`
+- `$substrBytes`
+- `$substrCP`
+- `$toLower`
+- `$toUpper`
#### Array expressions -- $arrayElemAt-- $concatArrays-- $filter-- $indexOfArray-- $isArray-- $range-- $reverseArray-- $size-- $slice-- $in
+- `$arrayElemAt`
+- `$concatArrays`
+- `$filter`
+- `$indexOfArray`
+- `$isArray`
+- `$range`
+- `$reverseArray`
+- `$size`
+- `$slice`
+- `$in`
#### Date expressions -- $dayOfYear-- $dayOfMonth-- $dayOfWeek-- $year-- $month-- $week-- $hour-- $minute-- $second-- $millisecond-- $isoDayOfWeek-- $isoWeek
+- `$dayOfYear`
+- `$dayOfMonth`
+- `$dayOfWeek`
+- `$year`
+- `$month`
+- `$week`
+- `$hour`
+- `$minute`
+- `$second`
+- `$millisecond`
+- `$isoDayOfWeek`
+- `$isoWeek`
#### Conditional expressions -- $cond-- $ifNull
+- `$cond`
+- `$ifNull`
## Aggregation accumulators -- $sum-- $avg-- $first-- $last-- $max-- $min-- $push-- $addToSet
+- `$sum`
+- `$avg`
+- `$first`
+- `$last`
+- `$max`
+- `$min`
+- `$push`
+- `$addToSet`
## Operators
Following operators are supported with corresponding examples of their use. Cons
| Operator | Example | | | |
-| $eq | `{ "Volcano Name": { $eq: "Rainier" } }` |
-| $gt | `{ "Elevation": { $gt: 4000 } }` |
-| $gte | `{ "Elevation": { $gte: 4392 } }` |
-| $lt | `{ "Elevation": { $lt: 5000 } }` |
-| $lte | `{ "Elevation": { $lte: 5000 } }` |
-| $ne | `{ "Elevation": { $ne: 1 } }` |
-| $in | `{ "Volcano Name": { $in: ["St. Helens", "Rainier", "Glacier Peak"] } }` |
-| $nin | `{ "Volcano Name": { $nin: ["Lassen Peak", "Hood", "Baker"] } }` |
-| $or | `{ $or: [ { Elevation: { $lt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
-| $and | `{ $and: [ { Elevation: { $gt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
-| $not | `{ "Elevation": { $not: { $gt: 5000 } } }`|
-| $nor | `{ $nor: [ { "Elevation": { $lt: 4000 } }, { "Volcano Name": "Baker" } ] }` |
-| $exists | `{ "Status": { $exists: true } }`|
-| $type | `{ "Status": { $type: "string" } }`|
-| $mod | `{ "Elevation": { $mod: [ 4, 0 ] } }` |
-| $regex | `{ "Volcano Name": { $regex: "^Rain"} }`|
+| `eq` | `{ "Volcano Name": { $eq: "Rainier" } }` |
+| `gt` | `{ "Elevation": { $gt: 4000 } }` |
+| `gte` | `{ "Elevation": { $gte: 4392 } }` |
+| `lt` | `{ "Elevation": { $lt: 5000 } }` |
+| `lte` | `{ "Elevation": { $lte: 5000 } }` |
+| `ne` | `{ "Elevation": { $ne: 1 } }` |
+| `in` | `{ "Volcano Name": { $in: ["St. Helens", "Rainier", "Glacier Peak"] } }` |
+| `nin` | `{ "Volcano Name": { $nin: ["Lassen Peak", "Hood", "Baker"] } }` |
+| `or` | `{ $or: [ { Elevation: { $lt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
+| `and` | `{ $and: [ { Elevation: { $gt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
+| `not` | `{ "Elevation": { $not: { $gt: 5000 } } }`|
+| `nor` | `{ $nor: [ { "Elevation": { $lt: 4000 } }, { "Volcano Name": "Baker" } ] }` |
+| `exists` | `{ "Status": { $exists: true } }`|
+| `type` | `{ "Status": { $type: "string" } }`|
+| `mod` | `{ "Elevation": { $mod: [ 4, 0 ] } }` |
+| `regex` | `{ "Volcano Name": { $regex: "^Rain"} }`|
### Notes In $regex queries, Left-anchored expressions allow index search. However, using 'i' modifier (case-insensitivity) and 'm' modifier (multiline) causes the collection scan in all expressions.
-When there's a need to include '$' or '|', it is best to create two (or more) regex queries.
-For example, given the following original query: ```find({x:{$regex: /^abc$/})```, it has to be modified as follows:
-```find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})```.
+When there's a need to include '$' or '|', it's best to create two (or more) regex queries.
+For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
+`find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})`.
The first part will use the index to restrict the search to those documents beginning with ^abc and the second part will match the exact entries.
-The bar operator '|' acts as an "or" function - the query ```find({x:{$regex: /^abc|^def/})``` matches the documents in which field 'x' has values that begin with "abc" or "def". To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: ```find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })```.
+The bar operator '|' acts as an "or" function - the query `find({x:{$regex: /^abc|^def/})` matches the documents in which field 'x' has values that begin with "abc" or "def". To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: `find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })`.
### Update operators #### Field update operators -- $inc-- $mul-- $rename-- $setOnInsert-- $set-- $unset-- $min-- $max-- $currentDate
+- `$inc`
+- `$mul`
+- `$rename`
+- `$setOnInsert`
+- `$set`
+- `$unset`
+- `$min`
+- `$max`
+- `$currentDate`
#### Array update operators -- $addToSet-- $pop-- $pullAll-- $pull (Note: $pull with condition is not supported)-- $pushAll-- $push-- $each-- $slice-- $sort-- $position
+- `$addToSet`
+- `$pop`
+- `$pullAll`
+- `$pull` (Note: $pull with condition isn't supported)
+- `$pushAll`
+- `$push`
+- `$each`
+- `$slice`
+- `$sort`
+- `$position`
#### Bitwise update operator -- $bit
+- `$bit`
### Geospatial operators
-Operator | Example | Supported |
- | | |
-$geoWithin | ```{ "Location.coordinates": { $geoWithin: { $centerSphere: [ [ -121, 46 ], 5 ] } } }``` | Yes |
-$geoIntersects | ```{ "Location.coordinates": { $geoIntersects: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }``` | Yes |
-$near | ```{ "Location.coordinates": { $near: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }``` | Yes |
-$nearSphere | ```{ "Location.coordinates": { $nearSphere : [ -121, 46 ], $maxDistance: 0.50 } }``` | Yes |
-$geometry | ```{ "Location.coordinates": { $geoWithin: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }``` | Yes |
-$minDistance | ```{ "Location.coordinates": { $nearSphere : { $geometry: {type: "Point", coordinates: [ -121, 46 ]}, $minDistance: 1000, $maxDistance: 1000000 } } }``` | Yes |
-$maxDistance | ```{ "Location.coordinates": { $nearSphere : [ -121, 46 ], $maxDistance: 0.50 } }``` | Yes |
-$center | ```{ "Location.coordinates": { $geoWithin: { $center: [ [-121, 46], 1 ] } } }``` | Yes |
-$centerSphere | ```{ "Location.coordinates": { $geoWithin: { $centerSphere: [ [ -121, 46 ], 5 ] } } }``` | Yes |
-$box | ```{ "Location.coordinates": { $geoWithin: { $box: [ [ 0, 0 ], [ -122, 47 ] ] } } }``` | Yes |
-$polygon | ```{ "Location.coordinates": { $near: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }``` | Yes |
+| Operator | Example | Supported |
+| | | |
+| `$geoWithin` | `{ "Location.coordinates": { $geoWithin: { $centerSphere: [ [ -121, 46 ], 5 ] } } }` | Yes |
+| `$geoIntersects` | `{ "Location.coordinates": { $geoIntersects: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }` | Yes |
+| `$near` | `{ "Location.coordinates": { $near: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }` | Yes |
+| `$nearSphere` | `{ "Location.coordinates": { $nearSphere : [ -121, 46 ], $maxDistance: 0.50 } }` | Yes |
+| `$geometry` | `{ "Location.coordinates": { $geoWithin: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }` | Yes |
+| `$minDistance` | `{ "Location.coordinates": { $nearSphere : { $geometry: {type: "Point", coordinates: [ -121, 46 ]}, $minDistance: 1000, $maxDistance: 1000000 } } }` | Yes |
+| `$maxDistance` | `{ "Location.coordinates": { $nearSphere : [ -121, 46 ], $maxDistance: 0.50 } }` | Yes |
+| `$center` | `{ "Location.coordinates": { $geoWithin: { $center: [ [-121, 46], 1 ] } } }` | Yes |
+| `$centerSphere` | `{ "Location.coordinates": { $geoWithin: { $centerSphere: [ [ -121, 46 ], 5 ] } } }` | Yes |
+| `$box` | `{ "Location.coordinates": { $geoWithin: { $box: [ [ 0, 0 ], [ -122, 47 ] ] } } }` | Yes |
+| `$polygon` | `{ "Location.coordinates": { $near: { $geometry: { type: "Polygon", coordinates: [ [ [ -121.9, 46.7 ], [ -121.5, 46.7 ], [ -121.5, 46.9 ], [ -121.9, 46.9 ], [ -121.9, 46.7 ] ] ] } } } }` | Yes |
## Sort Operations
-When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported.
+When you use the `findOneAndUpdate` operation, sort operations on a single field are supported, but sort operations on multiple fields aren't supported.
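For example, the following mongosh sketch (collection and field names are illustrative) uses a single-field sort with `findOneAndUpdate`; adding a second sort field would not be accepted on this API version:

```javascript
// Supported: pick the highest-elevation active volcano and mark it as reviewed.
db.volcanoes.findOneAndUpdate(
  { "Status": "Active" },
  { $set: { "Reviewed": true } },
  { sort: { "Elevation": -1 } }
)

// Not supported here: sorting on more than one field.
// db.volcanoes.findOneAndUpdate(
//   { "Status": "Active" },
//   { $set: { "Reviewed": true } },
//   { sort: { "Elevation": -1, "Volcano Name": 1 } }
// )
```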
## Other operators
-Operator | Example | Notes
- | | |
-$all | ```{ "Location.coordinates": { $all: [-121.758, 46.87] } }``` |
-$elemMatch | ```{ "Location.coordinates": { $elemMatch: { $lt: 0 } } }``` |
-$size | ```{ "Location.coordinates": { $size: 2 } }``` |
-$comment | ```{ "Location.coordinates": { $elemMatch: { $lt: 0 } }, $comment: "Negative values"}``` |
-$text | | Not supported. Use $regex instead.
+| Operator | Example | Notes |
+| | | |
+| `$all` | `{ "Location.coordinates": { $all: [-121.758, 46.87] } }` |
+| `$elemMatch` | `{ "Location.coordinates": { $elemMatch: { $lt: 0 } } }` |
+| `$size` | `{ "Location.coordinates": { $size: 2 } }` |
+| `$comment` | `{ "Location.coordinates": { $elemMatch: { $lt: 0 } }, $comment: "Negative values"}` |
+| `$text` | | Not supported. Use `$regex` instead. |
## Unsupported operators
-The ```$where``` and the ```$eval``` operators are not supported by Azure Cosmos DB.
+The `$where` and the `$eval` operators aren't supported by Azure Cosmos DB.
### Methods
The following methods are supported:
#### Cursor methods
-Method | Example | Notes
- | | |
-cursor.sort() | ```cursor.sort({ "Elevation": -1 })``` | Documents without sort key do not get returned
+| Method | Example | Notes |
+| | | |
+| `cursor.sort()` | `cursor.sort({ "Elevation": -1 })` | Documents without sort key don't get returned |
## Unique indexes

Azure Cosmos DB indexes every field in documents that are written to the database by default. Unique indexes ensure that a specific field doesn't have duplicate values across all documents in a collection, similar to the way uniqueness is preserved on the default `_id` key. You can create custom indexes in Azure Cosmos DB by using the `createIndex` command, including the 'unique' constraint.
-Unique indexes are available for all Azure Cosmos DB accounts using Azure Cosmos DB's API for MongoDB.
+Unique indexes are available for all Azure Cosmos DB accounts using Azure Cosmos DB for MongoDB.
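For example, a unique index is created with the standard `createIndex` command; the collection and field names below are illustrative:

```javascript
// Ensure no two documents share the same "Volcano Name" value.
db.volcanoes.createIndex({ "Volcano Name": 1 }, { unique: true })

db.volcanoes.insertOne({ "Volcano Name": "Rainier" })   // succeeds
db.volcanoes.insertOne({ "Volcano Name": "Rainier" })   // fails with a duplicate key error
```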
## Time-to-live (TTL)
Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document.
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+Azure Cosmos DB doesn't yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB doesn't support manual replication commands.
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) that specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) that specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
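For example, a write concern supplied by the client is accepted but has no effect; the collection name is illustrative:

```javascript
// The w and wtimeout options are ignored; Azure Cosmos DB applies quorum writes automatically.
db.orders.insertOne(
  { "orderId": 1, "status": "created" },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)
```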
## Sharding
-Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as shardCollection, addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
+Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB doesn't support manual sharding commands, which means you don't have to invoke commands such as `shardCollection`, `addShard`, `balancerStart`, and `moveChunk`. You only need to specify the shard key while creating the containers or querying the data.
## Next steps -- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
cosmos-db Feature Support 36 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-36.md
Title: Azure Cosmos DB's API for MongoDB (3.6 version) supported features and syntax
-description: Learn about Azure Cosmos DB's API for MongoDB (3.6 version) supported features and syntax.
+ Title: Azure Cosmos DB for MongoDB (3.6 version) supported features and syntax
+description: Learn about Azure Cosmos DB for MongoDB (3.6 version) supported features and syntax.
+++ -- Previously updated : 04/04/2022--+ Last updated : 10/12/2022
-# Azure Cosmos DB's API for MongoDB (3.6 version): supported features and syntax
+# Azure Cosmos DB for MongoDB (3.6 version): supported features and syntax
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
> [!NOTE]
> Version 3.6 of the Azure Cosmos DB for MongoDB has no current plans for end-of-life (EOL). The minimum notice for a future EOL is three years.

## Protocol Support
-The Azure Cosmos DB's API for MongoDB is compatible with MongoDB server version **3.6** by default for new accounts. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB. Note that when using Azure Cosmos DB's API for MongoDB accounts, the 3.6 version of account has the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of account has the endpoint in the format `*.documents.azure.com`.
+The Azure Cosmos DB for MongoDB is compatible with MongoDB server version **3.6** by default for new accounts. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB accounts, 3.6 accounts have an endpoint in the format `*.mongo.cosmos.azure.com`, whereas 3.2 accounts have an endpoint in the format `*.documents.azure.com`.
## Query language support
-Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. The following sections show the detailed list of server operations, operators, stages, commands, and options currently supported by Azure Cosmos DB.
+Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. The following sections show the detailed list of server operations, operators, stages, commands, and options currently supported by Azure Cosmos DB.
> [!NOTE]
-> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB.
+> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB for MongoDB.
## Database commands
-Azure Cosmos DB's API for MongoDB supports the following database commands:
+Azure Cosmos DB for MongoDB supports the following database commands:
### Query and write operation commands | Command | Supported | |||
-| [change streams](change-streams.md) | Yes |
-| delete | Yes |
-| eval | No |
-| find | Yes |
-| findAndModify | Yes |
-| getLastError | Yes |
-| getMore | Yes |
-| getPrevError | No |
-| insert | Yes |
-| parallelCollectionScan | No |
-| resetError | No |
-| update | Yes |
+| [`change streams`](change-streams.md) | Yes |
+| `delete` | Yes |
+| `eval` | No |
+| `find` | Yes |
+| `findAndModify` | Yes |
+| `getLastError` | Yes |
+| `getMore` | Yes |
+| `getPrevError` | No |
+| `insert` | Yes |
+| `parallelCollectionScan` | No |
+| `resetError` | No |
+| `update` | Yes |
### Authentication commands | Command | Supported | |||
-| authenticate | Yes |
-| getnonce | Yes |
-| logout | Yes |
+| `authenticate` | Yes |
+| `getnonce` | Yes |
+| `logout` | Yes |
### Administration commands | Command | Supported | |||
-| cloneCollectionAsCapped | No |
-| collMod | No |
-| connectionStatus | No |
-| convertToCapped | No |
-| copydb | No |
-| create | Yes |
-| createIndexes | Yes |
-| currentOp | Yes |
-| drop | Yes |
-| dropDatabase | Yes |
-| dropIndexes | Yes |
-| filemd5 | Yes |
-| killCursors | Yes |
-| killOp | No |
-| listCollections | Yes |
-| listDatabases | Yes |
-| listIndexes | Yes |
-| reIndex | Yes |
-| renameCollection | No |
-
+| `cloneCollectionAsCapped` | No |
+| `collMod` | No |
+| `connectionStatus` | No |
+| `convertToCapped` | No |
+| `copydb` | No |
+| `create` | Yes |
+| `createIndexes` | Yes |
+| `currentOp` | Yes |
+| `drop` | Yes |
+| `dropDatabase` | Yes |
+| `dropIndexes` | Yes |
+| `filemd5` | Yes |
+| `killCursors` | Yes |
+| `killOp` | No |
+| `listCollections` | Yes |
+| `listDatabases` | Yes |
+| `listIndexes` | Yes |
+| `reIndex` | Yes |
+| `renameCollection` | No |
### Diagnostics commands | Command | Supported | |||
-| buildInfo | Yes |
-| collStats | Yes |
-| connPoolStats | No |
-| connectionStatus | No |
-| dataSize | No |
-| dbHash | No |
-| dbStats | Yes |
-| explain | Yes |
-| features | No |
-| hostInfo | Yes |
-| listDatabases | Yes |
-| listCommands | No |
-| profiler | No |
-| serverStatus | No |
-| top | No |
-| whatsmyuri | Yes |
+| `buildInfo` | Yes |
+| `collStats` | Yes |
+| `connPoolStats` | No |
+| `connectionStatus` | No |
+| `dataSize` | No |
+| `dbHash` | No |
+| `dbStats` | Yes |
+| `explain` | Yes |
+| `features` | No |
+| `hostInfo` | Yes |
+| `listDatabases` | Yes |
+| `listCommands` | No |
+| `profiler` | No |
+| `serverStatus` | No |
+| `top` | No |
+| `whatsmyuri` | Yes |
<a name="aggregation-pipeline"></a>
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| aggregate | Yes |
-| count | Yes |
-| distinct | Yes |
-| mapReduce | No |
+| `aggregate` | Yes |
+| `count` | Yes |
+| `distinct` | Yes |
+| `mapReduce` | No |
### Aggregation stages | Command | Supported | |||
-| $addFields | Yes |
-| $bucket | No |
-| $bucketAuto | No |
-| $changeStream | Yes |
-| $collStats | No |
-| $count | Yes |
-| $currentOp | No |
-| $facet | Yes |
-| $geoNear | Yes |
-| $graphLookup | Yes |
-| $group | Yes |
-| $indexStats | No |
-| $limit | Yes |
-| $listLocalSessions | No |
-| $listSessions | No |
-| $lookup | Partial |
-| $match | Yes |
-| $out | Yes |
-| $project | Yes |
-| $redact | Yes |
-| $replaceRoot | Yes |
-| $replaceWith | No |
-| $sample | Yes |
-| $skip | Yes |
-| $sort | Yes |
-| $sortByCount | Yes |
-| $unwind | Yes |
+| `addFields` | Yes |
+| `bucket` | No |
+| `bucketAuto` | No |
+| `changeStream` | Yes |
+| `collStats` | No |
+| `count` | Yes |
+| `currentOp` | No |
+| `facet` | Yes |
+| `geoNear` | Yes |
+| `graphLookup` | Yes |
+| `group` | Yes |
+| `indexStats` | No |
+| `limit` | Yes |
+| `listLocalSessions` | No |
+| `listSessions` | No |
+| `lookup` | Partial |
+| `match` | Yes |
+| `out` | Yes |
+| `project` | Yes |
+| `redact` | Yes |
+| `replaceRoot` | Yes |
+| `replaceWith` | No |
+| `sample` | Yes |
+| `skip` | Yes |
+| `sort` | Yes |
+| `sortByCount` | Yes |
+| `unwind` | Yes |
> [!NOTE]
> `$lookup` does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields.
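For example (collection and field names are illustrative), the correlated `localField`/`foreignField` form of `$lookup` is supported, while the `let`/`pipeline` form returns the `let is not supported` error:

```javascript
// Supported: correlated $lookup using localField and foreignField.
db.orders.aggregate([
  { $lookup: {
      from: "customers",
      localField: "customerId",
      foreignField: "_id",
      as: "customer"
  } }
])

// Not supported: uncorrelated subquery using let and pipeline.
// db.orders.aggregate([
//   { $lookup: {
//       from: "customers",
//       let: { cid: "$customerId" },
//       pipeline: [ { $match: { $expr: { $eq: [ "$_id", "$$cid" ] } } } ],
//       as: "customer"
//   } }
// ])
```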
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $and | Yes |
-| $not | Yes |
-| $or | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `or` | Yes |
### Set expressions | Command | Supported | |||
-| $setEquals | Yes |
-| $setIntersection | Yes |
-| $setUnion | Yes |
-| $setDifference | Yes |
-| $setIsSubset | Yes |
-| $anyElementTrue | Yes |
-| $allElementsTrue | Yes |
+| `setEquals` | Yes |
+| `setIntersection` | Yes |
+| `setUnion` | Yes |
+| `setDifference` | Yes |
+| `setIsSubset` | Yes |
+| `anyElementTrue` | Yes |
+| `allElementsTrue` | Yes |
### Comparison expressions
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $cmp | Yes |
-| $eq | Yes |
-| $gt | Yes |
-| $gte | Yes |
-| $lt | Yes |
-| $lte | Yes |
-| $ne | Yes |
-| $in | Yes |
-| $nin | Yes |
+| `cmp` | Yes |
+| `eq` | Yes |
+| `gt` | Yes |
+| `gte` | Yes |
+| `lt` | Yes |
+| `lte` | Yes |
+| `ne` | Yes |
+| `in` | Yes |
+| `nin` | Yes |
### Arithmetic expressions | Command | Supported | |||
-| $abs | Yes |
-| $add | Yes |
-| $ceil | Yes |
-| $divide | Yes |
-| $exp | Yes |
-| $floor | Yes |
-| $ln | Yes |
-| $log | Yes |
-| $log10 | Yes |
-| $mod | Yes |
-| $multiply | Yes |
-| $pow | Yes |
-| $sqrt | Yes |
-| $subtract | Yes |
-| $trunc | Yes |
+| `abs` | Yes |
+| `add` | Yes |
+| `ceil` | Yes |
+| `divide` | Yes |
+| `exp` | Yes |
+| `floor` | Yes |
+| `ln` | Yes |
+| `log` | Yes |
+| `log10` | Yes |
+| `mod` | Yes |
+| `multiply` | Yes |
+| `pow` | Yes |
+| `sqrt` | Yes |
+| `subtract` | Yes |
+| `trunc` | Yes |
### String expressions | Command | Supported | |||
-| $concat | Yes |
-| $indexOfBytes | Yes |
-| $indexOfCP | Yes |
-| $split | Yes |
-| $strLenBytes | Yes |
-| $strLenCP | Yes |
-| $strcasecmp | Yes |
-| $substr | Yes |
-| $substrBytes | Yes |
-| $substrCP | Yes |
-| $toLower | Yes |
-| $toUpper | Yes |
+| `concat` | Yes |
+| `indexOfBytes` | Yes |
+| `indexOfCP` | Yes |
+| `split` | Yes |
+| `strLenBytes` | Yes |
+| `strLenCP` | Yes |
+| `strcasecmp` | Yes |
+| `substr` | Yes |
+| `substrBytes` | Yes |
+| `substrCP` | Yes |
+| `toLower` | Yes |
+| `toUpper` | Yes |
### Text search operator | Command | Supported | |||
-| $meta | No |
+| `meta` | No |
### Array expressions | Command | Supported | |||
-| $arrayElemAt | Yes |
-| $arrayToObject | Yes |
-| $concatArrays | Yes |
-| $filter | Yes |
-| $indexOfArray | Yes |
-| $isArray | Yes |
-| $objectToArray | Yes |
-| $range | Yes |
-| $reverseArray | Yes |
-| $reduce | Yes |
-| $size | Yes |
-| $slice | Yes |
-| $zip | Yes |
-| $in | Yes |
+| `arrayElemAt` | Yes |
+| `arrayToObject` | Yes |
+| `concatArrays` | Yes |
+| `filter` | Yes |
+| `indexOfArray` | Yes |
+| `isArray` | Yes |
+| `objectToArray` | Yes |
+| `range` | Yes |
+| `reverseArray` | Yes |
+| `reduce` | Yes |
+| `size` | Yes |
+| `slice` | Yes |
+| `zip` | Yes |
+| `in` | Yes |
### Variable operators | Command | Supported | |||
-| $map | Yes |
-| $let | Yes |
+| `map` | Yes |
+| `let` | Yes |
### System variables | Command | Supported | |||
-| $$CURRENT | Yes |
-| $$DESCEND | Yes |
-| $$KEEP | Yes |
-| $$PRUNE | Yes |
-| $$REMOVE | Yes |
-| $$ROOT | Yes |
+| `$$CURRENT` | Yes |
+| `$$DESCEND` | Yes |
+| `$$KEEP` | Yes |
+| `$$PRUNE` | Yes |
+| `$$REMOVE` | Yes |
+| `$$ROOT` | Yes |
### Literal operator | Command | Supported | |||
-| $literal | Yes |
+| `literal` | Yes |
### Date expressions | Command | Supported | |||
-| $dayOfYear | Yes |
-| $dayOfMonth | Yes |
-| $dayOfWeek | Yes |
-| $year | Yes |
-| $month | Yes |
-| $week | Yes |
-| $hour | Yes |
-| $minute | Yes |
-| $second | Yes |
-| $millisecond | Yes |
-| $dateToString | Yes |
-| $isoDayOfWeek | Yes |
-| $isoWeek | Yes |
-| $dateFromParts | Yes |
-| $dateToParts | Yes |
-| $dateFromString | Yes |
-| $isoWeekYear | Yes |
+| `dayOfYear` | Yes |
+| `dayOfMonth` | Yes |
+| `dayOfWeek` | Yes |
+| `year` | Yes |
+| `month` | Yes |
+| `week` | Yes |
+| `hour` | Yes |
+| `minute` | Yes |
+| `second` | Yes |
+| `millisecond` | Yes |
+| `dateToString` | Yes |
+| `isoDayOfWeek` | Yes |
+| `isoWeek` | Yes |
+| `dateFromParts` | Yes |
+| `dateToParts` | Yes |
+| `dateFromString` | Yes |
+| `isoWeekYear` | Yes |
### Conditional expressions | Command | Supported | |||
-| $cond | Yes |
-| $ifNull | Yes |
-| $switch | Yes |
+| `cond` | Yes |
+| `ifNull` | Yes |
+| `switch` | Yes |
### Data type operator | Command | Supported | |||
-| $type | Yes |
+| `type` | Yes |
### Accumulator expressions | Command | Supported | |||
-| $sum | Yes |
-| $avg | Yes |
-| $first | Yes |
-| $last | Yes |
-| $max | Yes |
-| $min | Yes |
-| $push | Yes |
-| $addToSet | Yes |
-| $stdDevPop | Yes |
-| $stdDevSamp | Yes |
+| `sum` | Yes |
+| `avg` | Yes |
+| `first` | Yes |
+| `last` | Yes |
+| `max` | Yes |
+| `min` | Yes |
+| `push` | Yes |
+| `addToSet` | Yes |
+| `stdDevPop` | Yes |
+| `stdDevSamp` | Yes |
### Merge operator | Command | Supported | |||
-| $mergeObjects | Yes |
+| `mergeObjects` | Yes |
## Data types | Command | Supported | |||
-| Double | Yes |
-| String | Yes |
-| Object | Yes |
-| Array | Yes |
-| Binary Data | Yes |
-| ObjectId | Yes |
-| Boolean | Yes |
-| Date | Yes |
-| Null | Yes |
-| 32-bit Integer (int) | Yes |
-| Timestamp | Yes |
-| 64-bit Integer (long) | Yes |
-| MinKey | Yes |
-| MaxKey | Yes |
-| Decimal128 | Yes |
-| Regular Expression | Yes |
-| JavaScript | Yes |
-| JavaScript (with scope)| Yes |
-| Undefined | Yes |
+| `Double` | Yes |
+| `String` | Yes |
+| `Object` | Yes |
+| `Array` | Yes |
+| `Binary Data` | Yes |
+| `ObjectId` | Yes |
+| `Boolean` | Yes |
+| `Date` | Yes |
+| `Null` | Yes |
+| `32-bit Integer (int)` | Yes |
+| `Timestamp` | Yes |
+| `64-bit Integer (long)` | Yes |
+| `MinKey` | Yes |
+| `MaxKey` | Yes |
+| `Decimal128` | Yes |
+| `Regular Expression` | Yes |
+| `JavaScript` | Yes |
+| `JavaScript (with scope)` | Yes |
+| `Undefined` | Yes |
## Indexes and index properties
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| Single Field Index | Yes |
-| Compound Index | Yes |
-| Multikey Index | Yes |
-| Text Index | No |
-| 2dsphere | Yes |
-| 2d Index | No |
-| Hashed Index | Yes |
+| `Single Field Index` | Yes |
+| `Compound Index` | Yes |
+| `Multikey Index` | Yes |
+| `Text Index` | No |
+| `2dsphere` | Yes |
+| `2d Index` | No |
+| `Hashed Index` | Yes |
### Index properties | Command | Supported | |||
-| TTL | Yes |
-| Unique | Yes |
-| Partial | No |
-| Case Insensitive | No |
-| Sparse | No |
-| Background | Yes |
+| `TTL` | Yes |
+| `Unique` | Yes |
+| `Partial` | No |
+| `Case Insensitive` | No |
+| `Sparse` | No |
+| `Background` | Yes |
## Operators
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $or | Yes |
-| $and | Yes |
-| $not | Yes |
-| $nor | Yes |
+| `or` | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `nor` | Yes |
### Element operators | Command | Supported | |||
-| $exists | Yes |
-| $type | Yes |
+| `exists` | Yes |
+| `type` | Yes |
### Evaluation query operators | Command | Supported | |||
-| $expr | Yes |
-| $jsonSchema | No |
-| $mod | Yes |
-| $regex | Yes |
-| $text | No (Not supported. Use $regex instead.)|
-| $where | No |
+| `expr` | Yes |
+| `jsonSchema` | No |
+| `mod` | Yes |
+| `regex` | Yes |
+| `text` | No (Not supported. Use $regex instead.)|
+| `where` | No |
In $regex queries, left-anchored expressions allow an index search. However, using the 'i' modifier (case-insensitivity) and the 'm' modifier (multiline) causes a collection scan in all expressions.
-When there's a need to include '$' or '|', it is best to create two (or more) regex queries. For example, given the following original query: ```find({x:{$regex: /^abc$/})```, it has to be modified as follows:
+When there's a need to include `$` or `|`, it's best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
`find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})`
-The first part will use the index to restrict the search to those documents beginning with ^abc and the second part will match the exact entries. The bar operator '|' acts as an "or" function - the query ```find({x:{$regex: /^abc |^def/})``` matches the documents in which field 'x' has values that begin with "abc" or "def". To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: ```find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })```.
+The first part will use the index to restrict the search to those documents beginning with ^abc and the second part will match the exact entries. The bar operator `|` acts as an "or" function - the query `find({x:{$regex: /^abc |^def/})` matches the documents in which field `x` has values that begin with `"abc"` or `"def"`. To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: `find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })`.
### Array operators
-| Command | Supported |
+| Command | Supported |
|||
-| $all | Yes |
-| $elemMatch | Yes |
-| $size | Yes |
+| `all` | Yes |
+| `elemMatch` | Yes |
+| `size` | Yes |
### Comment operator
-| Command | Supported |
+| Command | Supported |
|||
-| $comment | Yes |
+| `comment` | Yes |
### Projection operators | Command | Supported | |||
-| $elemMatch | Yes |
-| $meta | No |
-| $slice | Yes |
+| `elemMatch` | Yes |
+| `meta` | No |
+| `slice` | Yes |
### Update operators
The first part will use the index to restrict the search to those documents begi
| Command | Supported | |||
-| $inc | Yes |
-| $mul | Yes |
-| $rename | Yes |
-| $setOnInsert | Yes |
-| $set | Yes |
-| $unset | Yes |
-| $min | Yes |
-| $max | Yes |
-| $currentDate | Yes |
+| `inc` | Yes |
+| `mul` | Yes |
+| `rename` | Yes |
+| `setOnInsert` | Yes |
+| `set` | Yes |
+| `unset` | Yes |
+| `min` | Yes |
+| `max` | Yes |
+| `currentDate` | Yes |
#### Array update operators | Command | Supported | |||
-| $ | Yes |
-| $[]| Yes |
-| $[\<identifier\>]| Yes |
-| $addToSet | Yes |
-| $pop | Yes |
-| $pullAll | Yes |
-| $pull | Yes |
-| $push | Yes |
-| $pushAll | Yes |
-
+| `$` | Yes |
+| `$[]` | Yes |
+| `$[<identifier>]` | Yes |
+| `addToSet` | Yes |
+| `pop` | Yes |
+| `pullAll` | Yes |
+| `pull` | Yes |
+| `push` | Yes |
+| `pushAll` | Yes |
#### Update modifiers | Command | Supported | |||
-| $each | Yes |
-| $slice | Yes |
-| $sort | Yes |
-| $position | Yes |
+| `each` | Yes |
+| `slice` | Yes |
+| `sort` | Yes |
+| `position` | Yes |
#### Bitwise update operator | Command | Supported | |||
-| $bit | Yes |
-| $bitsAllSet | No |
-| $bitsAnySet | No |
-| $bitsAllClear | No |
-| $bitsAnyClear | No |
+| `bit` | Yes |
+| `bitsAllSet` | No |
+| `bitsAnySet` | No |
+| `bitsAllClear` | No |
+| `bitsAnyClear` | No |
### Geospatial operators
-Operator | Supported |
- | |
-$geoWithin | Yes |
-$geoIntersects | Yes |
-$near | Yes |
-$nearSphere | Yes |
-$geometry | Yes |
-$minDistance | Yes |
-$maxDistance | Yes |
-$center | No |
-$centerSphere | No |
-$box | No |
-$polygon | No |
+| Operator | Supported |
+| | |
+| `$geoWithin` | Yes |
+| `$geoIntersects` | Yes |
+| `$near` | Yes |
+| `$nearSphere` | Yes |
+| `$geometry` | Yes |
+| `$minDistance` | Yes |
+| `$maxDistance` | Yes |
+| `$center` | No |
+| `$centerSphere` | No |
+| `$box` | No |
+| `$polygon` | No |
## Sort operations
-When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported.
+When you use the `findOneAndUpdate` operation, sort operations on a single field are supported, but sort operations on multiple fields aren't supported.
## Indexing
-The API for MongoDB [supports a variety of indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+
+The API for MongoDB [supports various indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
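For instance, a compound index supports filtering on one field and sorting on another; the names below are illustrative:

```javascript
// Compound index covering queries that filter on category and sort by price.
db.products.createIndex({ "category": 1, "price": -1 })

// This query can be served by the index above.
db.products.find({ "category": "books" }).sort({ "price": -1 })
```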
## GridFS
Azure Cosmos DB supports GridFS through any GridFS-compatible MongoDB driver.
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB doesn't support manual replication commands.
## Retryable Writes
-Azure Cosmos DB does not yet support retryable writes. Client drivers must add `retryWrites=false` to their connection string.
+Azure Cosmos DB doesn't yet support retryable writes. Client drivers must add `retryWrites=false` to their connection string.
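A typical connection string therefore ends with `retrywrites=false`, as in the following sketch; the account name and key are placeholders, and the exact string for your account is available on the **Connection String** page in the Azure portal:

```javascript
// Sketch with the Node.js driver; <account-name> and <account-key> are placeholders.
const { MongoClient } = require("mongodb");

const uri =
  "mongodb://<account-name>:<account-key>@<account-name>.mongo.cosmos.azure.com:10255/" +
  "?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000";

const client = new MongoClient(uri); // retryable writes stay disabled via the query string
```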
## Sharding
-Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
+Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB doesn't support manual sharding commands, which means you don't have to invoke commands such as `addShard`, `balancerStart`, and `moveChunk`. You only need to specify the shard key while creating the containers or querying the data.
## Sessions
-Azure Cosmos DB does not yet support server-side sessions commands.
+Azure Cosmos DB doesn't yet support server-side sessions commands.
## Time-to-live (TTL)
Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document.
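A minimal sketch of enabling TTL on a collection, assuming a TTL index on the internal `_ts` timestamp field (the collection name is illustrative):

```javascript
// Documents expire roughly one hour after their last write, based on the _ts timestamp.
db.events.createIndex({ "_ts": 1 }, { expireAfterSeconds: 3600 })
```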
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, it supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords or keys that can be obtained through the connection string pane in the [Azure portal](https://portal.azure.com).
+Azure Cosmos DB doesn't yet support users and roles. However, it supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords or keys that can be obtained through the connection string pane in the [Azure portal](https://portal.azure.com).
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication, all writes are automatically majority quorum by default when using strong consistency. Any write concern specified by the client code is ignored. To learn more, see [Using consistency levels to maximize availability and performance](../consistency-levels.md) article.
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication, all writes are automatically majority quorum by default when using strong consistency. Any write concern specified by the client code is ignored. To learn more, see the [Using consistency levels to maximize availability and performance](../consistency-levels.md) article.
## Next steps - For further information check [Mongo 3.6 version features](https://devblogs.microsoft.com/cosmosdb/azure-cosmos-dbs-api-for-mongodb-now-supports-server-version-3-6/)-- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
cosmos-db Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md
Title: 4.0 server version supported features and syntax in Azure Cosmos DB's API for MongoDB
-description: Learn about Azure Cosmos DB's API for MongoDB 4.0 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.
+ Title: 4.0 server version supported features and syntax in Azure Cosmos DB for MongoDB
+description: Learn about Azure Cosmos DB for MongoDB 4.0 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.
+++ + Last updated : 10/12/2022 - Previously updated : 04/05/2022--
-# Azure Cosmos DB's API for MongoDB (4.0 server version): supported features and syntax
+# Azure Cosmos DB for MongoDB (4.0 server version): supported features and syntax
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
## Protocol Support
-The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB. When using Azure Cosmos DB's API for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
+The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB accounts, 3.6+ accounts have an endpoint in the format `*.mongo.cosmos.azure.com`, whereas 3.2 accounts have an endpoint in the format `*.documents.azure.com`.
> [!NOTE]
-> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB.
+> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB for MongoDB.
## Query language support
-Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
+Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
## Database commands
-Azure Cosmos DB's API for MongoDB supports the following database commands:
+Azure Cosmos DB for MongoDB supports the following database commands:
### Query and write operation commands | Command | Supported | |||
-| [change streams](change-streams.md) | Yes |
-| delete | Yes |
-| eval | No |
-| find | Yes |
-| findAndModify | Yes |
-| getLastError | Yes |
-| getMore | Yes |
-| getPrevError | No |
-| insert | Yes |
-| parallelCollectionScan | No |
-| resetError | No |
-| update | Yes |
+| [`change streams`](change-streams.md) | Yes |
+| `delete` | Yes |
+| `eval` | No |
+| `find` | Yes |
+| `findAndModify` | Yes |
+| `getLastError` | Yes |
+| `getMore` | Yes |
+| `getPrevError` | No |
+| `insert` | Yes |
+| `parallelCollectionScan` | No |
+| `resetError` | No |
+| `update` | Yes |
### Transaction commands | Command | Supported | |||
-| abortTransaction | Yes |
-| commitTransaction | Yes |
+| `abortTransaction` | Yes |
+| `commitTransaction` | Yes |
### Authentication commands | Command | Supported | |||
-| authenticate | Yes |
-| getnonce | Yes |
-| logout | Yes |
+| `authenticate` | Yes |
+| `getnonce` | Yes |
+| `logout` | Yes |
### Administration commands | Command | Supported | |||
-| cloneCollectionAsCapped | No |
-| collMod | No |
-| connectionStatus | No |
-| convertToCapped | No |
-| copydb | No |
-| create | Yes |
-| createIndexes | Yes |
-| currentOp | Yes |
-| drop | Yes |
-| dropDatabase | Yes |
-| dropIndexes | Yes |
-| filemd5 | Yes |
-| killCursors | Yes |
-| killOp | No |
-| listCollections | Yes |
-| listDatabases | Yes |
-| listIndexes | Yes |
-| reIndex | Yes |
-| renameCollection | No |
+| `cloneCollectionAsCapped` | No |
+| `collMod` | No |
+| `connectionStatus` | No |
+| `convertToCapped` | No |
+| `copydb` | No |
+| `create` | Yes |
+| `createIndexes` | Yes |
+| `currentOp` | Yes |
+| `drop` | Yes |
+| `dropDatabase` | Yes |
+| `dropIndexes` | Yes |
+| `filemd5` | Yes |
+| `killCursors` | Yes |
+| `killOp` | No |
+| `listCollections` | Yes |
+| `listDatabases` | Yes |
+| `listIndexes` | Yes |
+| `reIndex` | Yes |
+| `renameCollection` | No |
### Diagnostics commands | Command | Supported | |||
-| buildInfo | Yes |
-| collStats | Yes |
-| connPoolStats | No |
-| connectionStatus | No |
-| dataSize | No |
-| dbHash | No |
-| dbStats | Yes |
-| explain | Yes |
-| features | No |
-| hostInfo | Yes |
-| listDatabases | Yes |
-| listCommands | No |
-| profiler | No |
-| serverStatus | No |
-| top | No |
-| whatsmyuri | Yes |
+| `buildInfo` | Yes |
+| `collStats` | Yes |
+| `connPoolStats` | No |
+| `connectionStatus` | No |
+| `dataSize` | No |
+| `dbHash` | No |
+| `dbStats` | Yes |
+| `explain` | Yes |
+| `features` | No |
+| `hostInfo` | Yes |
+| `listDatabases` | Yes |
+| `listCommands` | No |
+| `profiler` | No |
+| `serverStatus` | No |
+| `top` | No |
+| `whatsmyuri` | Yes |
## <a name="aggregation-pipeline"></a>Aggregation pipeline
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| aggregate | Yes |
-| count | Yes |
-| distinct | Yes |
-| mapReduce | No |
+| `aggregate` | Yes |
+| `count` | Yes |
+| `distinct` | Yes |
+| `mapReduce` | No |
### Aggregation stages | Command | Supported | |||
-| $addFields | Yes |
-| $bucket | No |
-| $bucketAuto | No |
-| $changeStream | Yes |
-| $collStats | No |
-| $count | Yes |
-| $currentOp | No |
-| $facet | Yes |
-| $geoNear | Yes |
-| $graphLookup | Yes |
-| $group | Yes |
-| $indexStats | No |
-| $limit | Yes |
-| $listLocalSessions | No |
-| $listSessions | No |
-| $lookup | Partial |
-| $match | Yes |
-| $out | Yes |
-| $project | Yes |
-| $redact | Yes |
-| $replaceRoot | Yes |
-| $replaceWith | No |
-| $sample | Yes |
-| $skip | Yes |
-| $sort | Yes |
-| $sortByCount | Yes |
-| $unwind | Yes |
+| `addFields` | Yes |
+| `bucket` | No |
+| `bucketAuto` | No |
+| `changeStream` | Yes |
+| `collStats` | No |
+| `count` | Yes |
+| `currentOp` | No |
+| `facet` | Yes |
+| `geoNear` | Yes |
+| `graphLookup` | Yes |
+| `group` | Yes |
+| `indexStats` | No |
+| `limit` | Yes |
+| `listLocalSessions` | No |
+| `listSessions` | No |
+| `lookup` | Partial |
+| `match` | Yes |
+| `out` | Yes |
+| `project` | Yes |
+| `redact` | Yes |
+| `replaceRoot` | Yes |
+| `replaceWith` | No |
+| `sample` | Yes |
+| `skip` | Yes |
+| `sort` | Yes |
+| `sortByCount` | Yes |
+| `unwind` | Yes |
> [!NOTE]
> `$lookup` does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields.
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $and | Yes |
-| $not | Yes |
-| $or | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `or` | Yes |
### Conversion expressions | Command | Supported | |||
-| $convert | Yes |
-| $toBool | Yes |
-| $toDate | Yes |
-| $toDecimal | Yes |
-| $toDouble | Yes |
-| $toInt | Yes |
-| $toLong | Yes |
-| $toObjectId | Yes |
-| $toString | Yes |
+| `convert` | Yes |
+| `toBool` | Yes |
+| `toDate` | Yes |
+| `toDecimal` | Yes |
+| `toDouble` | Yes |
+| `toInt` | Yes |
+| `toLong` | Yes |
+| `toObjectId` | Yes |
+| `toString` | Yes |
### Set expressions | Command | Supported | |||
-| $setEquals | Yes |
-| $setIntersection | Yes |
-| $setUnion | Yes |
-| $setDifference | Yes |
-| $setIsSubset | Yes |
-| $anyElementTrue | Yes |
-| $allElementsTrue | Yes |
+| `setEquals` | Yes |
+| `setIntersection` | Yes |
+| `setUnion` | Yes |
+| `setDifference` | Yes |
+| `setIsSubset` | Yes |
+| `anyElementTrue` | Yes |
+| `allElementsTrue` | Yes |
### Comparison expressions
Azure Cosmos DB's API for MongoDB supports the following database commands:
| Command | Supported | |||
-| $cmp | Yes |
-| $eq | Yes |
-| $gt | Yes |
-| $gte | Yes |
-| $lt | Yes |
-| $lte | Yes |
-| $ne | Yes |
-| $in | Yes |
-| $nin | Yes |
+| `cmp` | Yes |
+| `eq` | Yes |
+| `gt` | Yes |
+| `gte` | Yes |
+| `lt` | Yes |
+| `lte` | Yes |
+| `ne` | Yes |
+| `in` | Yes |
+| `nin` | Yes |
### Arithmetic expressions | Command | Supported | |||
-| $abs | Yes |
-| $add | Yes |
-| $ceil | Yes |
-| $divide | Yes |
-| $exp | Yes |
-| $floor | Yes |
-| $ln | Yes |
-| $log | Yes |
-| $log10 | Yes |
-| $mod | Yes |
-| $multiply | Yes |
-| $pow | Yes |
-| $sqrt | Yes |
-| $subtract | Yes |
-| $trunc | Yes |
+| `abs` | Yes |
+| `add` | Yes |
+| `ceil` | Yes |
+| `divide` | Yes |
+| `exp` | Yes |
+| `floor` | Yes |
+| `ln` | Yes |
+| `log` | Yes |
+| `log10` | Yes |
+| `mod` | Yes |
+| `multiply` | Yes |
+| `pow` | Yes |
+| `sqrt` | Yes |
+| `subtract` | Yes |
+| `trunc` | Yes |
### String expressions | Command | Supported | |||
-| $concat | Yes |
-| $indexOfBytes | Yes |
-| $indexOfCP | Yes |
-| $ltrim | Yes |
-| $rtrim | Yes |
-| $trim | Yes |
-| $split | Yes |
-| $strLenBytes | Yes |
-| $strLenCP | Yes |
-| $strcasecmp | Yes |
-| $substr | Yes |
-| $substrBytes | Yes |
-| $substrCP | Yes |
-| $toLower | Yes |
-| $toUpper | Yes |
+| `concat` | Yes |
+| `indexOfBytes` | Yes |
+| `indexOfCP` | Yes |
+| `ltrim` | Yes |
+| `rtrim` | Yes |
+| `trim` | Yes |
+| `split` | Yes |
+| `strLenBytes` | Yes |
+| `strLenCP` | Yes |
+| `strcasecmp` | Yes |
+| `substr` | Yes |
+| `substrBytes` | Yes |
+| `substrCP` | Yes |
+| `toLower` | Yes |
+| `toUpper` | Yes |
### Text search operator | Command | Supported | |||
-| $meta | No |
+| `meta` | No |
### Array expressions | Command | Supported | |||
-| $arrayElemAt | Yes |
-| $arrayToObject | Yes |
-| $concatArrays | Yes |
-| $filter | Yes |
-| $indexOfArray | Yes |
-| $isArray | Yes |
-| $objectToArray | Yes |
-| $range | Yes |
-| $reverseArray | Yes |
-| $reduce | Yes |
-| $size | Yes |
-| $slice | Yes |
-| $zip | Yes |
-| $in | Yes |
+| `arrayElemAt` | Yes |
+| `arrayToObject` | Yes |
+| `concatArrays` | Yes |
+| `filter` | Yes |
+| `indexOfArray` | Yes |
+| `isArray` | Yes |
+| `objectToArray` | Yes |
+| `range` | Yes |
+| `reverseArray` | Yes |
+| `reduce` | Yes |
+| `size` | Yes |
+| `slice` | Yes |
+| `zip` | Yes |
+| `in` | Yes |
### Variable operators | Command | Supported | |||
-| $map | Yes |
-| $let | Yes |
+| `map` | Yes |
+| `let` | Yes |
### System variables | Command | Supported | |||
-| $$CURRENT | Yes |
-| $$DESCEND | Yes |
-| $$KEEP | Yes |
-| $$PRUNE | Yes |
-| $$REMOVE | Yes |
-| $$ROOT | Yes |
+| `$$CURRENT` | Yes |
+| `$$DESCEND` | Yes |
+| `$$KEEP` | Yes |
+| `$$PRUNE` | Yes |
+| `$$REMOVE` | Yes |
+| `$$ROOT` | Yes |
### Literal operator | Command | Supported | |||
-| $literal | Yes |
+| `literal` | Yes |
### Date expressions | Command | Supported | |||
-| $dayOfYear | Yes |
-| $dayOfMonth | Yes |
-| $dayOfWeek | Yes |
-| $year | Yes |
-| $month | Yes |
-| $week | Yes |
-| $hour | Yes |
-| $minute | Yes |
-| $second | Yes |
-| $millisecond | Yes |
-| $dateToString | Yes |
-| $isoDayOfWeek | Yes |
-| $isoWeek | Yes |
-| $dateFromParts | Yes |
-| $dateToParts | Yes |
-| $dateFromString | Yes |
-| $isoWeekYear | Yes |
+| `dayOfYear` | Yes |
+| `dayOfMonth` | Yes |
+| `dayOfWeek` | Yes |
+| `year` | Yes |
+| `month` | Yes |
+| `week` | Yes |
+| `hour` | Yes |
+| `minute` | Yes |
+| `second` | Yes |
+| `millisecond` | Yes |
+| `dateToString` | Yes |
+| `isoDayOfWeek` | Yes |
+| `isoWeek` | Yes |
+| `dateFromParts` | Yes |
+| `dateToParts` | Yes |
+| `dateFromString` | Yes |
+| `isoWeekYear` | Yes |
### Conditional expressions | Command | Supported | |||
-| $cond | Yes |
-| $ifNull | Yes |
-| $switch | Yes |
+| `cond` | Yes |
+| `ifNull` | Yes |
+| `switch` | Yes |
### Data type operator | Command | Supported | |||
-| $type | Yes |
+| `type` | Yes |
### Accumulator expressions | Command | Supported | |||
-| $sum | Yes |
-| $avg | Yes |
-| $first | Yes |
-| $last | Yes |
-| $max | Yes |
-| $min | Yes |
-| $push | Yes |
-| $addToSet | Yes |
-| $stdDevPop | Yes |
-| $stdDevSamp | Yes |
+| `sum` | Yes |
+| `avg` | Yes |
+| `first` | Yes |
+| `last` | Yes |
+| `max` | Yes |
+| `min` | Yes |
+| `push` | Yes |
+| `addToSet` | Yes |
+| `stdDevPop` | Yes |
+| `stdDevSamp` | Yes |
### Merge operator | Command | Supported | |||
-| $mergeObjects | Yes |
+| `mergeObjects` | Yes |
## Data types
-Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. The 4.0 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from this.
-
-In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ will not benefit from the enhanced performance until they are updated via a write operation through the 4.0+ endpoint.
+Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. The 4.0 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from optimization.
+
+In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ won't benefit from the enhanced performance until they're updated via a write operation through the 4.0+ endpoint.
-16MB document support raises the size limit for your documents from 2MB to 16MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it cannot be disabled. This feature is not compatible with the Azure Synapse Link feature and/or Continuous Backup.
+16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with the Azure Synapse Link feature and/or Continuous Backup.
-Enabling 16MB can be done in the features tab in the Azure portal or programmatically by [adding the "EnableMongo16MBDocumentSupport" capability](how-to-configure-capabilities.md).
+Enabling 16 MB can be done in the features tab in the Azure portal or programmatically by [adding the "EnableMongo16MBDocumentSupport" capability](how-to-configure-capabilities.md).
-We recommend enabling Server Side Retry to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance.
+We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance.
| Command | Supported | |||
-| Double | Yes |
-| String | Yes |
-| Object | Yes |
-| Array | Yes |
-| Binary Data | Yes |
-| ObjectId | Yes |
-| Boolean | Yes |
-| Date | Yes |
-| Null | Yes |
-| 32-bit Integer (int) | Yes |
-| Timestamp | Yes |
-| 64-bit Integer (long) | Yes |
-| MinKey | Yes |
-| MaxKey | Yes |
-| Decimal128 | Yes |
-| Regular Expression | Yes |
-| JavaScript | Yes |
-| JavaScript (with scope)| Yes |
-| Undefined | Yes |
+| `Double` | Yes |
+| `String` | Yes |
+| `Object` | Yes |
+| `Array` | Yes |
+| `Binary Data` | Yes |
+| `ObjectId` | Yes |
+| `Boolean` | Yes |
+| `Date` | Yes |
+| `Null` | Yes |
+| `32-bit Integer (int)` | Yes |
+| `Timestamp` | Yes |
+| `64-bit Integer (long)` | Yes |
+| `MinKey` | Yes |
+| `MaxKey` | Yes |
+| `Decimal128` | Yes |
+| `Regular Expression` | Yes |
+| `JavaScript` | Yes |
+| `JavaScript (with scope)` | Yes |
+| `Undefined` | Yes |
## Indexes and index properties
We recommend enabling Server Side Retry to ensure requests with larger documents
| Command | Supported | |||
-| Single Field Index | Yes |
-| Compound Index | Yes |
-| Multikey Index | Yes |
-| Text Index | No |
-| 2dsphere | Yes |
-| 2d Index | No |
-| Hashed Index | Yes |
+| `Single Field Index` | Yes |
+| `Compound Index` | Yes |
+| `Multikey Index` | Yes |
+| `Text Index` | No |
+| `2dsphere` | Yes |
+| `2d Index` | No |
+| `Hashed Index` | Yes |
### Index properties | Command | Supported | |||
-| TTL | Yes |
-| Unique | Yes |
-| Partial | No |
-| Case Insensitive | No |
-| Sparse | No |
-| Background | Yes |
+| `TTL` | Yes |
+| `Unique` | Yes |
+| `Partial` | No |
+| `Case Insensitive` | No |
+| `Sparse` | No |
+| `Background` | Yes |
## Operators
We recommend enabling Server Side Retry to ensure requests with larger documents
| Command | Supported | |||
-| $or | Yes |
-| $and | Yes |
-| $not | Yes |
-| $nor | Yes |
+| `or` | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `nor` | Yes |
### Element operators | Command | Supported | |||
-| $exists | Yes |
-| $type | Yes |
+| `exists` | Yes |
+| `type` | Yes |
### Evaluation query operators | Command | Supported | |||
-| $expr | Yes |
-| $jsonSchema | No |
-| $mod | Yes |
-| $regex | Yes |
-| $text | No (Not supported. Use $regex instead.)|
-| $where | No |
+| `expr` | Yes |
+| `jsonSchema` | No |
+| `mod` | Yes |
+| `regex` | Yes |
+| `text` | No (Not supported. Use $regex instead.) |
+| `where` | No |
In $regex queries, left-anchored expressions allow an index search. However, using the 'i' modifier (case-insensitivity) and the 'm' modifier (multiline) causes a collection scan in all expressions.
-When there's a need to include '$' or '|', it is best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
+When there's a need to include '$' or '|', it's best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
`find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})`
The first part will use the index to restrict the search to those documents begi
### Array operators
-| Command | Supported |
+| Command | Supported |
|||
-| $all | Yes |
-| $elemMatch | Yes |
-| $size | Yes |
+| `all` | Yes |
+| `elemMatch` | Yes |
+| `size` | Yes |
### Comment operator
-| Command | Supported |
+| Command | Supported |
|||
-| $comment | Yes |
+| `comment` | Yes |
### Projection operators | Command | Supported | |||
-| $elemMatch | Yes |
-| $meta | No |
-| $slice | Yes |
+| `elemMatch` | Yes |
+| `meta` | No |
+| `slice` | Yes |
### Update operators
The first part will use the index to restrict the search to those documents begi
| Command | Supported | |||
-| $inc | Yes |
-| $mul | Yes |
-| $rename | Yes |
-| $setOnInsert | Yes |
-| $set | Yes |
-| $unset | Yes |
-| $min | Yes |
-| $max | Yes |
-| $currentDate | Yes |
+| `inc` | Yes |
+| `mul` | Yes |
+| `rename` | Yes |
+| `setOnInsert` | Yes |
+| `set` | Yes |
+| `unset` | Yes |
+| `min` | Yes |
+| `max` | Yes |
+| `currentDate` | Yes |
#### Array update operators | Command | Supported | |||
-| $ | Yes |
-| $[]| Yes |
-| $[\<identifier\>]| Yes |
-| $addToSet | Yes |
-| $pop | Yes |
-| $pullAll | Yes |
-| $pull | Yes |
-| $push | Yes |
-| $pushAll | Yes |
+| `$` | Yes |
+| `$[]` | Yes |
+| `$[<identifier>]` | Yes |
+| `addToSet` | Yes |
+| `pop` | Yes |
+| `pullAll` | Yes |
+| `pull` | Yes |
+| `push` | Yes |
+| `pushAll` | Yes |
#### Update modifiers | Command | Supported | |||
-| $each | Yes |
-| $slice | Yes |
-| $sort | Yes |
-| $position | Yes |
+| `each` | Yes |
+| `slice` | Yes |
+| `sort` | Yes |
+| `position` | Yes |
#### Bitwise update operator | Command | Supported | |||
-| $bit | Yes |
-| $bitsAllSet | No |
-| $bitsAnySet | No |
-| $bitsAllClear | No |
-| $bitsAnyClear | No |
+| `bit` | Yes |
+| `bitsAllSet` | No |
+| `bitsAnySet` | No |
+| `bitsAllClear` | No |
+| `bitsAnyClear` | No |
### Geospatial operators
-Operator | Supported |
- | |
-$geoWithin | Yes |
-$geoIntersects | Yes |
-$near | Yes |
-$nearSphere | Yes |
-$geometry | Yes |
-$minDistance | Yes |
-$maxDistance | Yes |
-$center | No |
-$centerSphere | No |
-$box | No |
-$polygon | No |
+| Operator | Supported |
+| | |
+| `$geoWithin` | Yes |
+| `$geoIntersects` | Yes |
+| `$near` | Yes |
+| `$nearSphere` | Yes |
+| `$geometry` | Yes |
+| `$minDistance` | Yes |
+| `$maxDistance` | Yes |
+| `$center` | No |
+| `$centerSphere` | No |
+| `$box` | No |
+| `$polygon` | No |
## Sort operations
-When using the `findOneAndUpdate` operation with API for MongoDB version 4.0, sort operations on a single field and multiple fields are supported. Sort operations on multiple fields was a limitation of previous wire protocols.
+When you use the `findOneAndUpdate` operation with API for MongoDB version 4.0, sort operations on a single field and multiple fields are supported. Sort operations on multiple fields were a limitation of previous wire protocols.
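For example, with a 4.0 account the following sketch (illustrative names) sorts on two fields inside `findOneAndUpdate`:

```javascript
// Multi-field sort is accepted on API version 4.0.
db.employees.findOneAndUpdate(
  { "department": "sales" },
  { $set: { "reviewed": true } },
  { sort: { "lastName": 1, "firstName": 1 } }
)
```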
## Indexing
-The API for MongoDB [supports a variety of indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+
+The API for MongoDB [supports various indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
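For instance, a compound index and a unique index can be created with standard `createIndex` calls; the collection and field names below are placeholders:

```javascript
// Compound index to support filtering and sorting on two fields.
db.orders.createIndex({ customerId: 1, createdAt: -1 })

// Unique index to enforce uniqueness on a single field.
db.orders.createIndex({ orderNumber: 1 }, { unique: true })
```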
## GridFS
Azure Cosmos DB supports GridFS through any GridFS-compatible Mongo driver.
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB doesn't support manual replication commands.
## Retryable Writes
-Retryable writes enables MongoDB drivers to automatically retry certain write operations in case of failure, but results in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement.
+Retryable writes enable MongoDB drivers to automatically retry certain write operations when they fail, but they result in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement.
-For example, with a sharded collection, sharded on key ΓÇ£countryΓÇ¥: To delete all the documents with the field city = "NYC", the application will need to execute the operation for all shard key (country) values if Retryable writes is enabled.
+For example, with a collection sharded on the key `"country"`: to delete all the documents with the field **city** = `"NYC"`, the application will need to execute the operation for all shard key (country) values if Retryable writes are enabled.
-- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success** -- `db.coll.deleteMany({"city": "NYC"})` - **Fails with error `ShardKeyNotFound(61)`**
+- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success**
+- `db.coll.deleteMany({"city": "NYC"})` - Fails with error **ShardKeyNotFound(61)**
-To enable the feature, [add the EnableMongoRetryableWrites capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.
+To enable the feature, [add the EnableMongoRetryableWrites capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.
## Sharding
-Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
+Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB doesn't support manual sharding commands, which means you don't have to invoke commands such as `addShard`, `balancerStart`, or `moveChunk`. You only need to specify the shard key while creating the containers or querying the data.
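As a sketch of one common approach, a sharded (partitioned) collection can be created by naming the shard key up front with the `shardCollection` command; the database name `mydb`, collection name `coll`, and key `country` are placeholders:

```javascript
// Create a collection sharded on the "country" field.
// No addShard, balancerStart, or moveChunk calls are needed afterwards.
db.runCommand({ shardCollection: "mydb.coll", key: { country: "hashed" } })
```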
## Sessions
-Azure Cosmos DB does not yet support server-side sessions commands.
+Azure Cosmos DB doesn't yet support server-side session commands.
## Time-to-live (TTL)
Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document.
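As a minimal sketch, a TTL index on the system timestamp field `_ts` expires documents a fixed number of seconds after their last write; the collection name is hypothetical:

```javascript
// Documents in "coll" expire one hour after their last update, based on the _ts timestamp.
db.coll.createIndex({ "_ts": 1 }, { expireAfterSeconds: 3600 })
```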
## Transactions
-Multi-document transactions are supported within an unsharded collection. Multi-document transactions are not supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds.
+Multi-document transactions are supported within an unsharded collection. Multi-document transactions aren't supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds.
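The following mongosh sketch shows a multi-document transaction confined to a single unsharded collection; the database, collection, and document values are hypothetical:

```javascript
// Transfer a value between two documents in one unsharded collection.
const session = db.getMongo().startSession();
const accounts = session.getDatabase("bank").accounts;

session.startTransaction();
try {
  accounts.updateOne({ _id: "A" }, { $inc: { balance: -100 } });
  accounts.updateOne({ _id: "B" }, { $inc: { balance: 100 } });
  session.commitTransaction();   // must complete within the fixed 5-second timeout
} catch (error) {
  session.abortTransaction();
  throw error;
} finally {
  session.endSession();
}
```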
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+Azure Cosmos DB doesn't yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
## Next steps
-- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.
-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.
-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
Title: 4.2 server version supported features and syntax in Azure Cosmos DB for MongoDB description: Learn about Azure Cosmos DB for MongoDB 4.2 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.+++ + Last updated : 10/12/2022 - Previously updated : 04/05/2022-- # Azure Cosmos DB for MongoDB (4.2 server version): supported features and syntax+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] Azure Cosmos DB is Microsoft's globally distributed multi-model database service, offering [multiple database APIs](../choose-api.md). You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the Mong
## Protocol Support
-The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When using Azure Cosmos DB for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
+The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
> [!NOTE] > This article only lists the supported server commands, and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with the Azure Cosmos DB for MongoDB.
Azure Cosmos DB for MongoDB supports the following database commands:
| Command | Supported | |||
-| [change streams](change-streams.md) | Yes |
-| delete | Yes |
-| eval | No |
-| find | Yes |
-| findAndModify | Yes |
-| getLastError | Yes |
-| getMore | Yes |
-| getPrevError | No |
-| insert | Yes |
-| parallelCollectionScan | No |
-| resetError | No |
-| update | Yes |
+| [change streams](change-streams.md) | Yes |
+| `delete` | Yes |
+| `eval` | No |
+| `find` | Yes |
+| `findAndModify` | Yes |
+| `getLastError` | Yes |
+| `getMore` | Yes |
+| `getPrevError` | No |
+| `insert` | Yes |
+| `parallelCollectionScan` | No |
+| `resetError` | No |
+| `update` | Yes |
### Transaction commands+ > [!NOTE] > Multi-document transactions are only supported within a single non-sharded collection. Cross-collection and cross-shard multi-document transactions are not yet supported in the API for MongoDB. | Command | Supported | |||
-| abortTransaction | Yes |
-| commitTransaction | Yes |
+| `abortTransaction` | Yes |
+| `commitTransaction` | Yes |
### Authentication commands | Command | Supported | |||
-| authenticate | Yes |
-| getnonce | Yes |
-| logout | Yes |
+| `authenticate` | Yes |
+| `getnonce` | Yes |
+| `logout` | Yes |
### Administration commands | Command | Supported | |||
-| cloneCollectionAsCapped | No |
-| collMod | No |
-| connectionStatus | No |
-| convertToCapped | No |
-| copydb | No |
-| create | Yes |
-| createIndexes | Yes |
-| currentOp | Yes |
-| drop | Yes |
-| dropDatabase | Yes |
-| dropIndexes | Yes |
-| filemd5 | Yes |
-| killCursors | Yes |
-| killOp | No |
-| listCollections | Yes |
-| listDatabases | Yes |
-| listIndexes | Yes |
-| reIndex | Yes |
-| renameCollection | No |
+| `cloneCollectionAsCapped` | No |
+| `collMod` | No |
+| `connectionStatus` | No |
+| `convertToCapped` | No |
+| `copydb` | No |
+| `create` | Yes |
+| `createIndexes` | Yes |
+| `currentOp` | Yes |
+| `drop` | Yes |
+| `dropDatabase` | Yes |
+| `dropIndexes` | Yes |
+| `filemd5` | Yes |
+| `killCursors` | Yes |
+| `killOp` | No |
+| `listCollections` | Yes |
+| `listDatabases` | Yes |
+| `listIndexes` | Yes |
+| `reIndex` | Yes |
+| `renameCollection` | No |
### Diagnostics commands | Command | Supported | |||
-| buildInfo | Yes |
-| collStats | Yes |
-| connPoolStats | No |
-| connectionStatus | No |
-| dataSize | No |
-| dbHash | No |
-| dbStats | Yes |
-| explain | Yes |
-| features | No |
-| hostInfo | Yes |
-| listDatabases | Yes |
-| listCommands | No |
-| profiler | No |
-| serverStatus | No |
-| top | No |
-| whatsmyuri | Yes |
+| `buildInfo` | Yes |
+| `collStats` | Yes |
+| `connPoolStats` | No |
+| `connectionStatus` | No |
+| `dataSize` | No |
+| `dbHash` | No |
+| `dbStats` | Yes |
+| `explain` | Yes |
+| `features` | No |
+| `hostInfo` | Yes |
+| `listDatabases` | Yes |
+| `listCommands` | No |
+| `profiler` | No |
+| `serverStatus` | No |
+| `top` | No |
+| `whatsmyuri` | Yes |
## <a name="aggregation-pipeline"></a>Aggregation pipeline
Azure Cosmos DB for MongoDB supports the following database commands:
| Command | Supported | |||
-| aggregate | Yes |
-| count | Yes |
-| distinct | Yes |
-| mapReduce | No |
+| `aggregate` | Yes |
+| `count` | Yes |
+| `distinct` | Yes |
+| `mapReduce` | No |
### Aggregation stages | Command | Supported | |||
-| $addFields | Yes |
-| $bucket | No |
-| $bucketAuto | No |
-| $changeStream | Yes |
-| $collStats | No |
-| $count | Yes |
-| $currentOp | No |
-| $facet | Yes |
-| $geoNear | Yes |
-| $graphLookup | Yes |
-| $group | Yes |
-| $indexStats | No |
-| $limit | Yes |
-| $listLocalSessions | No |
-| $listSessions | No |
-| $lookup | Partial |
-| $match | Yes |
-| $merge | Yes |
-| $out | Yes |
-| $planCacheStats | Yes |
-| $project | Yes |
-| $redact | Yes |
-| $regexFind | Yes |
-| $regexFindAll | Yes |
-| $regexMatch | Yes |
-| $replaceRoot | Yes |
-| $replaceWith | Yes |
-| $sample | Yes |
-| $set | Yes |
-| $skip | Yes |
-| $sort | Yes |
-| $sortByCount | Yes |
-| $unset | Yes |
-| $unwind | Yes |
+| `addFields` | Yes |
+| `bucket` | No |
+| `bucketAuto` | No |
+| `changeStream` | Yes |
+| `collStats` | No |
+| `count` | Yes |
+| `currentOp` | No |
+| `facet` | Yes |
+| `geoNear` | Yes |
+| `graphLookup` | Yes |
+| `group` | Yes |
+| `indexStats` | No |
+| `limit` | Yes |
+| `listLocalSessions` | No |
+| `listSessions` | No |
+| `lookup` | Partial |
+| `match` | Yes |
+| `merge` | Yes |
+| `out` | Yes |
+| `planCacheStats` | Yes |
+| `project` | Yes |
+| `redact` | Yes |
+| `regexFind` | Yes |
+| `regexFindAll` | Yes |
+| `regexMatch` | Yes |
+| `replaceRoot` | Yes |
+| `replaceWith` | Yes |
+| `sample` | Yes |
+| `set` | Yes |
+| `skip` | Yes |
+| `sort` | Yes |
+| `sortByCount` | Yes |
+| `unset` | Yes |
+| `unwind` | Yes |
> [!NOTE] > The `$lookup` aggregation does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields.
Azure Cosmos DB for MongoDB supports the following database commands:
| Command | Supported | |||
-| $and | Yes |
-| $not | Yes |
-| $or | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `or` | Yes |
### Conversion expressions | Command | Supported | |||
-| $convert | Yes |
-| $toBool | Yes |
-| $toDate | Yes |
-| $toDecimal | Yes |
-| $toDouble | Yes |
-| $toInt | Yes |
-| $toLong | Yes |
-| $toObjectId | Yes |
-| $toString | Yes |
+| `convert` | Yes |
+| `toBool` | Yes |
+| `toDate` | Yes |
+| `toDecimal` | Yes |
+| `toDouble` | Yes |
+| `toInt` | Yes |
+| `toLong` | Yes |
+| `toObjectId` | Yes |
+| `toString` | Yes |
### Set expressions | Command | Supported | |||
-| $setEquals | Yes |
-| $setIntersection | Yes |
-| $setUnion | Yes |
-| $setDifference | Yes |
-| $setIsSubset | Yes |
-| $anyElementTrue | Yes |
-| $allElementsTrue | Yes |
+| `setEquals` | Yes |
+| `setIntersection` | Yes |
+| `setUnion` | Yes |
+| `setDifference` | Yes |
+| `setIsSubset` | Yes |
+| `anyElementTrue` | Yes |
+| `allElementsTrue` | Yes |
### Comparison expressions
Azure Cosmos DB for MongoDB supports the following database commands:
| Command | Supported | |||
-| $cmp | Yes |
-| $eq | Yes |
-| $gt | Yes |
-| $gte | Yes |
-| $lt | Yes |
-| $lte | Yes |
-| $ne | Yes |
-| $in | Yes |
-| $nin | Yes |
+| `cmp` | Yes |
+| `eq` | Yes |
+| `gt` | Yes |
+| `gte` | Yes |
+| `lt` | Yes |
+| `lte` | Yes |
+| `ne` | Yes |
+| `in` | Yes |
+| `nin` | Yes |
### Arithmetic expressions | Command | Supported | |||
-| $abs | Yes |
-| $add | Yes |
-| $ceil | Yes |
-| $divide | Yes |
-| $exp | Yes |
-| $floor | Yes |
-| $ln | Yes |
-| $log | Yes |
-| $log10 | Yes |
-| $mod | Yes |
-| $multiply | Yes |
-| $pow | Yes |
-| $round | Yes |
-| $sqrt | Yes |
-| $subtract | Yes |
-| $trunc | Yes |
+| `abs` | Yes |
+| `add` | Yes |
+| `ceil` | Yes |
+| `divide` | Yes |
+| `exp` | Yes |
+| `floor` | Yes |
+| `ln` | Yes |
+| `log` | Yes |
+| `log10` | Yes |
+| `mod` | Yes |
+| `multiply` | Yes |
+| `pow` | Yes |
+| `round` | Yes |
+| `sqrt` | Yes |
+| `subtract` | Yes |
+| `trunc` | Yes |
### Trigonometry expressions | Command | Supported | |||
-| $acos | Yes |
-| $acosh | Yes |
-| $asin | Yes |
-| $asinh | Yes |
-| $atan | Yes |
-| $atan2 | Yes |
-| $atanh | Yes |
-| $cos | Yes |
-| $cosh | Yes |
-| $degreesToRadians | Yes |
-| $radiansToDegrees | Yes |
-| $sin | Yes |
-| $sinh | Yes |
-| $tan | Yes |
-| $tanh | Yes |
+| `acos` | Yes |
+| `acosh` | Yes |
+| `asin` | Yes |
+| `asinh` | Yes |
+| `atan` | Yes |
+| `atan2` | Yes |
+| `atanh` | Yes |
+| `cos` | Yes |
+| `cosh` | Yes |
+| `degreesToRadians` | Yes |
+| `radiansToDegrees` | Yes |
+| `sin` | Yes |
+| `sinh` | Yes |
+| `tan` | Yes |
+| `tanh` | Yes |
### String expressions | Command | Supported | |||
-| $concat | Yes |
-| $indexOfBytes | Yes |
-| $indexOfCP | Yes |
-| $ltrim | Yes |
-| $rtrim | Yes |
-| $trim | Yes |
-| $split | Yes |
-| $strLenBytes | Yes |
-| $strLenCP | Yes |
-| $strcasecmp | Yes |
-| $substr | Yes |
-| $substrBytes | Yes |
-| $substrCP | Yes |
-| $toLower | Yes |
-| $toUpper | Yes |
+| `concat` | Yes |
+| `indexOfBytes` | Yes |
+| `indexOfCP` | Yes |
+| `ltrim` | Yes |
+| `rtrim` | Yes |
+| `trim` | Yes |
+| `split` | Yes |
+| `strLenBytes` | Yes |
+| `strLenCP` | Yes |
+| `strcasecmp` | Yes |
+| `substr` | Yes |
+| `substrBytes` | Yes |
+| `substrCP` | Yes |
+| `toLower` | Yes |
+| `toUpper` | Yes |
### Text search operator | Command | Supported | |||
-| $meta | No |
+| `meta` | No |
### Array expressions | Command | Supported | |||
-| $arrayElemAt | Yes |
-| $arrayToObject | Yes |
-| $concatArrays | Yes |
-| $filter | Yes |
-| $indexOfArray | Yes |
-| $isArray | Yes |
-| $objectToArray | Yes |
-| $range | Yes |
-| $reverseArray | Yes |
-| $reduce | Yes |
-| $size | Yes |
-| $slice | Yes |
-| $zip | Yes |
-| $in | Yes |
+| `arrayElemAt` | Yes |
+| `arrayToObject` | Yes |
+| `concatArrays` | Yes |
+| `filter` | Yes |
+| `indexOfArray` | Yes |
+| `isArray` | Yes |
+| `objectToArray` | Yes |
+| `range` | Yes |
+| `reverseArray` | Yes |
+| `reduce` | Yes |
+| `size` | Yes |
+| `slice` | Yes |
+| `zip` | Yes |
+| `in` | Yes |
### Variable operators | Command | Supported | |||
-| $map | Yes |
-| $let | Yes |
+| `map` | Yes |
+| `let` | Yes |
### System variables | Command | Supported | |||
-| $$CLUSTERTIME | Yes |
-| $$CURRENT | Yes |
-| $$DESCEND | Yes |
-| $$KEEP | Yes |
-| $$NOW | Yes |
-| $$PRUNE | Yes |
-| $$REMOVE | Yes |
-| $$ROOT | Yes |
+| `$$CLUSTERTIME` | Yes |
+| `$$CURRENT` | Yes |
+| `$$DESCEND` | Yes |
+| `$$KEEP` | Yes |
+| `$$NOW` | Yes |
+| `$$PRUNE` | Yes |
+| `$$REMOVE` | Yes |
+| `$$ROOT` | Yes |
### Literal operator | Command | Supported | |||
-| $literal | Yes |
+| `literal` | Yes |
### Date expressions | Command | Supported | |||
-| $dayOfYear | Yes |
-| $dayOfMonth | Yes |
-| $dayOfWeek | Yes |
-| $year | Yes |
-| $month | Yes |
-| $week | Yes |
-| $hour | Yes |
-| $minute | Yes |
-| $second | Yes |
-| $millisecond | Yes |
-| $dateToString | Yes |
-| $isoDayOfWeek | Yes |
-| $isoWeek | Yes |
-| $dateFromParts | Yes |
-| $dateToParts | Yes |
-| $dateFromString | Yes |
-| $isoWeekYear | Yes |
+| `dayOfYear` | Yes |
+| `dayOfMonth` | Yes |
+| `dayOfWeek` | Yes |
+| `year` | Yes |
+| `month` | Yes |
+| `week` | Yes |
+| `hour` | Yes |
+| `minute` | Yes |
+| `second` | Yes |
+| `millisecond` | Yes |
+| `dateToString` | Yes |
+| `isoDayOfWeek` | Yes |
+| `isoWeek` | Yes |
+| `dateFromParts` | Yes |
+| `dateToParts` | Yes |
+| `dateFromString` | Yes |
+| `isoWeekYear` | Yes |
### Conditional expressions | Command | Supported | |||
-| $cond | Yes |
-| $ifNull | Yes |
-| $switch | Yes |
+| `cond` | Yes |
+| `ifNull` | Yes |
+| `switch` | Yes |
### Data type operator | Command | Supported | |||
-| $type | Yes |
+| `type` | Yes |
### Accumulator expressions | Command | Supported | |||
-| $sum | Yes |
-| $avg | Yes |
-| $first | Yes |
-| $last | Yes |
-| $max | Yes |
-| $min | Yes |
-| $push | Yes |
-| $addToSet | Yes |
-| $stdDevPop | Yes |
-| $stdDevSamp | Yes |
+| `sum` | Yes |
+| `avg` | Yes |
+| `first` | Yes |
+| `last` | Yes |
+| `max` | Yes |
+| `min` | Yes |
+| `push` | Yes |
+| `addToSet` | Yes |
+| `stdDevPop` | Yes |
+| `stdDevSamp` | Yes |
### Merge operator | Command | Supported | |||
-| $mergeObjects | Yes |
+| `mergeObjects` | Yes |
## Data types
-Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. Versions 4.0 and higher (4.0+) enhance the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from this.
-
-In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ will not benefit from the enhanced performance until they are updated via a write operation through the 4.0+ endpoint.
+Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. Versions 4.0 and higher (4.0+) enhance the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from this optimization.
+
+In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ won't benefit from the enhanced performance until they're updated via a write operation through the 4.0+ endpoint.
-16 MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it cannot be disabled. This feature is not compatible with the Azure Synapse Link feature and/or Continuous Backup.
+16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with the Azure Synapse Link feature and/or Continuous Backup.
-Enabling 16 MB can be done in the features tab in the Azure portal or programmatically by [adding the `EnableMongo16MBDocumentSupport` capability](how-to-configure-capabilities.md).
+Enabling 16 MB can be done in the features tab in the Azure portal or programmatically by [adding the `EnableMongo16MBDocumentSupport` capability](how-to-configure-capabilities.md).
-We recommend enabling Server Side Retry to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance.
+We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance.
| Command | Supported | |||
-| Double | Yes |
-| String | Yes |
-| Object | Yes |
-| Array | Yes |
-| Binary Data | Yes |
-| ObjectId | Yes |
-| Boolean | Yes |
-| Date | Yes |
-| Null | Yes |
-| 32-bit Integer (int) | Yes |
-| Timestamp | Yes |
-| 64-bit Integer (long) | Yes |
-| MinKey | Yes |
-| MaxKey | Yes |
-| Decimal128 | Yes |
-| Regular Expression | Yes |
-| JavaScript | Yes |
-| JavaScript (with scope)| Yes |
-| Undefined | Yes |
+| `Double` | Yes |
+| `String` | Yes |
+| `Object` | Yes |
+| `Array` | Yes |
+| `Binary Data` | Yes |
+| `ObjectId` | Yes |
+| `Boolean` | Yes |
+| `Date` | Yes |
+| `Null` | Yes |
+| `32-bit Integer (int)` | Yes |
+| `Timestamp` | Yes |
+| `64-bit Integer (long)` | Yes |
+| `MinKey` | Yes |
+| `MaxKey` | Yes |
+| `Decimal128` | Yes |
+| `Regular Expression` | Yes |
+| `JavaScript` | Yes |
+| `JavaScript (with scope)` | Yes |
+| `Undefined` | Yes |
## Indexes and index properties
We recommend enabling Server Side Retry to ensure requests with larger documents
| Command | Supported | |||
-| Single Field Index | Yes |
-| Compound Index | Yes |
-| Multikey Index | Yes |
-| Text Index | No |
-| 2dsphere | Yes |
-| 2d Index | No |
-| Hashed Index | Yes |
+| `Single Field Index` | Yes |
+| `Compound Index` | Yes |
+| `Multikey Index` | Yes |
+| `Text Index` | No |
+| `2dsphere` | Yes |
+| `2d Index` | No |
+| `Hashed Index` | Yes |
### Index properties | Command | Supported | |||
-| TTL | Yes |
-| Unique | Yes |
-| Partial | No |
-| Case Insensitive | No |
-| Sparse | No |
-| Background | Yes |
+| `TTL` | Yes |
+| `Unique` | Yes |
+| `Partial` | No |
+| `Case Insensitive` | No |
+| `Sparse` | No |
+| `Background` | Yes |
## Operators
We recommend enabling Server Side Retry to ensure requests with larger documents
| Command | Supported | |||
-| $or | Yes |
-| $and | Yes |
-| $not | Yes |
-| $nor | Yes |
+| `or` | Yes |
+| `and` | Yes |
+| `not` | Yes |
+| `nor` | Yes |
### Element operators | Command | Supported | |||
-| $exists | Yes |
-| $type | Yes |
+| `exists` | Yes |
+| `type` | Yes |
### Evaluation query operators | Command | Supported | |||
-| $expr | Yes |
-| $jsonSchema | No |
-| $mod | Yes |
-| $regex | Yes |
-| $text | No (Not supported. Use $regex instead.)|
-| $where | No |
+| `expr` | Yes |
+| `jsonSchema` | No |
+| `mod` | Yes |
+| `regex` | Yes |
+| `text` | No (Not supported. Use $regex instead.)|
+| `where` | No |
In $regex queries, left-anchored expressions allow an index search. However, using the 'i' modifier (case insensitivity) or the 'm' modifier (multiline) causes a collection scan for all expressions.
-When there's a need to include '$' or '|', it is best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows:
+When there's a need to include '$' or '|', it's best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/}})`, it has to be modified as follows:
`find({x:{$regex: /^abc/}, x:{$regex:/^abc$/}})`
The first part will use the index to restrict the search to those documents beginning with `^abc`, and the second part will match the exact entries.
### Array operators
-| Command | Supported |
+| Command | Supported |
|||
-| $all | Yes |
-| $elemMatch | Yes |
-| $size | Yes |
+| `all` | Yes |
+| `elemMatch` | Yes |
+| `size` | Yes |
### Comment operator
-| Command | Supported |
+| Command | Supported |
|||
-| $comment | Yes |
+| `comment` | Yes |
### Projection operators | Command | Supported | |||
-| $elemMatch | Yes |
-| $meta | No |
-| $slice | Yes |
+| `elemMatch` | Yes |
+| `meta` | No |
+| `slice` | Yes |
### Update operators
The first part will use the index to restrict the search to those documents begi
| Command | Supported | |||
-| $inc | Yes |
-| $mul | Yes |
-| $rename | Yes |
-| $setOnInsert | Yes |
-| $set | Yes |
-| $unset | Yes |
-| $min | Yes |
-| $max | Yes |
-| $currentDate | Yes |
+| `inc` | Yes |
+| `mul` | Yes |
+| `rename` | Yes |
+| `setOnInsert` | Yes |
+| `set` | Yes |
+| `unset` | Yes |
+| `min` | Yes |
+| `max` | Yes |
+| `currentDate` | Yes |
#### Array update operators | Command | Supported | |||
-| $ | Yes |
-| $[]| Yes |
-| $[\<identifier\>]| Yes |
-| $addToSet | Yes |
-| $pop | Yes |
-| $pullAll | Yes |
-| $pull | Yes |
-| $push | Yes |
-| $pushAll | Yes |
+| `$` | Yes |
+| `$[]` | Yes |
+| `$[<identifier>]` | Yes |
+| `addToSet` | Yes |
+| `pop` | Yes |
+| `pullAll` | Yes |
+| `pull` | Yes |
+| `push` | Yes |
+| `pushAll` | Yes |
#### Update modifiers | Command | Supported | |||
-| $each | Yes |
-| $slice | Yes |
-| $sort | Yes |
-| $position | Yes |
+| `each` | Yes |
+| `slice` | Yes |
+| `sort` | Yes |
+| `position` | Yes |
#### Bitwise update operator | Command | Supported | |||
-| $bit | Yes |
-| $bitsAllSet | No |
-| $bitsAnySet | No |
-| $bitsAllClear | No |
-| $bitsAnyClear | No |
+| `bit` | Yes |
+| `bitsAllSet` | No |
+| `bitsAnySet` | No |
+| `bitsAllClear` | No |
+| `bitsAnyClear` | No |
### Geospatial operators
-Operator | Supported |
- | |
-$geoWithin | Yes |
-$geoIntersects | Yes |
-$near | Yes |
-$nearSphere | Yes |
-$geometry | Yes |
-$minDistance | Yes |
-$maxDistance | Yes |
-$center | No |
-$centerSphere | No |
-$box | No |
-$polygon | No |
+| Operator | Supported |
+| | |
+| `$geoWithin` | Yes |
+| `$geoIntersects` | Yes |
+| `$near` | Yes |
+| `$nearSphere` | Yes |
+| `$geometry` | Yes |
+| `$minDistance` | Yes |
+| `$maxDistance` | Yes |
+| `$center` | No |
+| `$centerSphere` | No |
+| `$box` | No |
+| `$polygon` | No |
## Sort operations
-When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported.
+When you use the `findOneAndUpdate` operation, sort operations on a single field are supported, but sort operations on multiple fields aren't supported.
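For example, with the 4.2 wire protocol a single-field sort works with `findOneAndUpdate`, while a multi-field sort document doesn't; the `orders` collection below is hypothetical:

```javascript
// Supported: sort on one field.
db.orders.findOneAndUpdate(
  { status: "pending" },
  { $set: { status: "processed" } },
  { sort: { createdAt: 1 } }
)

// Not supported with the 4.2 server version: a sort document with more than one field,
// for example { sort: { priority: -1, createdAt: 1 } }.
```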
## Indexing
-The API for MongoDB [supports a variety of indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+
+The API for MongoDB [supports various indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
## Client-side field level encryption
-Client-level field encryption is a driver feature and is compatible with the API for MongoDB. Explicit encryption - were the driver explicitly encrypts each field when written is supported. Automatic encryption is not supported. Explicit decryption and automatic decryption is supported.
+Client-side field level encryption is a driver feature and is compatible with the API for MongoDB. Explicit encryption, where the driver explicitly encrypts each field when it's written, is supported. Automatic encryption isn't supported. Explicit decryption and automatic decryption are supported.
-The mongocryptd should not be run since it is not needed to perform any of the supported operations.
+The mongocryptd shouldn't be run since it isn't needed to perform any of the supported operations.
## GridFS
Azure Cosmos DB supports GridFS through any GridFS-compatible Mongo driver.
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB doesn't support manual replication commands.
## Retryable Writes
-Retryable writes enables MongoDB drivers to automatically retry certain write operations in case of failure, but results in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement.
-For example, with a sharded collection, sharded on key ΓÇ£countryΓÇ¥: To delete all the documents with the field city = "NYC", the application will need to execute the operation for all shard key (country) values if Retryable writes is enabled.
+The Retryable writes feature enables MongoDB drivers to automatically retry certain write operations. The feature results in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement.
-db.coll.deleteMany({"country": "USA", "city": "NYC"}) ΓÇô Success
+For example, with a collection sharded on the key `"country"`: to delete all the documents with the field **city** = `"NYC"`, the application will need to execute the operation for all shard key (country) values if the Retryable writes feature is enabled.
-db.coll.deleteMany({"city": "NYC"})- Fails with error ShardKeyNotFound(61)
+- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success**
+- `db.coll.deleteMany({"city": "NYC"})` - Fails with error **ShardKeyNotFound(61)**
-To enable the feature, [add the `EnableMongoRetryableWrites` capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.
+To enable the feature, [add the `EnableMongoRetryableWrites` capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.
## Sharding
-Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
+Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB doesn't support manual sharding commands, which means you don't have to invoke commands such as `addShard`, `balancerStart`, or `moveChunk`. You only need to specify the shard key while creating the containers or querying the data.
## Sessions
-Azure Cosmos DB does not yet support server-side sessions commands.
+Azure Cosmos DB doesn't yet support server-side session commands.
## Time-to-live (TTL)
Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document.
## Transactions
-Multi-document transactions are supported within an unsharded collection. Multi-document transactions are not supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds.
+Multi-document transactions are supported within an unsharded collection. Multi-document transactions aren't supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds.
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+Azure Cosmos DB doesn't yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
## Next steps
Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/refe
- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB. - Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
Title: Configure your API for MongoDB account capabilities
+ Title: Configure your Azure Cosmos DB for MongoDB account capabilities
description: Learn how to configure your API for MongoDB account capabilities-+++ -+ Previously updated : 09/06/2022- Last updated : 10/12/2022+
-# Configure your API for MongoDB account capabilities
+# Configure your Azure Cosmos DB for MongoDB account capabilities
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-Capabilities are features that can be added or removed to your API for MongoDB account. Many of these features affect account behavior so it's important to be fully aware of the impact a capability will have before enabling or disabling it. Several capabilities are set on API for MongoDB accounts by default, and cannot be changed or removed. One example is the EnableMongo capability. This article will demonstrate how to enable and disable a capability.
+Capabilities are features that can be added to or removed from your API for MongoDB account. Many of these features affect account behavior, so it's important to be fully aware of the effect a capability will have before enabling or disabling it. Several capabilities are set on API for MongoDB accounts by default and can't be changed or removed. One example is the EnableMongo capability. This article will demonstrate how to enable and disable a capability.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://aka.ms/trycosmosdb).
+- Azure Cosmos DB for MongoDB account. [Create an API for MongoDB account](quickstart-nodejs.md#create-an-azure-cosmos-db-account).
+- [Azure Command-Line Interface (CLI)](/cli/azure/)
+
+## Available capabilities
+
+| Capability | Description | Removable |
+| | | |
+| `DisableRateLimitingResponses` | Allows the API for MongoDB to retry rate-limited requests on the server side until the maximum request timeout is reached | Yes |
+| `EnableMongoRoleBasedAccessControl` | Enables support for creating users and roles for native MongoDB role-based access control | No |
+| `EnableMongoRetryableWrites` | Enables support for retryable writes on the account | Yes |
+| `EnableMongo16MBDocumentSupport` | Enables support for inserting documents up to 16 MB in size | No |
## Enable a capability
-1. Retrieve your existing account capabilities:
-```powershell
-az cosmosdb show -n <account_name> -g <azure_resource_group>
-```
-You should see a capability section similar to this:
-```powershell
-"capabilities": [
- {
- "name": "EnableMongo"
- }
-]
-```
-Copy each of these capabilities. In this example, we have EnableMongo and DisableRateLimitingResponses.
-
-2. Set the new capability on your database account. The list of capabilities should include the list of previously enabled capabilities, since only the explicitly named capabilities will be set on your account. For example, if you want to add the capability "DisableRateLimitingResponses", you would run the following command:
-```powershell
-az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongo, DisableRateLimitingResponses
-```
-If you are using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities:
-```powershell
-az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities @("EnableMongo","DisableRateLimitingResponses")
-```
+
+1. Retrieve your existing account capabilities by using [**az cosmosdb show**](/cli/azure/cosmosdb#az-cosmosdb-show):
+
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group <azure_resource_group> \
+ --name <azure_cosmos_db_account_name>
+ ```
+
+ You should see a capability section similar to this output:
+
+ ```json
+ "capabilities": [
+ {
+ "name": "EnableMongo"
+ }
+ ]
+ ```
+
+ Review the default capability. In this example, we have just `EnableMongo`.
+
+1. Set the new capability on your database account. The list of capabilities should include the list of previously enabled capabilities, since only the explicitly named capabilities will be set on your account. For example, if you want to add the capability `DisableRateLimitingResponses`, you would use the [**az cosmosdb update**](/cli/azure/cosmosdb#az-cosmosdb-update) command with the `--capabilities` parameter:
+
+ ```azurecli-interactive
+ az cosmosdb update \
+ --resource-group <azure_resource_group> \
+ --name <azure_cosmos_db_account_name> \
+ --capabilities EnableMongo, DisableRateLimitingResponses
+ ```
+
+ > [!TIP]
+ > If you're using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities:
+ >
+ > ```azurecli
+ > az cosmosdb update \
+ > --resource-group <azure_resource_group> \
+ > --name <azure_cosmos_db_account_name> \
+ > --capabilities @("EnableMongo","DisableRateLimitingResponses")
+ > ```
+ >
## Disable a capability
-1. Retrieve your existing account capabilities:
-```powershell
-az cosmosdb show -n <account_name> -g <azure_resource_group>
-```
-You should see a capability section similar to this:
-```powershell
-"capabilities": [
- {
- "name": "EnableMongo"
- },
- {
- "name": "DisableRateLimitingResponses"
- }
-]
-```
-Copy each of these capabilities. In this example, we have EnableMongo and DisableRateLimitingResponses.
-
-2. Remove the capability from your database account. The list of capabilities should include the list of previously enabled capabilities you want to keep, since only the explicitly named capabilities will be set on your account. For example, if you want to remove the capability "DisableRateLimitingResponses", you would run the following command:
-```powershell
-az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongo
-```
-If you are using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities:
-```powershell
-az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities @("EnableMongo")
-```
+
+1. Retrieve your existing account capabilities by using **az cosmosdb show**:
+
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group <azure_resource_group> \
+ --name <azure_cosmos_db_account_name>
+ ```
+
+ You should see a capability section similar to this output:
+
+ ```json
+ "capabilities": [
+ {
+ "name": "EnableMongo"
+ },
+ {
+ "name": "DisableRateLimitingResponses"
+ }
+ ]
+ ```
+
+ Observe each of these capabilities. In this example, we have `EnableMongo` and `DisableRateLimitingResponses`.
+
+1. Remove the capability from your database account. The list of capabilities should include the list of previously enabled capabilities you want to keep, since only the explicitly named capabilities will be set on your account. For example, if you want to remove the capability `DisableRateLimitingResponses`, you would use the **az cosmosdb update** command:
+
+ ```azurecli-interactive
+ az cosmosdb update \
+ --resource-group <azure_resource_group> \
+ --name <azure_cosmos_db_account_name> \
+ --capabilities EnableMongo
+ ```
+
+ > [!TIP]
+ > If you're using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities:
+ >
+ > ```azurecli
+ > az cosmosdb update \
+ > --resource-group <azure_resource_group> \
+ > --name <azure_cosmos_db_account_name> \
+ > --capabilities @("EnableMongo")
+ > ```
## Next steps
az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities @(
- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB. - Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
MongoClientURI uri = new MongoClientURI(connectionString);
MongoClient client = new MongoClient(uri); ```
+## Authenticate using Mongosh
+```powershell
+mongosh --authenticationDatabase <YOUR_DB> --authenticationMechanism SCRAM-SHA-256 "mongodb://<YOUR_USERNAME>:<YOUR_PASSWORD>@<YOUR_HOST>:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000"
+```
+ ## Azure CLI RBAC Commands The RBAC management commands will only work with newer versions of the Azure CLI installed. See the Quickstart above on how to get started.
cosmos-db Sdk Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-v3.md
Release history is maintained in the Azure Cosmos DB .NET SDK source repo. For a
## <a name="recommended-version"></a> Recommended version
-Different sub versions of .NET SDKs are available under the 3.x.x version. **The minimum recommended version is 3.25.0**.
+Different sub versions of .NET SDKs are available under the 3.x.x version. **The minimum recommended version is 3.31.0**.
## <a name="known-issues"></a> Known issues
cosmos-db Sdk Java Spring Data V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md
Release history is maintained in the azure-sdk-for-java repo, for detailed list
## Recommended version
-It's strongly recommended to use version 3.22.0 and above.
+It's strongly recommended to use version 3.28.1 and above.
## Additional notes
cosmos-db Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-v4.md
Release history is maintained in the azure-sdk-for-java repo, for detailed list
## Recommended version
-It's strongly recommended to use version 4.31.0 and above.
+It's strongly recommended to use version 4.37.1 and above.
## FAQ [!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)]
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
A partition key has two components: **partition key path** and the **partition k
To learn about the limits on throughput, storage, and length of the partition key, see the [Azure Cosmos DB service quotas](concepts-limits.md) article.
-Selecting your partition key is a simple but important design choice in Azure Cosmos DB. Once you select your partition key, it is not possible to change it in-place. If you need to change your partition key, you should move your data to a new container with your new desired partition key.
+Selecting your partition key is a simple but important design choice in Azure Cosmos DB. Once you select your partition key, it is not possible to change it in-place. If you need to change your partition key, you should move your data to a new container with your new desired partition key. ([Container copy jobs](intra-account-container-copy.md) help with this process.)
For **all** containers, your partition key should:
cosmos-db Set Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/set-throughput.md
You can combine the two models. Provisioning throughput on both the database and
* The container named *B* is guaranteed to get the *"P"* RUs throughput all the time. It's backed by SLAs. > [!NOTE]
-> A container with provisioned throughput cannot be converted to shared database container. Conversely a shared database container cannot be converted to have a dedicated throughput.
+> A container with provisioned throughput cannot be converted to shared database container. Conversely a shared database container cannot be converted to have a dedicated throughput. You will need to move the data to a container with the desired throughput setting. ([Container copy jobs](intra-account-container-copy.md) for NoSQL and Cassandra APIs help with this process.)
## Update throughput on a database or a container
cosmos-db Unique Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/unique-keys.md
You can define unique keys only when you create an Azure Cosmos DB container. A
* You can't update an existing container to use a different unique key. In other words, after a container is created with a unique key policy, the policy can't be changed.
-* To set a unique key for an existing container, create a new container with the unique key constraint. Use the appropriate data migration tool to move the data from the existing container to the new container. For SQL containers, use the [Data Migration tool](import-data.md) to move data. For MongoDB containers, use [mongoimport.exe or mongorestore.exe](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) to move data.
+* To set a unique key for an existing container, create a new container with the unique key constraint. Use the appropriate data migration tool to move the data from the existing container to the new container. For SQL containers, use the [container copy jobs](intra-account-container-copy.md) to move data. For MongoDB containers, use [mongoimport.exe or mongorestore.exe](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) to move data.
* A unique key policy can have a maximum of 16 path values. For example, the values can be `/firstName`, `/lastName`, and `/address/zipCode`. Each unique key policy can have a maximum of 10 unique key constraints or combinations. In the previous example, first name, last name, and email address together are one constraint. This constraint uses 3 out of the 16 possible paths.
databox Data Box Disk System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-system-requirements.md
Here is a list of the supported operating systems for the disk unlock and data c
| **Operating system** | **Tested versions** | | | | | Windows Server |2008 R2 SP1 <br> 2012 <br> 2012 R2 <br> 2016 |
-| Windows (64-bit) |7, 8, 10 |
+| Windows (64-bit) |7, 8, 10, 11 |
|Linux <br> <li> Ubuntu </li><li> Debian </li><li> Red Hat Enterprise Linux (RHEL) </li><li> CentOS| <br>14.04, 16.04, 18.04 <br> 8.11, 9 <br> 7.0 <br> 6.5, 6.9, 7.0, 7.5 | ## Other required software for Windows clients
databox Data Box Heavy System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-system-requirements.md
Previously updated : 10/07/2021 Last updated : 10/07/2022 # Azure Data Box Heavy system requirements
databox Data Box System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md
Previously updated : 08/09/2022 Last updated : 10/07/2022
-# Azure Data Box system requirements
+# Azure Data Box system requirements
This article describes important system requirements for your Microsoft Azure Data Box and for clients that connect to the Data Box. We recommend you review the information carefully before you deploy your Data Box and then refer to it when you need to during deployment and operation.
The software requirements include supported operating systems, file transfer pro
[!INCLUDE [data-box-supported-os-clients](../../includes/data-box-supported-os-clients.md)] - ### Supported file transfer protocols for clients [!INCLUDE [data-box-supported-file-systems-clients](../../includes/data-box-supported-file-systems-clients.md)]
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Previously updated : 10/12/2022 Last updated : 10/13/2022
Azure DDoS Network Protection, combined with application design best practices,
> [!NOTE] > DDoS IP Protection is currently only available in the Azure Preview Portal.+
+DDoS IP Protection is currently available in the following regions.
+
+| Americas | Europe | Middle East | Africa | Asia Pacific |
+||-||--||
+| West Central US | France Central | UAE Central | South Africa North | Australia Central |
+| North Central US | Germany West Central | Qatar Central | | Korea Central |
+| West US | Switzerland North | | | Japan East |
+| West US 3 | France South | | | West India |
+| | Norway East | | | Jio India Central |
+| | Sweden Central | | | Australia Central 2 |
+| | Germany North | | | |
++ ## SKUs
The following table shows features and corresponding SKUs.
>[!Note] >At no additional cost, Azure DDoS infrastructure protection protects every Azure service that uses public IPv4 and IPv6 addresses. This DDoS protection service helps to protect all Azure services, including platform as a service (PaaS) services such as Azure DNS. For more information on supported PaaS services, see [DDoS Protection reference architectures](ddos-protection-reference-architectures.md). Azure DDoS infrastructure protection requires no user configuration or application changes. Azure provides continuous protection against DDoS attacks. DDoS protection does not store customer data. + ## Next steps * [Quickstart: Create an Azure DDoS Protection Plan](manage-ddos-protection.md)
ddos-protection Manage Ddos Protection Powershell Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell-ip.md
In this quickstart, you'll enable DDoS IP protection and link it to a public IP
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell -- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 8.3.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 9.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
If you don't have access to install the extension, you must request access from
**To configure the Microsoft Security DevOps Azure DevOps extension**:
-1. Sign in to [Azure DevOps](https://dev.azure.com/)
+1. Sign in to [Azure DevOps](https://dev.azure.com/).
1. Navigate to **Shopping Bag** > **Manage extensions**.
If you don't have access to install the extension, you must request access from
1. Select **Shared**. > [!Note]
- > If you have already [installed the Microsoft Security DevOps extension](azure-devops-extension.md), it will be listed under Installed tab.
+ > If you've already [installed the Microsoft Security DevOps extension](azure-devops-extension.md), it will be listed in the **Installed** tab.
1. Select **Microsoft Security DevOps**.
If you don't have access to install the extension, you must request access from
1. Select **Install**.
-1. Select the appropriate Organization from the dropdown menu.
+1. Select the appropriate organization from the dropdown menu.
1. Select **Install**. 1. Select **Proceed to organization**.
-## Configure your Pipelines using YAML
+## Configure your pipelines using YAML
**To configure your pipeline using YAML**:
If you don't have access to install the extension, you must request access from
:::image type="content" source="media/msdo-azure-devops-extension/starter-piepline.png" alt-text="Screenshot showing where to select starter pipeline.":::
-1. Paste the following YAML into the pipeline
+1. Paste the following YAML into the pipeline:
```yml # Starter pipeline
If you don't have access to install the extension, you must request access from
1. Select **Save and run**.
-1. Select **Save and run** to commit the pipeline.
+1. To commit the pipeline, select **Save and run**.
The pipeline will run for a few minutes and save the results.
Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
-[Discover misconfigurations in Infrastructure as Code (IaC)](iac-vulnerabilities.md)
+[Discover misconfigurations in Infrastructure as Code (IaC)](iac-vulnerabilities.md).
defender-for-cloud Defender For Containers Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-cicd.md
- Title: Defender for Cloud's vulnerability scanner for container images in CI/CD workflows
-description: Learn how to scan container images in CI/CD workflows with Microsoft Defender for container registries
Previously updated : 11/09/2021-----
-# Identify vulnerable container images in your CI/CD workflows
-
-This page explains how to scan your Azure Container Registry-based container images with the integrated vulnerability scanner when they're built as part of your GitHub workflows.
-
-To set up the scanner, you'll need to enable **Microsoft Defender for Containers** and the CI/CD integration. When your CI/CD workflows push images to your registries, you can view registry scan results and a summary of CI/CD scan results.
-
-The findings of the CI/CD scans are an enrichment to the existing registry scan findings by Qualys. Defender for Cloud's CI/CD scanning is powered by [Aqua Trivy](https://github.com/aquasecurity/trivy).
-
-You'll get traceability information such as the GitHub workflow and the GitHub run URL, to help identify the workflows that are resulting in vulnerable images.
-
-> [!TIP]
-> The vulnerabilities identified in a scan of your registry might differ from the findings of your CI/CD scans. One reason for these differences is that the registry scanning is [continuous](defender-for-container-registries-introduction.md#when-are-images-scanned), whereas the CI/CD scanning happens immediately before the workflow pushes the image into the registry.
-
-## Availability
-
-|Aspect|Details|
-|-|:-|
-|Release state:| **This CI/CD integration is in preview.**<br>We recommend that you experiment with it on non-production workflows only.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
--
-## Prerequisites
-
-To scan your images as they're pushed by CI/CD workflows into your registries, you must have **Microsoft Defender for container registries** enabled on the subscription.
-
-## Set up vulnerability scanning of your CI/CD workflows
-
-To enable vulnerability scans of images in your GitHub workflows:
-
-[Step 1. Enable the CI/CD integration in Defender for Cloud](#step-1-enable-the-cicd-integration-in-defender-for-cloud)
-
-[Step 2. Add the necessary lines to your GitHub workflow](#step-2-add-the-necessary-lines-to-your-github-workflow-and-perform-a-scan)
-
-### Step 1. Enable the CI/CD integration in Defender for Cloud
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the relevant subscription.
-1. From the sidebar of the settings page for that subscription, select **Integrations**.
-1. In the pane that appears, select an Application Insights account to push the CI/CD scan results from your workflow.
-1. Copy the authentication token and connection string into your GitHub workflow.
-
- :::image type="content" source="./media/defender-for-containers-cicd/enable-cicd-integration.png" alt-text="Enable the CI/CD integration for vulnerability scans of container images in your GitHub workflows." lightbox="./media/defender-for-containers-cicd/enable-cicd-integration.png":::
-
- > [!IMPORTANT]
- > The authentication token and connection string are used to correlate the ingested security telemetry with resources in the subscription. If you use invalid values for these parameters, it'll lead to dropped telemetry.
-
-### Step 2. Add the necessary lines to your GitHub workflow and perform a scan
-
-1. From your GitHub workflow, enable CI/CD scanning as follows:
-
- > [!TIP]
- > We recommend creating two secrets in your repository to reference in your YAML file as shown below. The secrets can be named according to your own naming conventions. In this example, the secrets are referenced as **AZ_APPINSIGHTS_CONNECTION_STRING** and **AZ_SUBSCRIPTION_TOKEN**.
-
- > [!IMPORTANT]
- > The push to the registry must happen prior to the results being published.
-
- ```yml
- - name: Build and Tag Image
- run: |
- echo "github.sha=$GITHUB_SHA"
- docker build -t githubdemo1.azurecr.io/k8sdemo:${{ github.sha }} .
-
- - uses: Azure/container-scan@v0
- name: Scan image for vulnerabilities
- id: container-scan
- continue-on-error: true
- with:
- image-name: githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
-
- - name: Push Docker image
- run: |
- docker push githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
-
- - name: Post logs to appinsights
- uses: Azure/publish-security-assessments@v0
- with:
- scan-results-path: ${{ steps.container-scan.outputs.scan-report-path }}
- connection-string: ${{ secrets.AZ_APPINSIGHTS_CONNECTION_STRING }}
- subscription-token: ${{ secrets.AZ_SUBSCRIPTION_TOKEN }}
- ```
-
-1. Run the workflow that will push the image to the selected container registry. Once the image is pushed into the registry, a scan of the registry runs and you can view the CI/CD scan results along with the registry scan results within Microsoft Defender for Cloud. Running the above YAML file will install an instance of Aqua Security's [Trivy](https://github.com/aquasecurity/trivy) in your build system. Trivy is licensed under the Apache 2.0 License and has dependencies on data feeds, many of which contain their own terms of use.
-
-1. [View CI/CD scan results](#view-cicd-scan-results).
-
-## View CI/CD scan results
-
-1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
-
- ![Recommendation to remediate issues .](media/monitor-container-security/acr-finding.png)
-
-1. Select the recommendation.
-
- The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
-
-1. Open the **affected resources** list and select an unhealthy registry to see the repositories within it that have vulnerable images.
-
- :::image type="content" source="media/defender-for-containers-cicd/select-registry.png" alt-text="Select an unhealthy registry.":::
-
- The registry details page opens with the list of affected repositories.
-
-1. Select a specific repository to see the repositories within it that have vulnerable images.
-
- :::image type="content" source="media/defender-for-containers-cicd/select-repository.png" alt-text="Select an unhealthy repository.":::
-
- The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings.
-
-1. Select a specific image to see the vulnerabilities.
-
- :::image type="content" source="media/defender-for-containers-cicd/select-image.png" alt-text="Select an unhealthy image.":::
-
- The list of findings for the selected image opens.
-
- :::image type="content" source="media/defender-for-containers-cicd/cicd-scan-results.png" alt-text="Image scan results.":::
-
-1. To learn more about which GitHub workflow is pushing these vulnerable images, select the information bubble:
-
- :::image type="content" source="media/defender-for-containers-cicd/cicd-findings.png" alt-text="CI/CD findings about specific GitHub branches and commits.":::
-
-## Next steps
-
-Learn more about the [advanced protection plans of Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
This article lists the recommendations you might see in Microsoft Defender for C
shown in your environment depend on the resources you're protecting and your customized configuration.
-Defender for Cloud's recommendations are based on the [Microsoft Cloud Security Benchmark](../security/benchmarks/introduction.md).
-Microsoft Cloud Security Benchmark is the Microsoft-authored, Azure-specific set of guidelines for security
+Defender for Cloud's recommendations are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
+The Microsoft cloud security benchmark is the Microsoft-authored, Azure-specific set of guidelines for security
and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
The [CIS Benchmark](https://www.cisecurity.org/benchmark/azure/) is authored by
Since we've released the Microsoft Cloud Security Benchmark, many customers have chosen to migrate to it as a replacement for CIS benchmarks. ### What standards are supported in the compliance dashboard?
-By default, the regulatory compliance dashboard shows you the Microsoft Cloud Security Benchmark. The Microsoft Cloud Security Benchmark is the Microsoft-authored, Azure-specific guidelines for security, and compliance best practices based on common compliance frameworks. Learn more in the [Microsoft Cloud Security Benchmark introduction](../security/benchmarks/introduction.md).
+By default, the regulatory compliance dashboard shows you the Microsoft Cloud Security Benchmark. The Microsoft Cloud Security Benchmark is the Microsoft-authored, Azure-specific guidelines for security, and compliance best practices based on common compliance frameworks. Learn more in the [Microsoft Cloud Security Benchmark introduction](/security/benchmark/azure/introduction).
To track your compliance with any other standard, you'll need to explicitly add them to your dashboard.
The regulatory compliance dashboard can greatly simplify the compliance process,
To learn more, see these related pages: - [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md) - Learn how to select which standards appear in your regulatory compliance dashboard. -- [Managing security recommendations in Defender for Cloud](review-security-recommendations.md) - Learn how to use recommendations in Defender for Cloud to help protect your Azure resources.
+- [Managing security recommendations in Defender for Cloud](review-security-recommendations.md) - Learn how to use recommendations in Defender for Cloud to help protect your Azure resources.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Learn more in [Prioritize security actions by data sensitivity](information-prot
Microsoft Defender for Cloud's security recommendations are enabled and supported by the Azure Security Benchmark.
-[Azure Security Benchmark](../security/benchmarks/introduction.md) is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
+[Azure Security Benchmark](/security/benchmark/azure/introduction) is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
From Ignite 2021, Azure Security Benchmark **v3** is available in [Defender for Cloud's regulatory compliance dashboard](update-regulatory-compliance-packages.md) and enabled as the new default initiative for all Azure subscriptions protected with Microsoft Defender for Cloud.
The new vulnerability scanning feature for container images, utilizing Trivy, he
Container scan reports are summarized in Azure Security Center, providing security teams better insight and understanding about the source of vulnerable container images and the workflows and repositories from where they originate.
-Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
+Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-devops-introduction.md).
### More Resource Graph queries available for some recommendations
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Until now, Defender for Cloud based its posture assessments for VMs on agent-bas
With agentless scanning for VMs, you get wide visibility on installed software and software CVEs, without the challenges of agent installation and maintenance, network connectivity requirements, and performance impact on your workloads. The analysis is powered by Microsoft Defender vulnerability management.
-Agentless vulnerability scanning is available in both Defender Defender Cloud Security Posture Management (CSPM) and in [Defender for Servers P2](defender-for-servers-introduction.md), with native support for AWS and Azure VMs.
+Agentless vulnerability scanning is available in both Defender Cloud Security Posture Management (CSPM) and in [Defender for Servers P2](defender-for-servers-introduction.md), with native support for AWS and Azure VMs.
- Learn more about [agentless scanning](concept-agentless-data-collection.md). - Find out how to enable [agentless vulnerability assessment](enable-vulnerability-assessment-agentless.md).
The following new recommendations are now available for DevOps Security assessme
| (Preview) [GitHub repositories should have secret scanning enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/1a600c61-6443-4ab4-bd28-7a6b6fb4691d/showSecurityCenterCommandBar~/false) | GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication. If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project. (No related policy) | High | | (Preview) [GitHub repositories should have Dependabot scanning enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/92643c1f-1a95-4b68-bbd2-5117f92d6e35/showSecurityCenterCommandBar~/false) | GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems. (No related policy) | Medium |
+The Defender for DevOps recommendations replace the deprecated vulnerability scanner for CI/CD workflows that was included in Defender for Containers.
+ Learn more about [Defender for DevOps](defender-for-devops-introduction.md) ### Regulatory Compliance Dashboard now supports manual control management and detailed information on Microsoft's compliance status
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
For information about when recommendations are generated for each of these solut
| - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available | | - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA | | - [Microsoft Defender for container registries](./defender-for-container-registries-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA <sup>[2](#footnote2)</sup> | GA <sup>[2](#footnote2)</sup> |
-| - [Microsoft Defender for container registries scanning of images in CI/CD workflows](./defender-for-containers-cicd.md) <sup>[3](#footnote3)</sup> | Public Preview | Not Available | Not Available |
| - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[4](#footnote4)</sup> | GA | GA | GA | | - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[10](#footnote4)</sup> | GA | GA | GA | | - [Defender extension for Azure Arc-enabled Kubernetes clusters, servers or data services](./defender-for-kubernetes-azure-arc.md) <sup>[5](#footnote5)</sup> | Public Preview | Not Available | Not Available |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | September 2022 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | October 2022 |
### Multiple changes to identity recommendations
dev-box Concept Dev Box Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md
This article describes the key concepts and components of Microsoft Dev Box.
## Dev center
-A dev center is a collection of projects that require similar settings. Dev centers enable dev infrastructure managers to manage the images and SKUs available to the projects using [dev box definitions](concept-dev-box-concepts.md#dev-box-definition), and configure the networks the development teams consume using [network connections](./concept-dev-box-concepts.md#network-connection).
-
+A dev center is a collection of projects that require similar settings. Dev centers enable dev infrastructure managers to manage the images and SKUs available to the projects using [dev box definitions](concept-dev-box-concepts.md#dev-box-definition) and configure the networks the development teams consume using [network connections](./concept-dev-box-concepts.md#network-connection).
## Projects
-A project is the point of access for the development team members. When you associate a project with a dev center, all the settings at the dev center level will be applied to the project automatically. Each project can be associated with only one dev center. Dev managers can configure the dev boxes available for the project by specifying the [dev box definitions](./concept-dev-box-concepts.md#dev-box-definition) appropriate for their workloads.
-
+A project is the point of access for the development team members. When you associate a project with a dev center, all the settings at the dev center level will be applied to the project automatically. Each project can be associated with only one dev center. Dev managers can configure the dev boxes available for the project by specifying the [dev box definitions](./concept-dev-box-concepts.md#dev-box-definition) appropriate for their workloads.
## Dev box definition A dev box definition specifies a source image and size, including compute size and storage size. You can use a source image from the marketplace, or a custom image from your own [Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md). You can use dev box definitions across multiple projects in a dev center.
A dev box definition specifies a source image and size, including compute size a
IT administrators and dev infrastructure managers configure the network used for dev box creation in accordance with their organizational policies. Network connections store configuration information like Active Directory join type and virtual network that dev boxes use to connect to network resources.
-When creating a network connection, you must choose whether to use a native Azure Active Directory (Azure AD) join or a hybrid Azure AD join. If your dev boxes need to connect exclusively to cloud-based resources, use a native Azure AD join. Use a hybrid Azure AD join if your dev boxes need to connect to on-premises resources and cloud-based resources. To learn more about Azure AD and hybrid Azure AD joined devices, [Plan your Azure Active Directory device deployment](/azure/active-directory/devices/plan-device-deployment).
-
+When creating a network connection, you must choose whether to use a native Azure Active Directory (Azure AD) join or a hybrid Azure AD join. If your dev boxes need to connect exclusively to cloud-based resources, use a native Azure AD join. Use a hybrid Azure AD join if your dev boxes need to connect to on-premises resources and cloud-based resources. To learn more about Azure AD and hybrid Azure AD joined devices, [Plan your Azure Active Directory device deployment](/azure/active-directory/devices/plan-device-deployment).
The virtual network specified in a network connection also determines the region for the dev box. You can create multiple network connections based on the regions where you support developers and use them when creating different dev box pools to ensure dev box users create a dev box in a region close to them. Using a region close to the dev box user provides the best experience. ## Dev box pool
-A dev box pool is a collection of dev boxes that you manage together that you manage together and to which you apply similar settings. You can create multiple dev box pools to support the needs of hybrid teams working in different regions or on different workloads.
-
+A dev box pool is a collection of dev boxes that you manage together and to which you apply similar settings. You can create multiple dev box pools to support the needs of hybrid teams working in different regions or on different workloads.
## Dev box
-A dev box is a preconfigured ready-to-code workstation that you create through the self-service developer portal. The new dev box has all the tools, binaries, and configuration required for a dev box user to be productive immediately. You can create and manage multiple dev boxes to work on multiple work streams. As a dev box user you have control over your own dev boxes - you can create more as you need them, and delete them when you have finished using them.
+A dev box is a preconfigured ready-to-code workstation that you create through the self-service developer portal. The new dev box has all the tools, binaries, and configuration required for a dev box user to be productive immediately. You can create and manage multiple dev boxes to work on multiple work streams. As a dev box user, you have control over your own dev boxes - you can create more as you need them and delete them when you have finished using them.
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
To complete this tutorial, you need to:
* Create a target [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart). * Ensure that the SQL Server login to connect the source SQL Server is a member of the `db_datareader` and the login for the target SQL server is `db_owner`. * Migrate database schema from source to target using [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or, [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
+* If you're using the Azure Database Migration Service for the first time, ensure that the Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
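If you prefer to check and register the resource provider from Azure PowerShell instead of the portal, a minimal sketch looks like this:
```azurepowershell
# Check whether the Microsoft.DataMigration resource provider is already registered.
Get-AzResourceProvider -ProviderNamespace Microsoft.DataMigration |
    Select-Object ProviderNamespace, RegistrationState

# Register the provider if its state isn't "Registered".
Register-AzResourceProvider -ProviderNamespace Microsoft.DataMigration
```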
## Launch the Migrate to Azure SQL wizard in Azure Data Studio
dns Private Resolver Endpoints Rulesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md
Previously updated : 09/09/2022 Last updated : 10/13/2022 #Customer intent: As an administrator, I want to understand components of the Azure DNS Private Resolver.
DNS forwarding rulesets enable you to specify one or more custom DNS servers to
Rulesets have the following associations: - A single ruleset can be associated with multiple outbound endpoints. - A ruleset can have up to 1000 DNS forwarding rules. -- A ruleset can be linked to any number of virtual networks in the same region
+- A ruleset can be linked to up to 500 virtual networks in the same region.
-A ruleset can't be linked to a virtual network in another region.
+A ruleset can't be linked to a virtual network in another region. For more information about ruleset limits and other private resolver limits, see [What are the usage limits for Azure DNS?](dns-faq.yml#what-are-the-usage-limits-for-azure-dns-).
### Ruleset links
energy-data-services How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md
Use the following steps to create a private endpoint for an existing Microsoft E
## Next steps <!-- Add a context sentence for the following links -->
+To learn more about data security and encryption, see:
> [!div class="nextstepaction"]
-> [How to manage users](how-to-manage-users.md)
+> [Data security and encryption in Microsoft Energy Data Services](how-to-manage-data-security-and-encryption.md)
event-grid Secure Webhook Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/secure-webhook-delivery.md
Title: Secure WebHook delivery with Azure AD in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure Active Directory using Azure Event Grid Previously updated : 01/20/2022 Last updated : 10/12/2022 # Deliver events to Azure Active Directory protected endpoints
-This article describes how to use Azure Active Directory (Azure AD) to secure the connection between your **event subscription** and your **webhook endpoint**. For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../active-directory/develop/v2-overview.md).
-
-This article uses the Azure portal for demonstration, however the feature can also be enabled using CLI, PowerShell, or the SDKs.
+This article describes how to use Azure Active Directory (Azure AD) to secure the connection between your **event subscription** and your **webhook endpoint**. It uses the Azure portal for demonstration, but the feature can also be enabled by using the CLI, PowerShell, or the SDKs.
> [!IMPORTANT]
-> Additional access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. Please reconfigure your Azure AD Application following the new instructions below.
+> An additional access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. Reconfigure your Azure AD application by following the new instructions below. For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../active-directory/develop/v2-overview.md).
+
+## Scenarios
+This article explains how to implement the following two scenarios in detail:
+
+- [Delivering events to a webhook that is in the same Azure AD tenant as the event subscription](#deliver-events-to-a-webhook-in-the-same-azure-ad-tenant). You can use either an Azure AD user or an Azure AD application as the event subscription writer in this scenario.
+- [Delivering events to a webhook that is in a different Azure AD tenant from the event subscription](#deliver-events-to-a-webhook-in-a-different-azure-ad-tenant). You can only use an Azure AD application as an event subscription writer in this scenario.
+
+ In the first scenario, you run all the steps or scripts in a single tenant that has both the event subscription and the webhook. In the second scenario, you run some steps in the tenant that has the event subscription and the remaining steps in the tenant that has the webhook.
## Deliver events to a Webhook in the same Azure AD tenant
-![Secure WebHook delivery with Azure AD in Azure Event Grid](./media/secure-webhook-delivery/single-tenant-diagram.png)
+The following diagram depicts how Event Grid events are delivered to a webhook in the same tenant as the event subscription.
++
+There are two subsections in this section. Read through both scenarios, or just the one that you're interested in.
-Based on the diagram above, follow the next steps to configure the tenant.
+- [Configure the event subscription by using an Azure AD **user**](#configure-the-event-subscription-by-using-an-azure-ad-user)
+- [Configure the event subscription by using an Azure AD **application**](#configure-the-event-subscription-by-using-an-azure-ad-application)
-### Configure the event subscription by using Azure AD User
-1. Create an Azure AD Application for the webhook configured to work with the Microsoft directory (Single tenant).
+### Configure the event subscription by using an Azure AD user
+
+This section shows how to configure the event subscription by using an Azure AD user.
+
+1. Create an Azure AD application for the webhook configured to work with the Microsoft directory (single tenant).
2. Open the [Azure Shell](https://portal.azure.com/#cloudshell/) in the tenant and select the PowerShell environment. 3. Modify the value of **$webhookAadTenantId** to connect to the tenant. - Variables:
- - **$webhookAadTenantId**: Azure Tenant ID
+ - **$webhookAadTenantId**: Azure tenant ID
```Shell PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]"
Based on the diagram above, follow the next steps to configure the tenant.
4. Open the [following script](scripts/event-grid-powershell-webhook-secure-delivery-azure-ad-user.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterUserPrincipalName** with your identifiers, then continue to run the script. - Variables:
- - **$webhookAppObjectId**: Azure AD Application ID created for the webhook
- - **$eventSubscriptionWriterUserPrincipalName**: Azure User Principal Name of the user who will create event subscription
+ - **$webhookAppObjectId**: Azure AD application ID created for the webhook
+ - **$eventSubscriptionWriterUserPrincipalName**: Azure user principal name of the user who will create event subscription
> [!NOTE]
- > You don't need to modify the value of **$eventGridAppId**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **$eventGridRoleName**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of Webhook app in Azure AD to execute this script.
+ > You don't need to modify the value of **$eventGridAppId**. In this script, **AzureEventGridSecureWebhookSubscriber** is set for the **$eventGridRoleName**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of webhook app in Azure AD to execute this script.
If you see the following error message, you need to elevate to the service principal. An additional access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal.
Based on the diagram above, follow the next steps to configure the tenant.
![Secure Webhook action](./media/secure-webhook-delivery/aad-configuration.png)
-### Configure the event subscription by using Azure AD Application
+### Configure the event subscription by using an Azure AD application
-1. Create an Azure AD Application for the Event Grid subscription writer configured to work with the Microsoft directory (Single tenant).
+This section shows how to configure the event subscription by using an Azure AD application.
-2. Create a secret for the Azure AD Application previously created and save the value (you'll need this value later).
+1. Create an Azure AD application for the Event Grid subscription writer configured to work with the Microsoft directory (Single tenant).
-3. Go to the Access control (IAM) in the Event Grid Topic and add the role assignment of the Event Grid subscription writer as Event Grid Contributor, this step will allow us to have access to the Event Grid resource when we logged-in into Azure with the Azure AD Application by using the Azure CLI.
+2. Create a secret for the Azure AD application and save the value (you'll need this value later).
-4. Create an Azure AD Application for the webhook configured to work with the Microsoft directory (Single tenant).
+3. Go to the **Access control (IAM)** page for the Event Grid topic and assign the **Event Grid Contributor** role to the Event Grid subscription writer app. This step allows you to access the Event Grid resource when you sign in to Azure with the Azure AD application by using the Azure CLI.
+
+4. Create an Azure AD application for the webhook configured to work with the Microsoft directory (Single tenant).
5. Open the [Azure Shell](https://portal.azure.com/#cloudshell/) in the tenant and select the PowerShell environment. 6. Modify the value of **$webhookAadTenantId** to connect to the tenant. - Variables:
- - **$webhookAadTenantId**: Azure Tenant ID
+ - **$webhookAadTenantId**: Azure tenant ID
```Shell PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]"
Based on the diagram above, follow the next steps to configure the tenant.
7. Open the [following script](scripts/event-grid-powershell-webhook-secure-delivery-azure-ad-app.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. - Variables:
- - **$webhookAppObjectId**: Azure AD Application ID created for the webhook
- - **$eventSubscriptionWriterAppId**: Azure AD Application ID for Event Grid subscription writer
+ - **$webhookAppObjectId**: Azure AD application ID created for the webhook
+ - **$eventSubscriptionWriterAppId**: Azure AD application ID for Event Grid subscription writer app.
> [!NOTE]
- > You don't need to modify the value of **```$eventGridAppId```**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **```$eventGridRoleName```**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of Webhook app in Azure AD to execute this script.
+ > You don't need to modify the value of **```$eventGridAppId```**. In this script, **AzureEventGridSecureWebhookSubscriber** is set for the **```$eventGridRoleName```**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of the webhook app in Azure AD to execute this script.
-8. Login as the Event Grid subscription writer Azure AD Application by running the command.
+8. Sign in as the Event Grid subscription writer Azure AD application by running the command.
```azurecli PS /home/user>az login --service-principal -u [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_ID] -p [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID]
Based on the diagram above, follow the next steps to configure the tenant.
``` > [!NOTE]
- > In this scenario we are using an Event Grid System Topic. See [here](/cli/azure/eventgrid), if you want to create a subscription for Custom Topics or Event Grid Domains by using the Azure CLI.
+ > This scenario uses a system topic. If you want to create a subscription for custom topics or domains by using Azure CLI, see [CLI reference](/cli/azure/eventgrid).
-10. If everything was correctly configured, you can successfully create the webhook subscription in your Event Grid Topic.
+10. If everything was correctly configured, you can successfully create the webhook subscription in your Event Grid topic.
> [!NOTE]
- > At this point Event Grid is now passing the Azure AD Bearer token to the webhook client in every message, you'll need to validate the Authorization token in your webhook.
+ > At this point, Event Grid is now passing the Azure AD bearer token to the webhook client in every message. You'll need to validate the authorization token in your webhook.
## Deliver events to a Webhook in a different Azure AD tenant
-To secure the connection between your event subscription and your webhook endpoint that are in different Azure AD tenants, you'll need to use an Azure AD application as shown in this section. Currently, it's not possible to secure this connection by using an Azure AD user in the Azure portal.
+To secure the connection between your event subscription and your webhook endpoint that are in different Azure AD tenants, you'll need to use an Azure AD **application** as shown in this section. Currently, it's not possible to secure this connection by using an Azure AD **user** in the Azure portal.
![Multitenant events with Azure AD and Webhooks](./media/secure-webhook-delivery/multitenant-diagram.png)
Based on the diagram above, follow next steps to configure both tenants.
Do the following steps in **Tenant A**:
-1. Create an Azure AD application for the Event Grid subscription writer configured to work with any Azure AD directory (Multitenant).
+1. Create an Azure AD application for the Event Grid subscription writer configured to work with any Azure AD directory (Multi-tenant).
-2. Create a secret for the Azure AD application previously created in the **Tenant A**, and save the value (you'll need this value later).
+2. Create a secret for the Azure AD application, and save the value (you'll need this value later).
-3. Navigate to the **Access control (IAM)** page for the event grid topic. Add Azure AD application of the Event Grid subscription writer to the **Event Grid Contributor** role. This step allows the application to have access to the Event Grid resource when you log in into Azure with the Azure AD application by using the Azure CLI.
+3. Navigate to the **Access control (IAM)** page for the Event Grid topic. Assign the **Event Grid Contributor** role to the Azure AD application of the Event Grid subscription writer. This step allows the application to access the Event Grid resource when you sign in to Azure with the Azure AD application by using the Azure CLI.
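If you'd rather script this role assignment than use the portal, the following Azure PowerShell sketch shows the idea. The resource IDs and the application ID are placeholders, and the sample scopes the assignment to a custom topic; verify the exact built-in role name ("EventGrid Contributor") in your subscription before relying on it.
```azurepowershell
# Placeholder values: replace with your own subscription, resource group, topic, and app ID.
$topicResourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>"
$sp = Get-AzADServicePrincipal -ApplicationId "<event-subscription-writer-app-id>"

# Assign the role to the subscription writer app's service principal, scoped to the topic.
New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName "EventGrid Contributor" `
    -Scope $topicResourceId
```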
### Tenant B Do the following steps in **Tenant B**:
-1. Create an Azure AD Application for the webhook configured to work with the Microsoft directory (Single tenant).
+1. Create an Azure AD application for the webhook configured to work with the Microsoft directory (single tenant).
5. Open the [Azure Shell](https://portal.azure.com/#cloudshell/), and select the PowerShell environment. 6. Modify the **$webhookAadTenantId** value to connect to the **Tenant B**. - Variables:
Do the following steps in **Tenant B**:
- **$eventSubscriptionWriterAppId**: Azure AD application ID for Event Grid subscription writer > [!NOTE]
- > You don't need to modify the value of **```$eventGridAppId```**, for this script we set **AzureEventGridSecureWebhookSubscriber** as the value for the **```$eventGridRoleName```**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of Webhook app in Azure AD to execute this script.
+ > You don't need to modify the value of **```$eventGridAppId```**. In this script, **AzureEventGridSecureWebhookSubscriber** is set for **```$eventGridRoleName```**. Remember, you must be a member of the [Azure AD Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of webhook app in Azure AD to execute this script.
If you see the following error message, you need to elevate to the service principal. An additional access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal.
Do the following steps in **Tenant B**:
Back in **Tenant A**, do the following steps:
-1. Open the [Azure Shell](https://portal.azure.com/#cloudshell/), and login as the Event Grid subscription writer Azure AD Application by running the command.
+1. Open the [Azure Shell](https://portal.azure.com/#cloudshell/), and sign in as the Event Grid subscription writer Azure AD application by running the command.
```azurecli PS /home/user>az login --service-principal -u [REPLACE_WITH_APP_ID] -p [REPLACE_WITH_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID]
Back in **Tenant A**, do the following steps:
> [!NOTE] > In this scenario we are using an Event Grid System Topic. See [here](/cli/azure/eventgrid), if you want to create a subscription for custom topics or Event Grid domains by using the Azure CLI.
-3. If everything was correctly configured, you can successfully create the webhook subscription in your event grid topic.
+3. If everything was correctly configured, you can successfully create the webhook subscription in your Event Grid topic.
> [!NOTE] > At this point, Event Grid is now passing the Azure AD Bearer token to the webhook client in every message. You'll need to validate the Authorization token in your webhook.
firewall Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ip-groups.md
Previously updated : 01/22/2021 Last updated : 10/13/2022
IP Groups are available in all public cloud regions.
## IP address limits
-You can have a maximum of 100 IP Groups per firewall with a maximum 5000 individual IP addresses or IP prefixes per each IP Group.
+You can have a maximum of 200 IP Groups per firewall with a maximum 5000 individual IP addresses or IP prefixes per each IP Group.
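For reference, an IP Group is a named collection of addresses and prefixes that you can then reference from firewall rules; a minimal Azure PowerShell sketch of creating one (names, location, and ranges are placeholders):
```azurepowershell
# Create an IP Group containing a prefix and an individual address.
New-AzIpGroup -Name "workload-ip-group" `
    -ResourceGroupName "myResourceGroup" `
    -Location "westus" `
    -IpAddress "10.0.0.0/24", "192.168.1.10"
```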
## Related Azure PowerShell cmdlets
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
- Previously updated : 09/26/2022+ Last updated : 10/13/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall Basic is similar to Firewall Standard, but has the following limi
- Fixed scale unit to run the service on two virtual machine backend instances. - Recommended for environments with maximum throughput of 250 Mbps. The throughput may increase for feature general availability (GA).
+### Supported regions
+
+Azure Firewall Basic is available in the following regions during the preview:
+
+- East US
+- East US 2
+- West US
+- West US 2
+- West US 3
+- Central US
+- North Central US
+- South Central US
+- West Central US
+- East US 2 EUAP
+- Central US EUAP
+- North Europe
+- West Europe
+- East Asia
+- Southeast Asia
+- Japan East
+- Japan West
+- Australia East
+- Australia Southeast
+- Australia Central
+- Brazil South
+- South India
+- Central India
+- West India
+- Canada Central
+- Canada East
+- UK South
+- UK West
+- Korea Central
+- Korea South
+- France Central
+- South Africa North
+- UAE North
+- Switzerland North
+- Germany West Central
+- Norway East
+- Jio India West
+- Sweden Central
+- Qatar Central
+ To deploy a Basic Firewall, see [Deploy and configure Azure Firewall Basic (preview) and policy using the Azure portal](deploy-firewall-basic-portal-policy.md). ## Azure Firewall Manager
frontdoor Front Door Security Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-security-headers.md
Title: 'Tutorial: Add security headers with Rules Engine - Azure Front Door' description: This tutorial teaches you how to configure a security header via Rules Engine on Azure Front Door na Previously updated : 09/14/2020 Last updated : 10/12/2022 # Customer intent: As an IT admin, I want to learn about Front Door and how to configure a security header via Rules Engine.
In this tutorial, you learn how to:
## Prerequisites
-* Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](quickstart-create-front-door.md).
-* If this is your first time using the Rules Engine feature, see how to [Set up a Rules Engine](front-door-tutorial-rules-engine.md).
+* An Azure subscription.
+* An Azure Front Door. To complete the steps in this tutorial, you must have a Front Door configured with a rules engine. For more information, see [Quickstart: Create a Front Door](quickstart-create-front-door.md) and [Configure your Rules Engine](front-door-tutorial-rules-engine.md).
## Add a Content-Security-Policy header in Azure portal
-1. Click **Add** to add a new rule. Provide the rule a name and then click **Add an Action** > **Response Header**.
+1. Within your Front Door resource, select **Rules engine configuration** under **Settings**, and then select the rules engine that you want to add the security header to.
-1. Set the Operator to be **Append** to add this header as a response to all of the incoming requests to this route.
+ :::image type="content" source="media/front-door-security-headers/front-door-rules-engine-configuration.png" alt-text="Screenshot showing rules engine configuration page of Azure Front Door.":::
-1. Add the header name: **Content-Security-Policy** and define the values this header should accept. In this scenario, we choose *"script-src 'self' https://apiphany.portal.azure-api.net."*
+2. Select **Add rule** to add a new rule. Give the rule a name, and then select **Add an Action** > **Response Header**.
+
+3. Set the Operator to **Append** to add this header as a response to all of the incoming requests to this route.
+
+4. Add the header name: *Content-Security-Policy* and define the values this header should accept, then select **Save**. In this scenario, we choose *`script-src 'self' https://apiphany.portal.azure-api.net`*.
+
+ :::image type="content" source="./media/front-door-security-headers/front-door-security-header.png" alt-text="Screenshot showing the added security header under.":::
> [!NOTE] > Header values are limited to 640 characters.
-1. Once you've added all of the rules you'd like to your configuration, don't forget to go to your preferred route and associate your Rules Engine configuration to your Route Rule. This step is required to enable the rule to work.
+5. Once you've added all the rules you'd like to your configuration, don't forget to go to your preferred route and associate your rules engine configuration with that route rule. This step is required for the rule to take effect.
-![portal sample](./media/front-door-rules-engine/rules-engine-security-header-example.png)
+ :::image type="content" source="./media/front-door-security-headers/front-door-associate-routing-rule.png" alt-text="Screenshot showing how to associate a routing rule.":::
-> [!NOTE]
-> In this scenario, we did not add [match conditions](front-door-rules-engine-match-conditions.md) to the rule. All incoming requests that match the path defined in the Route Rule will have this rule applied. If you would like it to only apply to a subset of those requests, be sure to add your specific **match conditions** to this rule.
+ > [!NOTE]
+ > In this scenario, we did not add [match conditions](front-door-rules-engine-match-conditions.md) to the rule. All incoming requests that match the path defined in the Route Rule will have this rule applied. If you would like it to only apply to a subset of those requests, be sure to add your specific **match conditions** to this rule.
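To confirm the rule is working once it's associated with a route, you can inspect the response headers returned by your endpoint. A minimal PowerShell sketch (the hostname is a placeholder for your own Front Door frontend):
```azurepowershell
# Request the site through Front Door and print the security header that the rule appends.
$response = Invoke-WebRequest -Uri "https://<your-front-door>.azurefd.net" -UseBasicParsing
$response.Headers["Content-Security-Policy"]
# Expected value for the example in this tutorial:
# script-src 'self' https://apiphany.portal.azure-api.net
```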
## Clean up resources
-In the preceding steps, you configured Security headers with Rules Engine. If you no longer want the rule, you can remove it by clicking Delete rule.
+In the previous steps, you configured security headers with the rules engine of your Front Door. If you no longer want the rule, you can remove it by selecting **Delete rule** within the rules engine.
## Next steps
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/10/2022++ Last updated : 10/13/2022
initiative definition.
### PCI DSS requirement 1.3.4
-**ID**: PCI DSS v3.2.1 1.3.4
-**Ownership**: customer
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
-|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
-|[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) |
-|[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
- ## Requirement 10 ### PCI DSS requirement 10.5.4
hdinsight Hdinsight Machine Learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-machine-learning-overview.md
- Title: Machine learning overview - Azure HDInsight
-description: Overview of big data machine learning options for clusters in Azure HDInsight.
--- Previously updated : 12/06/2019--
-# Machine learning on HDInsight
-
-HDInsight enables machine learning with big data, providing the ability to obtain valuable insight from large amounts (petabytes, or even exabytes) of structured, unstructured, and fast-moving data. There are several machine learning options in HDInsight: SparkML and Apache Spark MLlib, Apache Hive, and the Microsoft Cognitive Toolkit.
-
-## SparkML and MLlib
-
-[HDInsight Spark](spark/apache-spark-overview.md) is an Azure-hosted offering of [Apache Spark](https://spark.apache.org/), a unified, open source, parallel data processing framework supporting in-memory processing to boost big data analytics. The Spark processing engine is built for speed, ease of use, and sophisticated analytics. Spark's in-memory distributed computation capabilities make it a good choice for the iterative algorithms used in machine learning and graph computations. There are two scalable machine learning libraries that bring algorithmic modeling capabilities to this distributed environment: MLlib and SparkML. MLlib contains the original API built on top of RDDs. SparkML is a newer package that provides a higher-level API built on top of DataFrames for constructing ML pipelines. SparkML doesn't yet support all of the features of MLlib, but is replacing MLlib as Spark's standard machine learning library.
-
-The Microsoft Machine Learning library for Apache Spark is [MMLSpark](https://github.com/Azure/mmlspark). This library is designed to make data scientists more productive on Spark, increase the rate of experimentation, and leverage cutting-edge machine learning techniques, including deep learning, on very large datasets. MMLSpark provides a layer on top of SparkML's low-level APIs when building scalable ML models, such as indexing strings, coercing data into a layout expected by machine learning algorithms, and assembling feature vectors. The MMLSpark library simplifies these and other common tasks for building models in PySpark.
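To make the distinction concrete, here's a minimal sketch of a SparkML (DataFrame-based) pipeline in PySpark. The toy data, column names, and choice of logistic regression are illustrative assumptions, not taken from an HDInsight sample.

```python
# Minimal SparkML (DataFrame-based) pipeline sketch.
# Assumes a Spark session is available (for example, in a Jupyter notebook on an HDInsight Spark cluster);
# the columns and sample rows below are made up for illustration.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("sparkml-sketch").getOrCreate()

# Toy training data: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.1, 0.0), (1.5, 0.3, 1.0), (2.2, 0.1, 1.0), (0.2, 1.9, 0.0)],
    ["f1", "f2", "label"],
)

# Assemble the feature columns into a single vector column, then fit a classifier.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(df)

model.transform(df).select("f1", "f2", "label", "prediction").show()
```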
-
-## Azure Machine Learning and Apache Hive
-
-Azure Machine Learning provides tools to model predictive analytics, and a fully managed service you can use to deploy your predictive models as ready-to-consume web services. Azure Machine Learning is a complete predictive analytics solution in the cloud that you can use to create, test, operationalize, and manage predictive models. Select from a large algorithm library, use a web-based studio for building models, and easily deploy your model as a web service.
--
-Create features for data in an HDInsight Hadoop cluster using [Hive queries](/azure/architecture/data-science-process/create-features-hive). *Feature engineering* attempts to increase the predictive power of learning algorithms by creating features from raw data that facilitate the learning process. You can run HiveQL queries from Azure Machine Learning Studio (classic), and access data processed in Hive and stored in blob storage, by using the [Import Data module](../machine-learning/classic/import-data.md).
-
-## Microsoft Cognitive Toolkit
-
-[Deep learning](https://www.microsoft.com/en-us/research/group/dltc/) is a branch of machine learning that uses neural networks, inspired by the biological processes of the human brain. Many researchers see deep learning as a promising approach for enhancing artificial intelligence. Examples of deep learning are spoken language translators, image recognition systems, and machine reasoning.
-
-To help advance its own work in deep learning, Microsoft developed the free, easy-to-use, open-source [Microsoft Cognitive Toolkit](https://www.microsoft.com/en-us/cognitive-toolkit/). This toolkit is being used by a wide variety of Microsoft products, by companies worldwide with a need to deploy deep learning at scale, and by students interested in the latest algorithms and techniques.
-
-## See also
-
-### Scenarios
-
-* [Apache Spark with Machine Learning: Use Spark in HDInsight for analyzing building temperature using HVAC data](spark/apache-spark-ipython-notebook-machine-learning.md)
-* [Apache Spark with Machine Learning: Use Spark in HDInsight to predict food inspection results](spark/apache-spark-machine-learning-mllib-ipython.md)
-* [Generate movie recommendations with Apache Mahout](hadoop/apache-hadoop-mahout-linux-mac.md)
-* [Apache Hive and Azure Machine Learning](/azure/architecture/data-science-process/create-features-hive)
-* [Apache Hive and Azure Machine Learning end-to-end](/azure/architecture/data-science-process/hive-walkthrough)
-* [Machine learning with Apache Spark on HDInsight](/azure/architecture/data-science-process/spark-overview)
-
-### Deep learning resources
-
-* [Use Microsoft Cognitive Toolkit deep learning model with Azure HDInsight Spark cluster](spark/apache-spark-microsoft-cognitive-toolkit.md)
-* [Deep Learning and AI frameworks on the Data Science Virtual Machine (DSVM)](../machine-learning/data-science-virtual-machine/dsvm-tools-deep-learning-frameworks.md)
hdinsight Hdinsight Version Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-version-release.md
Title: HDInsight 4.0 overview - Azure
description: Compare HDInsight 3.6 to HDInsight 4.0 features, limitations, and upgrade recommendations. Previously updated : 04/22/2022 Last updated : 10/13/2022 # Azure HDInsight 4.0 overview
There's no supported upgrade path from previous versions of HDInsight to HDInsig
## Limitations
-* HDInsight 4.0 doesn't support MapReduce for Apache Hive. Use Apache Tez instead. Learn more about [Apache Tez](https://tez.apache.org/).
* HDInsight 4.0 doesn't support Apache Storm. * HDInsight 4.0 doesn't support the ML Services cluster type.
-* Hive View is only available on HDInsight 4.0 clusters with a version number equal to or greater than 4.1. This version number is available in Ambari Admin -> Versions.
-* Shell interpreter in Apache Zeppelin isn't supported in Spark and Interactive Query clusters.
-* You can't *disable* LLAP on a Spark-LLAP cluster. You can only turn LLAP off.
-* Azure Data Lake Storage Gen2 can't save Jupyter Notebooks in a Spark cluster.
-* Apache pig runs on Tez by default, However you can change it to Mapreduce
-* Spark SQL Ranger integration for row and column security is deprecated
-* Spark 2.4 and Kafka 2.1 are available in HDInsight 4.0, so Spark 2.3 and Kafka 1.1 are no longer supported. We recommend using Spark 2.4 & Kafka 2.1 and above in HDInsight 4.0.
+* Shell interpreter in Apache Zeppelin isn't supported in Spark and Interactive Query clusters.
+* Apache Pig runs on Tez by default. However, you can change it to Mapreduce.
+* Spark SQL Ranger integration for row and column security is deprecated.
## Next steps
hdinsight Apache Spark Jupyter Spark Sql Use Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-portal.md
description: This quickstart shows how to use the Azure portal to create an Apac
Previously updated : 04/01/2022 Last updated : 10/13/2022 #Customer intent: As a developer new to Apache Spark on Azure, I need to see how to create a Spark cluster and query some data.
In this quickstart, you use the Azure portal to create an Apache Spark cluster i
For in-depth explanations of available configurations, see [Set up clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md). For more information regarding the use of the portal to create clusters, see [Create clusters in the portal](../hdinsight-hadoop-create-linux-clusters-portal.md).
-If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md).
+If you're using multiple clusters together, you may want to create a virtual network; if you're using a Spark cluster, you may also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md).
> [!IMPORTANT] > Billing for HDInsight clusters is prorated per minute, whether you are using them or not. Be sure to delete your cluster after you have finished using it. For more information, see the [Clean up resources](#clean-up-resources) section of this article.
You use the Azure portal to create an HDInsight cluster that uses Azure Storage
1. From the top menu, select **+ Create a resource**.
- :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-portal/azure-portal-create-resource.png " alt-text="Azure portal create a resource" border="true":::
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-portal/azure-portal-create-resource.png " alt-text="Screenshot of Azure portal how to create a resource" border="true":::
1. Select **Analytics** > **Azure HDInsight** to go to the **Create HDInsight cluster** page.
You use the Azure portal to create an HDInsight cluster that uses Azure Storage
|Resource group | From the drop-down list, select your existing resource group, or select **Create new**.| |Cluster name | Enter a globally unique name.| |Region | From the drop-down list, select a region where the cluster is created. |
- |Cluster type| Select Select cluster type to open a list. From the list, select **Spark**.|
+ |Availability zone |Optional - specify an availability zone in which to deploy your cluster|
+ |Cluster type| Select cluster type to open a list. From the list, select **Spark**.|
|Cluster version|This field will auto-populate with the default version once the cluster type has been selected.|
- |Cluster login username| Enter the cluster login username. The default name is **admin**. You use this account to login in to the Jupyter Notebook later in the quickstart. |
+ |Cluster login username| Enter the cluster login username. The default name is **admin**. You use this account to log in to the Jupyter Notebook later in the quickstart. |
|Cluster login password| Enter the cluster login password. | |Secure Shell (SSH) username| Enter the SSH username. The SSH username used for this quickstart is **sshuser**. By default, this account shares the same password as the *Cluster Login username* account. |
+
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-portal/azure-portal-cluster-basics-spark.png " alt-text="Screenshot shows Create HDInsight cluster with the Basics tab selected." border="true":::
- :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-portal/azure-portal-cluster-basics-spark.png " alt-text="Screenshot shows Create H D Insight cluster with the Basics tab selected." border="true":::
-
- Select **Next: Storage >>** to continue to the **Storage** page.
+1. Select **Next: Storage >>** to continue to the **Storage** page.
1. Under **Storage**, provide the following values:
You use the Azure portal to create an HDInsight cluster that uses Azure Storage
|Primary storage account|Use the auto-populated value.| |Container|Use the auto-populated value.|
- :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-portal/azure-portal-cluster-storage.png " alt-text="Screenshot shows Create H D Insight cluster with the Storage tab selected." border="true":::
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-portal/azure-portal-cluster-storage.png " alt-text="Screenshot shows Create HDInsight cluster with the Storage tab selected." border="true":::
Select **Review + create** to continue.
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Refer to the table below to find details about resolution dates or possible work
| :- | : | :- | :- | |Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | August 2022 |No workaround | Not resolved | |The SQL provider will cause the `RawResource` column in the database to save incorrectly. This occurs in a small number of cases when a transient exception occurs that causes the provider to use its retry logic.ΓÇ»|April 2022 |Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571)|May 2022 |
+| Queries don't provide consistent result counts when appended with the `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | July 2022 | No workaround | Not resolved |
## Next steps
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## September 2022
+
+### Azure Health Data Services
+
+#### **Bug Fixes**
+
+| Bug Fix |Related information |
+| :- | : |
+| Querying with the `:not` operator was returning more results than expected | The issue is now fixed, and querying with the `:not` operator should provide correct results. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2785). |
+
+#### **Known Issues**
+
+| Known Issue | Description |
+| : | :- |
+| Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on create, search, update, and delete operations | No workaround. |
+
+### FHIR Service
+
+#### **Bug Fixes**
+
+| Bug Fix |Related information |
+| :- | : |
+| Error message is provided for export failure resulting from a long time span | When an export job fails because of a long time span, the customer sees a `RequestEntityTooLarge` HTTP status code. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2790).|
+| In a query sort, functionality throws an error when a chained search is performed with the same field value. | Sort functionality now returns a response. For more information, see [#2794](https://github.com/microsoft/fhir-server/pull/2794).
+| Server doesn't indicate that `_text` isn't supported | When passed as a URL parameter, `_text` returns an error in the response when using the `Prefer` header with the value `handling=strict`. For more information, see [#2779](https://github.com/microsoft/fhir-server/pull/2779). |
+| Verbose error message isn't provided for an invalid resource type | A verbose error message is added when the resource type is invalid or empty for `_include` and `_revinclude` searches. For more information, see [#2776](https://github.com/microsoft/fhir-server/pull/2776).
+
+### DICOM service
+
+#### **Features**
+
+| Enhancements/Improvements | Related information |
+| : | :- |
+| Export is GA |The export feature for the DICOM service is now generally available. Export enables a user-supplied list of studies, series, and/or instances to be exported in bulk to an Azure Storage account. Learn more about the [export feature](https://github.com/microsoft/dicom-server/blob/main/docs/how-to-guides/export-data.md). |
+|Improved deployment performance |Performance improvements have cut the time to deploy new instances of the DICOM service by more than 55% at the 50th percentile. |
+| Reduced strictness when validating STOW requests |Some customers have run into issues storing DICOM files that do not perfectly conform to the specification. To enable those files to be stored in the DICOM service, we have reduced the strictness of the validation performed on STOW. <p>The service will now accept the following: <p><ul><li>DICOM UIDs that contain trailing whitespace <li>IS, DS, SV, and UV VRs that are not valid numbers<li>Invalid private creator tags |
+
+### Toolkit and Samples Open Source
+
+#### **Features**
+
+| Enhancements/Improvements | Related information |
+| : | :- |
+| Azure Health Data Services Toolkit | The [Azure Health Data Services Toolkit](https://github.com/microsoft/azure-health-data-services-toolkit) is now in public preview. The toolkit is open source and lets you easily customize and extend the functionality of your Azure Health Data Services implementations. |
+ ## August 2022 ### FHIR service
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Using the REST API:
1. Use the REST API to retrieve a list of role IDs from your application: ```http
- GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31
+ GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-07-31
``` The response to this request looks like the following example:
Using the REST API:
1. Use the REST API to create an API token for a role. For example, to create an API token called `operator-token` for the operator role: ```http
- PUT https://{your app subdomain}.azureiotcentral.com/api/apiToken/operator-token?api-version=2022-05-31
+ PUT https://{your app subdomain}.azureiotcentral.com/api/apiToken/operator-token?api-version=2022-07-31
``` Request body:
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
In IoT Central, a module refers to an IoT Edge module running on a connected IoT
Use the following request to retrieve the components from a device called `temperature-controller-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components?api-version=2022-07-31
``` The response to this request looks like the following example. The `value` array contains details of each device component:
The response to this request looks like the following example. The `value` array
Use the following request to retrieve a list of modules running on a connected IoT Edge device called `environmental-sensor-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=2022-07-31
``` The response to this request looks like the following example. The array of modules only includes custom modules running on the IoT Edge device, not the built-in `$edgeAgent` and `$edgeHub` modules:
The response to this request looks like the following example. The array of modu
Use the following request to retrieve a list of the components in a module called `SimulatedTemperatureSensor`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=2022-07-31
``` ## Read telemetry
GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-s
Use the following request to retrieve the last known telemetry value from a device that doesn't use components. In this example, the device is called `thermostat-01` and the telemetry is called `temperature`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/telemetry/temperature?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/telemetry/temperature?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve the last known telemetry value from a device that does use components. In this example, the device is called `temperature-controller-01`, the component is called `thermostat2`, and the telemetry is called `temperature`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/telemetry/temperature?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/telemetry/temperature?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
If the device is an IoT Edge device, use the following request to retrieve the last known telemetry value from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor` and telemetry called `ambient`. The `ambient` telemetry type has temperature and humidity values: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/telemetry/ambient?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/telemetry/ambient?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve the property values from a device that doesn't use components. In this example, the device is called `thermostat-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=2022-07-31
``` The response to this request looks like the following example. It shows the device is reporting a single property value:
The response to this request looks like the following example. It shows the devi
Use the following request to retrieve property values from all components. In this example, the device is called `temperature-controller-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/properties?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/properties?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a property value from an individual component. In this example, the device is called `temperature-controller-01` and the component is called `thermostat2`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
} ```
-If the device is an IoT Edge device, use the following request to retrieve property values from a from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor`:
+If the device is an IoT Edge device, use the following request to retrieve property values from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor`:
```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=2022-07-31
``` The response to this request looks like the following example:
Some properties are writable. For example, in the thermostat model the `targetTe
Use the following request to write an individual property value to a device that doesn't use components. In this example, the device is called `thermostat-01`: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=2022-05-31
+PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=2022-07-31
``` The request body looks like the following example:
The response to this request looks like the following example:
Use the following request to write an individual property value to a device that does use components. In this example, the device is called `temperature-controller-01` and the component is called `thermostat2`: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=2022-05-31
+PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=2022-07-31
``` The request body looks like the following example:
The response to this request looks like the following example:
If the device is an IoT Edge device, use the following request to write an individual property value to a module. This example uses a device called `environmental-sensor-01`, a module called `SimulatedTemperatureSensor`, and a property called `SendInterval`: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=2022-05-31
+PUT https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=2022-07-31
``` The request body looks like the following example:
The response to this request looks like the following example:
If you're using an IoT Edge device, use the following request to retrieve property values from a module: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/properties?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/properties?api-version=2022-07-31
``` If you're using an IoT Edge device, use the following request to retrieve property values from a component in a module: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/components/{componentName}/properties?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/components/{componentName}/properties?api-version=2022-07-31
``` ## Call commands
You can use the REST API to call device commands and retrieve the device history
Use the following request to call a command on device that doesn't use components. In this example, the device is called `thermostat-01` and the command is called `getMaxMinReport`: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=2022-05-31
+POST https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=2022-07-31
``` The request body looks like the following example:
The response to this request looks like the following example:
To view the history for this command, use the following request: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to call a command on device that does use components. In this example, the device is called `temperature-controller-01`, the component is called `thermostat2`, and the command is called `getMaxMinReport`: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=2022-05-31
+POST https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=2022-07-31
``` The formats of the request payload and response are the same as for a device that doesn't use components.
The formats of the request payload and response are the same as for a device tha
To view the history for this command, use the following request: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=2022-07-31
``` > [!TIP]
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
The response to this request looks like the following example:
PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview ```
-You can use this to perform an incremental update to an export. The sample request body looks like the following example which updates the `displayName` to a destination:
+You can use this to perform an incremental update to an export. The sample request body looks like the following example that updates the `displayName` to a destination:
```json {
The response to this request looks like the following example:
PATCH https://{subdomain}.{baseDomain}/dataExport/exports/{exportId}?api-version=2022-06-30-preview ```
-You can use this to perform an incremental update to an export. The sample request body looks like the following example which updates the `enrichments` to an export:
+You can use this to perform an incremental update to an export. The sample request body looks like the following example that updates the `enrichments` to an export:
```json {
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
The IoT Central REST API lets you:
Use the following request to create and publish a new device template. Default views are automatically generated for device templates created this way. ```http
-PUT https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
+PUT https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-07-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to retrieve details of a device template from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-07-31
``` >[!NOTE]
The response to this request looks like the following example:
## Update a device template ```http
-PATCH https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
+PATCH https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-07-31
``` >[!NOTE] >`{deviceTemplateId}` should be the same as the `@id` in the payload.
-The sample request body looks like the following example which adds a the `LastMaintenanceDate` cloud property to the device template:
+The sample request body looks like the following example that adds a `LastMaintenanceDate` cloud property to the device template:
```json {
The response to this request looks like the following example:
Use the following request to delete a device template: ```http
-DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
+DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-07-31
``` ## List device templates
DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?a
Use the following request to retrieve a list of device templates from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-05-31
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
You can also combine two or more filters.
-The following example shows how to retrieve the top 2 device templates where the display name contains the string `thermostat`.
+The following example shows how to retrieve the top two device templates where the display name contains the string `thermostat`.
```http GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-07-31&$filter=contains(displayName, 'thermostat')&$top=2
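If you want to script these query options rather than issue the request by hand, the following sketch calls the same endpoint with Python's `requests` library. The app subdomain and API token are placeholders, and it assumes an IoT Central API token is passed in the `Authorization` header.

```python
# Sketch: list device templates through the IoT Central REST API,
# combining a $filter and $top query option.
# The subdomain and token values below are placeholders for illustration only.
import requests

app_subdomain = "my-iot-central-app"        # placeholder
api_token = "SharedAccessSignature sr=..."  # placeholder IoT Central API token

url = f"https://{app_subdomain}.azureiotcentral.com/api/deviceTemplates"
params = {
    "api-version": "2022-07-31",
    "$filter": "contains(displayName, 'thermostat')",
    "$top": 2,
}

response = requests.get(url, params=params, headers={"Authorization": api_token})
response.raise_for_status()
for template in response.json().get("value", []):
    print(template.get("displayName"), template.get("@id"))
```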
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The IoT Central REST API lets you:
Use the following request to create a new device. ```http
-PUT https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
+PUT https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-07-31
``` The following example shows a request body that adds a device for a device template. You can get the `template` details from the device templates page in IoT Central application UI.
The response to this request looks like the following example:
Use the following request to retrieve details of a device from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
+GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-07-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to retrieve credentials of a device from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}/credentials?api-version=2022-05-31
+GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}/credentials?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
### Update a device ```http
-PATCH https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
+PATCH https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-07-31
``` >[!NOTE] >`{deviceTemplateId}` should be the same as the `@id` in the payload.
-The sample request body looks like the following example which updates the `displayName` to the device:
+The sample request body looks like the following example that updates the `displayName` to the device:
```json {
The response to this request looks like the following example:
Use the following request to delete a device: ```http
-DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
+DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-07-31
``` ### List devices
DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-
Use the following request to retrieve a list of devices from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-05-31
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
You can also combine two or more filters.
-The following example shows how to retrieve the top 2 device where the display name contains the string `thermostat`.
+The following example shows how to retrieve the top two devices where the display name contains the string `thermostat`.
```http GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-07-31&$filter=contains(displayName, 'thermostat')&$top=2
The response to this request looks like the following example:
Use the following request to create a new device group. ```http
-PUT https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
+PUT https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-07-31
``` When you create a device group, you define a `filter` that selects the devices to add to the group. A `filter` identifies a device template and any properties to match. The following example creates a device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" template where the `provisioned` property is true.
The response to this request looks like the following example:
Use the following request to retrieve details of a device group from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
+GET https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-07-31
``` * deviceGroupId - Unique ID for the device group.
The response to this request looks like the following example:
### Update a device group ```http
-PATCH https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
+PATCH https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-07-31
```
-The sample request body looks like the following example which updates the `displayName` of the device group:
+The sample request body looks like the following example that updates the `displayName` of the device group:
```json {
The response to this request looks like the following example:
Use the following request to delete a device group: ```http
-DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
+DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-07-31
``` ### List device groups
DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-ver
Use the following request to retrieve a list of device groups from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=2022-05-31
+GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=2022-07-31
``` The response to this request looks like the following example:
Use the following request to update an enrollment group.
PATCH https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg?api-version=2022-07-31 ```
-The following example shows a request body that updates the display name of a enrollment group:
+The following example shows a request body that updates the display name of an enrollment group:
```json {
iot-central Howto Manage Organizations With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-organizations-with-rest-api.md
The IoT Central REST API lets you:
The REST API lets you create organizations in your IoT Central application. Use the following request to create an organization in your application: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
+PUT https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-07-31
``` * organizationId - Unique ID of the organization
The response to this request looks like the following example:
Use the following request to retrieve details of an individual organization from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to update details of an organization in your application: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
+PATCH https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-07-31
``` The following example shows a request body that updates an organization.
The response to this request looks like the following example:
Use the following request to retrieve a list of organizations from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=2022-07-31
``` The response to this request looks like the following example.
The response to this request looks like the following example.
Use the following request to delete an organization: ```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
+DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-07-31
``` ## Use organizations
DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organ
The REST API lets you list the roles defined in your IoT Central application. Use the following request to retrieve a list of application role and organization role IDs from your application. To learn more, see [How to manage IoT Central organizations](howto-create-organizations.md): ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-07-31
``` The response to this request looks like the following example that includes the application role and organization role IDs.
The response to this request looks like the following example that includes the
Use the following request to create an API token for a node in an organization hierarchy in your application: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/apiTokens/{tokenId}?api-version=2022-05-31
+PUT https://{your app subdomain}.azureiotcentral.com/api/apiTokens/{tokenId}?api-version=2022-07-31
``` * tokenId - Unique ID of the token
The response to this request looks like the following example:
Use the following request to create and associate a user with a node in an organization hierarchy in your application. The ID and email must be unique in the application: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
+PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-07-31
``` In the following request body, the `role` is the ID of one of the organization roles and `organization` is the ID of the organization
The response to this request looks like the following example. The role value id
Use the following request to associate a new device with an organization ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}?api-version=2022-05-31
+PUT https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}?api-version=2022-07-31
``` The following example shows a request body that adds a device for a device template. You can get the `template` details from the device templates page in IoT Central application UI.
The response to this request looks like the following example:
Use the following request to create and associate a new device group with an organization. ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
+PUT https://{your app subdomain}.azureiotcentral.com/api/deviceGroups/{deviceGroupId}?api-version=2022-07-31
``` When you create a device group, you define a `filter` that selects the devices to add to the group. A `filter` identifies a device template and any properties to match. The following example creates a device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" template where the `provisioned` property is true.
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
To learn how to manage users and roles by using the IoT Central UI, see [Manage
The REST API lets you list the roles defined in your IoT Central application. Use the following request to retrieve a list of role IDs from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-07-31
``` The response to this request looks like the following example that includes the three built-in roles and a custom role:
The REST API lets you:
Use the following request to retrieve a list of users from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/users?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/users?api-version=2022-07-31
``` The response to this request looks like the following example. The role values identify the role ID the user is associated with:
The response to this request looks like the following example. The role values i
Use the following request to retrieve details of an individual user from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/users/dc1c916b-a652-49ea-b128-7c465a54c759?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/users/dc1c916b-a652-49ea-b128-7c465a54c759?api-version=2022-07-31
``` The response to this request looks like the following example. The role value identifies the role ID the user is associated with:
The response to this request looks like the following example. The role value id
Use the following request to create a user in your application. The ID and email must be unique in the application: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
+PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-07-31
``` In the following request body, the `role` value is for the operator role you retrieved previously:
You can also add a service principal user which is useful if you need to use ser
Use the following request to change the role assigned to user. This example uses the ID of the builder role you retrieved previously: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
+PATCH https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-07-31
``` Request body. The value is for the builder role you retrieved previously:
The response to this request looks like the following example:
Use the following request to delete a user: ```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
+DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-07-31
``` ## Next steps
iot-central Howto Upload File Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md
The response to this request looks like the following example:
Use the following request to create a file upload blob storage account configuration in your IoT Central application: ```http
-PUT https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
+PUT https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-07-31
``` The request body has the following fields:
The response to this request looks like the following example:
Use the following request to retrieve details of a file upload blob storage account configuration in your IoT Central application: ```http
-GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
+GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to update a file upload blob storage account configuration in your IoT Central application: ```http
-PATCH https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
+PATCH https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-07-31
``` ```json
The response to this request looks like the following example:
Use the following request to delete a storage account configuration: ```http
-DELETE https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
+DELETE https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-07-31
``` ## Test file upload
iot-develop Quickstart Send Telemetry Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-send-telemetry-central.md
Title: Quickstart - connect a device and send telemetry to Azure IoT Central
-description: "This quickstart shows device developers how to connect a device securely to Azure IoT Central. You use an Azure IoT device SDK for C, C#, Python, Node.js, or Java, to build a device client for Windows, Linux, or Raspberry Pi (Raspbian), then you connect and send telemetry."
+description: "This quickstart shows device developers how to connect a device securely to Azure IoT Central. You use an Azure IoT device SDK for C, C#, Python, Node.js, or Java, to build a device client for Windows, Linux, or Raspberry Pi (Raspbian). Then you connect and send telemetry."
Previously updated : 04/27/2021 Last updated : 10/13/2022 zone_pivot_groups: iot-develop-set1-+ #Customer intent: As a device application developer, I want to learn the basic workflow of using an Azure IoT device SDK to build a client app on a device, connect the device securely to Azure IoT Central, and send telemetry.
## View telemetry After the device connects to IoT Central, it begins sending telemetry. You can view the telemetry and other details about connected devices in IoT Central.
-In IoT Central, select **Devices**, click your device name, then select the **Overview** tab. This view displays a graph of the temperatures from the two thermostat devices.
+In IoT Central, select **Devices**, select your device name, then select the **Overview** tab. This view displays a graph of the temperatures from the two thermostat devices.
:::image type="content" source="media/quickstart-send-telemetry-central/iot-central-telemetry-output-overview.png" alt-text="IoT Central device telemetry overview":::
iot-develop Quickstart Send Telemetry Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-send-telemetry-iot-hub.md
Title: Send device telemetry to Azure IoT Hub quickstart
-description: "This quickstart shows device developers how to connect a device securely to Azure IoT Hub. You use an Azure IoT device SDK for C, C#, Python, Node.js, or Java, to build a device client for Windows, Linux, or Raspberry Pi (Raspbian), then you connect and send telemetry."
+description: "This quickstart shows device developers how to connect a device securely to Azure IoT Hub. You use an Azure IoT device SDK for C, C#, Python, Node.js, or Java, to build a device client for Windows, Linux, or Raspberry Pi (Raspbian). Then you connect and send telemetry."
Previously updated : 08/03/2021 Last updated : 10/13/2022 zone_pivot_groups: iot-develop-set1-+ ms.devlang: azurecli #Customer intent: As a device application developer, I want to learn the basic workflow of using an Azure IoT device SDK to build a client app on a device, connect the device securely to Azure IoT Hub, and send telemetry.
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md
Title: Tutorial - Provision devices using a symmetric key enrollment group in Az
description: This tutorial shows how to use symmetric keys to provision devices through an enrollment group in your Device Provisioning Service (DPS) instance Previously updated : 08/19/2022 Last updated : 10/14/2022
+zone_pivot_groups: iot-dps-set1
# Tutorial: Provision devices using symmetric key enrollment groups This tutorial shows how to securely provision multiple simulated symmetric key devices to a single IoT Hub using an enrollment group.
-Some devices may not have a certificate, TPM, or any other security feature that can be used to securely identify the device. The Device Provisioning Service includes [symmetric key attestation](concepts-symmetric-key-attestation.md). Symmetric key attestation can be used to identify a device based off unique information like the MAC address or a serial number.
+Some devices may not have a certificate, TPM, or any other security feature that can be used to securely identify the device. For such devices, the Azure IoT Hub Device Provisioning Service (DPS) includes [symmetric key attestation](concepts-symmetric-key-attestation.md). Symmetric key attestation can be used to identify a device based on unique information like the MAC address or a serial number.
-If you can easily install a [hardware security module (HSM)](concepts-service.md#hardware-security-module) and a certificate, then that may be a better approach for identifying and provisioning your devices. Using an HSM will allow you to bypass updating the code deployed to all your devices, and you would not have a secret key embedded in your device images. This tutorial assumes that neither an HSM or a certificate is a viable option. However, it is assumed that you do have some method of updating device code to use the Device Provisioning Service to provision these devices.
+If you can easily install a [hardware security module (HSM)](concepts-service.md#hardware-security-module) and a certificate, then that may be a better approach for identifying and provisioning your devices. Using an HSM will allow you to bypass updating the code deployed to all your devices, and you wouldn't have a secret key embedded in your device images. This tutorial assumes that neither an HSM nor a certificate is a viable option. However, it's assumed that you do have some method of updating device code to use the Device Provisioning Service to provision these devices.
This tutorial also assumes that the device update takes place in a secure environment to prevent unauthorized access to the master group key or the derived device key. This tutorial is oriented toward a Windows-based workstation. However, you can perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
-> [!NOTE]
-> The sample used in this tutorial is written in C. There is also a [C# device provisioning symmetric key sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples/How%20To/SymmetricKeySample) available. To use this sample, download or clone the [azure-iot-sdk-csharp](https://github.com/Azure/azure-iot-sdk-csharp) repository and follow the in-line instructions in the sample code. You can follow the instructions in this tutorial to create a symmetric key enrollment group using the portal and to find the ID Scope and enrollment group primary and secondary keys needed to run the sample. You can also create individual enrollments using the sample.
- ## Prerequisites
-* Completion of the [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) quickstart.
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
+* Complete the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md).
-* [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
+* If you're using a Windows development environment, install [Visual Studio](https://visualstudio.microsoft.com/vs/) 2022 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2019, Visual Studio 2017, and Visual Studio 2015 are also supported. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-* Latest version of [Git](https://git-scm.com/download/) installed.
+* Install the latest [CMake build system](https://cmake.org/download/). Make sure you check the option that adds the CMake executable to your path.
-## Overview
+ >[!IMPORTANT]
+ >Confirm that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. Also, be aware that older versions of the CMake build system fail to generate the solution file used in this article. Make sure to use the latest version of CMake.
-A unique registration ID will be defined for each device based on information that identifies that device. For example, the MAC address or a serial number.
++
+* Install [.NET SDK 6.0](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+
+ ```cmd
+ dotnet --info
+ ```
+++
+* Install [Node.js v4.0+](https://nodejs.org).
+++
+* Install [Python 3.7](https://www.python.org/downloads/) or later on your Windows-based machine. You can check your version of Python by running `python --version`.
+
-An enrollment group that uses [symmetric key attestation](concepts-symmetric-key-attestation.md) will be created with the Device Provisioning Service. The enrollment group will include a group master key. That master key will be used to hash each unique registration ID to produce a unique device key for each device. The device will use that derived device key with its unique registration ID to attest with the Device Provisioning Service and be assigned to an IoT hub.
-The device code demonstrated in this tutorial will follow the same pattern as the [Quickstart: Provision a simulated symmetric key device](quick-create-simulated-device-symm-key.md). The code will simulate a device using a sample from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The simulated device will attest with an enrollment group instead of an individual enrollment as demonstrated in the quickstart.
+* Install [Java SE Development Kit 8](/azure/developer/java/fundamentals/java-support-on-azure) or later on your machine.
+* Download and install [Maven](https://maven.apache.org/install.html).
-## Prepare an Azure IoT C SDK development environment
+* Install the latest version of [Git](https://git-scm.com/download/). Make sure that Git is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes *Git Bash*, the command-line app that you can use to interact with your local Git repository.
-In this section, you will prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
+## Overview
+
+A unique registration ID will be defined for each device based on information that identifies that device. For example, the MAC address or a serial number.
+
+An enrollment group that uses [symmetric key attestation](concepts-symmetric-key-attestation.md) will be created with the Device Provisioning Service. The enrollment group will include a group master key. The group master key will be used to hash each unique registration ID to produce a unique device key for each device. The device will use the derived device key with its unique registration ID to attest with the Device Provisioning Service to be assigned to an IoT hub.
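If it helps to see that hashing step as code, the following is a minimal Python sketch of the derivation (the function name and placeholder key are illustrative, not part of the official samples): the group master key is Base64-decoded, used as an HMAC-SHA256 key over the registration ID, and the digest is Base64-encoded to produce the device key.

```python
import base64
import hashlib
import hmac

def derive_device_key(group_master_key_b64: str, registration_id: str) -> str:
    # Decode the enrollment group's master key, HMAC-SHA256 the registration ID,
    # and Base64-encode the digest to get the per-device key.
    key_bytes = base64.b64decode(group_master_key_b64)
    digest = hmac.new(key_bytes, registration_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Example using the registration ID from this tutorial and a placeholder group key:
# print(derive_device_key("<enrollment-group-primary-key>", "sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6"))
```

The supported commands for this step appear later in [Derive a device key](#derive-a-device-key).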
-The SDK includes the sample code for the simulated device. This simulated device will attempt provisioning during the device's boot sequence.
+## Prepare your development environment
-1. Download the [CMake build system](https://cmake.org/download/).
- It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system.
+In this section, you'll prepare a development environment that's used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device during the device's boot sequence.
-2. Find the tag name for the [latest release](https://github.com/Azure/azure-iot-sdk-c/releases/latest) of the SDK.
+1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
-3. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Use the tag you found in the previous step as the value for the `-b` parameter:
+1. Select the **Tags** tab at the top of the page.
- ```cmd/sh
+1. Copy the tag name for the latest release of the Azure IoT C SDK.
+
+1. In a Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository (replace `<release-tag>` with the tag you copied in the previous step).
+
+ ```cmd
    git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
    cd azure-iot-sdk-c
    git submodule update --init
    ```
- You should expect this operation to take several minutes to complete.
+ This operation could take several minutes to complete.
-4. Create a `cmake` subdirectory in the root directory of the Git repository, and navigate to that folder. Run the following commands from the `azure-iot-sdk-c` directory:
+1. When the operation is complete, run the following commands from the `azure-iot-sdk-c` directory:
- ```cmd/sh
+ ```cmd
    mkdir cmake
    cd cmake
    ```
-5. Run the following command, which builds a version of the SDK specific to your development client platform. A Visual Studio solution for the simulated device will be generated in the `cmake` directory.
+1. The code sample uses a symmetric key to provide attestation. Run the following command to build a version of the SDK specific to your development client platform that includes the device provisioning client:
    ```cmd
    cmake -Dhsm_type_symm_key:BOOL=ON -Duse_prov_client:BOOL=ON ..
    ```
-
- If `cmake` does not find your C++ compiler, you might get build errors while running the above command. If that happens, try running this command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
- Once the build succeeds, the last few output lines will look similar to the following output:
+ >[!TIP]
+ >If `cmake` does not find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
- ```cmd/sh
+1. When the build completes successfully, the last few output lines will look similar to the following output:
+
+ ```output
$ cmake -Dhsm_type_symm_key:BOOL=ON -Duse_prov_client:BOOL=ON ..
- -- Building for: Visual Studio 15 2017
- -- Selecting Windows SDK version 10.0.16299.0 to target Windows 10.0.17134.
- -- The C compiler identification is MSVC 19.12.25835.0
- -- The CXX compiler identification is MSVC 19.12.25835.0
+ -- Building for: Visual Studio 16 2019
+ -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22621.
+ -- The C compiler identification is MSVC 19.29.30146.0
+ -- The CXX compiler identification is MSVC 19.29.30146.0
    ...
    -- Configuring done
    -- Generating done
- -- Build files have been written to: E:/IoT Testing/azure-iot-sdk-c/cmake
+ -- Build files have been written to: C:/azure-iot-sdk-c/cmake
```
-## Create a symmetric key enrollment group
+
+1. Open a Git CMD or Git Bash command-line environment.
+
+2. Clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
+
+ ```cmd
+ git clone https://github.com/Azure/azure-iot-sdk-csharp.git
+ ```
+
-1. Sign in to the [Azure portal](https://portal.azure.com), and open your Device Provisioning Service instance.
-2. Select the **Manage enrollments** tab, and then click the **Add enrollment group** button at the top of the page.
+1. Open a Git CMD or Git Bash command-line environment.
-3. On **Add Enrollment Group**, enter the following information, and click the **Save** button.
+2. Clone the [Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
- - **Group name**: Enter **mylegacydevices**. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).
+ ```cmd
+ git clone https://github.com/Azure/azure-iot-sdk-node.git --recursive
+ ```
+++
+1. Open a Git CMD or Git Bash command-line environment.
+
+2. Clone the [Azure IoT SDK for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+
+ ```cmd
+ git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
+ ```
+++
+1. Open a Git CMD or Git Bash command-line environment.
+
+2. Clone the [Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+
+ ```cmd
+ git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive
+ ```
- - **Attestation Type**: Select **Symmetric Key**.
+3. Go to the root `azure-iot-sdk-java` directory and build the project to download all needed packages. This step can take several minutes to complete.
- - **Auto Generate Keys**: Check this box.
+ ```cmd
+ cd azure-iot-sdk-java
+ mvn install -DskipTests=true
+ ```
- - **Select how you want to assign devices to hubs**: Select **Static configuration** so you can assign to a specific hub.
- - **Select the IoT hubs this group can be assigned to**: Select one of your hubs.
+## Create a symmetric key enrollment group
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your Device Provisioning Service instance.
+
+1. Select the **Manage enrollments** tab and then select **+ Add enrollment group** at the top of the page.
+
+1. On **Add Enrollment Group**, enter the following information:
+
+ * **Group name**: Enter **mylegacydevices**. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).
+
+ * **Attestation Type**: Select **Symmetric Key**.
+
+ * **Auto Generate Keys**: Check this box.
+
+ * **Select how you want to assign devices to hubs**: Select **Static configuration** so you can assign to a specific hub.
- ![Add enrollment group for symmetric key attestation](./media/how-to-legacy-device-symm-key/symm-key-enrollment-group.png)
+ * **Select the IoT hubs this group can be assigned to**: Select one of the IoT hubs from the drop-down list.
-4. Once you saved your enrollment, the **Primary Key** and **Secondary Key** will be generated and added to the enrollment entry. Your symmetric key enrollment group appears as **mylegacydevices** under the *Group Name* column in the *Enrollment Groups* tab.
+ :::image type="content" source="./media/how-to-legacy-device-symm-key/add-symmetric-key-enrollment-group.png" alt-text="Screenshot that shows adding a symmetric key enrollment group to DPS.":::
- Open the enrollment and copy the value of your generated **Primary Key**. This key is your master group key.
+1. Select **Save**. When you save the enrollment, the Device Provisioning Service generates the **Primary Key** and **Secondary Key** and adds them to the enrollment entry. Your symmetric key enrollment group appears as **mylegacydevices** under the *Group Name* column in the *Enrollment Groups* tab.
+1. Open the enrollment and copy the value of the **Primary Key**. This key is your master group key.
## Choose a unique registration ID for the device
-A unique registration ID must be defined to identify each device. You can use the MAC address, serial number, or any unique information from the device.
+A unique registration ID must be defined to identify each device. You can use the MAC address, serial number, or any unique information from the device.
In this example, we use a combination of a MAC address and serial number forming the following string for a registration ID.
-```
+```text
sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
```

Create unique registration IDs for each device. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).
-
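As a quick sanity check before you enroll devices, you can validate candidate registration IDs against those rules. This Python sketch is illustrative only; the regular expression simply encodes the constraints listed above.

```python
import re

# Alphanumerics plus '-', '.', '_', ':'; at most 128 characters;
# the final character must be alphanumeric or '-'.
REGISTRATION_ID_PATTERN = re.compile(r"^[A-Za-z0-9._:-]{0,127}[A-Za-z0-9-]$")

def is_valid_registration_id(registration_id: str) -> bool:
    return REGISTRATION_ID_PATTERN.fullmatch(registration_id) is not None

print(is_valid_registration_id("sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6"))  # True
```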
-## Derive a device key
+## Derive a device key
To generate device keys, use the enrollment group master key to compute an [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the registration ID for each device. The result is then converted into Base64 format for each device.
To generate device keys, use the enrollment group master key to compute an [HMAC
# [Azure CLI](#tab/azure-cli)
-The IoT extension for the Azure CLI provides the [`compute-device-key`](/cli/azure/iot/dps#az-iot-dps-compute-device-key) command for generating derived device keys. This command can be used from a Windows-based or Linux systems, in PowerShell or a Bash shell.
+The IoT extension for the Azure CLI provides the [az iot dps enrollment-group compute-device-key](/cli/azure/iot/dps/enrollment-group#az-iot-dps-enrollment-group-compute-device-key) command for generating derived device keys. This command can be used from both Windows-based and Linux systems.
-Replace the value of `--key` argument with the **Primary Key** from your enrollment group.
+Replace the value of the `--key` parameter with the **Primary Key** from your enrollment group.
-Replace the value of `--registration-id` argument with your registration ID.
+Replace the value of the `--registration-id` parameter with your registration ID.
```azurecli
-az iot dps compute-device-key --key 8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw== --registration-id sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
+az iot dps enrollment-group compute-device-key --key 8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw== --registration-id sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
```

Example result:
Example result:
```

# [Windows](#tab/windows)
-If you are using a Windows-based workstation, you can use PowerShell to generate your derived device key as shown in the following example.
+If you're using a Windows-based workstation, you can use PowerShell to generate your derived device key as shown in the following example.
-Replace the value of **KEY** with the **Primary Key** you noted earlier.
+Replace the value of **KEY** with the **Primary Key** from your enrollment group.
Replace the value of **REG_ID** with your registration ID.
$derivedkey = [Convert]::ToBase64String($sig)
echo "`n$derivedkey`n"
```
+Example result:
+ ```powershell Jsm0lyGpjaVYVP2g3FnmnmG9dI/9qU24wNoykUmermc= ``` # [Linux](#tab/linux)
-If you are using a Linux workstation, you can use openssl to generate your
-derived device key as shown in the following example.
+If you're using a Linux workstation, you can use openssl to generate your derived device key as shown in the following example.
-Replace the value of **KEY** with the **Primary Key** you noted earlier.
+Replace the value of **KEY** with the **Primary Key** from your enrollment group.
Replace the value of **REG_ID** with your registration ID.
keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64
```
+Example result:
+ ```bash Jsm0lyGpjaVYVP2g3FnmnmG9dI/9qU24wNoykUmermc= ```
Jsm0lyGpjaVYVP2g3FnmnmG9dI/9qU24wNoykUmermc=
Each device uses its derived device key and unique registration ID to perform symmetric key attestation with the enrollment group during provisioning.
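To see what that attestation looks like from device code, here's a minimal Python sketch (assuming the `azure-iot-device` package is installed; the placeholder values are illustrative) that registers one device using its derived device key. The sections that follow walk through the supported samples for each language.

```python
from azure.iot.device import ProvisioningDeviceClient

# Illustrative placeholders; substitute your own values.
PROVISIONING_HOST = "global.azure-devices-provisioning.net"
ID_SCOPE = "<id-scope>"
REGISTRATION_ID = "sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6"
DERIVED_DEVICE_KEY = "<derived-device-key>"

client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host=PROVISIONING_HOST,
    registration_id=REGISTRATION_ID,
    id_scope=ID_SCOPE,
    symmetric_key=DERIVED_DEVICE_KEY,
)

result = client.register()
print(result.status)                            # "assigned" on success
print(result.registration_state.assigned_hub)   # the IoT hub this device was assigned to
```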
+## Prepare and run the device provisioning code
-## Create a device image to provision
+In this section, you'll update the device sample code to send the device's boot sequence to your Device Provisioning Service instance. This boot sequence will cause the device to be recognized, authenticated, and assigned to an IoT hub linked to the Device Provisioning Service instance.
-In this section, you will update a provisioning sample named **prov\_dev\_client\_sample** located in the Azure IoT C SDK you set up earlier.
+The sample provisioning code accomplishes the following tasks, in order:
-This sample code simulates a device boot sequence that sends the provisioning request to your Device Provisioning Service instance. The boot sequence will cause the device to be recognized and assigned to the IoT hub you configured on the enrollment group. This would be completed for each device that would be provisioned using the enrollment group.
+1. Authenticates your device with your Device Provisioning resource using the following three parameters:
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service and note down the **_ID Scope_** value.
+ * The ID Scope of your Device Provisioning Service
+ * The registration ID for your device.
+ * The derived device key for your device.
- ![Extract Device Provisioning Service endpoint information from the portal blade](./media/quick-create-simulated-device-x509/copy-id-scope.png)
+2. Assigns the device to the IoT hub already linked to your Device Provisioning Service instance.
-2. In Visual Studio, open the **azure_iot_sdks.sln** solution file that was generated by running CMake earlier. The solution file should be in the following location:
+To update and run the provisioning sample with your device information:
+
+1. In the main menu of your Device Provisioning Service, select **Overview**.
+
+2. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/how-to-legacy-device-symm-key/copy-id-scope.png" alt-text="Screenshot that shows copying the ID scope from the DPS overview pane.":::
+
+3. In Visual Studio, open the *azure_iot_sdks.sln* solution file that was generated by running CMake. The solution file should be in the following location:
+
+ ```output
- ```
   \azure-iot-sdk-c\cmake\azure_iot_sdks.sln
+
   ```
-3. In Visual Studio's *Solution Explorer* window, navigate to the **Provision\_Samples** folder. Expand the sample project named **prov\_dev\_client\_sample**. Expand **Source Files**, and open **prov\_dev\_client\_sample.c**.
+ >[!TIP]
+ >If the file was not generated in your cmake directory, make sure you used a recent version of the CMake build system.
+
+4. In Visual Studio's *Solution Explorer* window, go to the **Provision\_Samples** folder. Expand the sample project named **prov\_dev\_client\_sample**. Expand **Source Files**, and open **prov\_dev\_client\_sample.c**.
-4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
+5. Find the `id_scope` constant, and replace the value with the **ID Scope** value that you copied in step 2.
```c static const char* id_scope = "0ne00002193"; ```
-5. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_SYMMETRIC_KEY` as shown below:
+6. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_SYMMETRIC_KEY` as shown below:
```c SECURE_DEVICE_TYPE hsm_type;
This sample code simulates a device boot sequence that sends the provisioning re
hsm_type = SECURE_DEVICE_TYPE_SYMMETRIC_KEY; ```
-6. Find the call to `prov_dev_set_symmetric_key_info()` in **prov\_dev\_client\_sample.c** which is commented out.
+7. Find the call to `prov_dev_set_symmetric_key_info()` in **prov\_dev\_client\_sample.c** that is commented out.
```c // Set the symmetric key if using they auth type //prov_dev_set_symmetric_key_info("<symm_registration_id>", "<symmetric_Key>"); ```
- Uncomment the function call, and replace the placeholder values (including the angle brackets) with the unique registration ID for your device and the derived device key you generated.
+ Uncomment the function call and replace the placeholder values (including the angle brackets) with the registration ID you chose in [Choose a unique registration ID for the device](#choose-a-unique-registration-id-for-the-device) and the derived device key that you generated in [Derive a device key](#derive-a-device-key).
```c // Set the symmetric key if using they auth type prov_dev_set_symmetric_key_info("sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6", "Jsm0lyGpjaVYVP2g3FnmnmG9dI/9qU24wNoykUmermc="); ```
-
- Save the file.
-7. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
+ > [!CAUTION]
+ > Be aware that this step leaves the derived device key included as part of the image for each device, which isn't a recommended security best practice. This is one reason why security and ease-of-use are often tradeoffs. You must fully review the security of your devices based on your own requirements.
-8. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. In the prompt to rebuild the project, click **Yes**, to rebuild the project before running.
+8. Save the file.
- The following output is an example of the simulated device successfully booting up, and connecting to the provisioning Service instance to be assigned to an IoT hub:
+9. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
- ```cmd
- Provisioning API Version: 1.2.8
+10. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. When you're prompted to rebuild the project, select **Yes** to rebuild it before running.
+
+ The following output is an example of the device successfully connecting to the provisioning Service instance to be assigned to an IoT hub:
+
+ ```output
+ Provisioning API Version: 1.9.1
    Registering Device
    Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED
    Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
- Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
-
- Registration Information received from service:
- test-docs-hub.azure-devices.net, deviceId: sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
+ Registration Information received from service: contoso-hub-2.azure-devices.net, deviceId: sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
Press enter key to exit: ```
-9. In the portal, navigate to the IoT hub your simulated device was assigned to and click the **IoT Devices** tab. On successful provisioning of the simulated to the hub, its device ID appears on the **IoT Devices** blade, with *STATUS* as **enabled**. You might need to click the **Refresh** button at the top.
++
+The sample provisioning code accomplishes the following tasks:
+
+1. Authenticates your device with your Device Provisioning resource using the following three parameters:
+
+ * The ID Scope of your Device Provisioning Service
+ * The registration ID for your device.
+ * The derived device key for your device.
+
+2. Assigns the device to the IoT hub already linked to your Device Provisioning Service instance.
+
+3. Sends a test telemetry message to the IoT hub.
+
+To update and run the provisioning sample with your device information:
+
+1. In the main menu of your Device Provisioning Service, select **Overview**.
+
+2. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/how-to-legacy-device-symm-key/copy-id-scope.png" alt-text="Screenshot that shows copying the ID scope from the DPS overview pane.":::
+
+3. Open a command prompt and go to the *SymmetricKeySample* in the cloned sdk repository:
+
+ ```cmd
+ cd .\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample
+ ```
+
+4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the parameters that are supported by the sample. Only the first three required parameters will be used in this article when running the sample. Review the code in this file. No changes are needed.
+
+ | Parameter | Required | Description |
+ | :-- | :- | :-- |
+ | `--i` or `--IdScope` | True | The ID Scope of the DPS instance |
+ | `--r` or `--RegistrationId` | True | The registration ID for the device. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). |
+ | `--p` or `--PrimaryKey` | True | The primary key of an individual enrollment or the derived device key of a group enrollment. |
+ | `--g` or `--GlobalDeviceEndpoint` | False | The global endpoint for devices to connect to. Defaults to `global.azure-devices-provisioning.net` |
+ | `--t` or `--TransportType` | False | The transport to use to communicate with the device provisioning instance. Defaults to `Mqtt`. Possible values include `Mqtt`, `Mqtt_WebSocket_Only`, `Mqtt_Tcp_Only`, `Amqp`, `Amqp_WebSocket_Only`, `Amqp_Tcp_only`, and `Http1`.|
+
+5. In the *SymmetricKeySample* folder, open *ProvisioningDeviceClientSample.cs* in a text editor. This file shows how the [SecurityProviderSymmetricKey](/dotnet/api/microsoft.azure.devices.shared.securityprovidersymmetrickey?view=azure-dotnet&preserve-view=true) class is used along with the [ProvisioningDeviceClient](/dotnet/api/microsoft.azure.devices.provisioning.client.provisioningdeviceclient?view=azure-dotnet&preserve-view=true) class to provision your simulated symmetric key device. Review the code in this file. No changes are needed.
+
+6. Build and run the sample code using the following command:
+
+ * Replace `<id-scope>` with the **ID Scope** that you copied in step 2.
+ * Replace `<registration-id>` with the registration ID that you chose in [Choose a unique registration ID for the device](#choose-a-unique-registration-id-for-the-device).
+ * Replace `<primarykey>` with the derived device key that you generated.
+
+ ```cmd
+ dotnet run --i <id-scope> --r <registration-id> --p <primarykey>
+ ```
+
+7. You should see something similar to the following output. A "TestMessage" string is sent to the hub as a test message.
+
+ ```output
+   D:\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample>dotnet run --i 0ne00000A0A --r sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
+
+ Initializing the device provisioning client...
+ Initialized for registration Id sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6.
+ Registering with the device provisioning service...
+ Registration status: Assigned.
+ Device sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6 registered to contoso-hub-2.azure-devices.net.
+ Creating symmetric key authentication for IoT Hub...
+ Testing the provisioned device with IoT Hub...
+ Sending a telemetry message...
+ Finished.
+ ```
+++
+The sample provisioning code accomplishes the following tasks, in order:
+
+1. Authenticates your device with your Device Provisioning resource using the following four parameters:
+
+ * `PROVISIONING_HOST`
+ * `PROVISIONING_IDSCOPE`
+ * `PROVISIONING_REGISTRATION_ID`
+ * `PROVISIONING_SYMMETRIC_KEY`
+
+2. Assigns the device to the IoT hub already linked to your Device Provisioning Service instance.
+
+3. Sends a test telemetry message to the IoT hub.
+
+To update and run the provisioning sample with your device information:
+
+1. In the main menu of your Device Provisioning Service, select **Overview**.
+
+2. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/how-to-legacy-device-symm-key/copy-id-scope.png" alt-text="Screenshot that shows copying the ID scope from the DPS overview pane.":::
+
+3. Open a command prompt for executing Node.js commands, and go to the following directory:
+
+ ```cmd
+ cd azure-iot-sdk-node\provisioning\device\samples
+ ```
+
+4. In the *provisioning/device/samples* folder, open *register_symkey.js* and review the code.
+
+ The sample defaults to MQTT as the transport protocol. If you want to use a different protocol, comment out the following line and uncomment the line for the appropriate protocol.
+
+ ```javascript
+ var ProvisioningTransport = require('azure-iot-provisioning-device-mqtt').Mqtt;
+ ```
+
+ Notice, also, that the sample code sets a custom payload:
+
+ ```nodejs
+ provisioningClient.setProvisioningPayload({a: 'b'});
+ ```
- ![Device is registered with the IoT hub](./media/how-to-legacy-device-symm-key/hub-registration.png)
+   You may comment out this code, as it's not needed for this tutorial. A custom payload can be used when you use a custom allocation webhook to assign your device to an IoT hub. For more information, see [Tutorial: Use custom allocation policies](tutorial-custom-allocation-policies.md).
+ The `provisioningClient.register()` method attempts the registration of your device.
+5. In the command prompt, run the following commands to set environment variables used by the sample:
-## Security concerns
+ * The first command sets the `PROVISIONING_HOST` environment variable to the **Global device endpoint**. This endpoint is the same for all DPS instances.
+ * Replace `<id-scope>` with the **ID Scope** that you copied in step 2.
+ * Replace `<registration-id>` with the registration ID that you chose in [Choose a unique registration ID for the device](#choose-a-unique-registration-id-for-the-device).
+   * Replace `<derived-device-key>` with the derived device key that you generated in [Derive a device key](#derive-a-device-key).
-Be aware that this leaves the derived device key included as part of the image for each device, which is not a recommended security best practice. This is one reason why security and ease-of-use are often tradeoffs. You must fully review the security of your devices based on your own requirements.
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ ```
+ ```cmd
+ set PROVISIONING_IDSCOPE=<id-scope>
+ ```
+
+ ```cmd
+ set PROVISIONING_REGISTRATION_ID=<registration-id>
+ ```
+
+ ```cmd
+ set PROVISIONING_SYMMETRIC_KEY=<derived-device-key>
+ ```
+
+6. Build and run the sample code using the following commands:
+
+ ```cmd
+ npm install
+ ```
+
+ ```cmd
+ node register_symkey.js
+ ```
+
+7. You should now see something similar to the following output. A "Hello World" string is sent to the hub as a test message.
+
+ ```output
+ registration succeeded
+ assigned hub=contoso-hub-2.azure-devices.net
+ deviceId=sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
+ payload=undefined
+ Client connected
+ send status: MessageEnqueued
+ ```
+++
+The sample provisioning code accomplishes the following tasks, in order:
+
+1. Authenticates your device with your Device Provisioning resource using the following four parameters:
+
+ * `PROVISIONING_HOST`
+ * `PROVISIONING_IDSCOPE`
+ * `PROVISIONING_REGISTRATION_ID`
+ * `PROVISIONING_SYMMETRIC_KEY`
+
+2. Assigns the device to the IoT hub already linked to your Device Provisioning Service instance.
+
+3. Sends a test telemetry message to the IoT hub.
+
+To update and run the provisioning sample with your device information:
+
+1. In the main menu of your Device Provisioning Service, select **Overview**.
+
+2. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/how-to-legacy-device-symm-key/copy-id-scope.png" alt-text="Screenshot that shows copying the ID scope from the DPS overview pane.":::
+
+3. Open a command prompt and go to the directory where the sample file, _provision_symmetric_key.py_, is located.
+
+ ```cmd
+ cd azure-iot-sdk-python\samples\async-hub-scenarios
+ ```
+
+4. In the command prompt, run the following commands to set environment variables used by the sample:
+
+ * The first command sets the `PROVISIONING_HOST` environment variable to the **Global device endpoint**. This endpoint is the same for all DPS instances.
+ * Replace `<id-scope>` with the **ID Scope** that you copied in step 2.
+ * Replace `<registration-id>` with the registration ID that you chose in [Choose a unique registration ID for the device](#choose-a-unique-registration-id-for-the-device).
+   * Replace `<derived-device-key>` with the derived device key that you generated in [Derive a device key](#derive-a-device-key).
+
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ ```
+
+ ```cmd
+ set PROVISIONING_IDSCOPE=<id-scope>
+ ```
+
+ ```cmd
+ set PROVISIONING_REGISTRATION_ID=<registration-id>
+ ```
+
+ ```cmd
+ set PROVISIONING_SYMMETRIC_KEY=<derived-device-key>
+ ```
+
+5. Install the _azure-iot-device_ library by running the following command.
+
+ ```cmd
+ pip install azure-iot-device
+ ```
+
+6. Run the Python sample code in *_provision_symmetric_key.py_*.
+
+ ```cmd
+ python provision_symmetric_key.py
+ ```
+
+7. You should now see something similar to the following output. Some example wind speed telemetry messages are also sent to the hub as a test.
+
+ ```output
+ D:\azure-iot-sdk-python\samples\async-hub-scenarios>python provision_symmetric_key.py
+ The complete registration result is
+ sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
+ contoso-hub-2.azure-devices.net
+ initialAssignment
+ null
+ Will send telemetry from the provisioned device
+ sending message #1
+ sending message #2
+ sending message #3
+ sending message #4
+ sending message #5
+ sending message #6
+ sending message #7
+ sending message #8
+ sending message #9
+ sending message #10
+ done sending message #1
+ done sending message #2
+ done sending message #3
+ done sending message #4
+ done sending message #5
+ done sending message #6
+ done sending message #7
+ done sending message #8
+ done sending message #9
+ done sending message #10
+ ```
+++
+The sample provisioning code accomplishes the following tasks, in order:
+
+1. Authenticates your device with your Device Provisioning resource using the following four parameters:
+
+ * `GLOBAL_ENDPOINT`
+ * `SCOPE_ID`
+ * `REGISTRATION_ID`
+ * `SYMMETRIC_KEY`
+
+2. Assigns the device to the IoT hub already linked to your Device Provisioning Service instance.
+
+3. Sends a test telemetry message to the IoT hub.
+
+To update and run the provisioning sample with your device information:
+
+1. In the main menu of your Device Provisioning Service, select **Overview**.
+
+2. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/how-to-legacy-device-symm-key/copy-id-scope.png" alt-text="Screenshot that shows copying the ID scope from the DPS overview pane.":::
+
+3. Open the Java device sample code for editing. The full path to the device sample code is:
+
+ `azure-iot-sdk-java/provisioning/provisioning-samples/provisioning-symmetrickey-individual-sample/src/main/java/samples/com/microsoft/azure/sdk/iot/ProvisioningSymmetricKeyIndividualEnrollmentSample.java`
+
+4. Set the value of the following variables for your DPS and device enrollment:
+
+ * Replace `[Your scope ID here]` with the **ID Scope** that you copied in step 2.
+ * Replace `[Your Provisioning Service Global Endpoint here]` with the **Global device endpoint**: global.azure-devices-provisioning.net. This endpoint is the same for all DPS instances.
+ * Replace `[Enter your Symmetric Key here]` with the derived device key that you generated in [Derive a device key](#derive-a-device-key).
+ * Replace `[Enter your Registration ID here]` with the registration ID that you chose in [Choose a unique registration ID for the device](#choose-a-unique-registration-id-for-the-device).
+
+ ```java
+ private static final String SCOPE_ID = "[Your scope ID here]";
+ private static final String GLOBAL_ENDPOINT = "[Your Provisioning Service Global Endpoint here]";
+ private static final String SYMMETRIC_KEY = "[Enter your Symmetric Key here]";
+ private static final String REGISTRATION_ID = "[Enter your Registration ID here]";
+ ```
+
+ > [!CAUTION]
+ > Be aware that this step leaves the derived device key included as part of the image for each device, which isn't a recommended security best practice. This is one reason why security and ease-of-use are often tradeoffs. You must fully review the security of your devices based on your own requirements.
+
+5. Open a command prompt for building. Go to the provisioning sample project folder of the Java SDK repository.
+
+ ```cmd
+ cd azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-symmetrickey-individual-sample
+ ```
+
+6. Build the sample.
+
+ ```cmd
+ mvn clean install
+ ```
+
+7. Go to the `target` folder and execute the created `.jar` file. In the `java` command, replace the `{version}` placeholder with the version in the `.jar` filename on your machine.
+
+ ```cmd
+ cd target
+ java -jar ./provisioning-symmetrickey-individual-sample-{version}-with-deps.jar
+ ```
+
+8. You should now see something similar to the following output.
+
+ ```output
+ Starting...
+ Beginning setup.
+ WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
+ 2022-10-07 18:14:48,388 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Initialized a ProvisioningDeviceClient instance using SDK version 2.0.2
+ 2022-10-07 18:14:48,390 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Starting provisioning thread...
+ Waiting for Provisioning Service to register
+ 2022-10-07 18:14:48,392 INFO (global.azure-devices-provisioning.net-002edcf5-CxnPendingConnectionId-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Opening the connection to device provisioning service...
+ 2022-10-07 18:14:48,518 INFO (global.azure-devices-provisioning.net-002edcf5-Cxn002edcf5-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Connection to device provisioning service opened successfully, sending initial device registration message
+ 2022-10-07 18:14:48,521 INFO (global.azure-devices-provisioning.net-002edcf5-Cxn002edcf5-azure-iot-sdk-RegisterTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.RegisterTask] - Authenticating with device provisioning service using symmetric key
+ 2022-10-07 18:14:49,252 INFO (global.azure-devices-provisioning.net-002edcf5-Cxn002edcf5-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Waiting for device provisioning service to provision this device...
+ 2022-10-07 18:14:49,253 INFO (global.azure-devices-provisioning.net-002edcf5-Cxn002edcf5-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Current provisioning status: ASSIGNING
+ 2022-10-07 18:14:52,459 INFO (global.azure-devices-provisioning.net-002edcf5-Cxn002edcf5-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Device provisioning service assigned the device successfully
+ IotHUb Uri : contoso-hub-2.azure-devices.net
+ Device ID : sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
+ 2022-10-07 18:14:58,424 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-10-07 18:14:58,436 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-10-07 18:14:58,440 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Initialized a DeviceClient instance using SDK version 2.1.1
+ 2022-10-07 18:14:58,450 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - Opening MQTT connection...
+ 2022-10-07 18:14:58,471 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT CONNECT packet...
+ 2022-10-07 18:14:59,314 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT CONNECT packet was acknowledged
+ 2022-10-07 18:14:59,315 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT SUBSCRIBE packet for topic devices/sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6/messages/devicebound/#
+ 2022-10-07 18:14:59,378 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT SUBSCRIBE packet for topic devices/sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6/messages/devicebound/# was acknowledged
+ 2022-10-07 18:14:59,379 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - MQTT connection opened successfully
+ 2022-10-07 18:14:59,381 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - The connection to the IoT Hub has been established
+ 2022-10-07 18:14:59,383 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Updating transport status to new status CONNECTED with reason CONNECTION_OK
+ 2022-10-07 18:14:59,389 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceIO] - Starting worker threads
+ 2022-10-07 18:14:59,392 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking connection status callbacks with new status details
+ 2022-10-07 18:14:59,395 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully
+ 2022-10-07 18:14:59,404 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully
+ Sending message from device to IoT Hub...
+ 2022-10-07 18:14:59,408 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [32cf12c4-4db1-4562-9d8c-267c0506636f] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] )
+ Press any key to exit...
+ 2022-10-07 18:14:59,409 DEBUG (contoso-hub-2.azure-devices.net-sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6-c32c76d0-Cxn0e70bbf7-8476-441d-8626-c17250585ee6-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [32cf12c4-4db1-4562-9d8c-267c0506636f] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] )
+ 2022-10-07 18:14:59,777 DEBUG (MQTT Call: sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. Checking if there is record of sending this message ( Message details: Correlation Id [32cf12c4-4db1-4562-9d8c-267c0506636f] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] )
+ 2022-10-07 18:14:59,779 DEBUG (contoso-hub-2.azure-devices.net-sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6-c32c76d0-Cxn0e70bbf7-8476-441d-8626-c17250585ee6-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [32cf12c4-4db1-4562-9d8c-267c0506636f] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) with status OK
+ Message received! Response status: OK
+ ```
++
+## Confirm your device provisioning registration
+
+In this tutorial, you used the *Static configuration* allocation policy to assign devices that register through the enrollment group to the same IoT hub. However, for allocations where a device might be provisioned to one of several IoT hubs, you can examine the enrollment group's registration records to see which IoT hub the device was provisioned to:
+
+1. In Azure portal, go to your DPS instance.
+
+1. In the **Settings** menu, select **Manage enrollments**.
+
+1. Select **Enrollment Groups**.
+
+1. Select the enrollment group you used for this tutorial, *mylegacydevices*.
+
+1. On the **Enrollment Group Details** page, select the **Registration Records** tab.
+
+1. Find the device ID for your device in the **Device Id** column and note down the IoT hub in the **Assigned IoT Hub** column. The device ID is the same as the registration ID, *sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6*. (For devices that register through an enrollment group, the device ID registered to IoT Hub is always the same as the registration ID.)
+
+ :::image type="content" source="./media/how-to-legacy-device-symm-key/enrollment-group-registration-records.png" alt-text="Screenshot that shows the enrollment group registration records on Azure portal.":::
+
+ You can select the record to see more details like the initial twin assigned to the device.
+
+To verify the device on your IoT hub:
+
+1. In Azure portal, go to the IoT hub that your device was assigned to.
+
+1. In the **Device management** menu, select **Devices**.
+
+1. If your device was provisioned successfully, its device ID, *sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6*, should appear in the list, with **Status** set as *enabled*. If you don't see your device, select **Refresh**.
+
+ :::image type="content" source="./media/how-to-legacy-device-symm-key/hub-registration.png" alt-text="Device is registered with the IoT hub":::
+
+> [!NOTE]
+> If you changed the *initial device twin state* from the default value in the enrollment group, a device can pull the desired twin state from the hub and act accordingly. For more information, see [Understand and use device twins in IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md).
+>
+
+## Provision more devices (optional)
+
+To provision more devices through the enrollment group, follow the steps in the preceding sections to:
+
+1. [Choose a unique registration ID for the device](#choose-a-unique-registration-id-for-the-device).
+
+1. [Derive a device key](#derive-a-device-key). As you did previously, use the primary key for the enrollment group as the group master key. (A batch-derivation sketch follows this list.)
+
+1. [Run the device provisioning code](#prepare-and-run-the-device-provisioning-code). Replace the necessary artifacts with your new derived device key and registration ID.
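If you're enrolling many devices, you can script the key derivation. The following Python sketch is illustrative only (the second registration ID is hypothetical); it reuses the HMAC-SHA256 derivation described in [Derive a device key](#derive-a-device-key) to print a derived key for each registration ID in a list.

```python
import base64
import hashlib
import hmac

GROUP_MASTER_KEY = "<enrollment-group-primary-key>"  # placeholder for your group's primary key
REGISTRATION_IDS = [
    "sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6",
    "sn-008-999-def-mac-f6-e5-d4-c3-b2-a1",  # hypothetical second device
]

key_bytes = base64.b64decode(GROUP_MASTER_KEY)
for reg_id in REGISTRATION_IDS:
    derived = base64.b64encode(
        hmac.new(key_bytes, reg_id.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    print(f"{reg_id}: {derived}")
```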
+
+## Clean up resources
+
+If you plan to continue working on and exploring the device client sample, don't clean up the resources created in this tutorial. If you don't plan to continue, use the following steps to delete all resources created in this tutorial.
+
+### Delete your enrollment group
+
+1. Close the device client sample output window on your machine.
+
+1. From the left-hand menu in the Azure portal, select **All resources**.
+
+1. Select your DPS instance.
+
+1. In the **Settings** menu, select **Manage enrollments**.
+
+1. Select the **Enrollment Groups** tab.
+
+1. Select the enrollment group you used for this tutorial, *mylegacydevices*.
+
+1. On the **Enrollment Group Details** page, select the **Registration Records** tab. Then select the check box next to the **Device Id** column header to select all of the registration records for the enrollment group. Select **Delete Registrations** at the top of the page to delete the registration records.
+
+ > [!IMPORTANT]
+ > Deleting an enrollment group doesn't delete the registration records associated with it. These orphaned records will count against the [registrations quota](about-iot-dps.md#quotas-and-limits) for the DPS instance. For this reason, it's a best practice to delete all registration records associated with an enrollment group before you delete the enrollment group itself.
+
+1. Go back to the **Manage Enrollments** page and make sure the **Enrollment Groups** tab is selected.
+
+1. Select the check box next to the *GROUP NAME* of the enrollment group you used for this tutorial, *mylegacydevices*.
+
+1. At the top of the page, select **Delete**.
+
+### Delete device registration(s) from IoT Hub
+
+1. From the left-hand menu in the Azure portal, select **All resources**.
+
+2. Select your IoT hub.
+
+3. In the **Explorers** menu, select **IoT devices**.
+
+4. Select the check box next to the *DEVICE ID* of the device(s) you registered in this tutorial. For example, *sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6*.
+
+5. At the top of the page, select **Delete**.
## Next steps * To learn more about Reprovisioning, see
-> [!div class="nextstepaction"]
-> [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
-
-> [!div class="nextstepaction"]
-> [Quickstart: Provision a simulated symmetric key device](quick-create-simulated-device-symm-key.md)
+ > [!div class="nextstepaction"]
+ > [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
* To learn more about Deprovisioning, see
-> [!div class="nextstepaction"]
-> [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
+ > [!div class="nextstepaction"]
+ > [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
The IoT Edge documentation on this site is available for two different versions
* **IoT Edge 1.4 (LTS)** is the latest long-term support (LTS) version of IoT Edge and contains content for new features and capabilities that are in the latest stable release. The documentation for this version covers all features and capabilities from all previous versions through 1.3. This version of the documentation also contains content for the IoT Edge for Linux on Windows (EFLOW) continuous release version. * **IoT Edge 1.1 (LTS)** is the first long-term support (LTS) version of IoT Edge. The documentation for this version covers all features and capabilities from all previous versions through 1.1. This version of the documentation also contains content for the IoT Edge for Linux on Windows long-term support version, which is based on IoT Edge 1.1 LTS.
- * This documentation version will be stable through the supported lifetime of version 1.1, and won't reflect new features released in later versions. IoT Edge 1.1 LTS will be supported until December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-
+ * This documentation version will be stable through the supported lifetime of version 1.1, and won't reflect new features released in later versions.
For more information about IoT Edge releases, see [Azure IoT Edge supported systems](support.md). ### IoT Edge for Linux on Windows
This table provides recent version history for IoT Edge package releases, and hi
>[!NOTE] >Long-term servicing (LTS) releases are serviced for a fixed period. Updates to this release type contain critical security and bug fixes only. All other stable releases are continuously supported and serviced. A stable release may contain features updates along with critical security fixes. Stable releases are supported only until the next release (stable or LTS) is generally available.
-| Release notes and assets | Type | Date | Highlights |
-| | - | - | - |
-| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288)
-| [1.3](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0) | Stable | June 2022 | Support for Red Hat Enterprise Linux 8 on AMD and Intel 64-bit architectures.<br>Edge Hub now enforces that inbound/outbound communication uses minimum TLS version 1.2 by default<br>Updated runtime modules (edgeAgent, edgeHub) based on .NET 6
-| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to latest release](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md).
-| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) |
-| [1.0.10](https://github.com/Azure/azure-iotedge/releases/tag/1.0.10) | Stable | October 2020 | [UploadSupportBundle direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics)<br>[Upload runtime metrics](how-to-access-built-in-metrics.md)<br>[Route priority and time-to-live](module-composition.md#priority-and-time-to-live)<br>[Module startup order](module-composition.md#configure-modules)<br>[X.509 manual provisioning](how-to-provision-single-device-linux-x509.md) |
-| [1.0.9](https://github.com/Azure/azure-iotedge/releases/tag/1.0.9) | Stable | March 2020 | X.509 auto-provisioning with DPS<br>[RestartModule direct method](how-to-edgeagent-direct-method.md#restart-module)<br>[support-bundle command](troubleshoot.md#gather-debug-information-with-support-bundle-command) |
+| Release notes and assets | Type | Release Date | End of Support Date | Highlights |
+| | - | | - | - |
+| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | November 12, 2024 | IoT Edge 1.4 LTS is supported through November 12, 2024 to match the [.NET 6 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle). <br> Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
+| [1.3](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0) | Stable | June 2022 | August 2022 | Support for Red Hat Enterprise Linux 8 on AMD and Intel 64-bit architectures.<br>Edge Hub now enforces that inbound/outbound communication uses minimum TLS version 1.2 by default<br>Updated runtime modules (edgeAgent, edgeHub) based on .NET 6 |
+| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | June 2022 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to latest release](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md). |
+| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | December 13, 2022 | IoT Edge 1.1 LTS is supported through December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). <br> [Long-term support plan and supported systems updates](support.md) |
+| [1.0.10](https://github.com/Azure/azure-iotedge/releases/tag/1.0.10) | Stable | October 2020 | February 2021 | [UploadSupportBundle direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics)<br>[Upload runtime metrics](how-to-access-built-in-metrics.md)<br>[Route priority and time-to-live](module-composition.md#priority-and-time-to-live)<br>[Module startup order](module-composition.md#configure-modules)<br>[X.509 manual provisioning](how-to-provision-single-device-linux-x509.md) |
+| [1.0.9](https://github.com/Azure/azure-iotedge/releases/tag/1.0.9) | Stable | March 2020 | October 2020 | X.509 auto-provisioning with DPS<br>[RestartModule direct method](how-to-edgeagent-direct-method.md#restart-module)<br>[support-bundle command](troubleshoot.md#gather-debug-information-with-support-bundle-command) |
### IoT Edge for Linux on Windows
-| IoT Edge release | Available in EFLOW branch | Release date | Highlights |
-||--|--||
-| 1.4 | Continuous release (CR) <br> Long-term support (LTS) | TBA | |
-| 1.3 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.3.1.02092) | September 2022 | [Azure IoT Edge 1.3.0](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0)<br/> [CBL-Mariner 2.0](https://microsoft.github.io/CBL-Mariner/announcing-mariner-2.0/)<br/> [USB passthrough using USB-Over-IP](https://aka.ms/AzEFLOW-USBIP)<br/>[File/Folder sharing between Windows OS and the EFLOW VM](https://aka.ms/AzEFLOW-FolderSharing) |
-| 1.2 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | January 2022 | [Public Preview](https://techcommunity.microsoft.com/t5/internet-of-things-blog/azure-iot-edge-for-linux-on-windows-eflow-continuous-release/ba-p/3169590) |
-| 1.1 | [Long-term support (LTS)](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2106.0) | June 2021 | [Long-term support plan and supported systems updates](support.md) |
+| IoT Edge release | Available in EFLOW branch | Release date | End of Support Date | Highlights |
+| - | - | | - | - |
+| 1.4 | Continuous release (CR) <br> Long-term support (LTS) | TBA | | |
+| 1.3 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.3.1.02092) | September 2022 | In support | [Azure IoT Edge 1.3.0](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0)<br/> [CBL-Mariner 2.0](https://microsoft.github.io/CBL-Mariner/announcing-mariner-2.0/)<br/> [USB passthrough using USB-Over-IP](https://aka.ms/AzEFLOW-USBIP)<br/>[File/Folder sharing between Windows OS and the EFLOW VM](https://aka.ms/AzEFLOW-FolderSharing) |
+| 1.2 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | January 2022 | September 2022 | [Public Preview](https://techcommunity.microsoft.com/t5/internet-of-things-blog/azure-iot-edge-for-linux-on-windows-eflow-continuous-release/ba-p/3169590) |
+| 1.1 | [Long-term support (LTS)](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2106.0) | June 2021 | December 13, 2022 | IoT Edge 1.1 LTS is supported through December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). <br> [Long-term support plan and supported systems updates](support.md) |
## Next steps
key-vault Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/alert.md
You can set an alert to fire after a certain number of violations within a set t
1. Use the following configuration parameters:
+ + Set **Dimension Name** to **Transaction Type** and **Dimension Values** to **vaultoperation**.
+ Set **Threshold** to **Dynamic**. + Set **Operator** to **Greater than**. + Set **Aggregation type** to **Average**.
kubernetes-fleet Quickstart Create Fleet And Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members.md
Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure CLI t
* Enable the following feature for each subscription where you'll be creating the fleet resource or where your AKS clusters that will be joined as members are located in:
- ```azurecli
- az feature register --namespace Microsoft.ContainerService --name FleetResourcePreview
- ```
+ ```azurecli
+ az feature register --namespace Microsoft.ContainerService --name FleetResourcePreview
+ ```
* Install the **fleet** Azure CLI extension. Make sure your version is at least `0.1.0`:
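  A minimal sketch of installing the extension and checking its version (the extension name `fleet` comes from the step above):

  ```azurecli
  # Install the fleet extension, then confirm the installed version is at least 0.1.0
  az extension add --name fleet
  az extension show --name fleet --query version --output tsv
  ```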
Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure CLI t
export FLEET=<your_fleet_name> ```
+* The AKS clusters that you want to join as member clusters to the fleet resource need to be within the supported versions of AKS. Learn more about AKS version support policy [here](../aks/supported-kubernetes-versions.md#kubernetes-version-support-policy).
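+
+To confirm that a cluster is on a supported version before joining it, you can query the version with the Azure CLI; for example (resource group and cluster name are placeholders):
+
+```azurecli
+az aks show --resource-group <resource-group> --name <aks-cluster-name> --query kubernetesVersion --output tsv
+```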
+ ## Create a resource group An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is:
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
# Gateway Load Balancer
-Gateway Load Balancer is a SKU of the Azure Load Balancer portfolio catered for high performance and high availability scenarios with third-party Network Virtual Appliances (NVAs). With the capabilities of Gateway Load Balancer, you can easily deploy, scale, and manage NVAs. Chaining a Gateway Load Balancer to your public endpoint only requires one click.
+Gateway Load Balancer is a SKU of the Azure Load Balancer portfolio catered for high performance and high availability scenarios with third-party Network Virtual Appliances (NVAs). With the capabilities of Gateway Load Balancer, you can easily deploy, scale, and manage NVAs. Chaining a Gateway Load Balancer to your public endpoint only requires one selection.
You can insert appliances transparently for different kinds of scenarios such as:
You can insert appliances transparently for different kinds of scenarios such as
* DDoS protection * Custom appliances
-With Gateway Load Balancer, you can easily add or remove advanced network functionality without additional management overhead. It provides the bump-in-the-wire technology you need to ensure all traffic to a public endpoint is first sent to the appliance before your application. In scenarios with NVAs, it's especially important that flows are symmetrical. Gateway Load Balancer maintains flow stickiness to a specific instance in the backend pool along with flow symmetry. As a result, a consistent route to your network virtual appliance is ensured, without additional manual configuration. As a result, packets traverse the same network path in both directions and appliances that need this key capability are able to function seamlessly.
+With Gateway Load Balancer, you can easily add or remove advanced network functionality without extra management overhead. It provides the bump-in-the-wire technology you need to ensure all traffic to a public endpoint is first sent to the appliance before your application. In scenarios with NVAs, it's especially important that flows are symmetrical. Gateway Load Balancer maintains flow stickiness to a specific instance in the backend pool along with flow symmetry. As a result, a consistent route to your network virtual appliance is ensured, without other manual configuration. Packets therefore traverse the same network path in both directions, and appliances that need this key capability are able to function seamlessly.
The health probe listens across all ports and routes traffic to the backend instances using the HA ports rule. Traffic sent to and from Gateway Load Balancer uses the VXLAN protocol.
Gateway Load Balancer has the following benefits:
* Chain applications across regions and subscriptions
-A Standard Public Load balancer or a Standard IP configuration of a virtual machine can be chained to a Gateway Load Balancer. Once chained to a Standard Public Load Balancer frontend or Standard IP configuration on a virtual machine, no additional configuration is needed to ensure traffic to and from the application endpoint is sent to the Gateway Load Balancer.
+A Standard Public Load Balancer or a Standard IP configuration of a virtual machine can be chained to a Gateway Load Balancer. Once chained to a Standard Public Load Balancer frontend or Standard IP configuration on a virtual machine, no extra configuration is needed to ensure traffic to and from the application endpoint is sent to the Gateway Load Balancer.
Traffic moves from the consumer virtual network to the provider virtual network. The traffic then returns to the consumer virtual network. The consumer virtual network and provider virtual network can be in different subscriptions, tenants, or regions removing management overhead.
Gateway Load Balancer consists of the following components:
* A Gateway Load Balancer rule can be associated with up to two backend pools.
-* **Backend pool(s)** - The group of virtual machines or instances in a virtual machine scale set that is serving the incoming request. To scale cost-effectively to meet high volumes of incoming traffic, computing guidelines generally recommend adding more instances to the backend pool. Load Balancer instantly reconfigures itself via automatic reconfiguration when you scale instances up or down. Adding or removing VMs from the backend pool reconfigures the load balancer without extra operations. The scope of the backend pool is any virtual machine in a single virtual network.
+* **Backend pool(s)** - The group of virtual machines or instances in a Virtual Machine Scale Set that is serving the incoming request. To scale cost-effectively to meet high volumes of incoming traffic, computing guidelines generally recommend adding more instances to the backend pool. Load Balancer instantly reconfigures itself via automatic reconfiguration when you scale instances up or down. Adding or removing VMs from the backend pool reconfigures the load balancer without extra operations. The scope of the backend pool is any virtual machine in a single virtual network.
-* **Tunnel interfaces** - Gateway Load balancer backend pools have another component called the tunnel interfaces. The tunnel interface enables the appliances in the backend to ensure network flows are handled as expected. Each backend pool can have up to 2 tunnel interfaces. Tunnel interfaces can be either internal or external. For traffic coming to your backend pool, you should use the external type. For traffic going from your appliance to the application, you should use the internal type.
+* **Tunnel interfaces** - Gateway Load balancer backend pools have another component called the tunnel interfaces. The tunnel interface enables the appliances in the backend to ensure network flows are handled as expected. Each backend pool can have up to two tunnel interfaces. Tunnel interfaces can be either internal or external. For traffic coming to your backend pool, you should use the external type. For traffic going from your appliance to the application, you should use the internal type.
* **Chain** - A Gateway Load Balancer can be referenced by a Standard Public Load Balancer frontend or a Standard Public IP configuration on a virtual machine. The addition of advanced networking capabilities in a specific sequence is known as service chaining. As a result, this reference is called a chain.

## Pricing
-For pricing see [Load Balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/).
+For pricing, see [Load Balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/).
## Limitations

* Gateway Load Balancer doesn't work with the Global Load Balancer tier.
-* Cross-tenant chaining is not supported through the Azure portal.
-* Gateway Load Balancer does not currently support IPv6
-* Gateway Load Balancer does not currently support zone-redundant frontends due to a known issue. All frontends configured as zone-redundant will be allocated no-zone or non-zonal IPs. Frontends configured in Portal will automatically be created as no-zone.
+* Cross-tenant chaining isn't supported through the Azure portal.
+* Gateway Load Balancer doesn't currently support IPv6.
+* Currently, Gateway Load Balancer frontends configured in the Azure portal will automatically be created as no-zone. To create a zone-redundant frontend, use an alternative client such as ARM templates, the Azure CLI, or PowerShell.
## Next steps
load-testing Concept Azure Load Testing Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-azure-load-testing-vnet-injection.md
In this scenario, you've deployed an application endpoint in a virtual network o
:::image type="content" source="media/concept-azure-load-testing-vnet-injection/azure-hosted-private-endpoint.png" alt-text="Diagram that shows the set-up for load testing a private endpoint hosted on Azure.":::
-When you deploy Azure Load Testing in the virtual network, the load test engines can now communicate with the application endpoint. If you've used separate subnets for the application endpoint and Azure Load Testing, make sure that communication between the subsets isn't blocked, for example by a network security group (NSG). Learn how [network security groups filter network traffic](/azure/virtual-network/network-security-group-how-it-works).
+When you deploy Azure Load Testing in the virtual network, the load test engines can now communicate with the application endpoint. If you've used separate subnets for the application endpoint and Azure Load Testing, make sure that communication between the subnets isn't blocked, for example by a network security group (NSG). Learn how [network security groups filter network traffic](/azure/virtual-network/network-security-group-how-it-works).
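If you do need to open a path between the two subnets, a network security group rule scoped to the subnet address prefixes is usually enough. The following is a minimal sketch with placeholder names and prefixes; adjust it to your own NSG and subnets:

```azurecli
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <application-subnet-nsg> \
  --name AllowLoadTestEngines \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes <load-testing-subnet-prefix> \
  --destination-address-prefixes <application-subnet-prefix> \
  --destination-port-ranges '*'
```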
## Scenario: Load test a public endpoint with access restrictions
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
Last updated 04/29/2022
#Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
-# How Azure Machine Learning works: resources and assets (v2)
+# How Azure Machine Learning works: resources and assets
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
You can [Add RStudio](how-to-create-manage-compute-instance.md#add-custom-applic
|Jupyterlab and extensions|| [Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install)</br>from PyPI|Includes most of the azureml extra packages. To see the full list, [open a terminal window on your compute instance](how-to-access-terminal.md) and run <br/> `conda list -n azureml_py36 azureml*` | |Other PyPI packages|`jupytext`</br>`tensorboard`</br>`nbconvert`</br>`notebook`</br>`Pillow`|
-|Conda packages|`cython`</br>`numpy`</br>`ipykernel`</br>`scikit-learn`</br>`matplotlib`</br>`tqdm`</br>`joblib`</br>`nodejs`</br>`nb_conda_kernels`|
+|Conda packages|`cython`</br>`numpy`</br>`ipykernel`</br>`scikit-learn`</br>`matplotlib`</br>`tqdm`</br>`joblib`</br>`nodejs`|
|Deep learning packages|`PyTorch`</br>`TensorFlow`</br>`Keras`</br>`Horovod`</br>`MLFlow`</br>`pandas-ml`</br>`scrapbook`| |ONNX packages|`keras2onnx`</br>`onnx`</br>`onnxconverter-common`</br>`skl2onnx`</br>`onnxmltools`| |Azure Machine Learning Python samples||
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md
You use system-managed environments when you want [conda](https://conda.io/docs/
You can create environments from clients like the AzureML Python SDK, Azure Machine Learning CLI, Environments page in Azure Machine Learning studio, and [VS Code extension](how-to-manage-resources-vscode.md#create-environment). Every client allows you to customize the base image, Dockerfile, and Python layer if needed.
-For specific code samples, see the "Create an environment" section of [How to use environments](how-to-use-environments.md#create-an-environment).
+For specific code samples, see the "Create an environment" section of [How to use environments](how-to-manage-environments-v2.md#create-an-environment).
Environments are also easily managed through your workspace, which allows you to:
Environments are also easily managed through your workspace, which allows you to
"Anonymous" environments are automatically registered in your workspace when you submit an experiment. They will not be listed but may be retrieved by version.
-For code samples, see the "Manage environments" section of [How to use environments](how-to-use-environments.md#manage-environments).
+For code samples, see the "Manage environments" section of [How to use environments](how-to-manage-environments-v2.md#manage-environments).
## Environment building, caching, and reuse
machine-learning Concept Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-dashboard.md
The Responsible AI dashboard is accompanied by a [PDF scorecard](how-to-responsi
## Responsible AI dashboard components
-The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools. The dashboard integrates these tools with [Azure Machine Learning CLI v2, Azure Machine Learning Python SDK v2](concept-v2.md), and [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md). The tools include:
+The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools. The dashboard integrates these tools with [Azure Machine Learning CLI v2, Azure Machine Learning Python SDK v2](concept-v2.md), and [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio). The tools include:
- [Data explorer](concept-data-analysis.md), to understand and explore your dataset distributions and statistics. - [Model overview and fairness assessment](concept-fairness-ml.md), to evaluate the performance of your model and evaluate your model's group fairness issues (how your model's predictions affect diverse groups of people).
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
1. To restrict outbound traffic for models deployed to Azure Kubernetes Service (AKS), see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Deploy ML models to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#connectivity) articles.
-### Kubernetes Compute
+## Kubernetes Compute
[Kubernetes Cluster](./how-to-attach-kubernetes-anywhere.md) running behind an outbound proxy server or firewall needs extra egress network configuration.
Besides above requirements, the following outbound URLs are also required for Az
> > `<your AML workspace ID>` can be found in Azure portal - your Machine Learning resource page - Properties - Workspace ID.
+### In-cluster communication requirements
+
+To install the AzureML extension on Kubernetes compute, all AzureML-related components are deployed in the `azureml` namespace. The following in-cluster communication is needed to ensure the ML workloads work well in the cluster.
+- The components in the `azureml` namespace should be able to communicate with the Kubernetes API server.
+- The components in the `azureml` namespace should be able to communicate with each other.
+- The components in the `azureml` namespace should be able to communicate with `kube-dns` and `konnectivity-agent` in the `kube-system` namespace.
+- If the cluster is used for real-time inferencing, the `azureml-fe-xxx` pods should be able to communicate with the deployed model pods on port 5001 in other namespaces. The `azureml-fe-xxx` pods should open ports 11001, 12001, 12101, 12201, 20000, 8000, 8001, and 9001 for internal communication.
+- If the cluster is used for real-time inferencing, the deployed model pods should be able to communicate with the `amlarc-identity-proxy-xxx` pods on port 9999.
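+
+As a simple way to spot-check these in-cluster requirements, you can run a throwaway pod in the `azureml` namespace and confirm that DNS resolution through `kube-dns` works. This is a hypothetical check (the pod name and image are arbitrary), not part of the extension itself:
+
+```bash
+# Spot-check DNS resolution from the azureml namespace; served by kube-dns in kube-system
+kubectl -n azureml run dns-check --rm -it --restart=Never --image=busybox -- nslookup kubernetes.default
+```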
+ ## Other firewalls
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
Once AzureML extension is deployed on AKS or Arc Kubernetes cluster, you can attach the Kubernetes cluster to AzureML workspace and create compute targets for ML professionals to use. Some key considerations when attaching Kubernetes cluster to AzureML workspace:
- * If you need to access Azure resource seamlessly from your training script, you can specify a managed identity for Kubernetes compute target during attach operation.
+ * If you need to access Azure resources securely from your training script, you can specify a [managed identity](./how-to-identity-based-service-authentication.md) for the Kubernetes compute target during the attach operation.
 * If you plan to have a different compute target for each project/team, you can specify a Kubernetes namespace for the compute target to isolate workloads among different teams/projects.
 * For the same Kubernetes cluster, you can attach it to the same workspace multiple times and create multiple compute targets for different projects/teams/workloads.
 * For the same Kubernetes cluster, you can also attach it to multiple workspaces, and the multiple workspaces can share the same Kubernetes cluster.
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] - * Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. * An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
Determine what NLP task you want to accomplish. Currently, automated ML supports
Task |AutoML job syntax| Description -|-|
-Multi-class text classification | CLI v2: `text_classification` <br> SDK v2 (preview): `text_classification()`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
-Multi-label text classification | CLI v2: `text_classification_multilabel` <br> SDK v2 (preview): `text_classification_multilabel()`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample<br> <br> For example, classifying a movie script as "Comedy", or "Romantic", or "Comedy and Romantic".
-Named Entity Recognition (NER)| CLI v2:`text_ner` <br> SDK v2 (preview): `text_ner()`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents.
+Multi-class text classification | CLI v2: `text_classification` <br> SDK v2: `text_classification()`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
+Multi-label text classification | CLI v2: `text_classification_multilabel` <br> SDK v2: `text_classification_multilabel()`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample<br> <br> For example, classifying a movie script as "Comedy", or "Romantic", or "Comedy and Romantic".
+Named Entity Recognition (NER)| CLI v2:`text_ner` <br> SDK v2: `text_ner()`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents.
## Thresholding
For CLI v2 AutoML jobs you configure your experiment in a YAML file like the fol
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] - For AutoML jobs via the SDK, you configure the job with the specific NLP task function. The following example demonstrates the configuration for `text_classification`. ```Python
featurization:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] - You can specify your dataset language with the `set_featurization()` method. BERT is also used in the featurization process of automated ML experiment training, learn more about [BERT integration and featurization in automated ML](how-to-configure-auto-features.md#bert-integration-in-automated-ml). ```python
You can also run your NLP experiments with distributed training on an Azure ML c
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] - This is handled automatically by automated ML when the parameters `max_concurrent_iterations = number_of_vms` and `enable_distributed_dnn_training = True` are provided in your `AutoMLConfig` during experiment setup. Doing so, schedules distributed training of the NLP models and automatically scales to every GPU on your virtual machine or cluster of virtual machines. The max number of virtual machines allowed is 32. The training is scheduled with number of virtual machines that is in powers of two. ```python
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZUR
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] - With the `MLClient` created earlier, you can run this `CommandJob` in the workspace. ```python
See the following sample YAML files for each NLP task.
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] - See the sample notebooks for detailed code examples for each NLP task. * [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-text-classification-multiclass-task-sentiment.ipynb)
search_space:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] - You can set the limits for your model sweeping job: ```python
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Previously updated : 05/11/2022 Last updated : 10/13/2022 ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Learn how to deploy a custom container as an online endpoint in Azure Machine Learning. Custom container deployments can use web servers other than the default Python Flask server used by Azure Machine Learning. Users of these deployments can still take advantage of Azure Machine Learning's built-in monitoring, scaling, alerting, and authentication.
See also [the example notebook](https://github.com/Azure/azureml-examples/blob/m
Define environment variables:

## Download a TensorFlow model

Download and unzip a model that divides an input by two and adds 2 to the result:

## Run a TF Serving image locally to test that it works

Use docker to run your image locally for testing:

### Check that you can send liveness and scoring requests to the image

First, check that the container is "alive," meaning that the process inside the container is still running. You should get a 200 (OK) response. Then, check that you can get predictions about unlabeled data:

### Stop the image

Now that you've tested locally, stop the image:

## Deploy your online endpoint to Azure

Next, deploy your online endpoint to Azure.
You can configure your cloud deployment using YAML. Take a look at the sample YA
__tfserving-endpoint.yml__ __tfserving-deployment.yml__ # [Python SDK](#tab/python)
Once your deployment completes, see if you can make a scoring request to the dep
# [Azure CLI](#tab/cli) # [Python SDK](#tab/python)
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
You can use AzureML CLI command `k8s-extension create` to deploy AzureML extensi
| `inferenceRouterHA` |`True` or `False`, default `True`. By default, AzureML extension will deploy three inference router replicas for high availability, which requires at least three worker nodes in a cluster. Set to `False` if your cluster has fewer than three worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional | |`nodeSelector` | By default, the deployed kubernetes resources are randomly deployed to one or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional | |`installNvidiaDevicePlugin` | `True` or `False`, default `False`. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, AzureML extension deployment won't install NVIDIA Device Plugin regardless Kubernetes cluster has GPU hardware or not. User can specify this setting to `True`, to install it, but make sure to fulfill [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional |
- |`installPromOp`|`True` or `False`, default `True`. AzureML extension needs prometheus operator to manage prometheus. Set to `False` to reuse existing prometheus operator. Compatible [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md) helm chart versions are from 9.3.4 to 30.0.1.| Optional| Optional | Optional |
- |`installVolcano`| `True` or `False`, default `True`. AzureML extension needs volcano scheduler to schedule the job. Set to `False` to reuse existing volcano scheduler. Supported volcano scheduler versions are 1.4, 1.5. | Optional| N/A | Optional |
- |`installDcgmExporter` |`True` or `False`, default `False`. Dcgm-exporter can expose GPU metrics for AzureML workloads, which can be monitored in Azure portal. Set `installDcgmExporter` to `True` to install dcgm-exporter. But if you want to utilize your own dcgm-exporter, refer to [DCGM exporter](https://github.com/Azure/AML-Kubernetes/blob/master/docs/troubleshooting.md#dcgm) |Optional |Optional |Optional |
+ |`installPromOp`|`True` or `False`, default `True`. AzureML extension needs prometheus operator to manage prometheus. Set to `False` to reuse the existing prometheus operator. For more information about reusing the existing prometheus operator, refer to [reusing the prometheus operator](./how-to-troubleshoot-kubernetes-extension.md#prometheus-operator)| Optional| Optional | Optional |
+ |`installVolcano`| `True` or `False`, default `True`. AzureML extension needs volcano scheduler to schedule the job. Set to `False` to reuse the existing volcano scheduler. For more information about reusing the existing volcano scheduler, refer to [reusing volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler) | Optional| N/A | Optional |
+ |`installDcgmExporter` |`True` or `False`, default `False`. Dcgm-exporter can expose GPU metrics for AzureML workloads, which can be monitored in Azure portal. Set `installDcgmExporter` to `True` to install dcgm-exporter. But if you want to utilize your own dcgm-exporter, refer to [DCGM exporter](./how-to-troubleshoot-kubernetes-extension.md#dcgm-exporter) |Optional |Optional |Optional |
|Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference
machine-learning How To Kubernetes Inference Routing Azureml Fe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md
AzureML inference router is the front-end component (`azureml-fe`) which is depl
* Routes incoming inference requests from cluster load balancer or ingress controller to corresponding model pods.
* Load-balance all incoming inference requests with smart coordinated routing.
- * manages model pods auto-scaling.
+ * Manages model pods auto-scaling.
* Fault-tolerant and failover capability, ensuring inference requests are always served for critical business applications.

The following steps describe how requests are processed by the front-end:
scale_setting:
# other deployment properties continue ```
-Decisions to scale up/down is based off of utilization of the current container replicas. The number of replicas that are busy (processing a request) divided by the total number of current replicas is the current utilization. If this number exceeds `target_utilization_percentage`, then more replicas are created. If it's lower, then replicas are reduced. By default, the target utilization is 70%.
+The decision to scale up or down is based on the utilization of the current container replicas:
+
+```
+utilization_percentage = (The number of replicas that are busy processing a request + The number of requests queued in azureml-fe) / The total number of current replicas
+```
+If this number exceeds `target_utilization_percentage`, then more replicas are created. If it's lower, then replicas are reduced. By default, the target utilization is 70%.
Decisions to add replicas are eager and fast (around 1 second). Decisions to remove replicas are conservative (around 1 minute).
concurrentRequests = targetRps * reqTime / targetUtilization
replicas = ceil(concurrentRequests / maxReqPerContainer) ```
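As a rough illustration of these formulas with assumed numbers (1,000 RPS, 50 ms request time, 70% target utilization, and 10 concurrent requests per container), the following shell snippet computes the replica count; the inputs are examples, not recommendations:

```bash
targetRps=1000          # requests per second
reqTime=0.05            # average request time in seconds (50 ms)
targetUtilization=0.7   # 70%
maxReqPerContainer=10   # concurrent requests one replica can serve

awk -v rps="$targetRps" -v t="$reqTime" -v u="$targetUtilization" -v m="$maxReqPerContainer" 'BEGIN {
  c = rps * t / u                        # concurrentRequests
  r = int(c / m); if (r * m < c) r++     # replicas = ceil(concurrentRequests / maxReqPerContainer)
  printf "concurrentRequests = %.1f, replicas = %d\n", c, r
}'
# Prints: concurrentRequests = 71.4, replicas = 8
```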
-If you have RPS requirements higher than 10K, consider following options:
-
-* Increase resource requests/limits for `azureml-fe` pods, by default it has 2 vCPU and 2G memory request/limit.
-* Increase number of instances for `azureml-fe`, by default AzureML creates 3 `azureml-fe` instances per cluster.
-* Reach out to Microsoft experts for help.
+>[!Note]
+>
+>`azureml-fe` can reach 5K requests per second (QPS) with good latency, with no more than 3 ms overhead on average and 15 ms at the 99th percentile.
+>
+>If you have RPS requirements higher than 10K, consider following options:
+>
+>* Increase resource requests/limits for `azureml-fe` pods, by default it has 2 vCPU and 1.2G memory resource limit.
+>* Increase number of instances for `azureml-fe`, by default AzureML creates 3 `azureml-fe` instances per cluster.
+>* Reach out to Microsoft experts for help.
## Understand connectivity requirements for AKS inferencing cluster
The following diagram shows the connectivity requirements for AKS inferencing. B
For general AKS connectivity requirements, see [Control egress traffic for cluster nodes in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
-For accessing Azure ML services behind a firewall, see [How to access azureml behind firewall](./how-to-access-azureml-behind-firewall.md).
+For accessing Azure ML services behind a firewall, see [How to access azureml behind firewall](./how-to-access-azureml-behind-firewall.md#kubernetes-compute).
### Overall DNS resolution requirements
DNS resolution within an existing VNet is under your control. For example, a fir
| `<cluster>.hcp.<region>.azmk8s.io` | AKS API server | | `mcr.microsoft.com` | Microsoft Container Registry (MCR) | | `<ACR name>.azurecr.io` | Your Azure Container Registry (ACR) |
-| `<account>.table.core.windows.net` | Azure Storage Account (table storage) |
| `<account>.blob.core.windows.net` | Azure Storage Account (blob storage) | | `api.azureml.ms` | Azure Active Directory (Azure AD) authentication | | `ingest-vienna<region>.kusto.windows.net` | Kusto endpoint for uploading telemetry |
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Title: 'Migrate from v1 to v2'
-description: Migrate from v1 to v2 of Azure Machine Learning REST APIs, CLI extension, and Python SDK (preview).
+description: Migrate from v1 to v2 of Azure Machine Learning REST APIs, CLI extension, and Python SDK.
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
Azure Machine Learning has several inbound and outbound dependencies. Some of th
* __Storage Outbound__: This requirement comes from compute instance and compute cluster. A malicious agent can use this outbound rule to exfiltrate data by provisioning and saving data in their own storage account. You can remove data exfiltration risk by using an Azure Service Endpoint Policy and Azure Batch's simplified node communication architecture.
- * __AzureFrontDoor.frontend outbound__: Azure Front Door is required by the Azure Machine Learning studio UI and AutoML. To narrow down the list of possible outbound destinations to just those required by Azure ML, allowlist the following fully qualified domain names (FQDN) on your firewall.
+ * __AzureFrontDoor.frontend outbound__: Azure Front Door is required by the Azure Machine Learning studio UI and AutoML. To narrow down the list of possible outbound destinations to just the ones required by Azure ML, allowlist the following fully qualified domain names (FQDN) on your firewall.
- `ml.azure.com` - `automlresources-prod.azureedge.net`
__Allow__ outbound traffic over __TCP port 443__ to the following FQDNs. Replace
* __Resource__: The storage account. Select __Add__ to add the resource information.
- 1. Select __+ Add an alias__, and then select `/services/Azure/MachineLearning` as the __Server Alias__ value. Select __Add__ to add thee alias.
+ 1. Select __+ Add an alias__, and then select `/services/Azure/MachineLearning` as the __Server Alias__ value. Select __Add__ to add the alias.
+
+ > [!NOTE]
+ > The Azure CLI and Azure PowerShell do not provide support for adding an alias to the policy.
+ 1. Select __Review + Create__, and then select __Create__.
machine-learning How To Secure Kubernetes Inferencing Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-inferencing-environment.md
# Secure Azure Kubernetes Service inferencing environment
-If you have an Azure Kubernetes (AKS) cluster behind of VNet, you would need to secure Azure Machine Learning workspace resources and compute environment using the same VNet. In this article, you'll learn:
+If you have an Azure Kubernetes Service (AKS) cluster behind a VNet, you need to secure your Azure Machine Learning workspace resources and compute environment by using the same or a peered VNet. In this article, you'll learn:
* What is a secure AKS inferencing environment
* How to configure a secure AKS inferencing environment

## Limitations
-* If your AKS cluster is behind of a VNet, your workspace and its associated resources (storage, key vault, Azure Container Registry) must have private endpoints or service endpoints in the same VNet as AKS cluster's VNet. For more information on securing the workspace and associated resources, see [create a secure workspace](tutorial-create-secure-workspace.md).
+* If your AKS cluster is behind a VNet, your workspace and its associated resources (storage, key vault, Azure Container Registry) must have private endpoints or service endpoints in the same VNet as the AKS cluster's VNet, or in a peered VNet. For more information on securing the workspace and associated resources, see [create a secure workspace](tutorial-create-secure-workspace.md).
* If your workspace has a __private endpoint__, the Azure Kubernetes Service cluster must be in the same Azure region as the workspace. * Using a [public fully qualified domain name (FQDN) with a private AKS cluster](/azure/aks/private-clusters) is __not supported__ with Azure Machine learning.
For default AKS cluster, you can find VNet information under the resource group
After you have VNet information for AKS cluster and if you already have workspace available, use following steps to configure a secure AKS inferencing environment:
- * Use your AKS cluster VNet information to add new private endpoints for the Azure Storage Account, Azure Key Vault, and Azure Container Registry used by your workspace. These private endpoints should exist in the same VNet as AKS cluster. For more information, see the [secure workspace with private endpoint](./how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) article.
- * If you have other storage that is used by your workspace, add a new private endpoint for that storage. The private endpoint should exist in the AKS cluster VNet and have private DNS zone integration enabled.
- * Add a new private endpoint to your workspace. This private endpoint should exist in AKS cluster VNet and have private DNS zone integration enabled.
+ * Use your AKS cluster VNet information to add new private endpoints for the Azure Storage Account, Azure Key Vault, and Azure Container Registry used by your workspace. These private endpoints should exist in the same or a peered VNet as the AKS cluster. For more information, see the [secure workspace with private endpoint](./how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) article.
+ * If you have other storage that is used by your AzureML workloads, add a new private endpoint for that storage. The private endpoint should be in the same or a peered VNet as the AKS cluster and have private DNS zone integration enabled.
+ * Add a new private endpoint to your workspace. This private endpoint should be in the same or a peered VNet as your AKS cluster and have private DNS zone integration enabled.
If you have an AKS cluster ready but don't have a workspace created yet, you can use the AKS cluster's VNet when creating the workspace. Use the AKS cluster VNet information when following the [create secure workspace](./tutorial-create-secure-workspace.md) tutorial. Once the workspace has been created, add a new private endpoint to your workspace as the last step. For all the above steps, it's important to ensure that all private endpoints exist in the same AKS cluster VNet and have private DNS zone integration enabled.
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
dependencies:
Submit the local run and ensure you set the parameter `backend = "azureml" `. With this setting, you can submit runs locally and get the added support of automatic output tracking, log files, snapshots, and printed errors in your workspace.
-View your runs and metrics in the [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md).
+View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).
```python local_env_run = mlflow.projects.run(uri=".",
dependencies:
Submit the mlflow project run and ensure you set the parameter `backend = "azureml" `. With this setting, you can submit your run to your remote compute and get the added support of automatic output tracking, log files, snapshots, and printed errors in your workspace.
-View your runs and metrics in the [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md).
+View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).
```python remote_mlflow_run = mlflow.projects.run(uri=".",
machine-learning How To Troubleshoot Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-extension.md
+
+ Title: Troubleshoot AzureML extension
+description: Learn how to troubleshoot some common AzureML extension deployment or update errors.
++++++ Last updated : 10/10/2022++++
+# Troubleshoot AzureML extension
+
+In this article, learn how to troubleshoot common problems you may encounter with [AzureML extension](./how-to-deploy-kubernetes-extension.md) deployment in your AKS or Arc-enabled Kubernetes.
+
+## How is AzureML extension installed
+AzureML extension is released as a helm chart and installed by Helm V3. All components of AzureML extension are installed in `azureml` namespace. You can use the following commands to check the extension status.
+```bash
+# get the extension status
+az k8s-extension show --name <extension-name>
+
+# check status of all pods of AzureML extension
+kubectl get pod -n azureml
+
+# get events of the extension
+kubectl get events -n azureml --sort-by='.lastTimestamp'
+```
+
+## Troubleshoot AzureML extension deployment error
+
+### Error: cannot reuse a name that is still in use
+This means the extension name you specified already exists. If the name is used by Azureml extension, you need to wait for about an hour and try again. If the name is used by other helm charts, you need to use another name. Run ```helm list -Aa``` to list all helm charts in your cluster.
+
+### Error: earlier operation for the helm chart is still in progress
+You need to wait for about an hour and try again after the unknown operation is completed.
+
+### Error: unable to create new content in namespace azureml because it is being terminated
+This happens when an uninstallation operation isn't finished and another installation operation is triggered. You can run ```az k8s-extension show``` command to check the provisioning status of the extension and make sure the extension has been uninstalled before taking other actions.
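+
+For example, the following sketch checks the provisioning state for an Arc-enabled cluster (use `managedClusters` as the `--cluster-type` for AKS); all names are placeholders:
+
+```bash
+# Check the provisioning state; wait until any deletion has completed before reinstalling
+az k8s-extension show --name <extension-name> \
+  --cluster-type connectedClusters \
+  --cluster-name <cluster-name> \
+  --resource-group <resource-group> \
+  --query provisioningState --output tsv
+```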
+
+### Error: failed in download the Chart path not found
+This happens when you specify a wrong extension version. You need to make sure the specified version exists. If you want to use the latest version, you don't need to specify ```--version``` .
+
+### Error: cannot be imported into the current release: invalid ownership metadata
+This error means there is a conflict between existing cluster resources and AzureML extension. A full error message could be like this:
+```
+CustomResourceDefinition "jobs.batch.volcano.sh" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "amlarc-extension"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "azureml"
+```
+
+Follow the steps below to mitigate the issue.
+
+* Check who owns the problematic resources and if the resource can be deleted or modified.
+* If the resource is used only by AzureML extension and can be deleted, you can manually add labels to mitigate the issue. Taking the previous error message as an example, you can run commands as follows,
+
+ ```bash
+ kubectl label crd jobs.batch.volcano.sh "app.kubernetes.io/managed-by=Helm"
+ kubectl annotate crd jobs.batch.volcano.sh "meta.helm.sh/release-namespace=azureml" "meta.helm.sh/release-name=<extension-name>"
+ ```
+ By setting the labels and annotations to the resource, it means the resource is managed by helm and owned by AzureML extension.
+* If the resource is also used by other components in your cluster and can't be modified, refer to [deploy AzureML extension](./how-to-deploy-kubernetes-extension.md#review-azureml-extension-configuration-settings) to see if there is a configuration setting to disable the conflicting resource.
+
+## HealthCheck of extension
+When the installation fails and doesn't match any of the above error messages, you can use the built-in health check job to make a comprehensive check on the extension. The AzureML extension contains a `HealthCheck` job to pre-check your cluster readiness when you try to install, update, or delete the extension. The HealthCheck job outputs a report, which is saved in a configmap named `arcml-healthcheck` in the `azureml` namespace. The error codes and possible solutions for the report are listed in [Error Code of HealthCheck](#error-code-of-healthcheck).
+
+Run this command to get the HealthCheck report,
+```bash
+kubectl describe configmap -n azureml arcml-healthcheck
+```
+The health check is triggered whenever you install, update or delete the extension. The health check report is structured with several parts `pre-install`, `pre-rollback`, `pre-upgrade` and `pre-delete`.
+
+- If the extension installation failed, look into the `pre-install` and `pre-delete` results.
+- If the extension update failed, look into the `pre-upgrade` and `pre-rollback` results.
+- If the extension deletion failed, look into the `pre-delete` results.
+
+When you request support, we recommend that you run the following command and send the ```healthcheck.logs``` file to us, as it can help us locate the problem.
+```bash
+kubectl logs healthcheck -n azureml
+```
+
+### Error Code of HealthCheck
+This table shows how to troubleshoot the error codes returned by the HealthCheck report.
+
+|Error Code |Error Message | Description |
+|--|--|--|
+|E40001 | LOAD_BALANCER_NOT_SUPPORT | Load balancer isn't supported in your cluster. You need to configure the load balancer in your cluster or consider setting `inferenceRouterServiceType` to `nodePort` or `clusterIP`. |
+|E40002 | INSUFFICIENT_NODE | You have enabled `inferenceRouterHA` that requires at least three nodes in your cluster. Disable the HA if you have fewer than three nodes. |
+|E40003 | INTERNAL_LOAD_BALANCER_NOT_SUPPORT | Currently, internal load balancer is only supported by AKS. Don't set `internalLoadBalancerProvider` if you don't have an AKS cluster.|
+|E40007 | INVALID_SSL_SETTING | The SSL key or certificate isn't valid. The CNAME should be compatible with the certificate. |
+|E45002 | PROMETHEUS_CONFLICT | The Prometheus Operator being installed conflicts with your existing Prometheus Operator. For more information, refer to [Prometheus operator](#prometheus-operator) |
+|E45003 | BAD_NETWORK_CONNECTIVITY | You need to meet [network-requirements](./how-to-access-azureml-behind-firewall.md#kubernetes-compute).|
+|E45004 | AZUREML_FE_ROLE_CONFLICT |AzureML extension isn't supported in the [legacy AKS](./how-to-attach-kubernetes-anywhere.md#kubernetescompute-and-legacy-akscompute). To install AzureML extension, you need to [delete the legacy azureml-fe components](v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources).|
+|E45005 | AZUREML_FE_DEPLOYMENT_CONFLICT | AzureML extension isn't supported in the [legacy AKS](./how-to-attach-kubernetes-anywhere.md#kubernetescompute-and-legacy-akscompute). To install AzureML extension, you need to [delete the legacy azureml-fe components](v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources).|
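+
+Many of these checks map directly to extension configuration settings. For example, a hypothetical redeployment that addresses E40001 and E40002 by switching the inference router to `nodePort` and disabling HA might look like the following; the extension type, cluster type, and names here are assumptions, so see [deploy AzureML extension](./how-to-deploy-kubernetes-extension.md) for the authoritative command reference:
+
+```bash
+az k8s-extension create --name <extension-name> \
+  --extension-type Microsoft.AzureMLKubernetes \
+  --config enableInference=True inferenceRouterServiceType=nodePort inferenceRouterHA=False allowInsecureConnections=True \
+  --cluster-type connectedClusters \
+  --cluster-name <cluster-name> \
+  --resource-group <resource-group> \
+  --scope cluster
+```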
+
+## Open Source components integration
+
+AzureML extension uses some open source components, including Prometheus Operator, Volcano Scheduler, and DCGM exporter. If the Kubernetes cluster already has some of them installed, you can read the following sections to integrate your existing components with the AzureML extension.
+
+### Prometheus operator
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is an open source framework to help build metric monitoring systems in Kubernetes. The AzureML extension also utilizes the Prometheus operator to help monitor resource utilization of jobs.
+
+If the Prometheus operator has already been installed in the cluster by another service, you can specify ```installPromOp=false``` to disable the Prometheus operator in the AzureML extension and avoid a conflict between two Prometheus operators.
+In this case, all Prometheus instances are managed by the existing Prometheus operator. To make sure Prometheus works properly, pay attention to the following points when you disable the Prometheus operator in the AzureML extension.
+1. Check if Prometheus in the `azureml` namespace is managed by the Prometheus operator. In some scenarios, the Prometheus operator is set to only monitor specific namespaces. If so, make sure the `azureml` namespace is in the allowlist. For more information, see [command flags](https://github.com/prometheus-operator/prometheus-operator/blob/b475b655a82987eca96e142fe03a1e9c4e51f5f2/cmd/operator/main.go#L165).
+2. Check if kubelet-service is enabled in the Prometheus operator. Kubelet-service contains all the endpoints of kubelet. For more information, see [command flags](https://github.com/prometheus-operator/prometheus-operator/blob/b475b655a82987eca96e142fe03a1e9c4e51f5f2/cmd/operator/main.go#L149). Also make sure that kubelet-service has the label `k8s-app=kubelet`.
+3. Create ServiceMonitor for kubelet-service. Run the following command with variables replaced:
+ ```bash
+ cat << EOF | kubectl apply -f -
+ apiVersion: monitoring.coreos.com/v1
+ kind: ServiceMonitor
+ metadata:
+ name: prom-kubelet
+ namespace: azureml
+ labels:
+ release: "<extension-name>" # Please replace to your Azureml extension name
+ spec:
+ endpoints:
+ - port: https-metrics
+ scheme: https
+ path: /metrics/cadvisor
+ honorLabels: true
+ tlsConfig:
+ caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ insecureSkipVerify: true
+ bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+ relabelings:
+ - sourceLabels:
+ - __metrics_path__
+ targetLabel: metrics_path
+ jobLabel: k8s-app
+ namespaceSelector:
+ matchNames:
+ - "<namespace-of-your-kubelet-service>" # Please change this to the same namespace of your kubelet-service
+ selector:
+ matchLabels:
+ k8s-app: kubelet # Please make sure your kubelet-service has a label named k8s-app and it's value is kubelet
+
+ EOF
+ ```
+
+### DCGM exporter
+[Dcgm-exporter](https://github.com/NVIDIA/dcgm-exporter) is the official tool recommended by NVIDIA for collecting GPU metrics. We have integrated it into the AzureML extension. But, by default, dcgm-exporter isn't enabled, and no GPU metrics are collected. You can set the ```installDcgmExporter``` flag to ```true``` to enable it. As it's NVIDIA's official tool, you may already have it installed in your GPU cluster. If so, you can set ```installDcgmExporter``` to ```false``` and follow the steps below to integrate your dcgm-exporter into the AzureML extension. Another thing to note is that dcgm-exporter allows users to configure which metrics to expose. For the AzureML extension, make sure ```DCGM_FI_DEV_GPU_UTIL```, ```DCGM_FI_DEV_FB_FREE``` and ```DCGM_FI_DEV_FB_USED``` metrics are exposed.
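+
+A quick, hypothetical way to confirm that those three metrics are exposed is to port-forward to your dcgm-exporter service and filter its scrape output; the namespace, service name, and port below are placeholders:
+
+```bash
+kubectl -n <namespace-of-your-dcgm-exporter> port-forward service/<dcgm-exporter-service-name> 9400:9400 &
+curl -s http://127.0.0.1:9400/metrics | grep -E 'DCGM_FI_DEV_(GPU_UTIL|FB_FREE|FB_USED)'
+```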
+
+1. Make sure you have the AzureML extension and dcgm-exporter installed successfully. Dcgm-exporter can be installed by [Dcgm-exporter helm chart](https://github.com/NVIDIA/dcgm-exporter) or [Gpu-operator helm chart](https://github.com/NVIDIA/gpu-operator).
+
+1. Check if there is a service for dcgm-exporter. If it doesn't exist or you don't know how to check, run the command below to create one.
+ ```bash
+ cat << EOF | kubectl apply -f -
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: dcgm-exporter-service
+ namespace: "<namespace-of-your-dcgm-exporter>" # Please change this to the same namespace of your dcgm-exporter
+ labels:
+ app.kubernetes.io/name: dcgm-exporter
+ app.kubernetes.io/instance: "<extension-name>" # Please replace to your Azureml extension name
+ app.kubernetes.io/component: "dcgm-exporter"
+ annotations:
+ prometheus.io/scrape: 'true'
+ spec:
+ type: "ClusterIP"
+ ports:
+ - name: "metrics"
+ port: 9400 # Please replace to the correct port of your dcgm-exporter. It's 9400 by default
+ targetPort: 9400 # Please replace to the correct port of your dcgm-exporter. It's 9400 by default
+ protocol: TCP
+ selector:
+       app.kubernetes.io/name: dcgm-exporter # These two labels select the dcgm-exporter pods. Change them to match the actual labels on your dcgm-exporter pods
+       app.kubernetes.io/instance: "<dcgm-exporter-helm-chart-name>" # Replace with the helm chart name of your dcgm-exporter
+ EOF
+ ```
+1. Check if the service created in the previous step is set up correctly
+ ```bash
+ kubectl -n <namespace-of-your-dcgm-exporter> port-forward service/dcgm-exporter-service 9400:9400
+   # Run this command in a separate terminal. It returns a large set of DCGM metrics.
+ curl http://127.0.0.1:9400/metrics
+ ```
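+   Because dcgm-exporter can be configured to expose only a subset of metrics, you can also confirm that the three metrics the AzureML extension needs are present. A quick check, reusing the port-forward from the previous step:
+   ```bash
+   # Filter the exporter output for the metrics required by the AzureML extension
+   curl -s http://127.0.0.1:9400/metrics | grep -E 'DCGM_FI_DEV_GPU_UTIL|DCGM_FI_DEV_FB_FREE|DCGM_FI_DEV_FB_USED'
+   ```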
+1. Set up a ServiceMonitor to expose the dcgm-exporter service to the AzureML extension. Run the following command; it takes effect in a few minutes.
+ ```bash
+ cat << EOF | kubectl apply -f -
+ apiVersion: monitoring.coreos.com/v1
+ kind: ServiceMonitor
+ metadata:
+ name: dcgm-exporter-monitor
+ namespace: azureml
+ labels:
+ app.kubernetes.io/name: dcgm-exporter
+       release: "<extension-name>" # Replace with your AzureML extension name
+ app.kubernetes.io/component: "dcgm-exporter"
+ spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: dcgm-exporter
+         app.kubernetes.io/instance: "<extension-name>" # Replace with your AzureML extension name
+ app.kubernetes.io/component: "dcgm-exporter"
+ namespaceSelector:
+ matchNames:
+       - "<namespace-of-your-dcgm-exporter>" # Replace with the namespace of your dcgm-exporter
+ endpoints:
+ - port: "metrics"
+ path: "/metrics"
+ EOF
+ ```
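+1. Optionally, confirm that the ServiceMonitor exists and that Prometheus in the azureml namespace picked up the dcgm-exporter target. This is only a sketch: the name of the Prometheus service deployed by the extension varies, so look it up first, and 9090 is assumed to be the default Prometheus port.
+   ```bash
+   kubectl -n azureml get servicemonitor dcgm-exporter-monitor
+
+   # Find the Prometheus service in the azureml namespace, then port-forward it
+   kubectl -n azureml get svc | grep -i prometheus
+   kubectl -n azureml port-forward svc/<prometheus-service-name> 9090:9090
+
+   # In a separate terminal, list the active scrape targets and look for the dcgm-exporter endpoint
+   curl http://127.0.0.1:9090/api/v1/targets
+   ```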
+
+### Volcano Scheduler
+If your cluster already has the volcano suite installed, you can set `installVolcano=false`, so the extension won't install the volcano scheduler. Volcano scheduler and volcano controller are required for training job submission and scheduling.
+
+The volcano scheduler configuration used by the AzureML extension is:
+
+```yaml
+volcano-scheduler.conf: |
+ actions: "enqueue, allocate, backfill"
+ tiers:
+ - plugins:
+ - name: task-topology
+ - name: priority
+ - name: gang
+ - name: conformance
+ - plugins:
+ - name: overcommit
+ - name: drf
+ - name: predicates
+ - name: proportion
+ - name: nodeorder
+ - name: binpack
+```
+You need to use the same configuration settings as above and disable the `job/validate` webhook in the volcano admission, so that AzureML training workloads run properly.
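+
+To disable the `job/validate` webhook, you can remove the corresponding validating webhook configuration created by the volcano admission component. This is only a sketch: list the configurations first and confirm the actual name in your cluster (the name used below is an example) before deleting anything.
+
+```bash
+# List validating webhook configurations and find the volcano job validation entry
+kubectl get validatingwebhookconfigurations
+
+# Delete the job/validate webhook configuration (the exact name varies by installation)
+kubectl delete validatingwebhookconfiguration volcano-admission-service-jobs-validate
+```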
++
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
For example, automated ML generates the following charts based on experiment typ
## View job results After your automated ML experiment completes, a history of the jobs can be found via:
- - A browser with [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md)
+ - A browser with [Azure Machine Learning studio](https://ml.azure.com)
- A Jupyter notebook using the [JobDetails Jupyter widget](/python/api/azureml-widgets/azureml.widgets.rundetails) The following steps and video show you how to view the run history and model evaluation metrics and charts in the studio:
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
# Set up no-code AutoML training with the studio UI
-In this article, you learn how to set up AutoML training jobs without a single line of code using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md).
+In this article, you learn how to set up AutoML training jobs without a single line of code using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio).
Automated machine learning, AutoML, is a process in which the best machine learning algorithm to use for your specific data is selected for you. This process enables you to generate machine learning models quickly. [Learn more about how Azure Machine Learning implements automated machine learning](concept-automated-ml.md).
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
To register and view a model from a job, use the following steps:
mlflow.register_model(model_uri,"registered_model_name") ```
-1. View the registered model in your workspace with [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md).
+1. View the registered model in your workspace with [Azure Machine Learning studio](https://ml.azure.com).
In the following example the registered model, `my-model` has MLflow tracking metadata tagged.
machine-learning Overview What Is Machine Learning Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-machine-learning-studio.md
- Title: What is Azure Machine Learning studio?
-description: The studio is a web portal for Azure Machine Learning workspaces. The studio combines no-code and code-first experiences for an inclusive data science platform.
------- Previously updated : 10/21/2021
-adobe-target: true
-
-
-# What is Azure Machine Learning studio?
-
-In this article, you learn about Azure Machine Learning studio, the web portal for data scientist developers in [Azure Machine Learning](overview-what-is-azure-machine-learning.md). The studio combines no-code and code-first experiences for an inclusive data science platform.
-
-In this article you learn:
->[!div class="checklist"]
-> - How to [author machine learning projects](#author-machine-learning-projects) in the studio.
-> - How to [manage assets and resources](#manage-assets-and-resources) in the studio.
-> - The differences between [Azure Machine Learning studio and ML Studio (classic)](#ml-studio-classic-vs-azure-machine-learning-studio).
-
-We recommend that you use the most up-to-date browser that's compatible with your operating system. The following browsers are supported:
- * Microsoft Edge (latest version)
- * Safari (latest version, Mac only)
- * Chrome (latest version)
- * Firefox (latest version)
-
-## Author machine learning projects
-
-The studio offers multiple authoring experiences depending on the type of project and the level of user experience.
-
-+ **Notebooks**
-
- Write and run your own code in managed [Jupyter Notebook servers](how-to-run-jupyter-notebooks.md) that are directly integrated in the studio.
--
-+ **Azure Machine Learning designer**
-
- Use the designer to train and deploy machine learning models without writing any code. Drag and drop datasets and components to create ML pipelines. Try out the [designer tutorial](tutorial-designer-automobile-price-train-score.md).
-
- :::image type="content" source="media/concept-designer/designer-drag-and-drop.gif" alt-text="Azure Machine Learning designer example.":::
-
-+ **Automated machine learning UI**
-
- Learn how to create [automated ML experiments](tutorial-first-experiment-automated-ml.md) with an easy-to-use interface.
-
- :::image type="content" source="./media/overview-what-is-azure-ml-studio/azure-machine-learning-automated-ml-ui.jpg" alt-text="AutoML in the Azure Machine Learning studio navigation pane." lightbox = "./media/overview-what-is-azure-ml-studio/azure-machine-learning-automated-ml-ui.jpg":::
-
-+ **Data labeling**
-
- Use Azure Machine Learning data labeling to efficiently coordinate [image labeling](how-to-create-image-labeling-projects.md) or [text labeling](how-to-create-text-labeling-projects.md) projects.
-
-## Manage assets and resources
-
-Manage your machine learning assets directly in your browser. Assets are shared in the same workspace between the SDK and the studio for a seamless experience. Use the studio to manage:
-
-- Models
-- Datasets
-- Datastores
-- Compute resources
-- Notebooks
-- Experiments
-- Run logs
-- Pipelines
-- Pipeline endpoints
-
-Even if you're an experienced developer, the studio can simplify how you manage workspace resources.
-
-## ML Studio (classic) vs Azure Machine Learning studio
--
-Released in 2015, **ML Studio (classic)** was the first drag-and-drop machine learning model builder in Azure. **ML Studio (classic)** is a standalone service that only offers a visual experience. Studio (classic) does not interoperate with Azure Machine Learning.
-
-**Azure Machine Learning** is a separate, and modernized, service that delivers a complete data science platform. It supports both code-first and low-code experiences.
-
-**Azure Machine Learning studio** is a web portal *in* Azure Machine Learning that contains low-code and no-code options for project authoring and asset management.
-
-If you're a new user, choose **Azure Machine Learning**, instead of ML Studio (classic). As a complete ML platform, Azure Machine Learning offers:
-- Scalable compute clusters for large-scale training.
-- Enterprise security and governance.
-- Interoperable with popular open-source tools.
-- End-to-end MLOps.
-
-### Feature comparison
--
-## Troubleshooting
-
-* **Missing user interface items in studio** Azure role-based access control can be used to restrict actions that you can perform with Azure Machine Learning. These restrictions can prevent user interface items from appearing in the Azure Machine Learning studio. For example, if you are assigned a role that cannot create a compute instance, the option to create a compute instance will not appear in the studio. For more information, see [Manage users and roles](how-to-assign-roles.md).
-
-## Next steps
-
-Visit the [studio](https://ml.azure.com), or explore the different authoring options with these tutorials:
-
-Start with [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md). Then use these resources to create your first experiment with your preferred method:
-
- + [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md)
- + [Use automated machine learning to train & deploy models](tutorial-first-experiment-automated-ml.md)
- + [Use the designer to train & deploy models](tutorial-designer-automobile-price-train-score.md)
- + [Use studio in a secured virtual network](how-to-enable-studio-virtual-network.md)
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
In this tutorial, you accomplish the following tasks:
> [!TIP] > If you're looking for a template (Microsoft Bicep or Hashicorp Terraform) that demonstrates how to create a secure workspace, see [Tutorial - Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
+After completing this tutorial, you will have the following architecture:
+
+* An Azure Virtual Network, which contains three subnets:
+ * __Training__: Contains the Azure Machine Learning workspace, dependency services, and resources used for training models.
+ * __Scoring__: Contains resources used to deploy models as endpoints.
+ * __AzureBastionSubnet__: Used by the Azure Bastion service to securely connect clients to Azure Virtual Machines.
+ ## Prerequisites * Familiarity with Azure Virtual Networks and IP networking. If you are not familiar, try the [Fundamentals of computer networking](/training/modules/network-fundamentals/) module. * While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2.
-## Limitations
-
-The steps in this article put Azure Container Registry behind the VNet. In this configuration, you can't deploy models to Azure Container Instances inside the VNet. For more information, see [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md).
-
-> [!TIP]
-> As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints (preview)](how-to-secure-online-endpoint.md).
- ## Create a virtual network To create a virtual network, use the following steps:
There are several ways that you can connect to the secured workspace. The steps
### Create a jump box (VM)
-Use the following steps to create a Data Science Virtual Machine for use as a jump box:
+Use the following steps to create an Azure Virtual Machine to use as a jump box. Azure Bastion enables you to connect to the VM desktop through your browser. From the VM desktop, you can then use the browser on the VM to connect to resources inside the VNet, such as Azure Machine Learning studio. Or you can install development tools on the VM.
+
+> [!TIP]
+> The steps below create a Windows 11 Enterprise VM. Depending on your requirements, you may want to select a different VM image. The Windows 11 (or 10) Enterprise image is useful if you need to join the VM to your organization's domain.
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Machine__. Select the __Virtual Machine__ entry, and then select __Create__.
-1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Data science virtual machine__. Select the __Data science virtual machine - Windows__ entry, and then select __Create__.
1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide values for the following fields: * __Virtual machine name__: A unique name for the VM. * __Username__: The username you will use to log in to the VM. * __Password__: The password for the username. * __Security type__: Standard.
- * __Image__: Data Science Virtual Machine - Windows Server 2019 - Gen1.
+ * __Image__: Windows 11 Enterprise.
+
+ > [!TIP]
+   > If Windows 11 Enterprise isn't in the list for image selection, use __See all images__. Find the __Windows 11__ entry from Microsoft, and use the __Select__ drop-down to select the Enterprise image.
- > [!IMPORTANT]
- > Do not select a Gen2 image.
You can leave other fields at the default values.
To delete all resources created in this tutorial, use the following steps:
1. Enter the resource group name, then select __Delete__. ## Next steps
-Now that you have created a secure workspace and can access studio, learn how to [run a Python script](tutorial-azure-ml-in-a-day.md) using Azure Machine Learning.
+Now that you have created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
Azure Machine Learning provides the following monitoring and logging capabilitie
### Studio
-[Azure Machine Learning studio](../overview-what-is-machine-learning-studio.md) provides a web view of all the artifacts in your workspace. You can view results and details of your datasets, experiments, pipelines, models, and endpoints. You can also manage compute resources and datastores in the studio.
+[Azure Machine Learning studio](../overview-what-is-azure-machine-learning.md#studio) provides a web view of all the artifacts in your workspace. You can view results and details of your datasets, experiments, pipelines, models, and endpoints. You can also manage compute resources and datastores in the studio.
The studio is also where you access the interactive tools that are part of Azure Machine Learning:
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-connect-data-ui.md
# Connect to data with the Azure Machine Learning studio
-In this article, learn how to access your data with the [Azure Machine Learning studio](../overview-what-is-machine-learning-studio.md). Connect to your data in storage services on Azure with [Azure Machine Learning datastores](how-to-access-data.md), and then package that data for tasks in your ML workflows with [Azure Machine Learning datasets](how-to-create-register-datasets.md).
+In this article, learn how to access your data with the [Azure Machine Learning studio](https://ml.azure.com). Connect to your data in storage services on Azure with [Azure Machine Learning datastores](how-to-access-data.md), and then package that data for tasks in your ML workflows with [Azure Machine Learning datasets](how-to-create-register-datasets.md).
The following table defines and summarizes the benefits of datastores and datasets.
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-kubernetes.md
Last updated 04/21/2022
Azure Machine Learning can deploy trained machine learning models to Azure Kubernetes Service. However, you must first either __create__ an Azure Kubernetes Service (AKS) cluster from your Azure ML workspace, or __attach__ an existing AKS cluster. This article provides information on both creating and attaching a cluster. ## Prerequisites- - An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md). - The [Azure CLI extension (v1) for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](../how-to-setup-vs-code.md).
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
## Limitations
+- An AKS cluster can only be created or attached as a single compute target in an AzureML workspace. Multiple attachments for one AKS cluster are not supported.
- If you need a **Standard Load Balancer(SLB)** deployed in your cluster instead of a Basic Load Balancer(BLB), create a cluster in the AKS portal/CLI/SDK and then **attach** it to the AzureML workspace. - If you have an Azure Policy that restricts the creation of Public IP addresses, then AKS cluster creation will fail. AKS requires a Public IP for [egress traffic](../../aks/limit-egress-traffic.md). The egress traffic article also provides guidance to lock down egress traffic from the cluster through the Public IP, except for a few fully qualified domain names. There are 2 ways to enable a Public IP:
If you already have AKS cluster in your Azure subscription, you can use it with
> [!WARNING]
-> Do not create multiple, simultaneous attachments to the same AKS cluster from your workspace. For example, attaching one AKS cluster to a workspace using two different names. Each new attachment will break the previous existing attachment(s).
+> **Do not** create multiple, simultaneous attachments to the same AKS cluster. For example, attaching one AKS cluster to a workspace using two different names, or attaching one AKS cluster to a different workspace. Each new attachment will break the previous existing attachment(s) and cause unpredictable errors.
> > If you want to re-attach an AKS cluster, for example to change TLS or other cluster configuration setting, you must first remove the existing attachment by using [AksCompute.detach()](/python/api/azureml-core/azureml.core.compute.akscompute#detach--).
To create an AKS cluster that uses an Internal Load Balancer, use the `load_bala
from azureml.core.compute.aks import AksUpdateConfiguration from azureml.core.compute import AksCompute, ComputeTarget
-# Change to the name of the subnet that contains AKS
-subnet_name = "default"
# When you create an AKS cluster, you can specify Internal Load Balancer to be created with provisioning_config object
-provisioning_config = AksCompute.provisioning_configuration(load_balancer_type = 'InternalLoadBalancer', load_balancer_subnet = subnet_name)
+provisioning_config = AksCompute.provisioning_configuration(load_balancer_type = 'InternalLoadBalancer')
# Create the cluster aks_target = ComputeTarget.create(workspace = ws,
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-web-service.md
TLS/SSL certificates expire and must be renewed. Typically this happens every ye
If the certificate was originally generated by Microsoft (when using the *leaf_domain_label* to create the service), **it will automatically renew** when needed. If you want to manually renew it, use one of the following examples to update the certificate: > [!IMPORTANT]
-> * If the existing certificate is still valid, use `renew=True` (SDK) or `--ssl-renew` (CLI) to force the configuration to renew it. For example, if the existing certificate is still valid for 10 days and you don't use `renew=True`, the certificate may not be renewed.
+> * If the existing certificate is still valid, use `renew=True` (SDK) or `--ssl-renew` (CLI) to force the configuration to renew it. This operation takes about 5 hours to take effect.
> * When the service was originally deployed, the `leaf_domain_label` is used to create a DNS name using the pattern `<leaf-domain-label>######.<azure-region>.cloudapp.azure.com`. To preserve the existing name (including the 6 digits originally generated), use the original `leaf_domain_label` value. Do not include the 6 digits that were generated. **Use the SDK**
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-mlflow.md
To register and view a model from a run, use the following steps:
mlflow.register_model(model_uri,"registered_model_name") ```
-1. View the registered model in your workspace with [Azure Machine Learning studio](../overview-what-is-machine-learning-studio.md).
+1. View the registered model in your workspace with [Azure Machine Learning studio](https://ml.azure.com).
In the following example the registered model, `my-model` has MLflow tracking metadata tagged.
network-watcher Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor.md
Complete the steps in [Create the first VM](#create-the-first-vm) again, with th
|Step|Setting|Value| ||||
-| 1 | Select a version of **Ubuntu Server** | |
+| 1 | Select a version of the **Ubuntu Server** | |
| 3 | Name | myVm2 |
-| 3 | Authentication type | Paste your SSH public key or select **Password**, and enter a password. |
+| 3 | Authentication type | Paste your SSH public key or select **Password** and enter a password. |
| 3 | Resource group | Select **Use existing** and select **myResourceGroup**. | | 6 | Extensions | **Network Watcher Agent for Linux** |
Create a connection monitor to monitor communication over TCP port 22 from *myVm
Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. A generated alert can automatically run one or more actions, such as to notify someone or start another process. When setting an alert rule, the resource that you target determines the list of available metrics that you can use to generate alerts.
-1. In Azure portal, select the **Monitor** service, and then select **Alerts** > **New alert rule**.
-2. Click **Select target**, and then select the resources that you want to target. Select the **Subscription**, and set **Resource type** to filter down to the Connection Monitor that you want to use.
+1. In the Azure portal, select the **Monitor** service, and then select **Alerts** > **New alert rule**.
+2. Click **Select target**, and then select the resources that you want to target. Select the **Subscription**, and set the **Resource type** to filter down to the Connection Monitor that you want to use.
![alert screen with target selected](./media/connection-monitor/set-alert-rule.png)
-1. Once you have selected a resource to target, select **Add criteria**.The Network Watcher has [metrics on which you can create alerts](../azure-monitor/alerts/alerts-metric-near-real-time.md#metrics-and-dimensions-supported). Set **Available signals** to the metrics ProbesFailedPercent and AverageRoundtripMs:
+1. Once you have selected a resource to target, select **Add criteria**. The Network Watcher has [metrics on which you can create alerts](../azure-monitor/alerts/alerts-metric-near-real-time.md#metrics-and-dimensions-supported). Set **Available signals** to the metrics ProbesFailedPercent and AverageRoundtripMs:
![alert page with signals selected](./media/connection-monitor/set-alert-signals.png)
-1. Fill out the alert details like alert rule name, description and severity. You can also add an action group to the alert to automate and customize the alert response.
+1. Fill out the alert details like alert rule name, description, and severity. You can also add an action group to the alert to automate and customize the alert response.
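+
+If you prefer to script the alert rule, a metric alert can also be created with the Azure CLI. The following is an illustrative sketch only: the alert name, resource group, connection monitor resource ID, and threshold are placeholders you need to replace with your own values.
+
+```bash
+az monitor metrics alert create \
+  --name connection-monitor-rtt-alert \
+  --resource-group myResourceGroup \
+  --scopes <connection-monitor-resource-id> \
+  --condition "avg AverageRoundtripMs > 100" \
+  --description "Alert when the average round-trip time exceeds 100 ms"
+```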
## View a problem
By default, Azure allows communication over all ports between VMs in the same vi
| Priority | 100 | | Name | DenySshInbound |
-5. Since connection monitor probes at 60-second intervals, wait a few minutes and then on the left side of the portal, select **Network Watcher**, then **Connection monitor**, and then select the **myVm1-myVm2(22)** monitor again. The results are different now, as shown in the following picture:
+5. Since connection monitor probes at 60-second intervals, wait a few minutes, and then on the left side of the portal, select **Network Watcher**, then **Connection monitor**, and then select the **myVm1-myVm2(22)** monitor again. The results are different now, as shown in the following picture:
![Monitor details fault](./media/connection-monitor/vm-monitor-fault.png)
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
Log in to the Azure portal at https://portal.azure.com.
## Create a VM
-1. Select **+ Create a resource** found on the upper, left corner of the Azure portal.
-2. Select **Compute**, and then select **Windows Server 2016 Datacenter** or **Ubuntu Server 17.10 VM**.
+1. Select **+ Create a resource** found on the upper-left corner of the Azure portal.
+2. Select **Compute** and then select **Windows Server 2016 Datacenter** or **Ubuntu Server 17.10 VM**.
3. Enter, or select, the following information, accept the defaults for the remaining settings, and then select **OK**: |Setting|Value|
If you already have a network watcher enabled in at least one region, skip to [U
Azure automatically creates routes to default destinations. You may create custom routes that override the default routes. Sometimes, custom routes can cause communication to fail. Use the next hop capability of Network Watcher to determine which route Azure is using to route traffic.
-1. In the Azure portal, select **Next hop**, under **Network Watcher**.
+1. In the Azure portal, select **Next hop** under **Network Watcher**.
2. Select your subscription, enter or select the following values, and then select **Next hop**, as shown in the picture that follows: |Setting |Value |
Azure automatically creates routes to default destinations. You may create custo
![Next hop](./media/diagnose-vm-network-routing-problem/next-hop.png)
- After a few seconds, the result informs you that the next hop type is **Internet**, and that the **Route table ID** is **System Route**. This result lets you know that there is a valid system route to the destination.
+ After a few seconds, the result informs you that the next hop type is **Internet** and that the **Route table ID** is **System Route**. This result lets you know that there is a valid system route to the destination.
-3. Change the **Destination IP address** to *172.31.0.100* and select **Next hop** again. The result returned informs you that **None** is the **Next hop type**, and that the **Route table ID** is also **System Route**. This result lets you know that, while there is a valid system route to the destination, there is no next hop to route the traffic to the destination.
+3. Change the **Destination IP address** to *172.31.0.100* and select **Next hop** again. The result returned informs you that **None** is the **Next hop type** and that the **Route table ID** is also **System Route**. This result lets you know that, while there is a valid system route to the destination, there is no next hop to route the traffic to the destination.
## View details of a route
Azure automatically creates routes to default destinations. You may create custo
![Effective routes](./media/diagnose-vm-network-routing-problem/effective-routes.png)
- When you ran the test using 13.107.21.200 in [Use next hop](#use-next-hop), the route with the address prefix 0.0.0.0/0 was used to route traffic to the address, since no other route includes the address. By default, all addresses not specified within the address prefix of another route are routed to the internet.
+ When you ran the test using 13.107.21.200 in [Use next hop](#use-next-hop), the route with the address prefix 0.0.0.0/0 was used to route traffic to the address since no other route includes the address. By default, all addresses not specified within the address prefix of another route are routed to the internet.
- When you ran the test using 172.31.0.100 however, the result informed you that there was no next hop type. As you can see in the previous picture, though there is a default route to the 172.16.0.0/12 prefix, which includes the 172.31.0.100 address, the **NEXT HOP TYPE** is **None**. Azure creates a default route to 172.16.0.0/12, but doesn't specify a next hop type until there is a reason to. If, for example, you added the 172.16.0.0/12 address range to the address space of the virtual network, Azure changes the **NEXT HOP TYPE** to **Virtual network** for the route. A check would then show **Virtual network** as the **NEXT HOP TYPE**.
+ When you ran the test using 172.31.0.100, however, the result informed you that there was no next hop type. As you can see in the previous picture, though there is a default route to the 172.16.0.0/12 prefix, which includes the 172.31.0.100 address, the **NEXT HOP TYPE** is **None**. Azure creates a default route to 172.16.0.0/12 but doesn't specify a next hop type until there is a reason to. If, for example, you added the 172.16.0.0/12 address range to the address space of the virtual network, Azure changes the **NEXT HOP TYPE** to **Virtual network** for the route. A check would then show the **Virtual network** as the **NEXT HOP TYPE**.
## Clean up resources
network-watcher Diagnose Vm Network Traffic Filtering Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md
documentationcenter: network-watcher
editor: Previously updated : 05/03/2022 Last updated : 10/12/2022 ms.assetid:
# Quickstart: Diagnose a virtual machine network traffic filter problem - Azure PowerShell
-In this quickstart, you deploy a virtual machine (VM), and then check communications to an IP address and URL and from an IP address. You determine the cause of a communication failure and how you can resolve it.
+In this quickstart, you deploy a virtual machine (VM) and then check communications to an IP address and URL and from an IP address. You determine the cause of a communication failure and how you can resolve it.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this quickstart requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
+If you choose to install and use PowerShell locally, this quickstart requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a VM
Before you can create a VM, you must create a resource group to contain the VM.
New-AzResourceGroup -Name myResourceGroup -Location EastUS ```
-Create the VM with [New-AzVM](/powershell/module/az.compute/new-azvm). When running this step, you are prompted for credentials. The values that you enter are configured as the user name and password for the VM.
+Create the VM with [New-AzVM](/powershell/module/az.compute/new-azvm). When running this step, you're prompted for credentials. The values that you enter are configured as the username and password for the VM.
```azurepowershell-interactive $vM = New-AzVm `
$vM = New-AzVm `
-Location "East US" ```
-The VM takes a few minutes to create. Don't continue with remaining steps until the VM is created and PowerShell returns output.
+The VM takes a few minutes to create. Don't continue with the remaining steps until the VM is created and PowerShell returns the output.
## Test network communication
When you ran the `Test-AzNetworkWatcherIPFlow` command to test outbound communic
} ```
-The rule lists **0.0.0.0/0** as the **DestinationAddressPrefix**. The rule denies the outbound communication to 172.131.0.100, because the address is not within the **DestinationAddressPrefix** of any of the other outbound rules in the output from the `Get-AzEffectiveNetworkSecurityGroup` command. To allow the outbound communication, you could add a security rule with a higher priority, that allows outbound traffic to port 80 at 172.131.0.100.
+The rule lists **0.0.0.0/0** as the **DestinationAddressPrefix**. The rule denies the outbound communication to 172.131.0.100, because the address isn't within the **DestinationAddressPrefix** of any of the other outbound rules in the output from the `Get-AzEffectiveNetworkSecurityGroup` command. To allow the outbound communication, you could add a security rule with a higher priority, that allows outbound traffic to port 80 at 172.131.0.100.
When you ran the `Test-AzNetworkWatcherIPFlow` command to test inbound communication from 172.131.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DenyAllInBound** rule denied the communication. The **DenyAllInBound** rule equates to the **DenyAllInBound** rule listed in the following output from the `Get-AzEffectiveNetworkSecurityGroup` command:
The checks in this quickstart tested Azure configuration. If the checks return e
## Clean up resources
-When no longer needed, you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains:
+When no longer needed, you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group, and all of the resources it contains:
```azurepowershell-interactive Remove-AzResourceGroup -Name myResourceGroup -Force
network-watcher Diagnose Vm Network Traffic Filtering Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md
description: In this quickstart, you learn how to diagnose a virtual machine net
documentationcenter: network-watcher -+ editor: Previously updated : 05/02/2022 Last updated : 10/12/2022 ms.assetid:
# Quickstart: Diagnose a virtual machine network traffic filter problem using the Azure portal
-In this quickstart, you deploy a virtual machine (VM), and then check communications to an IP address and URL and from an IP address. You determine the cause of a communication failure and how you can resolve it.
+In this quickstart, you will deploy a virtual machine (VM) and check communications to an IP address and URL, and from an IP address. You will determine the cause of a communication failure and learn how you can resolve it.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Log in to the Azure portal at https://portal.azure.com.
## Create a VM
-1. Select **+ Create a resource** found on the upper, left corner of the Azure portal.
+1. Select **+ Create a resource** found on the upper-left corner of the Azure portal.
1. Select **Compute**, and then select **Windows Server 2019 Datacenter** or a version of **Ubuntu Server**. 1. Enter, or select, the following information, accept the defaults for the remaining settings, and then select **OK**:
Log in to the Azure portal at https://portal.azure.com.
## Test network communication
-To test network communication with Network Watcher, first enable a network watcher in at least one Azure region, and then use Network Watcher's IP flow verify capability.
+To test network communication with Network Watcher, first, enable a network watcher in at least one Azure region, and then use Network Watcher's IP flow verify capability.
### Enable network watcher
When you create a VM, Azure allows and denies network traffic to and from the VM
After a few seconds, the result returned informs you that access is allowed because of a security rule named **AllowInternetOutbound**. When you ran the check, Network Watcher automatically created a network watcher in the East US region, if you had an existing network watcher in a region other than the East US region before you ran the check. 1. Complete step 3 again, but change the **Remote IP address** to **172.31.0.100**. The result returned informs you that access is denied because of a security rule named **DenyAllOutBound**.
-1. Complete step 3 again, but change the **Direction** to **Inbound**, the **Local port** to **80** and the **Remote port** to **60000**. **Remote IP address** remains **172.31.0.100**. The result returned informs you that access is denied because of a security rule named **DenyAllInBound**.
+1. Complete step 3 again, but change the **Direction** to **Inbound**, the **Local port** to **80**, and the **Remote port** to **60000**. The **Remote IP address** remains **172.31.0.100**. The result returned informs you that access is denied because of a security rule named **DenyAllInBound**.
Now that you know which security rules are allowing or denying traffic to or from a VM, you can determine how to resolve the problems.
To determine why the rules in steps 3-5 of **Use IP flow verify** allow or deny
One of the prefixes in the list is **12.0.0.0/8**, which encompasses the 12.0.0.1-15.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher priority (lower number) rules shown in the picture in step 2 that override this rule. Close the **Address prefixes** box. To deny outbound communication to 13.107.21.200, you could add a security rule with a higher priority, that denies port 80 outbound to the IP address.
-1. When you ran the outbound check to 172.131.0.100 in step 4 of **Use IP flow verify**, you learned that the **DenyAllOutBound** rule denied communication. That rule equates to the **DenyAllOutBound** rule shown in the picture in step 2 that specifies **0.0.0.0/0** as the **Destination**. This rule denies the outbound communication to 172.131.0.100, because the address is not within the **Destination** of any of the other **Outbound rules** shown in the picture. To allow the outbound communication, you could add a security rule with a higher priority, that allows outbound traffic to port 80 for the 172.131.0.100 address.
+1. When you ran the outbound check to 172.131.0.100 in step 4 of **Use IP flow verify**, you learned that the **DenyAllOutBound** rule denied communication. That rule equates to the **DenyAllOutBound** rule shown in the picture in step 2 that specifies **0.0.0.0/0** as the **Destination**. This rule denies the outbound communication to 172.131.0.100 because the address is not within the **Destination** of any of the other **Outbound rules** shown in the picture. To allow the outbound communication, you can add a security rule with a higher priority, that allows outbound traffic to port 80 for the 172.131.0.100 address.
1. When you ran the inbound check from 172.131.0.100 in step 5 of **Use IP flow verify**, you learned that the **DenyAllInBound** rule denied communication. That rule equates to the **DenyAllInBound** rule shown in the picture in step 2. The **DenyAllInBound** rule is enforced because no other higher priority rule exists that allows port 80 inbound to the VM from 172.31.0.100. To allow the inbound communication, you could add a security rule with a higher priority, that allows port 80 inbound from 172.31.0.100.
-The checks in this quickstart tested Azure configuration. If the checks return expected results and you still have network problems, ensure that you don't have a firewall between your VM and the endpoint you're communicating with and that the operating system in your VM doesn't have a firewall that is allowing or denying communication.
+The checks in this quickstart tested Azure configuration. If the checks return the expected results and you still have network problems, ensure that you don't have a firewall between your VM and the endpoint you're communicating with and that the operating system in your VM doesn't have a firewall that is allowing or denying communication.
## Clean up resources
network-watcher Network Watcher Ip Flow Verify Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-ip-flow-verify-overview.md
Title: Introduction to IP flow verify in Azure Network Watcher | Microsoft Docs
description: This page provides an overview of the Network Watcher IP flow verify capability documentationcenter: na-+ na Previously updated : 01/04/2021- Last updated : 10/04/2022+ # Introduction to IP flow verify in Azure Network Watcher
IP flow verify checks if a packet is allowed or denied to or from a virtual mach
IP flow verify looks at the rules for all Network Security Groups (NSGs) applied to the network interface, such as a subnet or virtual machine NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming if a rule in a Network Security Group is blocking ingress or egress traffic to or from a virtual machine. Now along with the NSG rules evaluation, the Azure Virtual Network Manager rules will also be evaluated.
-[Azure Virtual Network Manager (AVNM)](../virtual-network-manager/overview.md) is a management service that enables users to group, configure, deploy, and manage Virtual Networks globally across subscriptions. AVNM security configuration allows users to define a collection of rules that can be applied to one or more network security groups at the global level. These security rules have a higher priority than network security group (NSG) rules. An important difference to note here is that admin rules are a resource delivered by ANM in a central location controlled by governance and security teams, which bubble down to each vnet. NSGs are a resource controlled by the vnet owners, which apply at each subnet or NIC level.
+[Azure Virtual Network Manager (AVNM)](../virtual-network-manager/overview.md) is a management service that enables users to group, configure, deploy, and manage Virtual Networks globally across subscriptions. AVNM security configuration allows users to define a collection of rules that can be applied to one or more network groups at the global level. These security rules have a higher priority than network security group (NSG) rules. An important difference to note here is that admin rules are a resource delivered by AVNM in a central location controlled by governance and security teams, which bubble down to each vnet. NSGs are a resource controlled by the vnet owners, which apply at each subnet or NIC level.
An instance of Network Watcher needs to be created in every region in which you plan to run IP flow verify. Network Watcher is a regional service and can only be run against resources in the same region. The instance used does not affect the results of IP flow verify, as any route associated with the NIC or subnet is still returned.
network-watcher Network Watcher Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitoring-overview.md
na Previously updated : 01/04/2021- Last updated : 10/11/2022+ # What is Azure Network Watcher?
-Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.
+Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.
+ > [!Note] > It is not intended for and will not work for PaaS monitoring or Web analytics.
For information about analyzing traffic from a network security group, see [Netw
### <a name = "connection-monitor"></a>Monitor communication between a virtual machine and an endpoint
-Endpoints can be another virtual machine (VM), a fully qualified domain name (FQDN), a uniform resource identifier (URI), or IPv4 address. The *connection monitor* capability monitors communication at a regular interval and informs you of reachability, latency, and network topology changes between the VM and the endpoint. For example, you might have a web server VM that communicates with a database server VM. Someone in your organization may, unknown to you, apply a custom route or network security rule to the web server or database server VM or subnet.
+Endpoints can be another virtual machine (VM), a fully qualified domain name (FQDN), a uniform resource identifier (URI), or an IPv4 address. The *connection monitor* capability monitors communication at a regular interval and informs you of reachability, latency, and network topology changes between the VM and the endpoint. For example, you might have a web server VM that communicates with a database server VM. Someone in your organization may, unknown to you, apply a custom route or network security rule to the web server or database server VM or subnet.
-If an endpoint becomes unreachable, connection troubleshoot informs you of the reason. Potential reasons are a DNS name resolution problem, the CPU, memory, or firewall within the operating system of a VM, or the hop type of a custom route, or security rule for the VM or subnet of the outbound connection. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=/azure/network-watcher/toc.json#security-rules) and [route hop types](../virtual-network/virtual-networks-udr-overview.md?toc=/azure/network-watcher/toc.json) in Azure.
+If an endpoint becomes unreachable, *connection troubleshoot* notifies you of the reason. Potential reasons are a DNS name resolution problem, the CPU, memory, or firewall within the operating system of a VM, or the hop type of a custom route, or the security rule for the VM or subnet of the outbound connection. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=/azure/network-watcher/toc.json#security-rules) and [route hop types](../virtual-network/virtual-networks-udr-overview.md?toc=/azure/network-watcher/toc.json) in Azure.
-Connection monitor also provides the minimum, average, and maximum latency observed over time. After learning the latency for a connection, you may find that you're able to decrease the latency by moving your Azure resources to different Azure regions. Learn more about determining [relative latencies between Azure regions and internet service providers](#determine-relative-latencies-between-azure-regions-and-internet-service-providers) and how to monitor communication between a VM and an endpoint with [connection monitor](connection-monitor.md). If you'd rather test a connection at a point in time, rather than monitor the connection over time, like you do with connection monitor, use the [connection troubleshoot](#connection-troubleshoot) capability.
+Connection monitor also provides the minimum, average, and maximum latency observed over time. After learning the latency for a connection, you may find that you can decrease the latency by moving your Azure resources to different Azure regions. Learn more about determining [relative latencies between Azure regions and internet service providers](#determine-relative-latencies-between-azure-regions-and-internet-service-providers) and how to monitor communication between a VM and an endpoint with [connection monitor](connection-monitor.md). If you'd rather test a connection at a point in time, rather than monitor the connection over time, as you do with connection monitor, use the [connection troubleshoot](#connection-troubleshoot) capability.
-Network performance monitor is a cloud-based hybrid network monitoring solution that helps you monitor network performance between various points in your network infrastructure. It also helps you monitor network connectivity to service and application endpoints and monitor the performance of Azure ExpressRoute. Network performance monitor detects network issues like traffic blackholing, routing errors, and issues that conventional network monitoring methods aren't able to detect. The solution generates alerts and notifies you when a threshold is breached for a network link. It also ensures timely detection of network performance issues and localizes the source of the problem to a particular network segment or device. Learn more about [network performance monitor](../azure-monitor/insights/network-performance-monitor.md?toc=/azure/network-watcher/toc.json).
+*Network performance monitor* is a cloud-based hybrid network monitoring solution that helps you monitor network performance between various points in your network infrastructure. It also helps you monitor network connectivity to service and application endpoints and monitor the performance of Azure ExpressRoute. Network performance monitor detects network issues like traffic blackholing, routing errors, and issues that conventional network monitoring methods aren't able to detect. The solution generates alerts and notifies you when a threshold is breached for a network link. It also ensures timely detection of network performance issues and localizes the source of the problem to a particular network segment or device. Learn more about [network performance monitor](../azure-monitor/insights/network-performance-monitor.md?toc=/azure/network-watcher/toc.json).
### View resources in a virtual network and their relationships
-As resources are added to a virtual network, it can become difficult to understand what resources are in a virtual network and how they relate to each other. The *topology* capability enables you to generate a visual diagram of the resources in a virtual network, and the relationships between the resources. The following picture shows an example topology diagram for a virtual network that has three subnets, two VMs, network interfaces, public IP addresses, network security groups, route tables, and the relationships between the resources:
+As resources are added to a virtual network, it can become difficult to understand what resources are in a virtual network and how they relate to each other. The *topology* capability enables you to generate a visual diagram of the resources in a virtual network and the relationships between the resources. The following image shows an example topology diagram for a virtual network that has three subnets, two VMs, network interfaces, public IP addresses, network security groups, route tables, and the relationships between the resources:
![Topology view](./media/network-watcher-monitoring-overview/topology.png)
-You can download an editable version of the picture in svg format. Learn more about [topology view](view-network-topology.md).
+You can download an editable version of the picture in SVG format. Learn more about [topology view](view-network-topology.md).
## Diagnostics
Advanced filtering options and fine-tuned controls, such as the ability to set t
### Diagnose problems with an Azure Virtual network gateway and connections
-Virtual network gateways provide connectivity between on-premises resources and Azure virtual networks. Monitoring gateways and their connections are critical to ensuring communication is not broken. The *VPN diagnostics* capability provides the ability to diagnose gateways and connections. VPN diagnostics diagnoses the health of the gateway, or gateway connection, and informs you whether a gateway and gateway connections, are available. If the gateway or connection is not available, VPN diagnostics tells you why, so you can resolve the problem. Learn more about VPN diagnostics by completing the [Diagnose a communication problem between networks](diagnose-communication-problem-between-networks.md) tutorial.
+Virtual network gateways provide connectivity between on-premises resources and Azure virtual networks. Monitoring gateways and their connections is critical to ensuring communication is not broken. The *VPN diagnostics* capability provides the ability to diagnose gateways and connections. VPN diagnostics diagnoses the health of the gateway, or gateway connection, and informs you whether a gateway and gateway connections are available. If the gateway or connection is not available, VPN diagnostics tells you why, so you can resolve the problem. Learn more about VPN diagnostics by completing the [Diagnose a communication problem between networks](diagnose-communication-problem-between-networks.md) tutorial.
### Determine relative latencies between Azure regions and internet service providers
Learn more about NSG flow logs by completing the [Log network traffic to and fro
### View diagnostic logs for network resources
-You can enable diagnostic logging for Azure networking resources such as network security groups, public IP addresses, load balancers, virtual network gateways, and application gateways. The *Diagnostic logs* capability provides a single interface to enable and disable network resource diagnostic logs for any existing network resource that generates a diagnostic log. You can view diagnostic logs using tools such as Microsoft Power BI and Azure Monitor logs. To learn more about analyzing Azure network diagnostic logs, see [Azure network solutions in Azure Monitor logs](../azure-monitor/insights/azure-networking-analytics.md?toc=/azure/network-watcher/toc.json).
+You can enable diagnostic logging for Azure networking resources such as network security groups, public IP addresses, load balancers, virtual network gateways, and application gateways. The *Diagnostic logs* capability provides a single interface to enable and disable network resource diagnostic logs for any existing network resource that generates a diagnostic log. You can view diagnostic logs using tools such as Microsoft Power BI and Azure Monitor logs. To learn more about analyzing Azure network diagnostic logs, see the [Azure network solutions in Azure Monitor logs](../azure-monitor/insights/azure-networking-analytics.md?toc=/azure/network-watcher/toc.json).
## Network Watcher automatic enablement
-When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your Virtual Network's region. There is no impact to your resources or associated charge for automatically enabling Network Watcher. For more information, see [Network Watcher create](network-watcher-create.md).
+
+When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your Virtual Network's region. There is no impact on your resources or associated charge for automatically enabling Network Watcher. For more information, see [Network Watcher create](network-watcher-create.md).
## Next steps
network-watcher Quickstart Configure Network Security Group Flow Logs From Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-arm-template.md
If there were issues with the deployment, see [Troubleshoot common Azure deploym
## Clean up resources
-You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
+You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in the complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
You also can disable an NSG flow log in the Azure portal:
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
If there were issues with the deployment, see [Troubleshoot common Azure deploym
## Clean up resources
-You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
+You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in the complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
You also can disable an NSG flow log in the Azure portal:
notification-hubs Eu Data Boundary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/eu-data-boundary.md
+
+ Title: EU Data Boundary compliance in Azure Notification Hubs
+description: Learn about the EU data boundary capabilities of Azure Notification Hubs.
++++ Last updated : 09/21/2022++
+# EU Data Boundary
+
+The EU Data Boundary (EUDB) is a response to increasing concerns about the transnational transfer of European Union customer personal data. Microsoft strives to foster trust in its services by limiting data transfer.
+
+## EUDB in Azure
+
+If you use the Azure portal to create an Azure Notification Hubs namespace in an EU country, your data will remain in the EU region, and will not be transferred outside the EU data boundary. A full list of countries in scope for EUDB is as follows:
+
+- Austria
+- Belgium
+- Bulgaria
+- Croatia
+- Republic of Cyprus
+- Czech Republic
+- Denmark
+- Estonia
+- Finland
+- France
+- Germany
+- Greece
+- Hungary
+- Ireland
+- Italy
+- Latvia
+- Lithuania
+- Luxembourg
+- Malta
+- Netherlands
+- Poland
+- Portugal
+- Romania
+- Slovakia
+- Slovenia
+- Spain
+- Sweden
+
+## Next steps
+
+- [Azure Notification Hubs](notification-hubs-push-notification-overview.md)
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Don't remove or modify the cluster Prometheus service.
* Don't remove or modify the cluster Alertmanager service or Default receiver. It *is* supported to create additional receivers to notify external systems.
* Don't remove Service Alertmanager rules.
-* Security groups can't be modified. Any attempt to modify security groups will be reverted.
+* The ARO-provided Network Security Group can't be modified or replaced. Any attempt to modify or replace it will be reverted.
* Don't remove or modify Azure Red Hat OpenShift service logging (mdsd pods).
* Don't remove or modify the 'arosvc.azurecr.io' cluster pull secret.
* All cluster virtual machines must have direct outbound internet access, at least to the Azure Resource Manager (ARM) and service logging (Geneva) endpoints. No form of HTTPS proxying is supported.
orbital Sar Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/sar-reference-architecture.md
+
+ Title: SAR reference architecture - Azure Orbital Analytics
+description: Learn about how SAR data is processed horizontally.
++++ Last updated : 10/11/2022+++
+# SAR reference architecture
+
+SAR is a form of radar that is used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes. SAR uses the motion of the radar antenna over a target to provide finer spatial resolution than conventional stationary beam-scanning radars.
+
+## Processing
+
+Remote sensing and SAR data has traditionally been processed in a linear way because of the way the algorithms were written. Historically, the data was processed on single, powerful machines that could only be scaled vertically. There was limited ability to scale this process horizontally because the machines used to process the data were expensive. Due to the increased cost, it wasn't possible to process this data in real time or near real time. After looking into the problem space, we were able to come up with alternative ways to scale this process horizontally.
+
+### Scaling using AKS and Argo Workflows
+
+SAR data processing, especially raw L0 processing, typically involves vendor-specific tooling rather than open-source software. As such, a scalable processing pipeline must be able to execute vendor-specific binaries as-is instead of relying on access to source code to change the algorithm and scale out using technologies such as Apache Spark. Containerization allows vendor-supplied binaries to be wrapped in a container and then run at scale. Azure Kubernetes Service (AKS) is a natural fit for executing containerized software at scale, and Argo Workflows provides a low-overhead, Kubernetes-native approach to execute pipelines on any Kubernetes cluster. This architecture allows for horizontal scaling of a processing pipeline that uses vendor-provided binaries and/or open-source software. While processing of any individual file or image won't occur any faster, many files can be processed simultaneously in parallel. With the flexibility of AKS, each step in the pipeline can execute on the hardware best suited for the tool, for example, GPU, high core count, or increased memory.
++
+Raw products are received by a ground station application, which, in turn, writes the data into Azure Blob Storage. Using an Azure Event Grid subscription, a notification is supplied to Azure Event Hubs when a new product image is written to blob storage. Argo Events, running on Azure Kubernetes Service, subscribes to the Azure Event Hubs notification and upon receipt of the event, triggers an Argo Workflows workflow to process the image.
+
+Argo Workflows are specified using a Kubernetes custom resource definition that allows a DAG or simple pipeline to be created by defining Kubernetes objects. Each step in the pipeline/DAG can run a Kubernetes Pod that performs work. The Pod may run a simple shell script or execute code in a custom container, including executing vendor-specific tools to process the remote sensor products. Since each step in the pipeline is a different Kubernetes object, normal Kubernetes resource requests are used to specify the requirements of the step. For example, a vendor-specific tool may require a GPU or a node with high memory and/or cores to complete its work. These requirements can be specified using Kubernetes resource requests, and Kubernetes affinity and/or nodeSelectors. Kubernetes will map these requests to nodes that are able to satisfy the needs, provided such nodes exist.
+
+With Azure Kubernetes Service, this typically involves creating node pools with the appropriate Azure compute SKU to meet the needs of potential pipelines. These node pools can be set to auto scale so that resources aren't consumed when pipeline steps requiring them aren't running.
+
+### Processing using Azure Synapse
+
+The approach to use Azure Synapse is slightly different than a normal pipeline. Typically, many data processing firms already have algorithms that process the data. They may not want to rewrite the algorithms that are already written, but they may need a way to scale those algorithms horizontally. What we're showing here is an approach that lets them run their code on a distributed framework like Apache Spark without having to deal with all the complexities of working with a distributed system. We're taking advantage of vectorization and SIMD architecture, where we process more than one row at a time instead of one row at a time. These features are specific to the Apache Spark DataFrame and the JVM.
++
+## Data ingestion
+
+Remote sensing data is sent to a ground station. The ground station app collects the data and writes it to Blob Storage.
+
+## Data transformation
+
+1. Blob Storage sends an event to Event Grid about the file being created.
+1. Event Grid notifies the function registered to receive the event.
+1. The function triggers an Azure Synapse Spark pipeline. This pipeline has the native library and the configuration required to run the Spark job. The Spark job performs the heavy computation and writes the result to the blob storage where it can be further used by any downstream processes.
+
+Under this approach using Apache Spark, we're gluing the library that has the algorithms with JNA. JNA requires you to define the interfaces for your native code and does the heavy lifting of converting your data to and from the native library into usable Java types. Now, without any major rewriting, we can distribute the computation of the data across nodes instead of a single machine. Typical Spark execution under this model looks as follows.
++
+## Considerations
+
+### Pool size consideration
+
+The following section outlines how to choose a pool size for the job.
+
+ | Size | Cores | Memory (GB) | Nodes | Executor | Cost (USD) |
+ |-|-|-|-|-||
+ |Small | 4 | 32 | 20-200 | 2-100 | 11.37 to 113.71 |
+ | Medium | 8 | 64 | 20-200 | 2-100 | 22.74 to 227.42 |
+ | Large | 16 | 128 | 20-200 | 2-100 | 45.48 to 454.85 |
+ | XLarge | 32 | 256 | 20-200 | 2-100 | 90.97 to 909.70 |
+ | XXLarge | 64 | 512 | 20-200 | 2-100 | 181.94 to 1819.39 |
+
+To process one year's worth of data, which is around 610 GB of remote sensing data, the following metrics were captured. These metrics are specific to the processing algorithm that was used. They only showcase how the process can be horizontally scaled for batch processing and for real-time processing.
+
+
 |Size | Time (mins)|
 |--|--|
+ |Small | 120|
+ |Medium | 80 |
+ |Large | 67 |
+ |XLarge | 50 |
+ |XXLarge | 40 |
+
+### Spark configuration
+
 |Property name | Value |
 |--|--|
 |spark.driver.maxResultSize | 2g |
 |spark.kryoserializer.buffer.max | 2000 |
 |spark.sql.shuffle.partitions | 1000 |
+
+The above configuration was used in the BYOLB use case because there was a lot of data moved between the executor and driver nodes. The default configurations weren't enough to handle the use case where we were moving the results as part of a DataFrame. We could have tried broadcasting the data, but because these values were processed as part of a DataFrame and we wanted to transform each row of the DataFrame, broadcasting wasn't chosen.
+
+### Spark version
+
+We were using Apache Spark 3.1 with Scala 2.12 to develop our pipelines. This version is compatible with Java 11, which has garbage collector improvements over Java 8.
+
+### Data abstraction
+
+**DataFrames**
+ - Best choice in most situations.
+ - Provides query optimization through Catalyst.
+ - Whole-stage code generation.
+ - Direct memory access.
+ - Low garbage collection (GC) overhead.
+ - Not as developer-friendly as Datasets, as there are no compile-time checks or domain object programming.
+
+**RDDs**
+ - You don't need to use RDDs, unless you need to build a new custom RDD.
+ - No query optimization through Catalyst.
+ - No whole-stage code generation.
+ - High GC overhead.
+ - Must use Spark 1.x legacy APIs.
+
+## Potential use cases
+
+ - Digital Signal Processing
+
+ - Operations on Raw Satellite Data.
+
+ - Image manipulation and processing.
+
+ - Compute-heavy tasks that need to be distributed.
+
+## Contributors
+
+*This article is updated and maintained by Microsoft. It was originally written by the following contributors.*
+
+ - Harjit Singh | Senior Engineering Architect
+ - Brian Loss | Principal Engineering Architect
+
+Additional contributors:
+- Nikhil Manchanda | Principal Engineering Manager
+- Billie Rinaldi | Principal Engineering Manager
+- Joey Frazee | Principal Engineering Manager
+- Katy Smith | Data Scientist
+- Steve Truitt | Principal Program Manager
+
+## Next steps
+
+- [Azure Maps Geospatial Services](https://microsoft.github.io/SynapseML/docs/features/geospatial_services/GeospatialServices%20-%20Overview)
+- [Getting geospatial insights from big data using SynapseML](https://techcommunity.microsoft.com/t5/azure-maps-blog/getting-geospatial-insides-in-big-data-using-synapseml/ba-p/3154717)
+- [Get started with Azure Synapse Analytics](/azure/synapse-analytics/get-started)
+- [Explore Azure Synapse Studio](/training/modules/explore-azure-synapse-studio)
+- [SpaceBorne Data Analysis](https://github.com/MicrosoftDocs/architecture-center/blob/main/docs/industries/aerospace/geospatial-processing-analytics-content.md)
+
+## See also
+
+- [Geospatial data processing and analytics](https://github.com/MicrosoftDocs/architecture-center/blob/main/docs/example-scenario/data/geospatial-data-processing-analytics-azure.yml)
+- [Geospatial analysis for the telecommunications industry](https://github.com/MicrosoftDocs/architecture-center/blob/main/docs/example-scenario/data/geospatial-analysis-telecommunications-industry.yml)
+- [Big data architectures](https://github.com/MicrosoftDocs/architecture-center/tree/main/docs/data-guide/big-data)
+
+## References
+
+ - [Azure Synapse](https://azure.microsoft.com/services/synapse-analytics)
+ - [Apache Spark](https://spark.apache.org)
+ - [Argo](https://argoproj.github.io/)
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
description: This article describes how to use the Azure portal to create an ins
Previously updated : 08/24/2022 Last updated : 10/12/2022
When you use the integrated Dynatrace experience in Azure portal, the following
:::image type="content" source="media/dynatrace-create/dynatrace-entities.png" alt-text="Flowchart showing three entities: Marketplace S A A S connecting to Dynatrace resource, connecting to Dynatrace environment."::: -- **Dynatrace resource in Azure** - Using the Dynatrace resource, you can manage the Dynatrace environment in Azure. The resource is created in the Azure subscription and resource group that you select during the create or linking process.-- **Dynatrace environment** - This is the Dynatrace environment on Dynatrace SaaS. When you choose to create a new environment, the environment on Dynatrace SaaS is automatically created, in addition to the Dynatrace resource in Azure. The resource is created in the Azure subscription and resource group that you selected when you created the environment or linked to an existing environment.
+- **Dynatrace resource in Azure** - Using the Dynatrace resource, you can manage the Dynatrace environment in Azure. The resource is created in the Azure subscription and resource group that you select during the create process or linking process.
+- **Dynatrace environment** - This is the Dynatrace environment on Dynatrace _Software as a Service_ (SaaS). When you create a new environment, the environment on Dynatrace SaaS is automatically created, in addition to the Dynatrace resource in Azure.
- **Marketplace SaaS resource** - The SaaS resource is created automatically, based on the plan you select from the Dynatrace Marketplace offer. This resource is used for billing purposes. ## Prerequisites
-Before creating your first instance of Dynatrace in Azure, configure your environment. These steps must be completed before continuing with the next steps in this quickstart.
+Before you link the subscription to a Dynatrace environment, [complete the pre-deployment configuration](dynatrace-link-to-existing.md).
### Find Offer
Use the Azure portal to find Dynatrace for Azure application.
1. Go to the [Azure portal](https://portal.azure.com) and sign in.
-1. If you've visited the **Marketplace** in a recent session, select the icon from the available options. Otherwise, search for *Marketplace*.
+1. If you've visited the **Marketplace** in a recent session, select the icon from the available options. Otherwise, search for _Marketplace_.
:::image type="content" source="media/dynatrace-create/dynatrace-search-marketplace.png" alt-text="Screenshot showing a search for Marketplace in the Azure portal.":::
Use the Azure portal to find Dynatrace for Azure application.
| **Property** | **Description** | |--|-| | Subscription | Select the Azure subscription you want to use for creating the Dynatrace resource. You must have owner or contributor access.|
- | Resource group | Specify whether you want to create a new resource group or use an existing one. A [resource group](../../azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution. |
+ | Resource group | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution. |
| Resource name | Specify a name for the Dynatrace resource. This name will be the friendly name of the new Dynatrace environment.|
- | Location | Select the region. Both the Dynatrace resource in Azure and Dynatrace environment will be created in the selected region.|
+ | Location | Select the region where the Dynatrace resource in Azure and the Dynatrace environment are created.|
| Pricing plan | Select from the list of available plans. | ### Configure metrics and logs
Use the Azure portal to find Dynatrace for Azure application.
:::image type="content" source="media/dynatrace-create/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs.":::
- - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
+ - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
- - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
1. To send subscription level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Dynatrace.
- To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags.
+1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure-monitor/essentials/resource-logs-categories.md).
- Rules for sending resource logs:
-
- - When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources.
- - Azure resources with Include tags send logs to Dynatrace.
- - Azure resources with Exclude tags don't send logs to Dynatrace.
- - If there's a conflict between inclusion and exclusion rules, exclusion rule applies.
-
- The logs sent to Dynatrace are charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
+ When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources. To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags:
- > [!NOTE]
- > Metrics for virtual machines and App Services can be collected by installing the Dynatrace OneAgent after the Dynatrace resource has been created.
+ - All Azure resources with tags defined in include rules send logs to Dynatrace.
+ - All Azure resources with tags defined in exclude rules don't send logs to Dynatrace.
+ - If there's a conflict between an inclusion and exclusion rule, the exclusion rule applies.
+
+ The logs sent to Dynatrace are charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
+
+ > [!NOTE]
+ > Metrics for virtual machines and App Services can be collected by installing the Dynatrace OneAgent after the Dynatrace resource has been created.
1. Once you have completed configuring metrics and logs, select **Next: Single sign-on**.
Use the Azure portal to find Dynatrace for Azure application.
## Next steps -- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
+- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
partner-solutions Dynatrace How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md
description: This article describes how to complete the prerequisites for Dynatr
Previously updated : 08/24/2022 Last updated : 10/12/2022 # Configure pre-deployment
-This article describes the prerequisites that must be completed before you create the first instance of Dynatrace resource in Azure.
+This article describes the prerequisites that must be completed in your Azure subscription or Azure Active Directory before you create your first Dynatrace resource in Azure.
## Access control
-To set up the Dynatrace for Azure, you must have **Owner** or **Contributor** access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md) before starting the setup.
+To set up Dynatrace for Azure, you must have **Owner** or **Contributor** access on the Azure subscription. First, [confirm that you have the appropriate access](/azure/role-based-access-control/check-access) before starting the setup.
## Add enterprise application
To use the Security Assertion Markup Language (SAML) based single sign-on (SSO)
## Next steps -- [Quickstart: Create a new Dynatrace environment](dynatrace-create.md)
+- [Quickstart: Create a new Dynatrace environment](dynatrace-create.md)
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
Previously updated : 08/24/2022 Last updated : 10/12/2022
At the bottom, you see two tabs:
- **Get started tab** also provides links to Dynatrace dashboards, logs and Smartscape Topology. - **Monitoring tab** provides a summary of the resources sending logs to Dynatrace.
-If you select the **Monitoring** pane, you see a table with information about the Dynatrace resource.
+If you select the **Monitoring** pane, you see a table with information about the Azure resources sending logs to Dynatrace.
:::image type="content" source="media/dynatrace-how-to-manage/dynatrace-monitoring.png" alt-text="Screenshot of overview working pane showing monitoring.":::
You can filter the list of resources by resource type, resource group name, regi
The column **Logs to Dynatrace** indicates whether the resource is sending logs to Dynatrace. If the resource isn't sending logs, this field indicates why logs aren't being sent. The reasons could be: -- _Resource doesn't support sending logs_ - Only resource types with monitoring log categories can be configured to send logs. See [supported categories](../../azure-monitor/essentials/resource-logs-categories.md).-- _Limit of five diagnostic settings reached_ - Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md).
+- _Resource doesn't support sending logs_ - Only resource types with monitoring log categories can be configured to send logs. See [supported categories](/azure-monitor/essentials/resource-logs-categories.md).
+- _Limit of five diagnostic settings reached_ - Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure-monitor/essentials/diagnostic-settings.md).
- _Error_ - The resource is configured to send logs to Dynatrace, but is blocked by an error. - _Logs not configured_ - Only Azure resources that have the appropriate resource tags are configured to send logs to Dynatrace. - _Agent not configured_ - Virtual machines without the Dynatrace OneAgent installed don't emit logs to Dynatrace.
For each virtual machine, the following info is displayed:
||| | **Resource Name** | Virtual machine name | | **Resource Status** | Indicates whether the virtual machine is stopped or running. Dynatrace OneAgent can only be installed on virtual machines that are running. If the virtual machine is stopped, installing the Dynatrace OneAgent will be disabled. |
-| **Agent status** | Whether the Dynatrace OneAgent is running on the virtual machine |
-| **Agent version** | The Dynatrace OneAgent version number |
+| **OneAgent status** | Whether the Dynatrace OneAgent is running on the virtual machine |
+| **OneAgent version** | The Dynatrace OneAgent version number |
| **Auto-update** | Whether auto-update has been enabled for the OneAgent |
-| **Log analytics** | Whether log monitoring option was selected when OneAgent was installed |
+| **Log monitoring** | Whether log monitoring option was selected when OneAgent was installed |
| **Monitoring mode** | Whether the Dynatrace OneAgent is monitoring hosts in [full-stack monitoring mode or infrastructure monitoring mode](https://www.dynatrace.com/support/help/how-to-use-dynatrace/hosts/basic-concepts/get-started-with-infrastructure-monitoring) | > [!NOTE]
-> If a virtual machine shows that an agent has been configured, but the options to manage the agent through extension are disabled, it means that the agent has been configured through a different Dynatrace resource in the same Azure subscription.
+> If a virtual machine shows that a OneAgent is installed, but the Uninstall extension option is disabled, then the agent was configured through a different Dynatrace resource in the same Azure subscription. To make any changes, go to the other Dynatrace resource in the Azure subscription.
## Monitor App Services using Dynatrace OneAgent
For each app service, the following information is displayed:
| **Resource name** | App service name | | **Resource status** | Indicates whether the App service is running or stopped. Dynatrace OneAgent can only be installed on app services that are running. | | **App Service plan** | The plan configured for the app service |
-| **Agent version** | The Dynatrace OneAgent version |
-| **Agent status** | status of the agent |
+| **OneAgent version** | The Dynatrace OneAgent version |
+| **OneAgent status** | status of the agent |
To install the Dynatrace OneAgent, select the app service and select **Install Extension.** The application settings for the selected app service are updated and the app service is restarted to complete the configuration of the Dynatrace OneAgent.
Select **Overview** in Resource menu. Then, select **Delete**. Confirm that you
If only one Dynatrace resource is mapped to a Dynatrace environment, logs are no longer sent to Dynatrace. All billing through Azure Marketplace stops for Dynatrace.
-If more than one Dynatrace resource is mapped to the Dynatrace environment using the link Azure subscription option, deleting the Dynatrace resource only stops sending logs for that Dynatrace resource. However, since other Dynatrace environment may be linked to other Dynatrace resources, billing continues through the Azure Marketplace.
+If more than one Dynatrace resource is mapped to the Dynatrace environment using the link Azure subscription option, deleting the Dynatrace resource only stops sending logs for Azure resources associated with that Dynatrace resource. However, since this one Dynatrace environment might still be linked to other Dynatrace resources, billing continues through the Azure Marketplace.
## Next steps
-For help with troubleshooting, see [Troubleshooting Dynatrace integration with Azure](dynatrace-troubleshoot.md).
+For help with troubleshooting, see [Troubleshooting Dynatrace integration with Azure](dynatrace-troubleshoot.md).
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
description: This article describes how to use the Azure portal to link to an in
Previously updated : 08/24/2022 Last updated : 10/12/2022 # Quickstart: Link to an existing Dynatrace environment
-In this quickstart, you link an Azure subscription to an existing Dynatrace environment. After you link to the Dynatrace environment, you can monitor the linked Azure subscription and the resources in that subscription using the Dynatrace environment.
+In this quickstart, you link an Azure subscription to an existing Dynatrace environment. After linking to the Dynatrace environment, you can monitor the linked Azure subscription and the resources in that subscription using the Dynatrace environment.
+
+> [!NOTE]
+> You can only link Dynatrace environments that have been previously created via Dynatrace for Azure.
When you use the integrated experience for Dynatrace in the Azure portal, your billing and monitoring for the following entities is tracked in the portal.
When you use the integrated experience for Dynatrace in the Azure portal, your b
- **Dynatrace environment** - the Dynatrace environment on Dynatrace SaaS. When you choose to link an existing environment, a new Dynatrace resource is created in Azure. The Dynatrace environment and the Dynatrace resource must reside in the same region. - **Marketplace SaaS resource** - the SaaS resource is used for billing purposes. The SaaS resource typically resides in a different Azure subscription from where the Dynatrace environment was first created.
-## Prerequisites
-
-Before you link the subscription to a Dynatrace environment, [complete pre-deployment configuration](dynatrace-how-to-configure-prereqs.md).
-
-### Find Offer
+## Find Offer
1. Use the Azure portal to find Dynatrace.
Before you link the subscription to a Dynatrace environment, [complete pre-deplo
> [!NOTE] > Linking requires that the environment and the Dynatrace resource reside in the same Azure region. The user that is performing the linking action should have administrator permissions on the Dynatrace environment being linked. If the environment that you want to link to does not appear in the dropdown list, check if any of these conditions are not satisfied.
-1. Select **Next: Metrics and logs** to configure metrics and logs.
+Select **Next: Metrics and logs** to configure metrics and logs.
### Configure metrics and logs
Before you link the subscription to a Dynatrace environment, [complete pre-deplo
:::image type="content" source="media/dynatrace-link-to-existing/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs.":::
- - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There\'s a single activity log for each Azure subscription.
-
- - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+ - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There\'s a single activity log for each Azure subscription.
-1. To send subscription level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Dynatrace.
+ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
- To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags.
+1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md).
- Rules for sending resource logs are:
+ When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources. To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags:
- - When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources.
- - Azure resources with Include tags send logs to Dynatrace.
- - Azure resources with Exclude tags don't send logs to Dynatrace.
- - If there's a conflict between inclusion and exclusion rules, exclusion rule applies.
-
- The logs sent to Dynatrace is charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
+ - All Azure resources with tags defined in include rules send logs to Dynatrace.
+ - All Azure resources with tags defined in exclude rules don't send logs to Dynatrace.
+ - If there's a conflict between an inclusion and exclusion rule, the exclusion rule applies.
+
+ The logs sent to Dynatrace are charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
Metrics for virtual machines and App Services can be collected by installing the Dynatrace OneAgent after the Dynatrace resource has been created and an existing Dynatrace environment has been linked to it.
-1. Once you have completed configuring metrics and logs, select **Next: Single sign-on**.
+Once you have completed configuring metrics and logs, select **Next: Single sign-on**.
### Configure single sign-on
-1. At this point, you see the next part of the form for **Single Sign-on**. If you're linking the Dynatrace resource to an existing Dynatrace environment, you cannot set up single sign-on at this step.
-
- > [!NOTE]
- > You cannot set up single sign-on when linking the Dynatrace resource to an existing Dynatrace environment.
-
-1. Instead, you can set up single sign-on after creating the Dynatrace resource. For more information, see [Reconfigure single sign-on](dynatrace-how-to-manage.md#reconfigure-single-sign-on).
+> [!NOTE]
+> You cannot set up single sign-on when linking the Dynatrace resource to an existing Dynatrace environment. You can do so after creating the Dynatrace resource. For more information, see [Reconfigure single sign-on](dynatrace-how-to-manage.md#reconfigure-single-sign-on).
-1. Select **Next: Tags**.
+Select **Next: Tags**.
### Add tags
Before you link the subscription to a Dynatrace environment, [complete pre-deplo
:::image type="content" source="media/dynatrace-link-to-existing/dynatrace-custom-tags.png" alt-text="Screenshot showing list of tags for a Dynatrace resource.":::
-1. When you've finished adding tags, select **Next: Review+Create.**
+When you've finished adding tags, select **Next: Review+Create.**
### Review and create
Before you link the subscription to a Dynatrace environment, [complete pre-deplo
## Next steps -- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
+- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
description: Learn about using the Dynatrace Cloud-Native Observability Platform
Previously updated : 08/24/2022 Last updated : 10/12/2022
Last updated 08/24/2022
Dynatrace is a monitoring solution that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities in Azure.
-Dynatrace for Azure offering in the Azure Marketplace enables you to create and manage Dynatrace environments using the Azure portal with a seamlessly integrated experience. This enables you to use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement, all the way to configuration and management.
+The Dynatrace for Azure offering in the Azure Marketplace enables you to create and manage Dynatrace environments using the Azure portal with a seamlessly integrated experience. This enables you to use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement, all the way to configuration and management.
You can create and manage the Dynatrace resources using the Azure portal through a resource provider named `Dynatrace.Observability`. Dynatrace owns and runs the software as a service (SaaS) application including the Dynatrace environments created through this experience.
Dynatrace for Azure provides the following capabilities:
- **Single-Sign on to Dynatrace** - You need not sign up or sign in separately to Dynatrace. Sign in once in the Azure portal and seamlessly transition to Dynatrace portal when needed. -- **Log forwarder** - Enables automated forwarding of subscription activity and resource logs to Dynatrace
+- **Log monitoring** - Enables automated monitoring of subscription activity and resource logs by sending them to Dynatrace
- **Manage Dynatrace OneAgent on VMs and App Services** - Provides a single experience to install and uninstall Dynatrace OneAgent on virtual machines and App Services.
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
Previously updated : 08/24/2022 Last updated : 10/12/2022
This document contains information about troubleshooting your solutions that use
### Single sign-on errors -- **Single sign-on configuration indicates lack of permissions** - This happens when the user that is trying to configure single sign-on doesn't have tenant
+- **Single sign-on configuration indicates lack of permissions** - This occurs when the user who is trying to configure single sign-on doesn't have Manage users permissions for the Dynatrace account. For a description of how to configure this permission, see [here](https://www.dynatrace.com/support/help/shortlink/azure-native-integration#setup).
- **Unable to save single sign-on settings** - This error happens when there's another Enterprise app that is using the Dynatrace SAML identifier. To find which app is using it, select **Edit** on the Basic **SAML** configuration section. To resolve this issue, either disable the other app or use the other app as the Enterprise app to set up SAML SSO. - **App not showing in Single sign-on settings page** - First, search for application ID. If no result is shown, check the SAML settings of the app. The grid only shows apps with correct SAML settings.
- The following image shows the correct values.
## Next steps
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-ad-authentication.md
-# Azure Active Directory Authentication (Azure AD) with PostgreSQL Flexible Server
+# Azure Active Directory Authentication with PostgreSQL Flexible Server Preview
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-> [!NOTE]
+> [!NOTE]
> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
-Microsoft Azure Active Directory authentication is a mechanism of connecting to Azure Database for PostgreSQL using identities defined in Azure AD.
+Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for PostgreSQL using identities defined in Azure AD.
With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. Benefits of using Azure AD include:
The following high-level diagram summarizes how authentication works using Azure
## Manage PostgreSQL Access For AD Principals
-When Azure AD authentication is enabled and Azure AD principal is added as an Azure AD administrator the account gets the same privileges as the original PostgreSQL administrator. Only Azure AD administrator can manage other Azure AD enabled roles on the server using Azure portal or Database API. The Azure AD administrator login can be an Azure AD user, Azure AD group, Service Principal or Managed Identity. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the PostgreSQL server. Multiple Azure AD administrators can be configured at any time and you can optionally disable password authentication to an Azure Database for PostgreSQL Flexible Server for better auditing and compliance needs.
+When Azure AD authentication is enabled and an Azure AD principal is added as an Azure AD administrator, the account gets the same privileges as the original PostgreSQL administrator. Only an Azure AD administrator can manage other Azure AD enabled roles on the server using the Azure portal or Database API. The Azure AD administrator login can be an Azure AD user, Azure AD group, service principal, or managed identity. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the PostgreSQL server. Multiple Azure AD administrators can be configured at any time, and you can optionally disable password authentication to an Azure Database for PostgreSQL Flexible Server for better auditing and compliance needs.
![admin structure][2]
Once you've authenticated against the Active Directory, you then retrieve a toke
## Additional considerations - Multiple Azure AD principals (a user, group, service principal or managed identity) can be configured as Azure AD Administrator for an Azure Database for PostgreSQL server at any time.
+- Azure AD groups must be mail-enabled security groups for authentication to work.
+- In preview, `Azure Active Directory Authentication only` is supported after server creation; this option is currently disabled during the server creation experience.
- Only an Azure AD administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users. - If an Azure AD principal is deleted from Azure AD, it still remains as PostgreSQL role, but it will no longer be able to acquire new access token. In this case, although the matching role still exists in the database it won't be able to authenticate to the server. Database administrators need to transfer ownership and drop roles manually.
postgresql How To Bulk Load Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-bulk-load-data.md
+
+ Title: Bulk data uploads For Azure Database for PostgreSQL - Flexible Server
+description: Best practices to bulk load data in Azure Database for PostgreSQL - Flexible Server
+++++ Last updated : 08/16/2022++++
+# Best practices for bulk data upload for Azure Database for PostgreSQL - Flexible Server
+
+There are two types of bulk loads:
+- Initial data load of an empty database
+- Incremental data loads
+
+This article discusses various loading techniques along with best practices when it comes to initial data loads and incremental data loads.
+
+## Loading methods
+
+Performance-wise, the data loading methods, arranged from most time consuming to least time consuming, are as follows:
+- Single record INSERT
+- Batch into 100-1000 rows per commit. You can use a transaction block to wrap multiple records per commit
+- INSERT with multi-row values
+- COPY command
+
+The preferred method to load data into the database is the COPY command. If the COPY command isn't possible, batch INSERTs are the next best method. Multi-threading with a COPY command is the optimal method for bulk data loads.
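+As an illustrative sketch (assuming a hypothetical staging table named `events`), the last two methods look like this:
+
+```sql
+-- Hypothetical staging table used for illustration
+CREATE TABLE events (id bigint, payload text);
+
+-- Multi-row VALUES insert: one statement, fewer round trips than single-row inserts
+INSERT INTO events (id, payload)
+VALUES (1, 'a'), (2, 'b'), (3, 'c');
+
+-- COPY streams rows in bulk. From psql, use the client-side \copy variant,
+-- because a managed server doesn't expose its local file system:
+-- \copy events (id, payload) FROM 'events.csv' WITH (FORMAT csv)
+```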
+
+## Best practices for initial data loads
+
+#### Drop indexes
+
+Before an initial data load, it's advised to drop all the indexes in the tables. It's always more efficient to create the indexes after the data load.
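+For example, reusing the hypothetical `events` table and an assumed index name:
+
+```sql
+-- Drop the index before the load...
+DROP INDEX IF EXISTS idx_events_payload;
+
+-- ...then build it once, after the data load completes
+CREATE INDEX idx_events_payload ON events (payload);
+```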
+
+#### Drop constraints
+
+##### Unique key constraints
+
+To achieve strong performance, it's advised to drop unique key constraints before an initial data load and recreate them once the data load is completed. However, dropping unique key constraints cancels the safeguards against duplicated data.
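+As a sketch, using the hypothetical `events` table and an assumed constraint name:
+
+```sql
+-- Drop the unique constraint before the load
+ALTER TABLE events DROP CONSTRAINT IF EXISTS events_id_key;
+
+-- Recreate it after the load; any duplicates introduced during the load surface here
+ALTER TABLE events ADD CONSTRAINT events_id_key UNIQUE (id);
+```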
+
+##### Foreign key constraints
+
+It's advised to drop foreign key constraints before the initial data load and recreate them once the data load is completed.
+
+Changing the `session_replication_role` parameter to replica also disables all foreign key checks. However, be aware that making the change can leave data in an inconsistent state if not properly used.
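+A session-level sketch follows; `session_replication_role` requires sufficient privileges and should be reset as soon as the load finishes:
+
+```sql
+-- Disable trigger-based checks, including foreign key enforcement, for this session only
+SET session_replication_role = 'replica';
+
+-- ... run the bulk load here ...
+
+-- Restore normal enforcement
+SET session_replication_role = 'origin';
+```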
+
+#### Unlogged tables
+
+Use of unlogged tables will make data load faster. Data written to unlogged tables isn't written to the write-ahead log.
+
+The disadvantages of using unlogged tables are:
+- They aren't crash-safe. An unlogged table is automatically truncated after a crash or unclean shutdown.
+- Data from unlogged tables can't be replicated to standby servers.
+
+The pros and cons of using unlogged tables should be considered before using in initial data loads.
+
+Use the following options to create an unlogged table or change an existing table to an unlogged table:
+
+Create a new unlogged table by using the following syntax:
+```sql
+CREATE UNLOGGED TABLE <tablename>;
+```
+
+Convert an existing logged table to an unlogged table by using the following syntax:
+```sql
+ALTER TABLE <tablename> SET UNLOGGED;
+```
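+If the table only needs to be unlogged for the duration of the load, it can be switched back to a logged table afterward (supported in PostgreSQL 9.5 and later):
+
+```sql
+ALTER TABLE <tablename> SET LOGGED;
+```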
+
+#### Server parameter tuning
+
+`Autovacuum`
+
+During the initial data load, it's best to turn off the autovacuum. Once the initial load is completed, it's advised to run a manual VACUUM ANALYZE on all tables in the database, and then turn on autovacuum.
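+A minimal sketch, assuming a hypothetical table named `test_table` (autovacuum can also be turned off server-wide through the server parameters):
+
+```sql
+-- Turn autovacuum off for the table during the initial load
+ALTER TABLE test_table SET (autovacuum_enabled = off);
+
+-- After the load completes, refresh statistics and re-enable autovacuum
+VACUUM (ANALYZE) test_table;
+ALTER TABLE test_table SET (autovacuum_enabled = on);
+```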
+
+> [!NOTE]
+> Please follow the recommendations below only if there is enough memory and disk space.
+
+`maintenance_work_mem`
+
+The maintenance_work_mem can be set to a maximum of 2 GB on a flexible server. `maintenance_work_mem` helps in speeding up autovacuum, index, and foreign key creation.
+
+`checkpoint_timeout`
+
+On the Flexible Server, checkpoint_timeout can be increased to a maximum of 24 hours from the default of 5 minutes. It's advised to increase the value to 1 hour before initial data loads on the Flexible Server.
+
+`checkpoint_completion_target`
+
+A value of 0.9 is always recommended.
+
+`max_wal_size`
+
+The max_wal_size can be set to the maximum allowed value on the Flexible server, which is 64 GB while we do the initial data load.
+
+`wal_compression`
+
+wal_compression can be turned on. Enabling the parameter can have some extra CPU cost spent on the compression during WAL logging and on the decompression during WAL replay.
++
+#### Flexible server recommendations
+
+Before the start of the initial data load on a Flexible Server, it's recommended to:
+
+- Disable high availability (HA) on the server. You can enable HA once the initial load is completed on the primary.
+- Create read replicas after the initial data load is completed.
+- Make logging minimal or disable it altogether during initial data loads (for example, disable pgaudit, pg_stat_statements, Query Store).
++
+#### Recreating indexes and adding constraints
+
+Assuming the indexes and constraints were dropped before the initial load, it's recommended to have high values of maintenance_work_mem (as recommended above) for creating indexes and adding constraints. In addition, starting with Postgres version 11, the following parameters can be modified for faster parallel index creation after initial data load:
+
+`max_parallel_workers`
+
+Sets the maximum number of workers that the system can support for parallel queries.
+
+`max_parallel_maintenance_workers`
+
+Controls the maximum number of worker processes that can be used by CREATE INDEX.
+
+One could also create the indexes by making recommended settings at the session level. An example of how it can be done at the session level is shown below:
+
+```sql
+SET maintenance_work_mem = '2GB';
+SET max_parallel_workers = 16;
+SET max_parallel_maintenance_workers = 8;
+CREATE INDEX test_index ON test_table (test_column);
+```
+
+## Best practices for incremental data loads
+
+#### Table partitioning
+
+It's always recommended to partition large tables. Some advantages of partitioning, especially during incremental loads (a brief sketch follows this list):
+- Creation of new partitions based on the new deltas makes it efficient to add new data to the table.
+- Maintenance of tables becomes easier. You can drop a partition during incremental data loads, avoiding time-consuming deletes on large tables.
+- Autovacuum would be triggered only on partitions that were changed or added during incremental loads, which makes maintaining statistics on the table easier.
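+For illustration, a minimal range-partitioning sketch with hypothetical table and partition names:
+
+```sql
+-- Parent table partitioned by a timestamp column
+CREATE TABLE measurements (
+    measured_at timestamptz NOT NULL,
+    device_id   int         NOT NULL,
+    reading     numeric
+) PARTITION BY RANGE (measured_at);
+
+-- New deltas land in a new monthly partition
+CREATE TABLE measurements_2022_10 PARTITION OF measurements
+    FOR VALUES FROM ('2022-10-01') TO ('2022-11-01');
+
+-- Removing old data is a cheap detach/drop instead of a large DELETE
+ALTER TABLE measurements DETACH PARTITION measurements_2022_10;
+DROP TABLE measurements_2022_10;
+```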
+
+#### Maintain up-to-date table statistics
+
+Monitoring and maintaining table statistics is important for query performance on the database. This also includes scenarios where you have incremental loads. PostgreSQL uses the autovacuum daemon process to clean up dead tuples and analyze the tables to keep the statistics updated. For more details on autovacuum monitoring and tuning, review [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
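+For example, the `pg_stat_user_tables` view can be used to spot-check how fresh the statistics are after an incremental load:
+
+```sql
+-- Check dead tuple counts and when each table was last vacuumed/analyzed
+SELECT relname,
+       n_live_tup,
+       n_dead_tup,
+       last_autovacuum,
+       last_autoanalyze
+FROM pg_stat_user_tables
+ORDER BY n_dead_tup DESC;
+```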
+
+#### Index creation on foreign key constraints
+
+Creating indexes on foreign keys in the child tables would be beneficial in the following scenarios (a brief sketch follows this list):
+- Data updates or deletions in the parent table. When data is updated or deleted in the parent table, lookups are performed on the child table. To make lookups faster, you could index foreign keys on the child table.
+- Queries where we see joins between parent and child tables on key columns.
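+As a sketch, assuming hypothetical `orders` (parent) and `order_items` (child) tables; PostgreSQL doesn't create an index on the referencing column automatically:
+
+```sql
+-- Parent table
+CREATE TABLE orders (
+    order_id   bigint PRIMARY KEY,
+    ordered_at timestamptz
+);
+
+-- Child table with a foreign key back to the parent
+CREATE TABLE order_items (
+    item_id  bigint PRIMARY KEY,
+    order_id bigint REFERENCES orders (order_id),
+    quantity int
+);
+
+-- Index the foreign key column to speed up lookups driven by parent updates/deletes and joins
+CREATE INDEX idx_order_items_order_id ON order_items (order_id);
+```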
+
+#### Unused indexes
+
+Identify unused indexes in the database and drop them. Indexes are an overhead on data loads. The fewer the indexes on a table the better the performance is during data ingestion.
+Unused indexes can be identified in two ways - by Query Store and an index usage query.
+
+##### Query store
+
+Query Store helps identify indexes that can be dropped based on query usage patterns on the database. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
+Once Query Store is enabled on the server, connect to the azure_sys database and use the following query to identify indexes that can be dropped.
+
+```sql
+SELECT * FROM IntelligentPerformance.DropIndexRecommendations;
+```
+
+##### Index usage
+
+The below query can also be used to identify unused indexes:
+
+```sql
+SELECT
+ t.schemaname,
+ t.tablename,
+ c.reltuples::bigint AS num_rows,
+ pg_size_pretty(pg_relation_size(c.oid)) AS table_size,
+ psai.indexrelname AS index_name,
+ pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
+ CASE WHEN i.indisunique THEN 'Y' ELSE 'N' END AS "unique",
+ psai.idx_scan AS number_of_scans,
+ psai.idx_tup_read AS tuples_read,
+ psai.idx_tup_fetch AS tuples_fetched
+FROM
+ pg_tables t
+ LEFT JOIN pg_class c ON t.tablename = c.relname
+ LEFT JOIN pg_index i ON c.oid = i.indrelid
+ LEFT JOIN pg_stat_all_indexes psai ON i.indexrelid = psai.indexrelid
+WHERE
+ t.schemaname NOT IN ('pg_catalog', 'information_schema')
+ORDER BY 1, 2;
+```
+
+The number_of_scans, tuples_read, and tuples_fetched columns indicate index usage. A number_of_scans value of zero points to an index that isn't being used.
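+Once an index is confirmed to be unused, it can be dropped (hypothetical index name below); the `CONCURRENTLY` option avoids blocking concurrent writes on the table:
+
+```sql
+-- Drop a confirmed-unused index without blocking concurrent writes
+DROP INDEX CONCURRENTLY IF EXISTS idx_orders_legacy_status;
+```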
+
+#### Server parameter tuning
+
+> [!NOTE]
+> Please follow the recommendations below only if there is enough memory and disk space.
+
+`maintenance_work_mem`
+
+The maintenance_work_mem parameter can be set to a maximum of 2 GB on Flexible Server. `maintenance_work_mem` helps speed up index creation and foreign key additions.
+
+`checkpoint_timeout`
+
+On the Flexible Server, the checkpoint_timeout parameter can be increased to 10 minutes or 15 minutes from the default 5 minutes. Increasing `checkpoint_timeout` to a larger value, such as 15 minutes, can reduce the I/O load, but the downside is that it takes longer to recover if there was a crash. Careful consideration is recommended before making the change.
+
+`checkpoint_completion_target`
+
+A value of 0.9 is always recommended.
+
+`max_wal_size`
+
+The max_wal_size depends on SKU, storage, and workload.
+
+One way to arrive at the correct value for max_wal_size is shown below.
+
+During peak business hours, follow the below steps to arrive at a value:
+
+- Take the current WAL LSN by executing the below query:
+
+```sql
+SELECT pg_current_wal_lsn ();
+```
+
+- Wait for checkpoint_timeout number of seconds. Take the current WAL LSN by executing the below query:
+
+```sql
+SELECT pg_current_wal_lsn ();
+```
+
+- Use the two results to check the difference in GB:
+
+```sql
+SELECT round (pg_wal_lsn_diff('LSN value when run second time','LSN value when run first time')/1024/1024/1024,2) WAL_CHANGE_GB;
+```
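+
+For example, a hedged sketch with hypothetical LSN values; substitute the two values you noted down:
+
+```sql
+-- Returns the volume of WAL generated between the two samples, in GB.
+SELECT round(pg_wal_lsn_diff('1/6A2F4B80', '0/FD340000')/1024/1024/1024, 2) AS wal_change_gb;
+```
+
+The result approximates the WAL generated per `checkpoint_timeout` interval and can guide the `max_wal_size` setting.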
+
+`wal_compression`
+
+`wal_compression` can be turned on. Enabling this parameter incurs some extra CPU cost for compression during WAL logging and for decompression during WAL replay.
++
+## Next steps
+- Troubleshoot high CPU utilization [High CPU Utilization](./how-to-high-CPU-utilization.md).
+- Troubleshoot high memory utilization [High Memory Utilization](./how-to-high-memory-utilization.md).
+- Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).
+- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
+- Troubleshoot high IOPS utilization [High IOPS Utilization](./how-to-high-io-utilization.md).
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
-# Use Azure Active Directory (Azure AD) for authentication with PostgreSQL Flexible Server
+# Use Azure Active Directory for authentication with PostgreSQL Flexible Server Preview
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-> [!NOTE]
+> [!NOTE]
> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
-This article will walk you through the steps how to configure Azure Active Directory access with Azure Database for PostgreSQL Flexible Server, and how to connect using an Azure AD token.
+This article walks you through how to configure Azure Active Directory (Azure AD) access with Azure Database for PostgreSQL Flexible Server, and how to connect using an Azure AD token.
## Enable Azure AD Authentication
Note only one Azure admin user can be added during server provisioning and you c
To set the Azure AD administrator after server creation, follow the below steps
-1. In the Azure portal, select the instance of Azure Database for PostgreSQL that you want to enable for Azure AD.
+1. In the Azure portal, select the instance of Azure Database for PostgreSQL Flexible Server that you want to enable for Azure AD.
1. Under Security, select Authentication and choose either `PostgreSQL and Azure Active Directory authentication` or `Azure Active Directory authentication only` as the authentication method, based on your requirements. ![set azure ad administrator][2]
To set the Azure AD administrator after server creation, follow the below steps
1. Select Save. > [!IMPORTANT]
-> When setting the administrator, a new user is added to the Azure Database for PostgreSQL server with full administrator permissions.
+> When setting the administrator, a new user is added to the Azure Database for PostgreSQL Flexible Server with full administrator permissions.
-## Connect to Azure Database for PostgreSQL using Azure AD
+## Connect to Azure Database for PostgreSQL Flexible Server using Azure AD
The following high-level diagram summarizes the workflow of using Azure AD authentication with Azure Database for PostgreSQL:
These are the steps that a user/application will need to do authenticate with Az
You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine. Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli).
-## Authenticate with Azure AD as a Flexible user
+## Authenticate with Azure AD as a Flexible Server user
### Step 1: Log in to the user's Azure subscription
Invoke the Azure CLI tool to acquire an access token for the Azure AD authentica
Example (for Public Cloud): ```azurecli-interactive
-az account get-access-token --resource https://ossrdbms-Azure AD.database.windows.net
+az account get-access-token --resource https://ossrdbms-aad.database.windows.net
``` The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
You're now authenticated to your Azure Database for PostgreSQL server using Azur
## Authenticate with Azure AD as a group member
-### Step 1: Create Azure AD groups in Azure Database for PostgreSQL
+### Step 1: Create Azure AD groups in Azure Database for PostgreSQL Flexible Server
To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
Invoke the Azure CLI tool to acquire an access token for the Azure AD authentica
Example (for Public Cloud): ```azurecli-interactive
-az account get-access-token --resource https://ossrdbms-Azure AD.database.windows.net
+az account get-access-token --resource https://ossrdbms-aad.database.windows.net
``` The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
postgresql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-with-managed-identity.md
Last updated 09/26/2022
-# Connect with Managed Identity to Azure Database for PostgreSQL Flexible Server
+# Connect with Managed Identity to Azure Database for PostgreSQL Flexible Server Preview
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-users.md
Last updated 09/26/2022
-# Create users in Azure Database for PostgreSQL - Flexible Server
+# Create users in Azure Database for PostgreSQL - Flexible Server Preview
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
postgresql How To High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-cpu-utilization.md
Title: High CPU Utilization
description: Troubleshooting guide for high cpu utilization in Azure Database for PostgreSQL - Flexible Server +
Last updated 08/03/2022
# Troubleshoot high CPU utilization in Azure Database for PostgreSQL - Flexible Server + This article shows you how to quickly identify the root cause of high CPU utilization, and possible remedial actions to control CPU utilization when using [Azure Database for PostgreSQL - Flexible Server](overview.md).
-In this article, you will learn:
+In this article, you'll learn:
- About tools to identify high CPU utilization such as Azure Metrics, Query Store, and pg_stat_statements. - How to identify root causes, such as long running queries and total connections. - How to resolve high CPU utilization by using Explain Analyze, Connection Pooling, and Vacuuming tables. - ## Tools to identify high CPU utilization Consider these tools to identify high CPU utilization.
Consider these tools to identify high CPU utilization.
Azure Metrics is a good starting point to check the CPU utilization for a given date and time period. Metrics give information about the time duration during which the CPU utilization is high. Compare the graphs of Write IOPs, Read IOPs, Read Throughput, and Write Throughput with CPU utilization to find out times when the workload caused high CPU. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md). - ### Query Store Query Store automatically captures the history of queries and runtime statistics, and it retains them for your review. It slices the data by time so that you can see temporal usage patterns. Data for all users, databases and queries is stored in a database named azure_sys in the Azure Database for PostgreSQL instance. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
The pg_stat_statements extension helps identify queries that consume time on the
##### [Postgres v13 & above](#tab/postgres-13) - For Postgres versions 13 and above, use the following statement to view the top five SQL statements by mean or average execution time: ```postgresql
ORDER BY mean_exec_time
DESC LIMIT 5; ``` - ##### [Postgres v9.6-12](#tab/postgres9-12) For Postgres versions 9.6, 10, 11, and 12, use the following statement to view the top five SQL statements by mean or average execution time: - ```postgresql SELECT userid::regrole, dbid, query FROM pg_stat_statements ORDER BY mean_time DESC LIMIT 5; ```- #### Total execution time
ORDER BY duration DESC;
### Total number of connections and number connections by state
-A large number of connections to the database is also another issue that might lead to increased CPU as well as memory utilization.
+A large number of connections to the database is another issue that might lead to increased CPU and memory utilization.
The following query gives information about the number of connections by state:
For more information about the **EXPLAIN** command, review [Explain Plan](https:
### PGBouncer and connection pooling
-In situations where there are lots of idle connections or lot of connections which are consuming the CPU consider use of a connection pooler like PgBouncer.
+In situations where there are lots of idle connections, or many connections that are consuming CPU, consider the use of a connection pooler like PgBouncer.
For more details about PgBouncer, review:
For more details about PgBouncer, review:
Azure Database for Flexible Server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md) - ### Terminating long running transactions You could consider killing a long running transaction as an option.
-To terminate a session's PID, you will need to detect the PID using the following query:
+To terminate a session's PID, you'll need to detect the PID using the following query:
```postgresql SELECT pid, usename, datname, query, now() - xact_start as duration
ORDER BY duration DESC;
You can also filter by other properties like `usename` (username), `datname` (database name) etc.
-Once you have the session's PID you can terminate using the following query:
+Once you have the session's PID, you can terminate using the following query:
```postgresql SELECT pg_terminate_backend(pid);
SELECT pg_terminate_backend(pid);
Keeping table statistics up to date helps improve query performance. Monitor whether regular autovacuuming is being carried out. - The following query helps to identify the tables that need vacuuming: ```postgresql
select schemaname,relname,n_dead_tup,n_live_tup,last_vacuum,last_analyze,last_au
from pg_stat_all_tables where n_live_tup > 0;   ```
-`last_autovacuum` and `last_autoanalyze` columns give the date and time when the table was last autovacuumed or analyzed. If the tables are not being vacuumed regularly, take steps to tune autovacuum. For more information about autovacuum troubleshooting and tuning, see [Autovacuum Troubleshooting](./how-to-autovacuum-tuning.md).
+`last_autovacuum` and `last_autoanalyze` columns give the date and time when the table was last autovacuumed or analyzed. If the tables aren't being vacuumed regularly, take steps to tune autovacuum. For more information about autovacuum troubleshooting and tuning, see [Autovacuum Troubleshooting](./how-to-autovacuum-tuning.md).
A short-term solution would be to do a manual vacuum analyze of the tables where slow queries are seen:
postgresql How To High Io Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-io-utilization.md
+
+ Title: High IOPS utilization for Azure Database for PostgreSQL - Flexible Server
+description: Troubleshooting guide for high IOPS utilization in Azure Database for PostgreSQL - Flexible Server
++++ Last updated : 08/16/2022+++
+# Troubleshoot high IOPS utilization for Azure Database for PostgreSQL - Flexible Server
+
+This article shows you how to quickly identify the root cause of high IOPS utilization and possible remedial actions to control IOPS utilization when using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+In this article, you learn:
+
+- About tools to identify high IO utilization, such as Azure Metrics, Query Store, and pg_stat_statements.
+- How to identify root causes, such as long-running queries, checkpoint timings, a disruptive autovacuum daemon process, and high storage utilization.
+- How to resolve high IO utilization by using Explain Analyze, tuning checkpoint-related server parameters, and tuning the autovacuum daemon.
+
+## Tools to identify high IO utilization
+
+Consider these tools to identify high IO utilization.
+
+### Azure metrics
+
+Azure Metrics is a good starting point to check the IO utilization for a given date and time period. Metrics give information about the time duration for which the IO utilization is high. Compare the graphs of Write IOPs, Read IOPs, Read Throughput, and Write Throughput to find out times when the workload caused high IO utilization. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
+
+### Query store
+
+Query Store automatically captures the history of queries and runtime statistics and retains them for your review. It slices the data by time to see temporal usage patterns. Data for all users, databases, and queries is stored in a database named azure_sys in the Azure Database for PostgreSQL instance. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
+
+Use the following statement to view the top five SQL statements that consume IO:
+
+```sql
+select * from query_store.qs_view qv where is_system_query is FALSE
+order by blk_read_time + blk_write_time desc limit 5;
+```
+
+### pg_stat_statements
+
+The pg_stat_statements extension helps identify queries that consume IO on the server.
+
+Use the following statement to view the top five SQL statements that consume IO:
+
+```sql
+SELECT userid::regrole, dbid, query
+FROM pg_stat_statements
+ORDER BY blk_read_time + blk_write_time desc
+LIMIT 5;
+```
+
+> [!NOTE]
+> For the blk_read_time and blk_write_time columns to be populated when using Query Store or pg_stat_statements, enable the server parameter `track_io_timing`. For more information about the **track_io_timing** parameter, review [Server Parameters](https://www.postgresql.org/docs/current/runtime-config-statistics.html).
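+
+To quickly check whether the parameter is already enabled on your server, a minimal sketch:
+
+```sql
+SHOW track_io_timing;
+```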
+
+## Identify root causes
+
+If IO consumption levels are high in general, the following could be possible root causes:
+
+### Long-running transactions
+
+Long-running transactions can consume IO, which can lead to high IO utilization.
+
+The following query helps identify connections running for the longest time:
+
+```sql
+SELECT pid, usename, datname, query, now() - xact_start as duration
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
+ORDER BY duration DESC;
+```
+
+### Checkpoint timings
+
+High IO can also be seen in scenarios where a checkpoint is happening too frequently. One way to identify this is by checking the Postgres log file for the following log text "LOG: checkpoints are occurring too frequently."
+
+You can also investigate by saving periodic, timestamped snapshots of `pg_stat_bgwriter`. Using the saved snapshots, the average checkpoint interval, the number of checkpoints requested, and the number of checkpoints timed can be calculated.
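+
+A minimal sketch of such a snapshot; how and where you persist the output (for example, into a history table) is up to you:
+
+```sql
+-- Timestamped snapshot of the cumulative checkpoint counters.
+-- Comparing two snapshots gives the number of checkpoints (timed and requested)
+-- in the interval, from which the average checkpoint interval can be derived.
+SELECT now() AS snapshot_time,
+       checkpoints_timed,
+       checkpoints_req
+FROM pg_stat_bgwriter;
+```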
+
+### Disruptive autovacuum daemon process
+
+Execute the below query to monitor autovacuum:
+
+```sql
+SELECT schemaname, relname, n_dead_tup, n_live_tup, last_vacuum, last_autovacuum, last_autoanalyze, autovacuum_count, autoanalyze_count FROM pg_stat_all_tables WHERE n_live_tup > 0;
+```
+The query is used to check how frequently the tables in the database are being vacuumed.
+
+- **last_autovacuum**: provides the date and time when autovacuum last ran on the table.
+- **autovacuum_count**: provides the number of times the table was vacuumed.
+- **autoanalyze_count**: provides the number of times the table was analyzed.
+
+## Resolve high IO utilization
+
+To resolve high IO utilization, you can examine the query plan with Explain Analyze, terminate long-running transactions, or tune server parameters. In addition, you can tune the autovacuum daemon and increase storage.
+
+### Explain Analyze
+
+Once you identify the query that's consuming high IO, use **EXPLAIN ANALYZE** to further investigate the query and tune it. For more information about the **EXPLAIN ANALYZE** command, review [Explain Plan](https://www.postgresql.org/docs/current/sql-explain.html).
+
+### Terminating long running transactions
+
+You could consider killing a long running transaction as an option.
+
+To terminate a session's PID, you need to detect the PID using the following query:
+
+```sql
+SELECT pid, usename, datname, query, now() - xact_start as duration
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
+ORDER BY duration DESC;
+```
+
+You can also filter by other properties like `usename` (username), `datname` (database name) etc.
+
+Once you have the session's PID, you can terminate using the following query:
+
+```sql
+SELECT pg_terminate_backend(pid);
+```
+
+### Server parameter tuning
+
+If it's observed that checkpoints are happening too frequently, increase the `max_wal_size` server parameter until most checkpoints are time driven instead of requested. Eventually, 90% or more should be time based, and the interval between two checkpoints should be close to the `checkpoint_timeout` set on the server. A sketch for checking this ratio follows.
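+
+A hedged sketch of checking that ratio from the cumulative counters in `pg_stat_bgwriter`:
+
+```sql
+-- Percentage of checkpoints triggered by checkpoint_timeout (time driven)
+-- rather than by WAL volume (requested), since the last statistics reset.
+SELECT checkpoints_timed,
+       checkpoints_req,
+       round(100.0 * checkpoints_timed / nullif(checkpoints_timed + checkpoints_req, 0), 1) AS pct_timed
+FROM pg_stat_bgwriter;
+```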
+
+`max_wal_size`
+
+Peak business hours are a good time to arrive at a `max_wal_size` value. Follow the steps below to arrive at a value.
+
+Execute the below query to get the current WAL LSN, and note down the result:
+
+ ```sql
+select pg_current_wal_lsn();
+```
+
+Wait for `checkpoint_timeout` seconds. Execute the below query to get the current WAL LSN again, and note down the result:
+
+ ```sql
+select pg_current_wal_lsn();
+```
+
+Execute the below query, which uses the two results to check the difference in GB:
+
+ ```sql
+select round (pg_wal_lsn_diff ('LSN value when run second time', 'LSN value when run first time')/1024/1024/1024,2) WAL_CHANGE_GB;
+```
+
+`checkpoint_completion_target`
+
+A good practice would be to set it to 0.9. As an example, a value of 0.9 for a `checkpoint_timeout` of 5 minutes indicates the target to complete a checkpoint is 270 sec [0.9*300 sec]. A value of 0.9 provides a fairly consistent I/O load. A more aggressive (lower) value of `checkpoint_completion_target` may result in increased IO load on the server.
+
+`checkpoint_timeout`
+
+The `checkpoint_timeout` value can be increased from the default value set on the server. While increasing `checkpoint_timeout`, take into consideration that a larger value also increases the time for crash recovery.
+
+### Autovacuum tuning to decrease disruptions
+
+For more details on monitoring and tuning in scenarios where autovacuum is too disruptive, review [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
+
+### Increase storage
+
+Increasing storage also helps by adding more IOPS to the server. For more details on storage and associated IOPS, review [Compute and Storage Options](./concepts-compute-storage.md).
+
+## Next steps
+
+- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md)
+- Compute and Storage Options [Compute and Storage Options](./concepts-compute-storage.md)
+
postgresql How To High Memory Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-memory-utilization.md
Title: High Memory Utilization
description: Troubleshooting guide for high memory utilization in Azure Database for PostgreSQL - Flexible Server +
Last updated 08/03/2022
This article introduces common scenarios and root causes that might lead to high memory utilization in [Azure Database for PostgreSQL - Flexible Server](overview.md).
-In this article, you will learn:
+In this article, you learn:
- Tools to identify high memory utilization. - Reasons for high memory & remedial actions. - ## Tools to identify high memory utilization Consider the following tools to identify high memory utilization.
For proactive monitoring, configure alerts on the metrics. For step-by-step guid
### Query Store Query Store automatically captures the history of queries and their runtime statistics, and it retains them for your review. + Query Store can correlate wait event information with query run time statistics. Use Query Store to identify queries that have high memory consumption during the period of interest. For more information on setting up and using Query Store, review [Query Store](./concepts-query-store.md). - ## Reasons and remedial actions Consider the following reasons and remedial actions for resolving high memory utilization.
The `work_mem` parameter specifies the amount of memory to be used by intern
If the workload has many short-running queries with simple joins and minimal sort operations, it's advised to keep lower `work_mem`. If there are a few active queries with complex joins and sorts, then it's advised to set a higher value for work_mem.
-It is tough to get the value of `work_mem` right. If you notice high memory utilization or out-of-memory issues, consider decreasing `work_mem`.
+It's tough to get the value of `work_mem` right. If you notice high memory utilization or out-of-memory issues, consider decreasing `work_mem`.
A safer setting for `work_mem` is `work_mem = Total RAM / Max_Connections / 16`
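
As a hedged worked example of that formula (the server size and connection limit are hypothetical): with 32 GB of RAM and `max_connections` = 200, `work_mem` ≈ 32768 MB / 200 / 16 ≈ 10 MB. A per-session override looks like this:

```sql
-- Session-level override derived from the formula above; reverts when the session ends.
SET work_mem TO '10MB';
```
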
The default value of `work_mem` = 4 MB. You can set the `work_mem` value on mult
A good strategy is to monitor memory consumption during peak times.
-If disk sorts are happening during this time and there is plenty of unused memory, increase `work_mem` gradually until you're able to reach a good balance between available and used memory
+If disk sorts are happening during this time and there's plenty of unused memory, increase `work_mem` gradually until you're able to reach a good balance between available and used memory.
Similarly, if the memory use looks high, reduce `work_mem`. - #### Maintenance_Work_Mem `maintenance_work_mem` is for maintenance tasks like vacuuming, adding indexes or foreign keys. The usage of memory in this scenario is per session.
If `maintenance_work_mem` is set to 1 GB, then all sessions combined will use 3
A high `maintenance_work_mem` value along with multiple running sessions for vacuuming/index creation/adding foreign keys can cause high memory utilization. The maximum allowed value for the `maintenance_work_mem` server parameter in Azure Database for Flexible Server is 2 GB. - #### Shared buffers The `shared_buffers` parameter determines how much memory is dedicated to the server for caching data. The objective of shared buffers is to reduce DISK I/O.
-A reasonable setting for shared buffers is 25% of RAM. Setting a value of greater than 40% of RAM is not recommended for most common workloads.
+A reasonable setting for shared buffers is 25% of RAM. Setting a value of greater than 40% of RAM isn't recommended for most common workloads.
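
As a hedged arithmetic example (the server size is hypothetical): 25% of a 16 GB server is roughly 4 GB of shared buffers. To see what is currently configured on your server:

```sql
SHOW shared_buffers;
```
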
### Max connections
select count(*) from pg_stat_activity;
When the number of connections to a database is high, memory consumption also increases.
-In situations where there are a lot of database connections, consider using a connection pooler like PgBouncer.
+In situations where there are many database connections, consider using a connection pooler like PgBouncer.
For more details on PgBouncer, review:
For more details on PgBouncer, review:
[Best Practices](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/connection-handling-best-practice-with-postgresql/ba-p/790883). - Azure Database for Flexible Server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md). - ### Explain Analyze
-Once high memory-consuming queries have been identified from Query Store, use **EXPLAIN** and **EXPLAIN ANALYZE** to investigate further and tune them.
-
+Once high memory-consuming queries have been identified from Query Store, use **EXPLAIN** and **EXPLAIN ANALYZE** to investigate further and tune them.
For more information on the **EXPLAIN** command, review [Explain Plan](https://www.postgresql.org/docs/current/sql-explain.html). ## Next steps -- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-high-cpu-utilization.md).
+- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
- Troubleshoot High CPU Utilization [High CPU Utilization](./how-to-high-cpu-utilization.md). - Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).
postgresql How To Manage Azure Ad Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md
description: This article describes how you can manage Azure AD enabled roles to
Previously updated : 10/12/2022 Last updated : 10/12/2022
-# Manage Azure Active Directory (Azure AD) roles in Azure Database for PostgreSQL - Flexible Server
+# Manage Azure Active Directory roles in Azure Database for PostgreSQL - Flexible Server Preview
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-This article describes how you can create Azure AD enabled database roles within an Azure Database for PostgreSQL server.
+This article describes how you can create Azure Active Directory (Azure AD) enabled database roles within an Azure Database for PostgreSQL server.
> [!NOTE] > This guide assumes you already enabled Azure Active Directory authentication on your PostgreSQL Flexible server. > See [How to Configure Azure AD Authentication](./how-to-configure-sign-in-azure-ad-authentication.md)
-> [!NOTE]
+> [!NOTE]
> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview. If you like to learn about how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
If you like to learn about how to create and manage Azure subscription users and
## Create or Delete Azure AD administrators using Azure portal or Azure Resource Manager (ARM) API 1. Open **Authentication** page for your Azure Database for PostgreSQL Flexible Server in Azure portal
-1. To add an administrator - select **Add Azure AD Admin** and select a user, group, application or a managed identity from the current Azure AD tenant.
-1. To remove an administrator - select **Delete** icon for the one to remove.
-1. Select **Save** and wait for provisioning operation to completed.
+2. To add an administrator - select **Add Azure AD Admin** and select a user, group, application or a managed identity from the current Azure AD tenant.
+3. To remove an administrator - select the **Delete** icon for the administrator you want to remove.
+4. Select **Save** and wait for the provisioning operation to complete.
> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/how-to-manage-azure-ad-users/add-aad-principal-via-portal.png" alt-text="Screenshot of managing Azure AD administrators via portal":::
+> :::image type="content" source="./media/how-to-manage-azure-ad-users/add-aad-principal-via-portal.png" alt-text="Screenshot of managing Azure AD administrators via portal.":::
> [!NOTE]
-> Support for Azure AD Administrators management via Azure SDK, az cli and Azure Powershell is coming soon,
+> Support for Azure AD Administrators management via Azure SDK, az cli and Azure PowerShell is coming soon.
## Manage Azure AD roles using SQL
postgresql How To Pgdump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-pgdump-restore.md
+
+ Title: Best practices for PG dump and restore in Azure Database for PostgreSQL - Flexible Server
+description: Best Practices For PG Dump And Restore in Azure Database for PostgreSQL - Flexible Server
+++++ Last updated : 09/16/2022+++
+# Best practices for PG dump and restore for Azure Database for PostgreSQL - Flexible Server
+
+This article reviews options to speed up pg_dump and pg_restore. It also explains the best server configurations for carrying out pg_restore.
+
+## Best practices for pg_dump
+
+pg_dump is a utility that can extract a PostgreSQL database into a script file or archive file. A few of the command-line options that can be used to reduce the overall dump time with pg_dump are listed below.
+
+#### Directory format (-Fd)
+
+This option outputs a directory-format archive that can be input to pg_restore. By default the output is compressed.
+
+#### Parallel jobs (-j)
+
+pg_dump can run dump jobs concurrently using the parallel jobs option. This option reduces the total dump time but increases the load on the database server. It's advised to arrive at a parallel job value after closely monitoring the source server metrics, like CPU, memory, and IOPS usage.
+
+There are a few considerations to take into account when setting this value:
+- pg_dump requires the number of parallel jobs + 1 connections when the parallel jobs option is used, so make sure `max_connections` is set accordingly.
+- The number of parallel jobs should be less than or equal to the number of vCPUs allocated for the database server.
+
+#### Compression (-Z0)
+
+Specifies the compression level to use. Zero means no compression. Using zero compression during the pg_dump process could help with performance gains.
+
+#### Table bloats and vacuuming
+
+Before starting the pg_dump process, consider whether table vacuuming is necessary. Bloat on tables significantly increases pg_dump times. Execute the below query to identify table bloats:
+
+```sql
+select schemaname,relname,n_dead_tup,n_live_tup,round(n_dead_tup::float/n_live_tup::float*100) dead_pct,autovacuum_count,last_vacuum,last_autovacuum,last_autoanalyze,last_analyze from pg_stat_all_tables where n_live_tup >0;
+```
+
+The **dead_pct** column in the above query gives the percentage of dead tuples compared to live tuples. A high dead_pct value for a table might point to the table not being properly vacuumed. For tuning autovacuum, review the article [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
++
+As a one-off measure, perform a manual vacuum analyze of the tables that are identified:
+
+```sql
+vacuum(analyze, verbose) <table_name>
+```
+
+#### Use of PITR [Point In Time Recovery] server
+
+pg_dump can be carried out on an online or live server. It makes consistent backups even if the database is being used, and it doesn't block other users from using the database. Consider the database size and other business or customer needs before the pg_dump process is started. Small databases might be good candidates for carrying out a pg_dump on the production server. For large databases, you could create a PITR (Point In Time Recovery) server from the production server and carry out the pg_dump process on the PITR server. Running pg_dump on a PITR server would be a cold run process. The trade-off of this approach is that you aren't concerned with the extra CPU, memory, and IO utilization that the pg_dump process adds on the actual production server. You can run pg_dump on a PITR server and drop the PITR server once the pg_dump process is completed.
+
+##### Syntax
+
+Use the following syntax to perform a pg_dump:
+
+`pg_dump -h <hostname> -U <username> -d <databasename> -Fd -j <Num of parallel jobs> -Z0 -f sampledb_dir_format`
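+
+As an illustration only (the server name, user, database, and job count below are hypothetical), the command might look like:
+
+`pg_dump -h mydemoserver.postgres.database.azure.com -U myadmin -d sampledb -Fd -j 4 -Z0 -f sampledb_dir_format`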
++
+## Best practices for pg_restore
+
+pg_restore is a utility for restoring a PostgreSQL database from an archive created by pg_dump. A few of the command-line options that can be used to reduce the overall restore time with pg_restore are listed below.
+
+#### Parallel restore
+
+Using multiple concurrent jobs, you can reduce the time it takes to restore a large database on a multi-vCore target server. The number of jobs can be equal to or less than the number of vCPUs allocated for the target server.
+
+#### Server parameters
+
+If you're restoring data to a new server or non-production server, you can optimize the following server parameters prior to running pg_restore.
+
+- `work_mem` = 32 MB
+- `max_wal_size` = 65536 (64 GB)
+- `checkpoint_timeout` = 3600 #60min
+- `maintenance_work_mem` = 2097151 (2 GB)
+- `autovacuum` = off
+- `wal_compression` = on
+
+Once the restore is completed, make sure all the above mentioned parameters are appropriately updated as per workload requirements.
+
+> [!NOTE]
+> Please follow the above recommendations only if there is enough memory and disk space. If you have a small server with 2, 4, or 8 vCores, set the parameters accordingly.
+
+#### Other considerations
+
+- Disable High Availability [HA] prior to running pg_restore.
+- Analyze all migrated tables after the restore completes.
+
+##### Syntax
+
+Use the following syntax for pg_restore:
+
+`pg_restore -h <hostname> -U <username> -d <db name> -Fd -j <NUM> -C <dump directory>`
+
+- `-Fd`: directory format
+- `-j`: number of jobs
+- `-C`: begin the output with a command to create the database itself and reconnect to the created database
+
+Here's an example of how this syntax might appear:
+
+`pg_restore -h <hostname> -U <username> -j <Num of parallel jobs> -Fd -C -d <databasename> sampledb_dir_format`
+
+## Virtual machine considerations
+
+Create a virtual machine in the same region and, preferably, the same availability zone (AZ) as your target and source servers, or at least place the virtual machine close to the source server or the target server. Use of Azure Virtual Machines with high-performance local SSDs is recommended. For more details about the SKUs, review:
+
+[Edv4 and Edsv4-series](/azure/virtual-machines/edv4-edsv4-series)
+
+[Ddv4 and Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series)
+
+## Next steps
+
+- Troubleshoot high CPU utilization [High CPU Utilization](./how-to-high-cpu-utilization.md).
+- Troubleshoot high memory utilization [High Memory Utilization](./how-to-high-memory-utilization.md).
+- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
+- Troubleshoot high IOPS utilization [High IOPS Utilization](./how-to-high-io-utilization.md).
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| Qatar Central | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: | | South Africa North | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :x: | | South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| South India | :x: $$ | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| South India | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | Sweden Central | :heavy_check_mark: | :x: | :heavy_check_mark: |:x: | | Switzerland North | :heavy_check_mark: | :x: $ ** | :heavy_check_mark: | :heavy_check_mark: |
One advantage of running your workload in Azure is global reach. The flexible se
| US Gov Arizona | :heavy_check_mark: | :x: | :heavy_check_mark: |:x: | | US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| UK West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| UK West | :x: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
postgresql Moved https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/moved.md
recommendations: false Previously updated : 10/12/2022 Last updated : 10/13/2022 # Azure Database for PostgreSQL - Hyperscale (Citus) is now Azure Cosmos DB for PostgreSQL
-Existing Hyperscale (Citus) server groups will automatically become Azure
-Cosmos DB for PostgreSQL clusters under the new name, with zero downtime.
-All features and pricing, including reserved compute pricing and
-regional availability, will be preserved under the new name.
+Existing Hyperscale (Citus) server groups will automatically become [Azure
+Cosmos DB for PostgreSQL](../../cosmos-db/postgresql/introduction.md) clusters
+under the new name, with zero downtime. All features and pricing, including
+reserved compute pricing and regional availability, will be preserved under the
+new name.
Once the name change is complete, all Hyperscale (Citus) information such as product overview, pricing information, documentation, and more will be moved
purview How To Policies Data Owner Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-arc-sql-server.md
This guide covers how a data owner can delegate authoring policies in Microsoft
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)] ## Microsoft Purview configuration
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
For a list of metadata available for Power BI, see our [available metadata docum
- You can create only one scan for a Power BI data source that is registered in your Microsoft Purview account. - If Power BI dataset schema isn't shown after scan, it's due to one of the current limitations with [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning). - Empty workspaces are skipped.-- Payload is currently limited to 2MB and 300 columns when scanning an asset.
+- Payload is currently limited to 2MB and 800 columns when scanning an asset.
## Prerequisites
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 09/23/2022 Last updated : 10/12/2022
The following table provides a brief description of each built-in role. Click th
> | [Data Box Contributor](#data-box-contributor) | Lets you manage everything under Data Box Service except giving access to others. | add466c9-e687-43fc-8d98-dfcf8d720be5 | > | [Data Box Reader](#data-box-reader) | Lets you manage Data Box Service except creating order or editing order details and giving access to others. | 028f4ed7-e2a9-465e-a8f4-9c0ffdfdc027 | > | [Data Lake Analytics Developer](#data-lake-analytics-developer) | Lets you submit, monitor, and manage your own jobs but not create or delete Data Lake Analytics accounts. | 47b7735b-770e-4598-a7da-8b91488b4c88 |
+> | [Elastic SAN Owner](#elastic-san-owner) | Allows for full access to all resources under Azure Elastic SAN including changing network security policies to unblock data path access | 80dcbedb-47ef-405d-95bd-188a1b4ac406 |
+> | [Elastic SAN Reader](#elastic-san-reader) | Allows for control path read access to Azure Elastic SAN | af6a70f8-3c9f-4105-acf1-d719e9fca4ca |
+> | [Elastic SAN Volume Group Owner](#elastic-san-volume-group-owner) | Allows for full access to a volume group in Azure Elastic SAN including changing network security policies to unblock data path access | a8281131-f312-4f34-8d98-ae12be9f0d23 |
> | [Reader and Data Access](#reader-and-data-access) | Lets you view everything but will not let you delete or create a storage account or contained resource. It will also allow read/write access to all data contained in a storage account via access to storage account keys. | c12c1c16-33a1-487b-954d-41c89c60f349 | > | [Storage Account Backup Contributor](#storage-account-backup-contributor) | Lets you perform backup and restore operations using Azure Backup on the storage account. | e5e2a7ff-d759-4cd2-bb51-3152d37e2eb1 | > | [Storage Account Contributor](#storage-account-contributor) | Permits management of storage accounts. Provides access to the account key, which can be used to access data via Shared Key authorization. | 17d1049b-9a84-46fb-8f53-869881c3d3ab |
The following table provides a brief description of each built-in role. Click th
> | [Azure Connected SQL Server Onboarding](#azure-connected-sql-server-onboarding) | Allows for read and write access to Azure resources for SQL Server on Arc-enabled servers. | e8113dce-c529-4d33-91fa-e9b972617508 | > | [Cosmos DB Account Reader Role](#cosmos-db-account-reader-role) | Can read Azure Cosmos DB account data. See [DocumentDB Account Contributor](#documentdb-account-contributor) for managing Azure Cosmos DB accounts. | fbdf93bf-df7d-467e-a4d2-9458aa1360c8 | > | [Cosmos DB Operator](#cosmos-db-operator) | Lets you manage Azure Cosmos DB accounts, but not access data in them. Prevents access to account keys and connection strings. | 230815da-be43-4aae-9cb4-875f7bd000aa |
-> | [CosmosBackupOperator](#cosmosbackupoperator) | Can submit restore request for an Azure Cosmos DB database or a container for an account | db7b14f2-5adf-42da-9f96-f2ee17bab5cb |
-> | [CosmosRestoreOperator](#cosmosrestoreoperator) | Can perform restore action for an Azure Cosmos DB database account with continuous backup mode | 5432c526-bc82-444a-b7ba-57c5b0b5b34f |
+> | [CosmosBackupOperator](#cosmosbackupoperator) | Can submit restore request for a Cosmos DB database or a container for an account | db7b14f2-5adf-42da-9f96-f2ee17bab5cb |
+> | [CosmosRestoreOperator](#cosmosrestoreoperator) | Can perform restore action for Cosmos DB database account with continuous backup mode | 5432c526-bc82-444a-b7ba-57c5b0b5b34f |
> | [DocumentDB Account Contributor](#documentdb-account-contributor) | Can manage Azure Cosmos DB accounts. Azure Cosmos DB is formerly known as DocumentDB. | 5bd9cd88-fe45-4216-938b-f97437e15450 | > | [Redis Cache Contributor](#redis-cache-contributor) | Lets you manage Redis caches, but not access to them. | e0f68234-74aa-48ed-b826-c38b57376e17 | > | [SQL DB Contributor](#sql-db-contributor) | Lets you manage SQL databases, but not access to them. Also, you can't manage their security-related policies or their parent SQL servers. | 9b7fa17d-e63e-47b0-bb0a-15c516ac86ec |
Lets you manage backup service, but can't create vaults and give access to other
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/delete | Deletes the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/deletedBackupInstances/read | List soft-deleted Backup Instances in a Backup Vault. |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/deletedBackupInstances/undelete/action | Perform undelete of soft-deleted Backup Instance. Backup Instance moves from SoftDeleted to ProtectionStopped state. |
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/restore/action | Triggers restore on the Backup Instance |
Lets you manage backup service, but can't create vaults and give access to other
"Microsoft.DataProtection/backupVaults/backupInstances/delete", "Microsoft.DataProtection/backupVaults/backupInstances/read", "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/deletedBackupInstances/read",
+ "Microsoft.DataProtection/backupVaults/deletedBackupInstances/undelete/action",
"Microsoft.DataProtection/backupVaults/backupInstances/backup/action", "Microsoft.DataProtection/backupVaults/backupInstances/validateRestore/action", "Microsoft.DataProtection/backupVaults/backupInstances/restore/action",
Lets you manage backup services, except removal of backup, vault creation and gi
> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/deletedBackupInstances/read | List soft-deleted Backup Instances in a Backup Vault. |
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points |
Lets you manage backup services, except removal of backup, vault creation and gi
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. |
-> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/providers/operations/read | |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | Operation returns the list of Operations for a Resource Provider |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/validateForBackup/action | Validates for backup of Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/restore/action | Triggers restore on the Backup Instance |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Lets you manage backup services, except removal of backup, vault creation and gi
"Microsoft.Support/*", "Microsoft.DataProtection/backupVaults/backupInstances/read", "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/deletedBackupInstances/read",
"Microsoft.DataProtection/backupVaults/backupPolicies/read", "Microsoft.DataProtection/backupVaults/backupPolicies/read", "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read",
Lets you manage backup services, except removal of backup, vault creation and gi
"Microsoft.DataProtection/backupVaults/read", "Microsoft.DataProtection/locations/operationStatus/read", "Microsoft.DataProtection/locations/operationResults/read",
- "Microsoft.DataProtection/providers/operations/read"
+ "Microsoft.DataProtection/operations/read",
+ "Microsoft.DataProtection/backupVaults/validateForBackup/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/backup/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/validateRestore/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/restore/action"
], "notActions": [], "dataActions": [],
Can view backup services, but can't make changes [Learn more](../backup/backup-r
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/getBackupStatus/action | Check Backup Status for Recovery Services Vaults | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/write | Creates a Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
-> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/deletedBackupInstances/read | List soft-deleted Backup Instances in a Backup Vault. |
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/restore/action | Triggers restore on the Backup Instance |
Can view backup services, but can't make changes [Learn more](../backup/backup-r
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/validateForBackup/action | Validates for backup of Backup Instance |
-> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/providers/operations/read | |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | Operation returns the list of Operations for a Resource Provider |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Can view backup services, but can't make changes [Learn more](../backup/backup-r
"Microsoft.DataProtection/locations/getBackupStatus/action", "Microsoft.DataProtection/backupVaults/backupInstances/write", "Microsoft.DataProtection/backupVaults/backupInstances/read",
- "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/deletedBackupInstances/read",
"Microsoft.DataProtection/backupVaults/backupInstances/backup/action", "Microsoft.DataProtection/backupVaults/backupInstances/validateRestore/action", "Microsoft.DataProtection/backupVaults/backupInstances/restore/action",
Can view backup services, but can't make changes [Learn more](../backup/backup-r
"Microsoft.DataProtection/locations/operationStatus/read", "Microsoft.DataProtection/locations/operationResults/read", "Microsoft.DataProtection/backupVaults/validateForBackup/action",
- "Microsoft.DataProtection/providers/operations/read"
+ "Microsoft.DataProtection/operations/read"
], "notActions": [], "dataActions": [],
Lets you submit, monitor, and manage your own jobs but not create or delete Data
} ```
+### Elastic SAN Owner
+
+Allows for full access to all resources under Azure Elastic SAN including changing network security policies to unblock data path access
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.ResourceHealth](resource-provider-operations.md#microsoftresourcehealth)/availabilityStatuses/read | Gets the availability statuses for all resources in the specified scope |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.ElasticSan](resource-provider-operations.md#microsoftelasticsan)/elasticSans/* | |
+> | [Microsoft.ElasticSan](resource-provider-operations.md#microsoftelasticsan)/locations/* | |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows for full access to all resources under Azure Elastic SAN including changing network security policies to unblock data path access",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/80dcbedb-47ef-405d-95bd-188a1b4ac406",
+ "name": "80dcbedb-47ef-405d-95bd-188a1b4ac406",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.ResourceHealth/availabilityStatuses/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.ElasticSan/elasticSans/*",
+ "Microsoft.ElasticSan/locations/*"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Elastic SAN Owner",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Elastic SAN Reader
+
+Allows for control path read access to Azure Elastic SAN
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/read | Get information about a role assignment. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleDefinitions/read | Get information about a role definition. |
+> | [Microsoft.ResourceHealth](resource-provider-operations.md#microsoftresourcehealth)/availabilityStatuses/read | Gets the availability statuses for all resources in the specified scope |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.ElasticSan](resource-provider-operations.md#microsoftelasticsan)/elasticSans/*/read | |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows for control path read access to Azure Elastic SAN",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/af6a70f8-3c9f-4105-acf1-d719e9fca4ca",
+ "name": "af6a70f8-3c9f-4105-acf1-d719e9fca4ca",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/roleAssignments/read",
+ "Microsoft.Authorization/roleDefinitions/read",
+ "Microsoft.ResourceHealth/availabilityStatuses/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.ElasticSan/elasticSans/*/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Elastic SAN Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Elastic SAN Volume Group Owner
+
+Allows for full access to a volume group in Azure Elastic SAN including changing network security policies to unblock data path access
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/read | Get information about a role assignment. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleDefinitions/read | Get information about a role definition. |
+> | [Microsoft.ElasticSan](resource-provider-operations.md#microsoftelasticsan)/elasticSans/volumeGroups/* | |
+> | [Microsoft.ElasticSan](resource-provider-operations.md#microsoftelasticsan)/locations/asyncoperations/read | Polls the status of an asynchronous operation. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows for full access to a volume group in Azure Elastic SAN including changing network security policies to unblock data path access",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/a8281131-f312-4f34-8d98-ae12be9f0d23",
+ "name": "a8281131-f312-4f34-8d98-ae12be9f0d23",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/roleAssignments/read",
+ "Microsoft.Authorization/roleDefinitions/read",
+ "Microsoft.ElasticSan/elasticSans/volumeGroups/*",
+ "Microsoft.ElasticSan/locations/asyncoperations/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Elastic SAN Volume Group Owner",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+ ### Reader and Data Access Lets you view everything but will not let you delete or create a storage account or contained resource. It will also allow read/write access to all data contained in a storage account via access to storage account keys.
Lets you manage Azure Cosmos DB accounts, but not access data in them. Prevents
### CosmosBackupOperator
-Can submit restore request for an Azure Cosmos DB database or a container for an account [Learn more](../cosmos-db/role-based-access-control.md)
+Can submit restore request for a Cosmos DB database or a container for an account [Learn more](../cosmos-db/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 09/23/2022 Last updated : 10/12/2022
Click the resource provider name in the following table to see the list of opera
| [Microsoft.ClassicStorage](#microsoftclassicstorage) | | [Microsoft.DataBox](#microsoftdatabox) | | [Microsoft.DataShare](#microsoftdatashare) |
+| [Microsoft.ElasticSan](#microsoftelasticsan) |
| [Microsoft.ImportExport](#microsoftimportexport) | | [Microsoft.NetApp](#microsoftnetapp) | | [Microsoft.Storage](#microsoftstorage) |
Azure service: [Azure Service Health](../service-health/index.yml)
> | Microsoft.ResourceHealth/AvailabilityStatuses/current/read | Gets the availability status for the specified resource | > | Microsoft.ResourceHealth/emergingissues/read | Get Azure services' emerging issues | > | Microsoft.ResourceHealth/events/read | Get Service Health Events for given subscription |
+> | Microsoft.ResourceHealth/events/securityAdvisories/action | Get Security Advisories for a given subscription |
+> | Microsoft.ResourceHealth/events/securityAdvisories/read | Get Security Advisories for a given subscription with restricted view of properties |
> | Microsoft.Resourcehealth/healthevent/Activated/action | Denotes the change in health state for the specified resource | > | Microsoft.Resourcehealth/healthevent/Updated/action | Denotes the change in health state for the specified resource | > | Microsoft.Resourcehealth/healthevent/Resolved/action | Denotes the change in health state for the specified resource |
Azure service: [Azure Service Health](../service-health/index.yml)
> | Microsoft.ResourceHealth/metadata/read | Gets Metadata | > | Microsoft.ResourceHealth/Notifications/read | Receives Azure Resource Manager notifications | > | Microsoft.ResourceHealth/Operations/read | Get the operations available for the Microsoft ResourceHealth |
-> | Microsoft.ResourceHealth/potentialoutages/read | |
+> | Microsoft.ResourceHealth/potentialoutages/read | Get Potential Outages for given subscription |
### Microsoft.Support
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/firewallPolicies/ruleGroups/read | Gets a Firewall Policy Rule Group | > | Microsoft.Network/firewallPolicies/ruleGroups/write | Creates a Firewall Policy Rule Group or Updates an existing Firewall Policy Rule Group | > | Microsoft.Network/firewallPolicies/ruleGroups/delete | Deletes a Firewall Policy Rule Group |
+> | Microsoft.Network/frontdooroperationresults/read | Gets Frontdoor operation result |
+> | Microsoft.Network/frontdooroperationresults/frontdoorResults/read | Gets Frontdoor operation result |
+> | Microsoft.Network/frontdooroperationresults/rulesenginesresults/read | Gets Rules Engine operation result |
> | Microsoft.Network/frontDoors/read | Gets a Front Door | > | Microsoft.Network/frontDoors/write | Creates or updates a Front Door | > | Microsoft.Network/frontDoors/delete | Deletes a Front Door |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/locations/commitInternalAzureNetworkManagerConfiguration/action | Commits Internal AzureNetworkManager Configuration In ANM | > | Microsoft.Network/locations/internalAzureVirtualNetworkManagerOperation/action | Internal AzureVirtualNetworkManager Operation In ANM | > | Microsoft.Network/locations/setLoadBalancerFrontendPublicIpAddresses/action | SetLoadBalancerFrontendPublicIpAddresses targets frontend IP configurations of 2 load balancers. Azure Resource Manager IDs of the IP configurations are provided in the body of the request. |
-> | Microsoft.Network/locations/applicationgatewaywafdynamicmanifest/read | Get the application gateway waf dynamic manifest |
+> | Microsoft.Network/locations/applicationGatewayWafDynamicManifests/read | Get the application gateway waf dynamic manifest |
+> | Microsoft.Network/locations/applicationgatewaywafdynamicmanifests/default/read | Get Application Gateway Waf Dynamic Manifest Default entry |
> | Microsoft.Network/locations/autoApprovedPrivateLinkServices/read | Gets Auto Approved Private Link Services | > | Microsoft.Network/locations/availableDelegations/read | Gets Available Delegations | > | Microsoft.Network/locations/availablePrivateEndpointTypes/read | Gets available Private Endpoint resources |
Azure service: [Azure Data Share](../data-share/index.yml)
> | Microsoft.DataShare/locations/operationResults/read | Reads the locations Data Share is supported in. | > | Microsoft.DataShare/operations/read | Reads all available operations in Data Share Resource Provider. |
+### Microsoft.ElasticSan
+
+Azure service: [Azure Elastic SAN](../storage/elastic-san/index.yml)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.ElasticSan/register/action | Registers the subscription for the ElasticSan resource provider and enables the creation of SAN accounts. |
+> | Microsoft.ElasticSan/elasticSans/read | List ElasticSans by Resource Group |
+> | Microsoft.ElasticSan/elasticSans/read | List ElasticSans by Subscription |
+> | Microsoft.ElasticSan/elasticSans/delete | Delete ElasticSan |
+> | Microsoft.ElasticSan/elasticSans/read | Get Elastic San |
+> | Microsoft.ElasticSan/elasticSans/write | Create/Update Elastic San |
+> | Microsoft.ElasticSan/elasticSans/volumeGroups/read | List VolumeGroups by ElasticSan |
+> | Microsoft.ElasticSan/elasticSans/volumeGroups/delete | Delete Volume Group |
+> | Microsoft.ElasticSan/elasticSans/volumeGroups/read | Get Volume Group |
+> | Microsoft.ElasticSan/elasticSans/volumeGroups/write | Create/Update Volume Group |
+> | Microsoft.ElasticSan/elasticSans/volumeGroups/volumes/delete | Delete Volume |
+> | Microsoft.ElasticSan/elasticSans/volumeGroups/volumes/read | List Volumes by Volume Group |
+> | Microsoft.ElasticSan/elasticSans/volumeGroups/volumes/write | Create/Update Volume |
+> | Microsoft.ElasticSan/elasticSans/volumeGroups/volumes/read | Get Volume |
+> | Microsoft.ElasticSan/locations/asyncoperations/read | Polls the status of an asynchronous operation. |
+> | Microsoft.ElasticSan/operations/read | List the operations supported by Microsoft.ElasticSan |
+> | Microsoft.ElasticSan/skus/read | Get Sku |
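If you'd rather enumerate these operations programmatically than read the table, the provider operations metadata API returns the same entries. Here's a minimal sketch (an illustration, not part of the article), assuming the azure-identity and azure-mgmt-authorization Python packages and a placeholder subscription ID.

```python
# Minimal sketch: list the operations exposed by the Microsoft.ElasticSan resource
# provider, which correspond to the entries in the table above. Assumes the
# azure-identity and azure-mgmt-authorization packages; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

client = AuthorizationManagementClient(DefaultAzureCredential(), "<subscription-id>")
metadata = client.provider_operations_metadata.get("Microsoft.ElasticSan")

# Operations can hang off the provider itself or off one of its resource types.
for op in metadata.operations or []:
    print(f"{op.name}  {op.description}")
for resource_type in metadata.resource_types or []:
    for op in resource_type.operations or []:
        print(f"{op.name}  {op.description}")
```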
+ ### Microsoft.ImportExport Azure service: [Azure Import/Export](../import-export/storage-import-export-service.md)
Azure service: [Container Instances](../container-instances/index.yml)
> | Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the container group. | > | Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the container group. | > | Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for container group. |
+> | Microsoft.ContainerInstance/locations/validateDeleteVirtualNetworkOrSubnets/action | Notifies Microsoft.ContainerInstance that virtual network or subnet is being deleted. |
> | Microsoft.ContainerInstance/locations/deleteVirtualNetworkOrSubnets/action | Notifies Microsoft.ContainerInstance that virtual network or subnet is being deleted. | > | Microsoft.ContainerInstance/locations/cachedImages/read | Gets the cached images for the subscription in a region. | > | Microsoft.ContainerInstance/locations/capabilities/read | Get the capabilities for a region. |
Azure service: [Azure Database Migration Service](../dms/index.yml)
> | Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys | > | Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | | > | Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | Retrieve the Monitoring Data |
+> | Microsoft.DataMigration/sqlMigrationServices/validateIR/action | |
> | Microsoft.DataMigration/sqlMigrationServices/read | Retrieve all services in the Subscription | > | Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | | > | Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data |
Azure service: [Azure Database for MySQL](../mysql/index.yml)
> | Action | Description | > | | | > | Microsoft.DBforMySQL/getPrivateDnsZoneSuffix/action | Gets the private dns zone suffix. |
-> | Microsoft.DBforMySQL/assessForMigration/action | Performs a migration assessment with the specified parameters. |
> | Microsoft.DBforMySQL/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection | > | Microsoft.DBforMySQL/register/action | Register MySQL Resource Provider | > | Microsoft.DBforMySQL/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. |
Azure service: [Azure Database for MySQL](../mysql/index.yml)
> | Microsoft.DBforMySQL/flexibleServers/replicas/read | Returns the list of read replicas for a MySQL server | > | Microsoft.DBforMySQL/locations/checkVirtualNetworkSubnetUsage/action | Checks the subnet usage for specified delegated virtual network. | > | Microsoft.DBforMySQL/locations/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. |
+> | Microsoft.DBforMySQL/locations/assessForMigration/action | Performs a migration assessment with the specified parameters. |
> | Microsoft.DBforMySQL/locations/administratorAzureAsyncOperation/read | Gets in-progress operations on MySQL server administrators | > | Microsoft.DBforMySQL/locations/administratorOperationResults/read | Return MySQL Server administrator operation results | > | Microsoft.DBforMySQL/locations/azureAsyncOperation/read | Return MySQL Server Operation Results |
Azure service: [Azure Data Lake Store](../storage/blobs/data-lake-storage-introd
> | Microsoft.DataLakeStore/accounts/delete | Delete a DataLakeStore account. | > | Microsoft.DataLakeStore/accounts/enableKeyVault/action | Enable KeyVault for a DataLakeStore account. | > | Microsoft.DataLakeStore/accounts/Superuser/action | Grant Superuser on Data Lake Store when granted with Microsoft.Authorization/roleAssignments/write. |
-> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/read | Get information about an Azure Cosmos DB Cert Mapping. |
-> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/write | Create or update an Azure Cosmos DB Cert Mapping. |
-> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/delete | Delete an Azure Cosmos DB Cert Mapping. |
+> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/read | Get information about a Cosmos Cert Mapping. |
+> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/write | Create or update a Cosmos Cert Mapping. |
+> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/delete | Delete a Cosmos Cert Mapping. |
> | Microsoft.DataLakeStore/accounts/eventGridFilters/read | Get an EventGrid Filter. | > | Microsoft.DataLakeStore/accounts/eventGridFilters/write | Create or update an EventGrid Filter. | > | Microsoft.DataLakeStore/accounts/eventGridFilters/delete | Delete an EventGrid Filter. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Language/read | Gets the details of a project. Lists the existing projects.* | > | Microsoft.CognitiveServices/accounts/Language/:import/action | Triggers a job to import a project. If a project with the same name already exists, the data of that project is replaced. | > | Microsoft.CognitiveServices/accounts/Language/:train/action | Triggers a training job for a project. |
+> | Microsoft.CognitiveServices/accounts/Language/:migratefromluis/action | Triggers a job to migrate one or more LUIS apps. |
> | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobscancel/action | Cancel a long-running analysis job on conversation. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobs/action | Submit a long conversation for analysis. Specify one or more unique tasks to be executed as a long-running operation. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobs/read | Get the status of an analysis job. A job may consist of one or more tasks. Once all tasks are succeeded, the job will transition to the succeeded state and results will be available for each task. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Language/analyze-text/internal/projects/autotag/jobs/read | Get autotagging jobs. Get auto tagging job status and result details.* | > | Microsoft.CognitiveServices/accounts/Language/analyze-text/internal/projects/export/jobs/result/read | Get export job result details. | > | Microsoft.CognitiveServices/accounts/Language/analyze-text/internal/projects/models/read | Get a trained model info. Get trained models info.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/internal/projects/models/modelguidance/read | Get trained model guidance. |
> | Microsoft.CognitiveServices/accounts/Language/analyze-text/jobs/read | Get the status of an analysis job. A job may consist of one or more tasks. Once all tasks are completed, the job will transition to the completed state and results will be available for each task. | > | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/write | Creates a new or update a project. | > | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/delete | Deletes a project. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Language/global/prebuilt-entities/read | Lists the supported prebuilt entities that can be used while creating composed entities. | > | Microsoft.CognitiveServices/accounts/Language/global/training-config-versions/read | Lists the support training config version for a given project type. | > | Microsoft.CognitiveServices/accounts/Language/import/jobs/read | Gets the status for an import. |
+> | Microsoft.CognitiveServices/accounts/Language/migratefromluis/jobs/read | Gets the status of a migration job of a batch of LUIS apps. |
> | Microsoft.CognitiveServices/accounts/Language/models/delete | Deletes an existing trained model. | > | Microsoft.CognitiveServices/accounts/Language/models/read | Gets the details of a trained model. Lists the trained models belonging to a project.* | > | Microsoft.CognitiveServices/accounts/Language/models/:load-snapshot/action | Restores the snapshot of this trained model to be the current working directory of the project. |
Azure service: [Machine Learning](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/workspaces/jobs/cancel/action | Cancel Jobs in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/jobs/operationresults/read | Reads Jobs in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/labeling/export/action | Export labels of labeling projects in Machine Learning Services Workspace(s) |
+> | Microsoft.MachineLearningServices/workspaces/labeling/labelimport/action | Import labels into labeling projects in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/labeling/labels/read | Gets labels of labeling projects in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/labeling/labels/write | Creates labels of labeling projects in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action | Reject labels of labeling projects in Machine Learning Services Workspace(s) |
Azure service: [Azure Stack](/azure-stack/)
> | Microsoft.AzureStack/linkedSubscriptions/linkedResourceGroups/action | Reads or Writes to a projected linked resource under the linked resource group | > | Microsoft.AzureStack/linkedSubscriptions/linkedProviders/action | Reads or Writes to a projected linked resource under the given linked resource provider namespace | > | Microsoft.AzureStack/linkedSubscriptions/operations/action | Get or list statuses of async operations on projected linked resources |
+> | Microsoft.AzureStack/linkedSubscriptions/linkedResourceGroups/linkedProviders/virtualNetworks/read | Get or list virtual network |
> | Microsoft.AzureStack/Operations/read | Gets the properties of a resource provider operation | > | Microsoft.AzureStack/registrations/read | Gets the properties of an Azure Stack registration | > | Microsoft.AzureStack/registrations/write | Creates or updates an Azure Stack registration |
Azure service: [Azure Arc](../azure-arc/index.yml)
> | | | > | Microsoft.HybridCompute/register/action | Registers the subscription for the Microsoft.HybridCompute Resource Provider | > | Microsoft.HybridCompute/unregister/action | Unregisters the subscription for Microsoft.HybridCompute Resource Provider |
+> | Microsoft.HybridCompute/batch/action | Batch deletes Azure Arc machines |
> | Microsoft.HybridCompute/locations/operationresults/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider | > | Microsoft.HybridCompute/locations/operationstatus/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider | > | Microsoft.HybridCompute/locations/privateLinkScopes/read | Reads the full details of any Azure Arc privateLinkScopes |
Azure service: [Azure Arc](../azure-arc/index.yml)
> | Microsoft.HybridCompute/privateLinkScopes/privateEndpointConnections/write | Writes an Azure Arc privateEndpointConnections | > | Microsoft.HybridCompute/privateLinkScopes/privateEndpointConnections/delete | Deletes an Azure Arc privateEndpointConnections | > | **DataAction** | **Description** |
+> | Microsoft.HybridCompute/locations/publishers/extensionTypes/versions/read | Returns a list of versions for extensionMetadata based on query parameters. |
> | Microsoft.HybridCompute/machines/login/action | Log in to an Azure Arc machine as a regular user | > | Microsoft.HybridCompute/machines/loginAsAdmin/action | Log in to an Azure Arc machine with Windows administrator or Linux root user privilege | > | Microsoft.HybridCompute/machines/WACloginAsAdmin/action | Lets you manage the OS of your resource via Windows Admin Center as an administrator. |
Azure service: [Azure Kubernetes Service (AKS)](/azure/aks/)
> | Microsoft.KubernetesConfiguration/privateLinkScopes/privateEndpointConnectionProxies/read | Gets private endpoint connection proxy. | > | Microsoft.KubernetesConfiguration/privateLinkScopes/privateEndpointConnectionProxies/validate/action | Validates private endpoint connection proxy object. | > | Microsoft.KubernetesConfiguration/privateLinkScopes/privateEndpointConnectionProxies/updatePrivateEndpointProperties/action | Updates patch on private endpoint connection proxy. |
-> | Microsoft.KubernetesConfiguration/privateLinkScopes/privateEndpointConnectionProxies/operations/read | Updates patch on private endpoint connection proxy. |
+> | Microsoft.KubernetesConfiguration/privateLinkScopes/privateEndpointConnectionProxies/operations/read | Gets private endpoint connection proxies operation. |
> | Microsoft.KubernetesConfiguration/privateLinkScopes/privateEndpointConnections/write | Creates or updates private endpoint connection. | > | Microsoft.KubernetesConfiguration/privateLinkScopes/privateEndpointConnections/delete | Deletes private endpoint connection. | > | Microsoft.KubernetesConfiguration/privateLinkScopes/privateEndpointConnections/read | Gets private endpoint connection. |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | | | > | Microsoft.RecoveryServices/register/action | Registers subscription for given Resource Provider | > | Microsoft.RecoveryServices/unregister/action | Unregisters subscription for given Resource Provider |
-> | Microsoft.RecoveryServices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
-> | Microsoft.RecoveryServices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Locations/backupPreValidateProtection/action | |
-> | Microsoft.RecoveryServices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
-> | Microsoft.RecoveryServices/Locations/backupValidateFeatures/action | Validate Features |
+> | microsoft.recoveryservices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
+> | microsoft.recoveryservices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
+> | microsoft.recoveryservices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
+> | microsoft.recoveryservices/Locations/backupPreValidateProtection/action | |
+> | microsoft.recoveryservices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | microsoft.recoveryservices/Locations/backupValidateFeatures/action | Validate Features |
> | Microsoft.RecoveryServices/locations/allocateStamp/action | AllocateStamp is internal operation used by service | > | Microsoft.RecoveryServices/locations/checkNameAvailability/action | Check Resource Name Availability is an API to check if resource name is available | > | Microsoft.RecoveryServices/locations/allocatedStamp/read | GetAllocatedStamp is internal operation used by service |
-> | Microsoft.RecoveryServices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
-> | Microsoft.RecoveryServices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Locations/backupProtectedItem/write | Create a backup Protected Item |
-> | Microsoft.RecoveryServices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | microsoft.recoveryservices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
+> | microsoft.recoveryservices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
+> | microsoft.recoveryservices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
+> | microsoft.recoveryservices/Locations/backupProtectedItem/write | Create a backup Protected Item |
+> | microsoft.recoveryservices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
> | Microsoft.RecoveryServices/locations/operationStatus/read | Gets Operation Status for a given Operation | > | Microsoft.RecoveryServices/operations/read | Operation returns the list of Operations for a Resource Provider |
-> | Microsoft.RecoveryServices/Vaults/backupJobsExport/action | Export Jobs |
-> | Microsoft.RecoveryServices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
+> | microsoft.recoveryservices/Vaults/backupJobsExport/action | Export Jobs |
+> | microsoft.recoveryservices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
+> | microsoft.recoveryservices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/write | Create Vault operation creates an Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/read | The Get Vault operation gets an object representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/delete | The Delete Vault operation deletes the specified Azure resource of type 'vault' |
-> | Microsoft.RecoveryServices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
-> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
-> | Microsoft.RecoveryServices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/cancel/action | Cancel the Job |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/read | Returns all Job Objects |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/delete | Delete a Protection Policy |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/read | Returns all Protection Policies |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/write | Creates Protection Policy |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
-> | Microsoft.RecoveryServices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
-> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services . |
-> | Microsoft.RecoveryServices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
+> | microsoft.recoveryservices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupDeletedProtectionContainers/read | Returns all containers belonging to the subscription |
+> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
+> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
+> | microsoft.recoveryservices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
+> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
+> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
+> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
+> | microsoft.recoveryservices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
+> | microsoft.recoveryservices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
+> | microsoft.recoveryservices/Vaults/backupJobs/cancel/action | Cancel the Job |
+> | microsoft.recoveryservices/Vaults/backupJobs/read | Returns all Job Objects |
+> | microsoft.recoveryservices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
+> | microsoft.recoveryservices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
+> | microsoft.recoveryservices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupPolicies/delete | Delete a Protection Policy |
+> | microsoft.recoveryservices/Vaults/backupPolicies/read | Returns all Protection Policies |
+> | microsoft.recoveryservices/Vaults/backupPolicies/write | Creates Protection Policy |
+> | microsoft.recoveryservices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
+> | microsoft.recoveryservices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
+> | microsoft.recoveryservices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
+> | microsoft.recoveryservices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
+> | microsoft.recoveryservices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
+> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
+> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
+> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
+> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
+> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
+> | microsoft.recoveryservices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
+> | microsoft.recoveryservices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/certificates/write | The Update Resource Certificate operation updates the resource/vault credential certificate. | > | Microsoft.RecoveryServices/Vaults/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/write | Configures e-mail notifications to Recovery services vault. | > | Microsoft.RecoveryServices/Vaults/operationResults/read | The Get Operation Results operation can be used get the operation status and result for the asynchronously submitted operation | > | Microsoft.RecoveryServices/Vaults/operationStatus/read | Gets Operation Status for a given Operation |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/read | Returns all the private endpoint connections. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
> | Microsoft.RecoveryServices/Vaults/privateLinkResources/read | Returns all the private link resources. | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/read | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/write | Azure Backup Diagnostics |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/read | Read any | > | Microsoft.RecoveryServices/vaults/replicationVaultSettings/write | Create or Update any | > | Microsoft.RecoveryServices/vaults/replicationvCenters/read | Read any vCenters |
-> | Microsoft.RecoveryServices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
> | Microsoft.RecoveryServices/vaults/usages/read | Read any Vault Usages | > | Microsoft.RecoveryServices/Vaults/vaultTokens/read | The Vault Token operation can be used to get Vault Token for vault level backend operations. |
search Cognitive Search Quickstart Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-blob.md
Last updated 05/31/2022
# Quickstart: Create an Azure Cognitive Search skillset in the Azure portal
-Learn how AI enrichment in Azure Cognitive Search adds Optical Character Recognition (OCR), image analysis, language detection, text translation, and entity recognition to create searchable content in a search index.
+In this quickstart, you'll learn how AI enrichment in Azure Cognitive Search adds Optical Character Recognition (OCR), image analysis, language detection, text translation, and entity recognition to create text-searchable content in a search index.
-In this quickstart, you'll run the **Import data** wizard to apply skills that transform and enrich content during indexing. Output is a searchable index containing AI-generated image text, captions, and entities. Enriched content is queryable in the portal using [Search explorer](search-explorer.md).
+You'll run the **Import data** wizard in the Azure portal to apply skills that transform and enrich content during indexing. Output is a searchable index containing AI-generated image text, captions, and entities. Enriched content is queryable in the portal using [Search explorer](search-explorer.md).
To prepare, you'll create a few resources and upload sample files before running the wizard.
-Prefer to start with code? Try the [.NET tutorial](cognitive-search-tutorial-blob-dotnet.md), [Python tutorial](cognitive-search-tutorial-blob-python.md), or [REST tutorial](cognitive-search-tutorial-blob-dotnet.md) instead.
- ## Prerequisites Before you begin, have the following prerequisites in place:
In the following steps, set up a blob container in Azure Storage to store hetero
1. In Azure portal, open your Azure Storage page and create a container. You can use the default public access level.
-1. In Container, select **Upload** to upload the sample files you downloaded in the first step. Notice that you have a wide range of content types, including images and application files that are not full text searchable in their native formats.
+1. In Container, select **Upload** to upload the sample files you downloaded in the first step. Notice that you have a wide range of content types, including images and application files that aren't full text searchable in their native formats.
:::image type="content" source="media/cognitive-search-quickstart-blob/sample-data.png" alt-text="Screenshot of source files in Azure Blob Storage." border="false":::
-You are now ready to move on the Import data wizard.
+You're now ready to move on to the Import data wizard.
## Run the Import data wizard
You are now ready to move on the Import data wizard.
Next, configure AI enrichment to invoke OCR, image analysis, and natural language processing.
-1. For this quickstart, we are using the **Free** Cognitive Services resource. The sample data consists of 14 files, so the free allotment of 20 transaction on Cognitive Services is sufficient for this quickstart.
+1. For this quickstart, we're using the **Free** Cognitive Services resource. The sample data consists of 14 files, so the free allotment of 20 transactions on Cognitive Services is sufficient for this quickstart.
:::image type="content" source="media/cognitive-search-quickstart-blob/cog-search-attach.png" alt-text="Screenshot of the Attach Cognitive Services tab." border="true":::
For this quickstart, the wizard does a good job setting reasonable defaults:
:::image type="content" source="media/cognitive-search-quickstart-blob/index-fields.png" alt-text="Screenshot of the index definition page." border="true":::
-Marking a field as **Retrievable** does not mean that the field *must* be present in the search results. You can control search results composition by using the **$select** query parameter to specify which fields to include.
+Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can control search results composition by using the **$select** query parameter to specify which fields to include.
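For instance, a query that asks for only a couple of retrievable fields keeps the response small. Here's a minimal sketch (not part of the wizard steps) using the azure-search-documents Python client; the endpoint, API key, index name, and field names are placeholders rather than values produced by this quickstart.

```python
# Minimal sketch: use select (the SDK form of the $select query parameter) to
# control which retrievable fields come back in search results. The endpoint,
# API key, index name, and field names below are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="<your-index>",
    credential=AzureKeyCredential("<query-api-key>"),
)

results = client.search(search_text="*", select=["metadata_storage_name", "keyPhrases"])
for doc in results:
    print(doc)
```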
Continue to the next page.
The indexer drives the indexing process. It specifies the data source name, a ta
:::image type="content" source="media/cognitive-search-quickstart-blob/indexer-def.png" alt-text="Screenshot of the indexer definition page." border="true":::
-1. Click **Submit** to create and simultaneously run the indexer.
+1. Select **Submit** to create and simultaneously run the indexer.
## Monitor status
Cognitive skills indexing takes longer to complete than typical text-based index
To check details about execution status, select an indexer from the list, and then select **Success** (or **Failed**) to view execution details.
-In this demo, there is one warning: `"Could not execute skill because one or more skill input was invalid." It tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. This warning occurs because the upstream OCR skill didn't recognize any text in the image, and thus could not provide a text input to the downstream Entity Recognition skill.
+In this demo, there's one warning: `"Could not execute skill because one or more skill input was invalid."` It tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. This warning occurs because the upstream OCR skill didn't recognize any text in the image, and thus couldn't provide a text input to the downstream Entity Recognition skill.
Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you'll begin to notice patterns and learn which warnings are safe to ignore.
Results are returned as verbose JSON, which can be hard to read, especially in l
Query strings are case-sensitive, so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify name and case.
- :::image type="content" source="media/cognitive-search-quickstart-blob/search-explorer.png" alt-text="Screenshot of the the Search explorer page." border="true":::
+ :::image type="content" source="media/cognitive-search-quickstart-blob/search-explorer.png" alt-text="Screenshot of the Search explorer page." border="true":::
## Takeaways
Some key concepts that we hope you picked up include the dependency on Azure dat
Another important concept is that skills operate over content types, and when working with heterogeneous content, some inputs will be skipped. Also, large files or fields might exceed the indexer limits of your service tier. It's normal to see warnings when these events occur.
-Output is directed to a search index, and there is a mapping between name-value pairs created during indexing and individual fields in your index. Internally, the portal sets up [annotations](cognitive-search-concept-annotations-syntax.md) and defines a [skillset](cognitive-search-defining-skillset.md), establishing the order of operations and general flow. These steps are hidden in the portal, but when you start writing code, these concepts become important.
+Output is directed to a search index, and there's a mapping between name-value pairs created during indexing and individual fields in your index. Internally, the portal sets up [annotations](cognitive-search-concept-annotations-syntax.md) and defines a [skillset](cognitive-search-defining-skillset.md), establishing the order of operations and general flow. These steps are hidden in the portal, but when you start writing code, these concepts become important.
Finally, you learned that you can verify content by querying the index. In the end, what Azure Cognitive Search provides is a searchable index, which you can query using either the [simple](/rest/api/searchservice/simple-query-syntax-in-azure-search) or [fully extended query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search). An index containing enriched fields is like any other. If you want to incorporate standard or [custom analyzers](search-analyzers.md), [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [synonyms](search-synonyms.md), [faceted navigation](search-faceted-navigation.md), geo-search, or any other Azure Cognitive Search feature, you can certainly do so.
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Knowledge Store Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-create-portal.md
# Quickstart: Create a knowledge store in the Azure portal
-[Knowledge store](knowledge-store-concept-intro.md) is a feature of Azure Cognitive Search that accepts output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) and makes it available in Azure Storage for downstream apps and workloads.
+In this quickstart, you'll create a [knowledge store](knowledge-store-concept-intro.md) that serves as a repository for output created from an [AI enrichment pipeline](cognitive-search-concept-intro.md). A knowledge store makes enriched content available in Azure Storage for downstream apps and workloads, for other work besides full text search.
-In this quickstart, you'll set up some sample data and then run the **Import data** wizard to create an enrichment pipeline that also generates a knowledge store. The knowledge store will contain original text content pulled from the source (customer reviews of a hotel), plus AI-generated content that includes a sentiment label, key phrase extraction, and text translation of non-English customer comments.
-
-> [!NOTE]
-> This quickstart shows you the fastest route to a finished knowledge store in Azure Storage. For more detailed explanations of each step, see [Create a knowledge store in REST](knowledge-store-create-rest.md) instead.
+First, you'll set up some sample data. Then, you'll run the **Import data** wizard to create an enrichment pipeline that also generates a knowledge store. The knowledge store will contain original text content pulled from the data source (customer reviews of a hotel), plus AI-generated content that includes a sentiment label, key phrase extraction, and text translation of non-English customer comments.
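Because projections land in ordinary Azure Storage tables and blobs, any downstream app can read them with the standard Storage clients. As a preview of that, here's a minimal sketch using the azure-data-tables Python package; the connection string is a placeholder, and the table name assumes the projection naming used later in this walkthrough.

```python
# Minimal sketch: read rows from a knowledge store table projection with the
# azure-data-tables client. The connection string is a placeholder, and the table
# name assumes the projection naming used later in this walkthrough.
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
table = service.get_table_client("hotelReviewssKeyPhrases")

for entity in table.list_entities():
    print(entity)
```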
## Prerequisites
-This quickstart uses the following
+Before you begin, have the following prerequisites in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
This quickstart uses the following
[Upload the file to a blob container](../storage/blobs/storage-quickstart-blobs-portal.md) in Azure Storage.
-This quickstart also uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an additional Cognitive Services resource.
+This quickstart also uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an extra Cognitive Services resource.
## Start the wizard 1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, click **Import data** on the command bar to create a knowledge store in four steps.
+1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) and on the Overview page, select **Import data** on the command bar to create a knowledge store in four steps.
:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
In the **Overview** page, open the **Indexers** tab in the middle of the page, a
You should see three tables, one for each projection that was offered in the "Save enrichments" section of the "Add enrichments" page.
- + "hotelReviewssDocuments" contains all of the first-level nodes of a document's enrichment tree that are not collections.
+ + "hotelReviewssDocuments" contains all of the first-level nodes of a document's enrichment tree that aren't collections.
+ "hotelReviewssKeyPhrases" contains a long list of just the key phrases extracted from all reviews. Skills that output collections (arrays), such as key phrases and entities, will have output sent to a standalone table.
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
> [!TIP] > If you want to repeat this exercise or try a different AI enrichment walkthrough, delete the **hotel-reviews-idxr** indexer and the related objects to recreate them. Deleting the indexer resets the free daily transaction counter to zero.
search Search Create App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-app-portal.md
Previously updated : 07/26/2022 Last updated : 10/13/2022 # Quickstart: Create a demo app in the portal (Azure Cognitive Search)
-Use the Azure portal's **Create demo app** wizard to generate a downloadable, "localhost"-style web app that runs in a browser. Depending on its configuration, the generated app is operational on first use, with a live read-only connection to an index on your search service. A default app can include a search bar, results area, sidebar filters, and typeahead support.
+In this quickstart, you'll use the Azure portal's **Create demo app** wizard to generate a downloadable, "localhost"-style web app that runs in a browser. Depending on its configuration, the generated app is operational on first use, with a live read-only connection to an index on your search service. A default app can include a search bar, results area, sidebar filters, and typeahead support.
-The demo app can help you visualize how an index will function in a client app, but it isn't intended for production scenarios. Production apps should include security, error handling, and hosting logic that the demo app doesn't provide. When you're ready to create a client app, see [Create your first search app using the .NET SDK](tutorial-csharp-create-first-app.md) for next steps.
+A demo app can help you visualize how an index will function in a client app, but it isn't intended for production scenarios. Production apps should include security, error handling, and hosting logic that the demo app doesn't provide.
## Prerequisites
-Before you begin, you must have the following:
+Before you begin, have the following prerequisites in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
Previously updated : 03/22/2022 Last updated : 10/13/2022 # Create an Azure Cognitive Search service in the portal [**Azure Cognitive Search**](search-what-is-azure-search.md) is an Azure resource used for adding a full text search experience to custom apps.
-You can create search service using the [Azure portal](https://portal.azure.com/), which is covered in this article. You can also use [Azure PowerShell](search-manage-powershell.md), [Azure CLI](/cli/azure/search), the [Management REST API](/rest/api/searchmanagement/), an [Azure Resource Manager service template](https://azure.microsoft.com/resources/templates/azure-search-create/), or a [Bicep file](search-get-started-bicep.md).
+If you have an Azure subscription, including a [trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F), you can create a search service for free. Free services have limitations, but you can complete all of the quickstarts and most tutorials.
+
+The easiest way to create a search service is to use the [Azure portal](https://portal.azure.com/), which is covered in this article. You can also use [Azure PowerShell](search-manage-powershell.md), [Azure CLI](/cli/azure/search), the [Management REST API](/rest/api/searchmanagement/), an [Azure Resource Manager service template](https://azure.microsoft.com/resources/templates/azure-search-create/), or a [Bicep file](search-get-started-bicep.md).
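If you prefer code over the portal or CLI, a management SDK is another option. The sketch below assumes the `azure-mgmt-search` and `azure-identity` Python packages and placeholder values for the subscription, resource group, region, and service name.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.search import SearchManagementClient

# Assumed placeholders; replace with your own subscription ID, resource group, and service name.
client = SearchManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create (or update) a search service on the Free tier; use "basic" or "standard" for billable tiers.
poller = client.services.begin_create_or_update(
    "<resource-group>",
    "<service-name>",
    {"location": "eastus", "sku": {"name": "free"}},
)
service = poller.result()
print(service.name, service.sku.name, service.status)
```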
[![Animated GIF](./media/search-create-service-portal/AnimatedGif-AzureSearch-small.gif)](./media/search-create-service-portal/AnimatedGif-AzureSearch.gif#lightbox)
To try search for free, you have two options:
+ Alternatively, [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services.
-Paid (or billable) search becomes effective when you choose a billable tier (Basic or above) and create the resource.
+Paid (or billable) search becomes effective when you choose a billable tier (Basic or above) while creating the resource.
## Find the Azure Cognitive Search offering
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
layout: LandingPage Previously updated : 06/21/2022 Last updated : 10/13/2022 # Data sources gallery
Secure enterprise search connector for reliably indexing content from OpenText D
+### Drupal
+
+by [Raytion](https://www.raytion.com/contact)
+
+Raytion's Drupal Connector indexes content from Drupal into Azure Cognitive Search so that you can access and explore all pages and attachments published by Drupal alongside content from other corporate systems in Azure Cognitive Search.
+
+[More details](https://www.raytion.com/connectors/raytion-drupal-connector)
++
+ :::column-end:::
+ :::column span="":::
+ :::column-end:::
+++++ ### Egnyte by [BA Insight](https://www.bainsight.com/)
Secure enterprise search connector for reliably indexing content from Google Dri
+### Happeo
+
+by [Raytion](https://www.raytion.com/contact)
+
+Raytion's Happeo Connector indexes content from Happeo into Azure Cognitive Search and keeps track of all changes, whether for your company-wide enterprise search platform or for vibrant social collaboration environments. It guarantees an up-to-date Azure Cognitive Search index and advances knowledge sharing.
+
+[More details](https://www.raytion.com/connectors/raytion-happeo-connector)
++
+ :::column-end:::
+ :::column span="":::
+ :::column-end:::
++++++ ### HP Consolidated Archive (EAS) by [BA Insight](https://www.bainsight.com/)
Secure enterprise search connector for reliably indexing content from Microsoft
+### Zendesk Guide
+
+by [Raytion](https://www.raytion.com/contact)
+
+Raytion's Zendesk Guide Connector indexes content from Zendesk Guide into Azure Cognitive Search and keeps track of all changes, whether for your company-wide enterprise search platform or for a knowledge search for customers or agents. It guarantees an up-to-date Azure Cognitive Search index and advances knowledge sharing.
+
+[More details](https://www.raytion.com/connectors/raytion-zendesk-guide-connector)
++
+ :::column-end:::
+ :::column span="":::
+ :::column-end:::
+++++ :::column-end::: :::column span="":::
search Search Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-explorer.md
Previously updated : 12/01/2021 Last updated : 10/13/2022 # Quickstart: Use Search explorer to run queries in the portal
-**Search explorer** is a built-in query tool in the Azure portal used for running queries against a search index in Azure Cognitive Search. This tool makes it easy to learn query syntax, test a query or filter expression, or confirm data refresh by checking whether new content exists in the index.
+In this quickstart, you'll learn how to use **Search explorer**, a built-in query tool in the Azure portal used for running queries against a search index in Azure Cognitive Search. This tool makes it easy to learn query syntax, test a query or filter expression, or confirm data refresh by checking whether new content exists in the index.
This quickstart uses an existing index to demonstrate Search explorer.
Before you begin, have the following prerequisites in place:
In Search explorer, requests are formulated using the [Search REST API](/rest/api/searchservice/search-documents), with responses returned as verbose JSON documents.
-For a first look at content, execute an empty search by clicking **Search** with no terms provided. An empty search is useful as a first query because it returns entire documents so that you can review document composition. On an empty search, there is no search rank and documents are returned in arbitrary order (`"@search.score": 1` for all documents). By default, 50 documents are returned in a search request.
+For a first look at content, execute an empty search by clicking **Search** with no terms provided. An empty search is useful as a first query because it returns entire documents so that you can review document composition. On an empty search, there's no search rank and documents are returned in arbitrary order (`"@search.score": 1` for all documents). By default, 50 documents are returned in a search request.
Equivalent syntax for an empty search is `*` or `search=*`.
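The same empty search can also be issued from code. Here's a rough sketch that assumes the `azure-search-documents` Python package and placeholder values for the service endpoint, query key, and index name.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Assumed placeholders; substitute your own service endpoint, query API key, and index name.
client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="realestate-us-sample-index",  # assumed name; use the index shown in your Indexes list
    credential=AzureKeyCredential("<query-api-key>"),
)

# An empty search ("*") returns documents in arbitrary order, each with a uniform score.
results = client.search(search_text="*", include_total_count=True, top=3)
print("Total documents:", results.get_count())
for doc in results:
    print(doc)
```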
In this quickstart, you used **Search explorer** to query an index using the RES
+ Results are returned as verbose JSON documents so that you can view document construction and content, in entirety. The **$select** parameter in a query expression can limit which fields are returned.
-+ Search results are composed of all fields marked as **Retrievable** in the index. To view field attributes in the portal, click *realestate-us-sample* in the **Indexes** list on the search overview page, and then open the **Fields** tab.
++ Search results are composed of all fields marked as **Retrievable** in the index. To view field attributes in the portal, select *realestate-us-sample* in the **Indexes** list on the search overview page, and then open the **Fields** tab. + Keyword searches, similar to what you might enter in a commercial web browser, are useful for testing an end-user experience. For example, assuming the built-in real estate sample index, you could enter "Seattle apartments lake washington", and then you can use Ctrl-F to find terms within the search results.
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
-To learn more about query structures and syntax, use Postman or an equivalent tool to create query expressions that leverage more parts of the API. The [Search REST API](/rest/api/searchservice/search-documents) is especially helpful for learning and exploration.
+To learn more about query structures and syntax, use Postman or an equivalent tool to create query expressions that use more parts of the API. The [Search REST API](/rest/api/searchservice/search-documents) is especially helpful for learning and exploration.
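If you'd rather see the raw request that a tool like Postman sends, here's a hedged sketch of the same call using Python's `requests`; the service name, query key, index name, field names, and `api-version` are assumptions you'd replace with your own values.

```python
import requests

# Assumed placeholders and an assumed GA api-version; adjust to match your service.
url = "https://<your-search-service>.search.windows.net/indexes/realestate-us-sample/docs/search"
headers = {"api-key": "<query-api-key>", "Content-Type": "application/json"}
params = {"api-version": "2020-06-30"}

# The body mirrors what you'd type into Search explorer or Postman.
body = {
    "search": "seattle apartments lake washington",
    "select": "listingId, description",  # assumed field names from the sample index
    "count": True,
    "top": 10,
}

response = requests.post(url, params=params, headers=headers, json=body)
response.raise_for_status()
print(response.json()["@odata.count"])
```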
> [!div class="nextstepaction"] > [Create a basic query in Postman](search-get-started-rest.md)
search Search Features List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-features-list.md
Previously updated : 10/06/2021 Last updated : 10/06/2022 # Features of Azure Cognitive Search
The following table summarizes features by category. For more information about
| Category&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Features | |-|-|
-| Data sources | Search indexes can accept text from any source, provided it is submitted as a JSON document. <br/><br/> [**Indexers**](search-indexer-overview.md) are a feature that automates data import from supported data sources to extract searchable content in primary data stores. Indexers handle JSON serialization for you. You can connect to a [variety of data sources](search-data-sources-gallery.md), including [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), [Azure Cosmos DB](search-howto-index-cosmosdb.md), or [Azure Blob storage](search-howto-indexing-azure-blob-storage.md). |
+| Data sources | Search indexes can accept text from any source, provided it's submitted as a JSON document. <br/><br/> [**Indexers**](search-indexer-overview.md) are a feature that automates data import from supported data sources to extract searchable content in primary data stores. Indexers handle JSON serialization for you and most support some form of change and deletion detection. You can connect to a [variety of data sources](search-data-sources-gallery.md), including [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), [Azure Cosmos DB](search-howto-index-cosmosdb.md), or [Azure Blob storage](search-howto-indexing-azure-blob-storage.md). |
| Hierarchical and nested data structures | [**Complex types**](search-howto-complex-data-types.md) and collections allow you to model virtually any type of JSON structure within a search index. One-to-many and many-to-many cardinality can be expressed natively through collections, complex types, and collections of complex types.| | Linguistic analysis | Analyzers are components used for text processing during indexing and search operations. By default, you can use the general-purpose Standard Lucene analyzer, or override the default with a language analyzer, a custom analyzer that you configure, or another predefined analyzer that produces tokens in the format you require. <br/><br/>[**Language analyzers**](index-add-language-analyzers.md) from Lucene or Microsoft are used to intelligently handle language-specific linguistics including verb tenses, gender, irregular plural nouns (for example, 'mouse' vs. 'mice'), word de-compounding, word-breaking (for languages with no spaces), and more. <br/><br/>[**Custom lexical analyzers**](index-add-custom-analyzers.md) are used for complex query forms such as phonetic matching and regular expressions.<br/><br/> |
The following table summarizes features by category. For more information about
| Category&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Features | |-|-|
-|AI processing during indexing | [**AI enrichment**](cognitive-search-concept-intro.md) refers to embedded image and natural language processing in an indexer pipeline that extracts text and information from content that cannot otherwise be indexed for full text search. AI processing is achieved by adding and combining skills in a skillset, which is then attached to an indexer. AI can be either [built-in skills](cognitive-search-predefined-skills.md) from Microsoft, such as text translation or Optical Character Recognition (OCR), or [custom skills](cognitive-search-create-custom-skill-example.md) that you provide. |
+|AI processing during indexing | [**AI enrichment**](cognitive-search-concept-intro.md) refers to embedded image and natural language processing in an indexer pipeline that extracts text and information from content that can't otherwise be indexed for full text search. AI processing is achieved by adding and combining skills in a skillset, which is then attached to an indexer. AI can be either [built-in skills](cognitive-search-predefined-skills.md) from Microsoft, such as text translation or Optical Character Recognition (OCR), or [custom skills](cognitive-search-create-custom-skill-example.md) that you provide. |
| Storing enriched content for analysis and consumption in non-search scenarios | [**Knowledge store**](knowledge-store-concept-intro.md) is persistent storage of enriched content, intended for non-search scenarios like knowledge mining and data science processing. A knowledge store is defined in a skillset, but created in Azure Storage as objects or tabular rowsets.| | Cached enrichments | [**Incremental enrichment (preview)**](cognitive-search-incremental-indexing-conceptual.md) refers to cached enrichments that can be reused during skillset execution. Caching is particularly valuable in skillsets that include OCR and image analysis, which are expensive to process. |
The following table summarizes features by category. For more information about
| Category&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Features | |-|-|
-|Free-form text search | [**Full-text search**](search-lucene-query-architecture.md) is a primary use case for most search-based apps. Queries can be formulated using a supported syntax. <br/><br/>[**Simple query syntax**](query-simple-syntax.md) provides logical operators, phrase search operators, suffix operators, precedence operators.<br/><br/>[**Full Lucene query syntax**](query-lucene-syntax.md) includes all operations in simple syntax, with extensions for fuzzy search, proximity search, term boosting, and regular expressions.|
-| Relevance | [**Simple scoring**](index-add-scoring-profiles.md) is a key benefit of Azure Cognitive Search. Scoring profiles are used to model relevance as a function of values in the documents themselves. For example, you might want newer products or discounted products to appear higher in the search results. You can also build scoring profiles using tags for personalized scoring based on customer search preferences you've tracked and stored separately. |
+|Free-form text search | [**Full-text search**](search-lucene-query-architecture.md) is a primary use case for most search-based apps. Queries can be formulated using a supported syntax. <br/><br/>[**Simple query syntax**](query-simple-syntax.md) provides logical operators, phrase search operators, suffix operators, precedence operators. <br/><br/>[**Full Lucene query syntax**](query-lucene-syntax.md) includes all operations in simple syntax, with extensions for fuzzy search, proximity search, term boosting, and regular expressions.|
+| Relevance | [**Simple scoring**](index-add-scoring-profiles.md) is a key benefit of Azure Cognitive Search. Scoring profiles are used to model relevance as a function of values in the documents themselves. For example, you might want newer products or discounted products to appear higher in the search results. You can also build scoring profiles using tags for personalized scoring based on customer search preferences you've tracked and stored separately. <br/><br/>[**Semantic search (preview)**](semantic-search-overview.md) is a premium feature that reranks results based on semantic relevance to the query. Depending on your content and scenario, it can significantly improve search relevance with minimal configuration or effort. |
| Geospatial search | [**Geospatial functions**](search-query-odata-geo-spatial-functions.md) filter over and match on geographic coordinates. You can [match on distance](search-query-simple-examples.md#example-6-geospatial-search) or by inclusion in a polygon shape. | | Filters and facets | [**Faceted navigation**](search-faceted-navigation.md) is enabled through a single query parameter. Azure Cognitive Search returns a faceted navigation structure you can use as the code behind a categories list, for self-directed filtering (for example, to filter catalog items by price-range or brand). <br/><br/> [**Filters**](query-odata-filter-orderby-syntax.md) can be used to incorporate faceted navigation into your application's UI, enhance query formulation, and filter based on user- or developer-specified criteria. Create filters using the OData syntax. | | User experience | [**Autocomplete**](search-add-autocomplete-suggestions.md) can be enabled for type-ahead queries in a search bar. <br/><br/>[**Search suggestions**](/rest/api/searchservice/suggesters) also works off of partial text inputs in a search bar, but the results are actual documents in your index rather than query terms. <br/><br/>[**Synonyms**](search-synonyms.md) associates equivalent terms that implicitly expand the scope of a query, without the user having to provide the alternate terms. <br/><br/>[**Hit highlighting**](/rest/api/searchservice/Search-Documents) applies text formatting to a matching keyword in search results. You can choose which fields return highlighted snippets.<br/><br/>[**Sorting**](/rest/api/searchservice/Search-Documents) is offered for multiple fields via the index schema and then toggled at query-time with a single search parameter.<br/><br/> [**Paging**](search-pagination-page-layout.md) and throttling your search results is straightforward with the finely tuned control that Azure Cognitive Search offers over your search results. <br/><br/>|
The following table summarizes features by category. For more information about
|-|-| | Data encryption | [**Microsoft-managed encryption-at-rest**](search-security-overview.md#encryption) is built into the internal storage layer and is irrevocable. <br/><br/>[**Customer-managed encryption keys**](search-security-manage-encryption-keys.md) that you create and manage in Azure Key Vault can be used for supplemental encryption of indexes and synonym maps. For services created after August 1 2020, CMK encryption extends to data on temporary disks, for full double encryption of indexed content.| | Endpoint protection | [**IP rules for inbound firewall support**](service-configure-firewall.md) allows you to set up IP ranges over which the search service will accept requests.<br/><br/>[**Create a private endpoint**](service-create-private-endpoint.md) using Azure Private Link to force all requests through a virtual network. |
+| Azure role-based access control | [**RBAC for data plane (preview)**](search-security-rbac.md) refers to the assignment of roles to users and groups in Azure Active Directory to control access to search content and operations. |
| Outbound security (indexers) | [**Data access through private endpoints**](search-indexer-howto-access-private.md) allows an indexer to connect to Azure resources that are protected through Azure Private Link.<br/><br/>[**Data access using a trusted identity**](search-howto-managed-identities-data-sources.md) means that connection strings to external data sources can omit user names and passwords. When an indexer connects to the data source, the resource allows the connection if the search service was previously registered as a trusted service. | ## Portal features | Category&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Features | |-|-|
-| Tools for prototyping and inspection | [**Add index**](search-what-is-an-index.md) is an index designer in the portal that you can use to create a basic schema consisting of attributed fields and a few other settings. After saving the index, you can populate it using an SDK or the REST API to provide the data. <br/><br/>[**Import data wizard**](search-import-data-portal.md) creates indexes, indexers, skillsets, and data source definitions. If your data exists in Azure, this wizard can save you significant time and effort, especially for proof-of-concept investigation and exploration. <br/><br/>[**Search explorer**](search-explorer.md) is used to test queries and refine scoring profiles.<br/><br/>[**Create demo app**](search-create-app-portal.md) is used to generate an HTML page that can be used to test the search experience. |
-| Monitoring and diagnostics | [**Enable monitoring features**](monitor-azure-cognitive-search.md) to go beyond the metrics-at-a-glance that are always visible in the portal. Metrics on queries per second, latency, and throttling are captured and reported in portal pages with no additional configuration required.|
+| Tools for prototyping and inspection | [**Add index**](search-what-is-an-index.md) is an index designer in the portal that you can use to create a basic schema consisting of attributed fields and a few other settings. After saving the index, you can populate it using an SDK or the REST API to provide the data. <br/><br/>[**Import data wizard**](search-import-data-portal.md) creates indexes, indexers, skillsets, and data source definitions. If your data exists in Azure, this wizard can save you significant time and effort, especially for proof-of-concept investigation and exploration. <br/><br/>[**Search explorer**](search-explorer.md) is used to test queries and refine scoring profiles.<br/><br/>[**Create demo app**](search-create-app-portal.md) is used to generate an HTML page that can be used to test the search experience. <br/><br/>[**Debug Sessions**](cognitive-search-debug-session.md) is a visual editor that lets you debug a skillset interactively. It shows you dependencies, output, and transformations. |
+| Monitoring and diagnostics | [**Enable monitoring features**](monitor-azure-cognitive-search.md) to go beyond the metrics-at-a-glance that are always visible in the portal. Metrics on queries per second, latency, and throttling are captured and reported in portal pages with no extra configuration required.|
## Programmability
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Using alerts and the logging infrastructure in Azure, you can pick up on query v
Azure Cognitive Search participates in regular audits, and has been certified against many global, regional, and industry-specific standards for both the public cloud and Azure Government. For the complete list, download the [**Microsoft Azure Compliance Offerings** whitepaper](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/) from the official Audit reports page.
-For compliance, you can use [Azure Policy](../governance/policy/overview.md) to implement the high-security best practices of [Azure Security Benchmark](../security/benchmarks/introduction.md). Azure Security Benchmark is a collection of security recommendations, codified into security controls that map to key actions you should take to mitigate threats to services and data. There are currently 11 security controls, including [Network Security](../security/benchmarks/security-control-network-security.md), [Logging and Monitoring](../security/benchmarks/security-control-logging-monitoring.md), and [Data Protection](../security/benchmarks/security-control-data-protection.md) to name a few.
+For compliance, you can use [Azure Policy](../governance/policy/overview.md) to implement the high-security best practices of the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). The Microsoft cloud security benchmark is a collection of security recommendations, codified into security controls that map to key actions you should take to mitigate threats to services and data. There are currently 12 security controls, including [Network Security](/security/benchmark/azure/mcsb-network-security), [Logging and Monitoring](/security/benchmark/azure/mcsb-logging-monitoring), and [Data Protection](/security/benchmark/azure/mcsb-data-protection).
-Azure Policy is a capability built into Azure that helps you manage compliance for multiple standards, including those of Azure Security Benchmark. For well-known benchmarks, Azure Policy provides built-in definitions that provide both criteria and an actionable response that addresses non-compliance.
+Azure Policy is a capability built into Azure that helps you manage compliance for multiple standards, including those of the Microsoft cloud security benchmark. For well-known benchmarks, Azure Policy provides built-in definitions that supply both criteria and an actionable response that addresses non-compliance.
For Azure Cognitive Search, there's currently one built-in definition. It's for resource logging. With this built-in, you can assign a policy that identifies any search service that is missing resource logging, and then turns it on. For more information, see [Azure Policy Regulatory Compliance controls for Azure Cognitive Search](security-controls-policy.md).
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Last updated 07/22/2022
-# What is Azure Cognitive Search?
+# What's Azure Cognitive Search?
Azure Cognitive Search ([formerly known as "Azure Search"](whats-new.md#new-service-name)) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md
# Azure Policy Regulatory Compliance controls for Azure Cognitive Search If you are using [Azure Policy](../governance/policy/overview.md) to enforce the recommendations in
-[Azure Security Benchmark](/azure/security/benchmarks/introduction), then you probably already know
+[Microsoft cloud security benchmark](/security/benchmark/azure/introduction), then you probably already know
that you can create policies for identifying and fixing non-compliant services. These policies might be custom, or they might be based on built-in definitions that provide compliance criteria and appropriate solutions for well-understood best practices.
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 08/03/2022 Last updated : 10/12/2022 # What's new in Azure Cognitive Search
-Learn about the latest updates to Azure Cognitive Search. The following links supplement this article:
+Learn about the latest updates to Azure Cognitive Search functionality.
-* [**Previous versions**](/previous-versions/azure/search/) is an archive of feature announcements from 2019 and 2020.
-* [**Preview features**](search-api-preview.md) are announced here in "What's New", mixed in with announcements about general availability or feature retirement. If you need to quickly determine the status of a particular feature, visit the preview features page to see if it's listed.
+> [!NOTE]
+> Looking for preview feature status? Preview features are announced in this what's new article, but we also maintain a [preview features list](search-api-preview.md) so that you can find them all in one place.
## June 2022
Learn about the latest updates to Azure Cognitive Search. The following links su
||--|| | [Index aliases](search-how-to-alias.md) | An index alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. You can create an alias that maps to a search index and substitute the alias name in places where you would otherwise reference an index name. This gives you added flexibility if you ever need to change which index your application is pointing to. Instead of updating the references to the index name in your application, you can just update the mapping for your alias. | Public preview REST APIs (no portal support at this time).|
-## 2021 Archive
+## 2021 announcements
| Month | Feature | Description | |-||-|
Learn about the latest updates to Azure Cognitive Search. The following links su
| February | [Azure CLI](/cli/azure/search) </br>[Azure PowerShell](/powershell/module/az.search/) | New revisions now provide the full range of operations in the Management REST API 2020-08-01, including support for IP firewall rules and private endpoint. Generally available. | | January | [Solution accelerator for Azure Cognitive Search and QnA Maker](https://github.com/Azure-Samples/search-qna-maker-accelerator) | Pulls questions and answers out of the document and suggest the most relevant answers. A live demo app can be found at [https://aka.ms/qnaWithAzureSearchDemo](https://aka.ms/qnaWithAzureSearchDemo). This feature is an open-source project (no SLA). |
+## 2019 and 2020 announcements
+
+For feature announcements from 2019 and 2020, see the content archive, [**Previous versions**](/previous-versions/azure/search/) on the Microsoft Learn website.
+ <a name="new-service-name"></a>
-## Service re-brand
+## Service re-brand announcement
Azure Search was renamed to **Azure Cognitive Search** in October 2019 to reflect the expanded (yet optional) use of cognitive skills and AI processing in service operations. API versions, NuGet packages, namespaces, and endpoints are unchanged. New and existing search solutions are unaffected by the service name change.
sentinel Normalization Common Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-common-fields.md
The currently supported list of vendors and products used in the [EventVendor](#
| Vendor | Products | | | -- | | AWS | - CloudTrail<br> - VPC |
-| Cisco | - ASA<br> - Umbrella |
+| Cisco | - ASA<br> - Umbrella<br> - IOS |
| Corelight | Zeek | | GCP | Cloud DNS | | Infoblox | NIOS |
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
The KQL operators that perform parsing are listed below, ordered by their perfor
||| |[split](/azure/data-explorer/kusto/query/splitfunction) | Parse a string of delimited values. | |[parse_csv](/azure/data-explorer/kusto/query/parsecsvfunction) | Parse a string of values formatted as a CSV (comma-separated values) line. |
+|[parse-kv](/azure/data-explorer/kusto/query/parse-kv-operator) | Extracts structured information from a string expression and represents the information in a key/value form. |
|[parse](/azure/data-explorer/kusto/query/parseoperator) | Parse multiple values from an arbitrary string using a pattern, which can be a simplified pattern with better performance, or a regular expression. | |[extract_all](/azure/data-explorer/kusto/query/extractallfunction) | Parse single values from an arbitrary string using a regular expression. `extract_all` has a similar performance to `parse` if the latter uses a regular expression. | |[extract](/azure/data-explorer/kusto/query/extractfunction) | Extract a single value from an arbitrary string using a regular expression. <br><br>Using `extract` provides better performance than `parse` or `extract_all` if a single value is needed. However, using multiple activations of `extract` over the same source string is less efficient than a single `parse` or `extract_all` and should be avoided. |
service-bus-messaging Service Bus Messages Payloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messages-payloads.md
While this hidden serialization magic is convenient, applications should take ex
The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization control.
+If the payload of a message can't be deserialized, it's recommended that you [dead-letter the message](/azure/service-bus-messaging/service-bus-dead-letter-queues?source=recommendations#application-level-dead-lettering).
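As one possible pattern, sketched here in Python with the `azure-servicebus` package and placeholder connection string and queue name, a receiver can try to deserialize each message and dead-letter the ones that fail:

```python
import json
from azure.servicebus import ServiceBusClient

# Assumed placeholders; replace with your own connection string and queue name.
client = ServiceBusClient.from_connection_string("<connection-string>")

with client, client.get_queue_receiver(queue_name="<queue-name>") as receiver:
    for message in receiver:
        try:
            payload = json.loads(str(message))  # assumes the body is expected to be JSON
            # ... process the payload ...
            receiver.complete_message(message)
        except ValueError:
            # The body couldn't be deserialized; move it to the dead-letter subqueue.
            receiver.dead_letter_message(
                message,
                reason="DeserializationFailure",
                error_description="Message body could not be parsed as JSON.",
            )
```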
+ ## Next steps To learn more about Service Bus messaging, see the following topics: * [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md) * [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
-* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
+* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
spring-apps How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-application-insights.md
When the **Application Insights** feature is enabled, you can:
* In the left navigation pane, select **Application Insights** to view the **Overview** page of Application Insights. The **Overview** page will show you an overview of all running applications. * Select **Application Map** to see the status of calls between applications.
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-map.png" alt-text="Screenshot of Azure portal Application Insights with Application map page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-map.png":::
+ :::image type="content" source="media/how-to-application-insights/insights-process-agent-map.png" alt-text="Screenshot of Azure portal Application Insights with Application map page showing." lightbox="media/how-to-application-insights/insights-process-agent-map.png":::
* Select the link between customers-service and `petclinic` to see more details such as a query from SQL. * Select an endpoint to see all the applications making requests to the endpoint. * In the left navigation pane, select **Performance** to see the performance data of all applications' operations, dependencies, and roles.
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-performance.png" alt-text="Screenshot of Azure portal Application Insights with Performance page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-performance.png":::
+ :::image type="content" source="media/how-to-application-insights/insights-process-agent-performance.png" alt-text="Screenshot of Azure portal Application Insights with Performance page showing." lightbox="media/how-to-application-insights/insights-process-agent-performance.png":::
* In the left navigation pane, select **Failures** to see any unexpected failures or exceptions from your applications.
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-failures.png" alt-text="Screenshot of Azure portal Application Insights with Failures page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-failures.png":::
+ :::image type="content" source="media/how-to-application-insights/insights-process-agent-failures.png" alt-text="Screenshot of Azure portal Application Insights with Failures page showing." lightbox="media/how-to-application-insights/insights-process-agent-failures.png":::
* In the left navigation pane, select **Metrics** and select the namespace, you'll see both Spring Boot metrics and custom metrics, if any.
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-metrics.png" alt-text="Screenshot of Azure portal Application Insights with Metrics page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-metrics.png":::
+ :::image type="content" source="media/how-to-application-insights/insights-process-agent-metrics.png" alt-text="Screenshot of Azure portal Application Insights with Metrics page showing." lightbox="media/how-to-application-insights/insights-process-agent-metrics.png":::
* In the left navigation pane, select **Live Metrics** to see the real-time metrics for different dimensions.
- :::image type="content" source="media/spring-cloud-application-insights/petclinic-microservices-live-metrics.png" alt-text="Screenshot of Azure portal Application Insights with Live Metrics page showing." lightbox="media/spring-cloud-application-insights/petclinic-microservices-live-metrics.png":::
+ :::image type="content" source="media/how-to-application-insights/petclinic-microservices-live-metrics.png" alt-text="Screenshot of Azure portal Application Insights with Live Metrics page showing." lightbox="media/how-to-application-insights/petclinic-microservices-live-metrics.png":::
* In the left navigation pane, select **Availability** to monitor the availability and responsiveness of Web apps by creating [Availability tests in Application Insights](../azure-monitor/app/monitor-web-app-availability.md).
- :::image type="content" source="media/spring-cloud-application-insights/petclinic-microservices-availability.png" alt-text="Screenshot of Azure portal Application Insights with Availability page showing." lightbox="media/spring-cloud-application-insights/petclinic-microservices-availability.png":::
+ :::image type="content" source="media/how-to-application-insights/petclinic-microservices-availability.png" alt-text="Screenshot of Azure portal Application Insights with Availability page showing." lightbox="media/how-to-application-insights/petclinic-microservices-availability.png":::
* In the left navigation pane, select **Logs** to view all applications' logs, or one application's logs when filtering by `cloud_RoleName`.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-application-logs.png" alt-text="Screenshot of Azure portal Application Insights with Logs page showing." lightbox="media/enterprise/how-to-application-insights/application-insights-application-logs.png":::
+ :::image type="content" source="media/how-to-application-insights/application-insights-application-logs.png" alt-text="Screenshot of Azure portal Application Insights with Logs page showing." lightbox="media/how-to-application-insights/application-insights-application-logs.png":::
## Manage Application Insights using the Azure portal
Enable the Java In-Process Agent by using the following procedure.
1. Select an existing instance of Application Insights or create a new one. 1. When **Application Insights** is enabled, you can configure one optional sampling rate (default 10.0%).
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and 'Enable Application Insights' checkbox highlighted." lightbox="media/spring-cloud-application-insights/insights-process-agent.png":::
+ :::image type="content" source="media/how-to-application-insights/insights-process-agent.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and 'Enable Application Insights' checkbox highlighted." lightbox="media/how-to-application-insights/insights-process-agent.png":::
1. Select **Save** to save the change.
You can use the Portal to check or update the current settings in Application In
1. Select **Application Insights**. 1. Enable Application Insights by selecting **Edit binding**, or the **Unbound** hyperlink.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-binding-enable.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
+ :::image type="content" source="media/how-to-application-insights/application-insights-binding-enable.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
1. Edit **Application Insights** or **Sampling rate**, then select **Save**.
You can use the Portal to check or update the current settings in Application In
1. Select **Application Insights**. 1. Select **Unbind binding** to disable Application Insights.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-unbind-binding.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Unbind binding' option.":::
+ :::image type="content" source="media/how-to-application-insights/application-insights-unbind-binding.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Unbind binding' option.":::
### Change Application Insights Settings Select the name under the *Application Insights* column to open the Application Insights section. ### Edit Application Insights buildpack bindings in Build Service
Application Insights settings are found in the *ApplicationInsights* item listed
1. Select the **Bound** hyperlink, or select **Edit Binding** under the ellipse, to open and edit the Application Insights buildpack bindings.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-builder-settings.png" alt-text="Screenshot of Azure portal 'Edit bindings for default builder' pane.":::
+ :::image type="content" source="media/how-to-application-insights/application-insights-builder-settings.png" alt-text="Screenshot of Azure portal 'Edit bindings for default builder' pane.":::
1. Edit the binding settings, then select **Save**.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-edit-binding.png" alt-text="Screenshot of Azure portal 'Edit binding' pane.":::
+ :::image type="content" source="media/how-to-application-insights/application-insights-edit-binding.png" alt-text="Screenshot of Azure portal 'Edit binding' pane.":::
::: zone-end
spring-apps How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-system-assigned-managed-identity.md
To set up a managed identity in the portal, first create an app, and then enable
3. Select **Identity**. 4. Within the **System assigned** tab, switch **Status** to *On*. Select **Save**. ### [Azure CLI](#tab/azure-cli)
To remove system-assigned managed identity from an app that no longer needs it:
1. Navigate to the desired application and select **Identity**. 1. Under **System assigned**/**Status**, select **Off** and then select **Save**: ### [Azure CLI](#tab/azure-cli)
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
In Azure Spring Apps, the existing Standard tier already supports compiling user
Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool when you create a new service instance of Azure Spring Apps using the **VMware Tanzu settings**. The following Build Agent Pool scale set sizes are available:
The following Build Agent Pool scale set sizes are available:
The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size on the **Build Service** page after you've created the service instance. ## Default Builder and Tanzu Buildpacks
Besides the `default` builder, you can also create custom builders with the prov
All the builders configured in a Spring Cloud Service instance are listed in the **Build Service** section under **VMware Tanzu components**. Select **Add** to create a new builder. The image below shows the resources you should use to create the custom builder. You can also edit a custom builder when the builder isn't used in a deployment. You can update the buildpacks or the [OS Stack](https://docs.pivotal.io/tanzu-buildpacks/stacks.html), but the builder name is read only. You can delete any custom builder when the builder isn't used in a deployment, but the `default` builder is read only.
Not all Tanzu Buildpacks support all service binding types. The following table
To edit service bindings for the builder, select **Edit**. After a builder is bound to the service bindings, the service bindings are enabled for an app deployed with the builder. > [!NOTE] > When configuring environment variables for APM bindings, use key names without a prefix. For example, do not use a `DT_` prefix for a Dynatrace binding. Tanzu APM buildpacks will transform the key name to the original environment variable name with a prefix.
Follow these steps to view the current buildpack bindings:
1. Select **Build Service**. 1. Select **Edit** under the **Bindings** column to view the bindings configured under a builder. ### Create a buildpack binding
You can unbind a buildpack binding by using the **Unbind binding** command, or b
To use the **Unbind binding** command, select the **Bound** hyperlink, and then select **Unbind binding**. To unbind a buildpack binding by editing binding properties, select **Edit Binding**, and then select **Unbind**. When you unbind a binding, the bind status changes from **Bound** to **Unbound**.
spring-apps How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-marketplace-offer.md
To see the offering and read a detailed description, see [Azure Spring Apps Ente
To see the supported plans in your market, select **Plans + Pricing**. > [!NOTE] > If you see "No plans are available for market '\<Location>'", that means none of your Azure subscriptions can purchase the SaaS offering. For more information, see [No plans are available for market '\<Location>'](./troubleshoot.md#no-plans-are-available-for-market-location) in [Troubleshooting](./troubleshoot.md). To see the Enterprise Tier creation page, select **Subscribe** ## Next steps
spring-apps How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-migrate-standard-tier-to-enterprise-tier.md
Use the following steps to provision an Azure Spring Apps service instance:
1. Select **Change** next to the **Pricing** option, then select **Enterprise**.
- :::image type="content" source="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with Basics section and 'Choose your pricing tier' pane showing." lightbox="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png":::
+ :::image type="content" source="media/how-to-migrate-standard-tier-to-enterprise-tier/choose-enterprise-tier.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with Basics section and 'Choose your pricing tier' pane showing." lightbox="media/how-to-migrate-standard-tier-to-enterprise-tier/choose-enterprise-tier.png":::
Select the **Terms** checkbox to agree to the legal terms and privacy statements of the Enterprise tier offering in the Azure Marketplace.
Use the following steps to provision an Azure Spring Apps service instance:
> [!NOTE] > All Tanzu components are enabled by default. Carefully consider which Tanzu components you want to use or enable during the provisioning phase. After provisioning the Azure Spring Apps instance, you can't enable or disable Tanzu components.
- :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with V M ware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
+ :::image type="content" source="media/how-to-migrate-standard-tier-to-enterprise-tier/create-instance-tanzu-settings-public-preview.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with V M ware Tanzu Settings section showing." lightbox="media/how-to-migrate-standard-tier-to-enterprise-tier/create-instance-tanzu-settings-public-preview.png":::
1. Select the **Application Insights** section, then select **Enable Application Insights**. You can also enable Application Insights after you provision the Azure Spring Apps instance.
Follow these steps to use Application Configuration Service for Tanzu as a centr
1. Select **Application Configuration Service**. 1. Select **Overview** to view the running state and resources allocated to Application Configuration Service for Tanzu.
- :::image type="content" source="./media/enterprise/getting-started-enterprise/config-service-overview.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Overview section showing." lightbox="./media/enterprise/getting-started-enterprise/config-service-overview.png":::
+ :::image type="content" source="./media/how-to-migrate-standard-tier-to-enterprise-tier/config-service-overview.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Overview section showing." lightbox="./media/how-to-migrate-standard-tier-to-enterprise-tier/config-service-overview.png":::
1. Select **Settings**, then add a new entry in the **Repositories** section with the following information:
Follow these steps to use Application Configuration Service for Tanzu as a centr
1. After validation completes successfully, select **Apply** to update the configuration settings.
- :::image type="content" source="./media/enterprise/getting-started-enterprise/config-service-settings.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Settings section showing." lightbox="./media/enterprise/getting-started-enterprise/config-service-settings.png":::
+ :::image type="content" source="./media/how-to-migrate-standard-tier-to-enterprise-tier/config-service-settings.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Settings section showing." lightbox="./media/how-to-migrate-standard-tier-to-enterprise-tier/config-service-settings.png":::
### [Azure CLI](#tab/azure-cli)
To bind apps to Application Configuration Service for VMware Tanzu®, follow the
1. Choose one app in the dropdown, and then select **Apply** to bind the application to Tanzu Service Registry.
- :::image type="content" source="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Service Registry page and 'Bind app' dialog showing." lightbox="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png":::
+ :::image type="content" source="media/how-to-migrate-standard-tier-to-enterprise-tier/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Service Registry page and 'Bind app' dialog showing." lightbox="media/how-to-migrate-standard-tier-to-enterprise-tier/service-reg-app-bind-dropdown.png":::
The list under **App name** shows the apps bound with Tanzu Service Registry.
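Binding an app to Tanzu Service Registry can also be done from the command line. This is a sketch under the same assumptions as above (the `spring` CLI extension, plus placeholder app, service instance, and resource group names):

```azurecli
# Bind an existing app to Tanzu Service Registry
az spring service-registry bind \
    --app <app-name> \
    --service <service-instance-name> \
    --resource-group <resource-group-name>
```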
To check or update the current settings in Application Insights, use the followi
1. Select **Application Insights**. 1. Enable Application Insights by selecting **Edit binding**, or the **Unbound** hyperlink.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-binding-enable.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
+ :::image type="content" source="media/how-to-migrate-standard-tier-to-enterprise-tier/application-insights-binding-enable.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
1. Edit the binding settings, then select **Save**.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-edit-binding.png" alt-text="Screenshot of Azure portal 'Edit binding' pane." lightbox="media/enterprise/how-to-application-insights/application-insights-edit-binding.png":::
+ :::image type="content" source="media/how-to-migrate-standard-tier-to-enterprise-tier/application-insights-edit-binding.png" alt-text="Screenshot of Azure portal 'Edit binding' pane." lightbox="media/how-to-migrate-standard-tier-to-enterprise-tier/application-insights-edit-binding.png":::
### [Azure CLI](#tab/azure-cli)
spring-apps How To Set Up Sso With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-set-up-sso-with-azure-ad.md
Register your application to establish a trust relationship between your app and
1. In *Redirect URI (optional)* select **Web**, then enter the URL from the above section in the text box. The redirect URI is the location where Azure AD redirects your client and sends security tokens after authentication. 1. Select **Register** to finish registering the application. When registration finishes, you'll see the *Application (client) ID* on the **Overview** screen of the **App registrations** page.
You can also add redirect URIs after app registration by following these steps:
1. Select **Web**, then select **Add URI** under *Redirect URIs*. 1. Add a new redirect URI, then select **Save**. For more information on Application Registration, see [Quickstart: Register an app in the Microsoft identity platform ](../active-directory/develop/quickstart-register-app.md#quickstart-register-an-application-with-the-microsoft-identity-platform).
spring-apps How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-api-portal.md
This section describes how to view and try out APIs with schema definitions in A
Select the `endpoint URL` to go to API portal. You'll see all the routes configured in Spring Cloud Gateway for Tanzu. ## Try out APIs in API portal
Use the following steps to try out APIs:
1. Select the API you would like to try. 1. Select **EXECUTE**, and the response will be shown.
- :::image type="content" source="media/enterprise/how-to-use-enterprise-api-portal/api-portal-tryout.png" alt-text="Screenshot of A P I portal.":::
+ :::image type="content" source="media/how-to-use-enterprise-api-portal/api-portal-tryout.png" alt-text="Screenshot of API portal.":::
## Next steps
spring-apps How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-spring-cloud-gateway.md
Use the following steps to create an example application using Spring Cloud Gate
Select **Yes** next to *Assign endpoint* to assign a public endpoint. You'll get a URL in a few minutes. Save the URL to use later.
- :::image type="content" source="media/enterprise/getting-started-enterprise/gateway-overview.png" alt-text="Screenshot of Azure portal Azure Spring Apps overview page with 'Assign endpoint' highlighted.":::
+ :::image type="content" source="media/how-to-use-enterprise-spring-cloud-gateway/gateway-overview.png" alt-text="Screenshot of Azure portal Azure Spring Apps overview page with 'Assign endpoint' highlighted.":::
You can also use CLI to do it, as shown in the following command:
Use the following steps to create an example application using Spring Cloud Gate
You can also view those properties in the portal.
- :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Spring Cloud Gateway page with Configuration pane showing.":::
+ :::image type="content" source="media/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Spring Cloud Gateway page with Configuration pane showing.":::
1. Configure routing rules to apps.
Use the following steps to create an example application using Spring Cloud Gate
You can also view the routes in the portal.
- :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png" alt-text="Screenshot of Azure portal Azure Spring Apps Spring Cloud Gateway page showing 'Routing rules' pane.":::
+ :::image type="content" source="media/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png" alt-text="Screenshot of Azure portal Azure Spring Apps Spring Cloud Gateway page showing 'Routing rules' pane.":::
1. Use the following command to access the `customers service` and `owners` APIs through the gateway endpoint:
spring-apps Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot.md
Creating an Azure Spring Apps Enterprise tier instance fails with error code "11
When you visit the SaaS offer [Azure Spring Apps Enterprise Tier](https://aka.ms/ascmpoffer) in the Azure Marketplace, it may say "No plans are available for market '\<Location>'" as in the following image.
-![No plans available error image](./media/enterprise/how-to-enterprise-marketplace-offer/no-enterprise-plans-available.png)
+![No plans available error image](./media/troubleshoot/no-enterprise-plans-available.png)
Azure Spring Apps Enterprise tier needs customers to pay for a license to Tanzu components through an Azure Marketplace offer. To purchase in the Azure Marketplace, the billing account's country or region for your Azure subscription should be in the SaaS offer's supported geographic locations.
static-web-apps Add Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/add-mongoose.md
Title: "Tutorial: Access data in Azure Cosmos DB using Mongoose with Azure Static Web Apps"
+ Title: "Access data in Azure Cosmos DB using Mongoose with Azure Static Web Apps"
description: Learn to access data in Azure Cosmos DB using Mongoose from an Azure Static Web Apps API function. Previously updated : 01/25/2021 Last updated : 10/10/2022
-# Tutorial: Access data in Azure Cosmos DB using Mongoose with Azure Static Web Apps
-
-[Mongoose](https://mongoosejs.com/) is the most popular ODM (Object Document Mapping) client for Node.js. Allowing you to design a data structure and enforce validation, Mongoose provides all the tooling necessary to interact with databases that support the MongoDB API. [Azure Cosmos DB for MongoDB](../cosmos-db/mongodb-introduction.md) supports the necessary MongoDB APIs and is available as a back-end server option on Azure.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> - Create an Azure Cosmos DB serverless account
-> - Create Azure Static Web Apps
-> - Update application settings to store the connection string
-
-If you don't have an Azure subscription, create a [free trial account](https://azure.microsoft.com/free/).
+# Access data in Azure Cosmos DB using Mongoose with Azure Static Web Apps
+[Mongoose](https://mongoosejs.com/) is the most popular ODM (Object Document Mapping) client for Node.js. Mongoose allows you to design a data structure and enforce validation, and provides all the tooling necessary to interact with databases that support the MongoDB API. [Cosmos DB](../cosmos-db/mongodb-introduction.md) supports the necessary MongoDB APIs and is available as a back-end server option on Azure.
## Prerequisites -- An [Azure account](https://azure.microsoft.com/free/)-- A [GitHub account](https://github.com/join)-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
+- An [Azure account](https://azure.microsoft.com/free/). If you don't have an Azure subscription, create a [free trial account](https://azure.microsoft.com/free/).
+- A [GitHub account](https://github.com/join).
+- A [Cosmos DB serverless](../cosmos-db/serverless.md) account. With a serverless account, you only pay for the resources as they're used and avoid needing to create a full infrastructure.
+## 1. Create a Cosmos DB serverless database
-## Create an Azure Cosmos DB serverless database
+Complete the following steps to create a Cosmos DB serverless account.
-Begin by creating an [Azure Cosmos DB serverless](../cosmos-db/serverless.md) account. By using a serverless account, you only pay for the resources as they are used and avoid needing to create a full infrastructure.
-
-1. Navigate to the [Azure portal](https://portal.azure.com)
-2. Select **Create a resource**
-3. Enter **Azure Cosmos DB** in the search box
-4. Select **Azure Cosmos DB**
-5. Select **Create**
-6. If prompted, under **Azure Cosmos DB for MongoDB** select **Create**
-7. Configure your Azure Cosmos DB Account with the following information
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select **Create a resource**.
+3. Enter *Azure Cosmos DB* in the search box.
+4. Select **Azure Cosmos DB**.
+5. Select **Create**.
+6. If prompted, under **Azure Cosmos DB API for MongoDB** select **Create**.
+7. Configure your Azure Cosmos DB Account with the following information:
- Subscription: Choose the subscription you wish to use - Resource: Select **Create new**, and set the name to **aswa-mongoose** - Account name: A unique value is required - Location: **West US 2** - Capacity mode: **Serverless (preview)** - Version: **4.0**
-8. Select **Review + create**
-9. Select **Create**
+8. Select **Review + create**.
+9. Select **Create**.
-The creation process will take a few minutes. Later steps will return to the database to gather the connection string.
+The creation process takes a few minutes. We'll come back to the database to gather the connection string after we [create a static web app](#2-create-a-static-web-app).
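If you'd rather create the account from the command line, a sketch like the following should be equivalent to the portal steps above. The account name is a placeholder and must be globally unique; `--capabilities EnableServerless` is what selects the serverless capacity mode.

```azurecli
# Create the resource group used throughout this tutorial
az group create --name aswa-mongoose --location westus2

# Create a serverless Azure Cosmos DB account for MongoDB (version 4.0)
az cosmosdb create \
    --name <unique-account-name> \
    --resource-group aswa-mongoose \
    --kind MongoDB \
    --server-version 4.0 \
    --capabilities EnableServerless \
    --locations regionName=westus2
```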
-## Create a static web app
+## 2. Create a static web app
This tutorial uses a GitHub template repository to help you create your application.
-1. Navigate to the [starter template](https://github.com/login?return_to=/staticwebdev/mongoose-starter/generate)
-2. Choose the **owner** (if using an organization other than your main account)
-3. Name your repository **aswa-mongoose-tutorial**
-4. Select **Create repository from template**
-5. Return to the [Azure portal](https://portal.azure.com)
-6. Select **Create a resource**
-7. Type **static web app** in the search box
-8. Select **Static Web App**
-9. Select **Create**
-10. Configure your Azure Static Web App with the following information
+1. Go to the [starter template](https://github.com/login?return_to=/staticwebdev/mongoose-starter/generate).
+2. Choose the **owner** (if using an organization other than your main account).
+3. Name your repository **aswa-mongoose-tutorial**.
+4. Select **Create repository from template**.
+5. Return to the [Azure portal](https://portal.azure.com).
+6. Select **Create a resource**.
+7. Enter **static web app** in the search box.
+8. Select **Static Web App**.
+9. Select **Create**.
+10. Configure your Azure Static Web App with the following information:
- Subscription: Choose the same subscription as before - Resource group: Select **aswa-mongoose** - Name: **aswa-mongoose-tutorial**
This tutorial uses a GitHub template repository to help you create your applicat
- Api location: **api** - Output location: **build** :::image type="content" source="media/add-mongoose/azure-static-web-apps.png" alt-text="Completed Azure Static Web Apps form":::
-11. Select **Review and create**
-12. Select **Create**
-13. The creation process takes a few moments; select **Go to resource** once the static web app is provisioned
-
-## Configure database connection string
-
-In order to allow the web app to communicate with the database, the database connection string is stored as an [Application Setting](application-settings.md). Setting values are accessible in Node.js using the `process.env` object.
-
-1. Select **Home** in the upper left corner of the Azure portal (or navigate back to [https://portal.azure.com](https://portal.azure.com))
-2. Select **Resource groups**
-3. Select **aswa-mongoose**
-4. Select the name of your database account - it will have a type of **Azure Cosmos DB for Mongo DB**
-5. Under **Settings** select **Connection String**
-6. Copy the connection string listed under **PRIMARY CONNECTION STRING**
-7. In the breadcrumbs, select **aswa-mongoose**
-8. Select **aswa-mongoose-tutorial** to return to the website instance
-9. Under **Settings** select **Configuration**
-10. Select **Add** and create a new Application Setting with the following values
+
+11. Select **Review and create**.
+12. Select **Create**.
+13. The creation process takes a few moments; select **Go to resource** once the static web app is provisioned.
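The portal flow above can also be approximated with the Azure CLI. This is a sketch, not the article's own commands: your GitHub account name is a placeholder, the app location of `/` is an assumption (adjust it to your repository layout), and `--login-with-github` triggers an interactive GitHub sign-in.

```azurecli
# Create the static web app and connect it to the GitHub repository
az staticwebapp create \
    --name aswa-mongoose-tutorial \
    --resource-group aswa-mongoose \
    --location westus2 \
    --source https://github.com/<your-github-account>/aswa-mongoose-tutorial \
    --branch main \
    --app-location "/" \
    --api-location "api" \
    --output-location "build" \
    --login-with-github
```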
+
+## 3. Configure database connection string
+
+To allow the web app to communicate with the database, the database connection string is stored as an [application setting](application-settings.md). Setting values are accessible in Node.js using the `process.env` object.
+
+1. Select **Home** in the upper left corner of the Azure portal (or go back to [https://portal.azure.com](https://portal.azure.com)).
+2. Select **Resource groups**.
+3. Select **aswa-mongoose**.
+4. Select the name of your database account - it has a type of **Azure Cosmos DB API for Mongo DB**.
+5. Under **Settings** select **Connection String**.
+6. Copy the connection string listed under **PRIMARY CONNECTION STRING**.
+7. In the breadcrumbs, select **aswa-mongoose**.
+8. Select **aswa-mongoose-tutorial** to return to the website instance.
+9. Under **Settings** select **Configuration**.
+10. Select **Add** and create a new Application Setting with the following values:
- Name: **AZURE_COSMOS_CONNECTION_STRING** - Value: \<Paste the connection string you copied earlier\>
-11. Select **OK**
-12. Select **Add** and create a new Application Setting with the following values for name of the database
+11. Select **OK**.
+12. Select **Add** and create a new Application Setting with the following values for name of the database:
- Name: **AZURE_COSMOS_DATABASE_NAME** - Value: **todo**
-13. Select **Save**
+13. Select **OK**.
+14. Select **Save**.
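The same two application settings can be added from the command line. This is a sketch; paste the connection string you copied earlier in place of the placeholder.

```azurecli
# Store the Cosmos DB connection string and database name as application settings
az staticwebapp appsettings set \
    --name aswa-mongoose-tutorial \
    --setting-names "AZURE_COSMOS_CONNECTION_STRING=<paste-connection-string>" "AZURE_COSMOS_DATABASE_NAME=todo"
```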
-## Navigate to your site
+## 4. Go to your site
You can now explore the static web app.
-1. Select **Overview**
-1. Select the URL displayed in the upper right
- 1. It will look similar to `https://calm-pond-05fcdb.azurestaticapps.net`
-1. Select **Please login to see your list of tasks**
-1. Select **Grant consent** to access the application
-1. Create a new lists by typing a name into the textbox labeled **create new list** and selecting **Save**
-1. Create a new task by typing in a title in the textbox labeled **create new item** and selecting **Save**
-1. Confirm the task is displayed (it may take a moment)
-1. Mark the task as complete by **selecting the check**; the task will be moved to the **Done items** section of the page
-1. **Refresh the page** to confirm a database is being used
+1. In the Azure portal, select **Overview**.
+2. Select the URL displayed in the upper right.
+ 1. It looks similar to `https://calm-pond-05fcdb.azurestaticapps.net`.
+3. Select **Please login to see your list of tasks**.
+4. Select **Grant consent** to access the application.
+5. Create a new list by entering a name into the textbox labeled **create new list** and selecting **Save**.
+6. Create a new task by typing in a title in the textbox labeled **create new item** and selecting **Save**.
+7. Confirm the task is displayed (it may take a moment).
+8. Mark the task as complete by **selecting the check**; the task moves to the **Done items** section of the page.
+9. **Refresh the page** to confirm a database is being used.
## Clean up resources If you're not going to continue to use this application, delete the resource group with the following steps:
-1. Return to the [Azure portal](https://portal.azure.com)
-2. Select **Resource groups**
-3. Select **aswa-mongoose**
-4. Select **Delete resource group**
-5. Type **aswa-mongoose** into the textbox
-6. Select **Delete**
+1. Return to the [Azure portal](https://portal.azure.com).
+2. Select **Resource groups**.
+3. Select **aswa-mongoose**.
+4. Select **Delete resource group**.
+5. Enter **aswa-mongoose** into the textbox.
+6. Select **Delete**.
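Alternatively, the whole resource group can be removed with a single Azure CLI command:

```azurecli
# Delete the resource group and everything in it (runs in the background)
az group delete --name aswa-mongoose --yes --no-wait
```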
## Next steps Advance to the next article to learn how to configure local development... > [!div class="nextstepaction"]
-> [Setup local development](./local-development.md)
+> [Set up local development](./local-development.md)
static-web-apps Apex Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apex-domain-azure-dns.md
The following procedure requires you to copy settings from an Azure DNS zone you
1. Open the [Azure portal](https://portal.azure.com).
-1. Navigate to your static web app.
+1. Go to your static web app.
1. Under *Settings*, select **Custom domains**.
-1. Select the **+ Add** button, and select **Custom Domain on Azure DNS** from the drop down.
+1. Select **+ Add**, and then select **Custom Domain on Azure DNS** from the drop down menu.
1. In the *Enter domain* tab, enter your apex domain name. For instance, if your domain name is `example.com`, enter `example.com` into this box (without any subdomains).
-1. Select the **Next** button.
+1. Select **Next**.
1. In the *Validate + Configure* tab, enter the following values.
The following procedure requires you to copy settings from an Azure DNS zone you
| Domain name | This value should match the domain name you entered in the previous step. | | Hostname record type | Select **TXT**. |
-1. Select the **Generate code** button.
+1. Select **Generate code**.
Wait as the code is generated. It may take a minute or so to complete.
-1. Once the `TXT` record value is generated, select the **copy button** (next to the generated value) to copy the code to the clipboard.
+1. Once the `TXT` record value is generated, select the **copy** button (next to the generated value) to copy the code to the clipboard.
-1. Select the **Close** button.
+1. Select **Close**.
-1. Navigate to your Azure DNS zone instance.
+1. Go to your Azure DNS zone instance.
-1. Select the **+ Record set** button.
+1. Select **+ Record set**.
1. Enter the following values in the *Add record set* window.
The following procedure requires you to copy settings from an Azure DNS zone you
| TTL unit | Keep default value. | | Value | Paste in the `TXT` record value in your clipboard from your static web app. |
-1. Select the **OK** button.
+1. Select **OK**.
1. Return to your static web app in the Azure portal. 1. Under *Settings*, select **Custom domains**.
-Observe the *Status* for the row of your apex domain. Once the validation is complete, then your apex domain will be publicly available.
+Observe the *Status* for the row of your apex domain. Once the validation is complete, your apex domain is publicly available.
While this validation is running, create an ALIAS record to finalize the configuration.
While this validation is running, create an ALIAS record to finalize the configu
1. Return to the Azure DNS zone in the Azure portal.
-1. Select the **+ Record set button** button.
+2. Select **+ Record set**.
-1. Enter the following values in the *Add record set* window.
+3. Enter the following values in the *Add record set* window.
| Setting | Property | |||
While this validation is running, create an ALIAS record to finalize the configu
| TTL | Keep default value. | | TTL unit | Keep default value. |
-1. Select the **OK** button.
+4. Select **OK**.
-1. Open a new browser tab and navigate to your apex domain.
+5. Open a new browser tab and go to your apex domain.
After the DNS records are updated, you should see your static web app in the browser. Also, inspect the location to verify that your site is served securely using `https`.
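Because the zone lives in Azure DNS, the two records described above can also be created from the command line. The following is a rough sketch rather than the article's own commands; the resource group, zone name, validation code, and static web app resource ID are placeholders.

```azurecli
# TXT record Static Web Apps uses to validate ownership of the apex domain
az network dns record-set txt add-record \
    --resource-group <dns-resource-group> \
    --zone-name example.com \
    --record-set-name "@" \
    --value "<validation-code-from-the-portal>"

# Alias A record that points the apex domain at the static web app resource
az network dns record-set a create \
    --resource-group <dns-resource-group> \
    --zone-name example.com \
    --name "@" \
    --target-resource <static-web-app-resource-id>
```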
static-web-apps Apex Domain External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apex-domain-external.md
Before you create the `ALIAS` record, you first need to validate that you own th
1. Open the [Azure portal](https://portal.azure.com).
-1. Navigate to your static web app.
+1. Go to your static web app.
1. From the *Overview* window, copy the generated **URL** of your site and set it aside in a text editor for future use. 1. Under *Settings*, select **Custom domains**.
-1. Select the **+ Add** button.
+2. Select **+ Add**.
-1. In the *Enter domain* tab, enter your apex domain name.
+3. In the *Enter domain* tab, enter your apex domain name.
For instance, if your domain name is `example.com`, enter `example.com` into this box (without any subdomains).
-1. Select the **Next** button.
+4. Select **Next**.
-1. In the *Validate + Configure* tab, enter the following values.
+5. In the *Validate + Configure* tab, enter the following values.
| Setting | Value | ||| | Domain name | This value should match the domain name you entered in the previous step. | | Hostname record type | Select **TXT**. |
-1. Select the **Generate code** button.
+6. Select **Generate code**.
Wait as the code is generated. It may take a minute or so to complete.
-1. Once the `TXT` record value is generated, select the **copy button** (next to the generated value) to copy the code to the clipboard.
+7. Once the `TXT` record value is generated, select the **copy** button (next to the generated value) to copy the code to the clipboard.
-1. Select the **Close** button.
+8. Select **Close**.
-1. Open a new browser tab and sign in to your domain registrar account.
+9. Open a new browser tab and sign in to your domain registrar account.
-1. Navigate to your domain name's DNS configuration settings.
+10. Go to your domain name's DNS configuration settings.
-1. Add a new `TXT` record with the following values.
+11. Add a new `TXT` record with the following values.
| Setting | Value | |--|--|
Before you create the `ALIAS` record, you first need to validate that you own th
| Value | Paste the generated code value you copied from the Azure portal. | | TTL (if applicable) | Leave as default value. |
-1. Save changes to your DNS record.
+12. Save changes to your DNS record.
### Set up an ALIAS record
Before you create the `ALIAS` record, you first need to validate that you own th
Since DNS settings need to propagate, this process can take some time to complete.
-1. Open a new browser tab and navigate to your apex domain.
+1. Open a new browser tab and go to your apex domain.
After the DNS records are updated, you should see your static web app in the browser. Also, inspect the location to verify that your site is served securely using `https`.
static-web-apps Apis Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-api-management.md
All Azure API Management pricing tiers are available for use with Azure Static W
To link an Azure API Management service as the API backend for a static web app, follow these steps:
-1. In the Azure portal, navigate to the static web app.
+1. In the Azure portal, go to the static web app.
1. Select **APIs** from the navigation menu.
The linking process also automatically applies the following configuration to yo
* The linked static web app is configured to include the subscription's primary key and a valid access token when proxying requests to the API Management service. > [!IMPORTANT]
-> Changing the *validate-jwt* policy or regenerating the subscription's primary key will prevent your static web app from proxying requests to the API Management service. Do not modify or delete the subscription or product associated with your static web app while they are linked.
+> Changing the *validate-jwt* policy or regenerating the subscription's primary key prevents your static web app from proxying requests to the API Management service. Do not modify or delete the subscription or product associated with your static web app while they are linked.
## Unlink an Azure API Management service To unlink an Azure API Management service from a static web app, follow these steps:
-1. In the Azure portal, navigate to the static web app.
+1. In the Azure portal, go to the static web app.
1. Locate the environment that you want to unlink and select the API Management service name.
static-web-apps Apis App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-app-service.md
All Azure App Service hosting plans are available for use with Azure Static Web
To link a web app as the API backend for a static web app, follow these steps:
-1. In the Azure portal, navigate to the static web app.
+1. In the Azure portal, go to the static web app.
1. Select **APIs** from the navigation menu.
Your App Service app is configured with an identity provider named `Azure Static
To unlink a web app from a static web app, follow these steps:
-1. In the Azure portal, navigate to the static web app.
+1. In the Azure portal, go to the static web app.
1. Select **APIs** from the navigation menu.
static-web-apps Apis Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-container-apps.md
By default, when a container app is linked to a static web app, the container ap
To link a container app as the API backend for a static web app, follow these steps:
-1. In the Azure portal, navigate to the static web app.
+1. In the Azure portal, go to the static web app.
1. Select **APIs** from the navigation menu.
Your container app is configured with an identity provider named `Azure Static W
To unlink a container app from a static web app, follow these steps:
-1. In the Azure portal, navigate to the static web app.
+1. In the Azure portal, go to the static web app.
1. Select **APIs** from the navigation menu.
static-web-apps Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/application-settings.md
You can configure application settings via the Azure portal or with the Azure CL
The Azure portal provides an interface for creating, updating and deleting application settings.
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com).
1. Open your static web app.
static-web-apps Assign Roles Microsoft Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/assign-roles-microsoft-graph.md
In this tutorial, you learn to:
## Create a GitHub repository
-1. Navigate to the following location to create a new repository:
+1. Go to the following location to create a new repository:
- [https://github.com/staticwebdev/roles-function/generate](https://github.com/login?return_to=/staticwebdev/roles-function/generate) 1. Name your repository **my-custom-roles-app**.
In this tutorial, you learn to:
## Deploy the static web app to Azure
-1. In a new browser window, navigate to the [Azure portal](https://portal.azure.com) and sign in with your Azure account.
+1. In a new browser window, go to the [Azure portal](https://portal.azure.com) and sign in with your Azure account.
1. Select **Create a resource** in the top left corner.
In this tutorial, you learn to:
| _Region_ | Select a region closest to you | | | _Deployment details_ | Select **GitHub** as the source | |
-1. Select the **Sign-in with GitHub** button and authenticate with GitHub.
+1. Select **Sign-in with GitHub** and authenticate with GitHub.
1. Select the name of the _Organization_ where you created the repository.
In this tutorial, you learn to:
## Create an Azure Active Directory application
-1. In the Azure portal, search for and navigate to *Azure Active Directory*.
+1. In the Azure portal, search for and go to *Azure Active Directory*.
1. In the menu bar, select **App registrations**.
In this tutorial, you learn to:
## Configure Active Directory authentication
-1. In a browser, open the GitHub repository containing the static web app you deployed. Navigate to the app's configuration file at *frontend/staticwebapp.config.json*. It contains the following section:
+1. In a browser, open the GitHub repository containing the static web app you deployed. Go to the app's configuration file at *frontend/staticwebapp.config.json*. It contains the following section:
```json "auth": {
In this tutorial, you learn to:
> [!NOTE] > To obtain an access token for Microsoft Graph, the `loginParameters` field must be configured with `resource=https://graph.microsoft.com`.
-1. Select the **Edit** button to update the file.
+2. Select **Edit** to update the file.
-1. Update the *openIdIssuer* value of `https://login.microsoftonline.com/<YOUR_AAD_TENANT_ID>` by replacing `<YOUR_AAD_TENANT_ID>` with the directory (tenant) ID of your Azure Active Directory.
+3. Update the *openIdIssuer* value of `https://login.microsoftonline.com/<YOUR_AAD_TENANT_ID>` by replacing `<YOUR_AAD_TENANT_ID>` with the directory (tenant) ID of your Azure Active Directory.
-1. Select **Commit directly to the main branch** and select **Commit changes**.
+4. Select **Commit directly to the main branch** and select **Commit changes**.
-1. A GitHub Actions run triggers to update the static web app.
+5. A GitHub Actions run triggers to update the static web app.
-1. Navigate to your static web app resource in the Azure portal.
+6. Go to your static web app resource in the Azure portal.
-1. Select **Configuration** in the menu bar.
+7. Select **Configuration** in the menu bar.
-1. In the *Application settings* section, add the following settings:
+8. In the *Application settings* section, add the following settings:
| Name | Value | ||-| | `AAD_CLIENT_ID` | *Your Active Directory application (client) ID* | | `AAD_CLIENT_SECRET` | *Your Active Directory application client secret value* |
-1. Select **Save**.
+9. Select **Save**.
## Verify custom roles The sample application contains a serverless function (*api/GetRoles/index.js*) that queries Microsoft Graph to determine if a user is in a pre-defined group. Based on the user's group memberships, the function assigns custom roles to the user. The application is configured to restrict certain routes based on these custom roles.
-1. In your GitHub repository, navigate to the *GetRoles* function located at *api/GetRoles/index.js*. Near the top, there is a `roleGroupMappings` object that maps custom user roles to Azure Active Directory groups.
+1. In your GitHub repository, go to the *GetRoles* function located at *api/GetRoles/index.js*. Near the top, there is a `roleGroupMappings` object that maps custom user roles to Azure Active Directory groups.
-1. Click the **Edit** button.
+2. Select **Edit**.
-1. Update the object with group IDs from your Azure Active Directory tenant.
+3. Update the object with group IDs from your Azure Active Directory tenant. For one way to look up group IDs with the Azure CLI, see the example at the end of this section.
For instance, if you have groups with IDs `6b0b2fff-53e9-4cff-914f-dd97a13bfbd6` and `b6059db5-9cef-4b27-9434-bb793aa31805`, you would update the object to:
The sample application contains a serverless function (*api/GetRoles/index.js*)
In the above example, if a user is a member of the Active Directory group with ID `b6059db5-9cef-4b27-9434-bb793aa31805`, they are granted the `reader` role.
-1. Select **Commit directly to the main branch** and select **Commit changes**.
+4. Select **Commit directly to the main branch** and select **Commit changes**.
-1. A GitHub Actions run triggers to update the static web app.
+5. A GitHub Actions run triggers to update the static web app.
-1. When the deployment is complete, you can verify your changes by navigating to the app's URL.
+6. When the deployment is complete, you can verify your changes by navigating to the app's URL.
-1. Log in to your static web app using Azure Active Directory.
+7. Log in to your static web app using Azure Active Directory.
-1. When you are logged in, the sample app displays the list of roles that you are assigned based on your identity's Active Directory group membership. Depending on these roles, you are permitted or prohibited to access some of the routes in the app.
+8. When you are logged in, the sample app displays the list of roles that you are assigned based on your identity's Active Directory group membership. Depending on these roles, you are permitted or prohibited to access some of the routes in the app.
> [!NOTE] > Some queries against Microsoft Graph return multiple pages of data. When more than one query request is required, Microsoft Graph returns an `@odata.nextLink` property in the response, which contains a URL to the next page of results. For more details, see [Paging Microsoft Graph data in your app](/graph/paging).
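The group IDs used in the `roleGroupMappings` object come from your Azure Active Directory tenant. As a convenience, a query along these lines can list them with the Azure CLI; recent CLI versions expose the identifier as `id`, while older versions use `objectId` instead.

```azurecli
# List Azure AD groups with their object IDs (add --display-name <name> to filter)
az ad group list \
    --query "[].{name:displayName, id:id}" \
    --output table
```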
static-web-apps Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/authentication-authorization.md
Invitations are specific to individual authorization-providers, so consider the
| GitHub | username | | Twitter | username |
-1. Navigate to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
-1. Under _Settings_, click on **Role Management**.
-1. Click on the **Invite** button.
-1. Select an _Authorization provider_ from the list of options.
-1. Add either the username or email address of the recipient in the _Invitee details_ box.
+1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
+1. Under _Settings_, select **Role Management**.
+2. Select **Invite**.
+3. Select an _Authorization provider_ from the list of options.
+4. Add either the username or email address of the recipient in the _Invitee details_ box.
- For GitHub and Twitter, you enter the username. For all others, enter the recipient's email address.
-1. Select the domain of your static site from the _Domain_ drop-down.
+5. Select the domain of your static site from the _Domain_ drop-down.
- The domain you select is the domain that appears in the invitation. If you have a custom domain associated with your site, you probably want to choose the custom domain.
-1. Add a comma-separated list of role names in the _Role_ box.
-1. Enter the maximum number of hours you want the invitation to remain valid.
+6. Add a comma-separated list of role names in the _Role_ box.
+7. Enter the maximum number of hours you want the invitation to remain valid.
- The maximum possible limit is 168 hours, which is 7 days.
-1. Click the **Generate** button.
-1. Copy the link from the _Invite link_ box.
-1. Email the invitation link to the person you're granting access to your app.
+8. Select **Generate**.
+9. Copy the link from the _Invite link_ box.
+10. Email the invitation link to the person you're granting access to your app.
-When the user clicks the link in the invitation, they're prompted to log in with their corresponding account. Once successfully logged-in, the user is associated with the selected roles.
+When the user selects the link in the invitation, they're prompted to log in with their corresponding account. Once successfully logged-in, the user is associated with the selected roles.
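Invitations can also be generated with the Azure CLI instead of the portal. This is a sketch with placeholder values; the provider, user details, domain, roles, and expiration map to the same fields described in the steps above.

```azurecli
# Generate a role invitation link for a GitHub user
az staticwebapp users invite \
    --name <static-web-app-name> \
    --authentication-provider GitHub \
    --user-details <github-username> \
    --domain <your-site-domain> \
    --roles "contributor" \
    --invitation-expiration-in-hours 24
```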
> [!CAUTION] > Make sure your route rules don't conflict with your selected authentication providers. Blocking a provider with a route rule would prevent users from accepting invitations. ### Update role assignments
-1. Navigate to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
-1. Under _Settings_, click on **Role Management**.
-1. Click on the user in the list.
-1. Edit the list of roles in the _Role_ box.
-1. Click the **Update** button.
+1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
+1. Under _Settings_, select **Role Management**.
+2. Select the user in the list.
+3. Edit the list of roles in the _Role_ box.
+4. Select **Update**.
### Remove user
-1. Navigate to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
-1. Under _Settings_, click on **Role Management**.
+1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
+1. Under _Settings_, select **Role Management**.
1. Locate the user in the list. 1. Check the checkbox on the user's row.
-1. Click the **Delete** button.
+2. Select **Delete**.
As you remove a user, keep in mind the following items:
static-web-apps Azure Dns Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/azure-dns-zone.md
The following procedure requires you to copy settings from an Azure DNS zone you
1. Select **DNS zones**.
-1. Select the **Create** button.
+2. Select **Create**.
-1. In the *Basics* tab, enter the following values.
+3. In the *Basics* tab, enter the following values.
| Property | Value | |||
The following procedure requires you to copy settings from an Azure DNS zone you
| Resource group | Select to create a resource group. | | Name | Enter the domain name for this zone. |
-1. Select **Review + Create**.
+4. Select **Review + Create**.
-1. Select **Create** and wait for the zone to provision.
+5. Select **Create** and wait for the zone to provision.
-1. Select **Go to resource**.
+6. Select **Go to resource**.
With the DNS zone created, you now have access to Azure's DNS name servers for your application.
-1. From the *Overview* window, copy the values for all four name servers listed as **Name server 1** to **Name server 4** and set them aside in a text editor for later use.
+7. From the *Overview* window, copy the values for all four name servers listed as **Name server 1** to **Name server 4** and set them aside in a text editor for later use.
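The zone creation and name server lookup can also be scripted with the Azure CLI; the resource group and domain below are placeholders.

```azurecli
# Create the DNS zone for your domain
az network dns zone create \
    --resource-group <resource-group-name> \
    --name example.com

# List the four Azure name servers to copy to your registrar
az network dns zone show \
    --resource-group <resource-group-name> \
    --name example.com \
    --query nameServers
```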
## Update name server addresses
static-web-apps Bitbucket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/bitbucket.md
In this tutorial, you learn to:
This article uses a GitHub repository as the source to import code into a Bitbucket repository. 1. Sign in to [Bitbucket](https://bitbucket.org).
-1. Navigate to [https://bitbucket.org/repo/import](https://bitbucket.org/repo/import) to begin the import process.
+1. Go to [https://bitbucket.org/repo/import](https://bitbucket.org/repo/import) to begin the import process.
1. Under the *Old repository* label, in the *URL* box, enter the repository URL for your choice of framework. # [No Framework](#tab/vanilla-javascript)
This article uses a GitHub repository as the source to import code into a Bitbuc
1. Next to the *Project* label, select **Create new project**. 1. Enter **MyStaticWebApp**.
-1. Select the **Import repository** button and wait a moment while the website creates your repository.
+2. Select **Import repository** and wait a moment while the website creates your repository.
### Set main branch
From time to time the template repository have more than one branch. Use the fol
1. Expand the **Advanced** section. 1. Under the *Main branch* label, ensure **main** is selected in the drop down. 1. If you made a change, select **Save changes**.
-1. Select the **Back** button on the left.
+2. Select **Back**.
## Create a static web app Now that the repository is created, you can create a static web app from the Azure portal.
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com).
1. Select **Create a Resource**. 1. Search for **Static Web Apps**. 1. Select **Static Web Apps**.
Now that the repository is created, you can create a static web app from the Azu
1. Select **Review + create**. 1. Select **Create**.
-1. Select the **Go to resource** button.
-1. Select the **Manage deployment token** button.
-1. Copy the deployment token value and set it aside in an editor for later use.
-1. Select the **Close** button on the *Manage deployment token* window.
+2. Select **Go to resource**.
+3. Select **Manage deployment token**.
+4. Copy the deployment token value and set it aside in an editor for later use.
+5. Select **Close** on the *Manage deployment token* window.
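If you prefer not to copy the deployment token from the portal, a command along these lines retrieves it with the Azure CLI (the app name is a placeholder):

```azurecli
# Print the deployment token for the static web app
az staticwebapp secrets list \
    --name <static-web-app-name> \
    --query "properties.apiKey" \
    --output tsv
```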
## Create the pipeline task in Bitbucket
-1. Navigate to the repository in Bitbucket.
+1. Go to the repository in Bitbucket.
1. Select the **Source** menu item. 1. Ensure the **main** branch is selected in the branch drop down. 1. Select **Pipelines**. 1. Select text link **Create your first pipeline**.
-1. On the *Starter pipeline* card, select the **Select** button.
-1. Enter the following YAML into the configuration file.
+2. On the *Starter pipeline* card, select **Select**.
+3. Enter the following YAML into the configuration file.
# [No Framework](#tab/vanilla-javascript)
Next, define value for the `API_TOKEN` variable.
1. In the *Name* box, enter **deployment_token**, which matches the name in the workflow. 1. In the *Value* box, paste in the deployment token value you set aside in a previous step. 1. Check the **Secured** checkbox.
-1. Select the **Add** button.
-1. Select **Commit file** and return to your pipelines tab.
+2. Select **Add**.
+3. Select **Commit file** and return to your pipelines tab.
Wait a moment on the *Pipelines* window and you'll see your deployment status appear. Once the deployment is finished running, you can view the website in your browser.
Wait a moment on the *Pipelines* window and you'll see your deployment status ap
There are two aspects to deploying a static app. The first step creates the underlying Azure resources that make up your app. The second is a Bitbucket workflow that builds and publishes your application.
-Before you can navigate to your new static site, the deployment build must first finish running.
+Before you can go to your new static site, the deployment build must first finish running.
The Static Web Apps overview window displays a series of links that help you interact with your web app. 1. Return to your static web app in the Azure portal.
-1. Navigate to the **Overview** window.
-1. Select the link under the *URL* label. Your website will load in a new tab.
+1. Go to the **Overview** window.
+2. Select the link under the *URL* label. Your website loads in a new tab.
## Clean up resources If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance and all the associated services by removing the resource group. 1. Select the **static-web-apps-bitbucket** resource group from the *Overview* section.
-1. Select the **Delete resource group** button at the top of the resource group *Overview*.
-1. Enter the resource group name **static-web-apps-bitbucket** in the *Are you sure you want to delete "static-web-apps-bitbucket"?* confirmation dialog.
-1. Select **Delete**.
+2. Select **Delete resource group** at the top of the resource group *Overview*.
+3. Enter the resource group name **static-web-apps-bitbucket** in the *Are you sure you want to delete "static-web-apps-bitbucket"?* confirmation dialog.
+4. Select **Delete**.
The process to delete the resource group may take a few minutes to complete.
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
For instance, to implement routes for a calendar application, you can rewrite al
The _calendar.html_ file can then use client-side routing to serve a different view for URL variations like `/calendar/january/1`, `/calendar/2020`, and `/calendar/overview`. > [!NOTE]
-> A route pattern of `/calendar/*` matches all requests under the _/calendar/_ path. However, it will not match requests for the paths _/calendar_ or _/calendar.html_. Use `/calendar*` to match all requests that begin with _/calendar_.
+> A route pattern of `/calendar/*` matches all requests under the _/calendar/_ path. However, it won't match requests for the paths _/calendar_ or _/calendar.html_. Use `/calendar*` to match all requests that begin with _/calendar_.
You can filter wildcard matches by file extension. For instance, if you wanted to add a rule that only matches HTML files in a given path you could create the following rule:
A trailing slash is the `/` at the end of a URL. Conventionally, trailing slash
Search engines treat the two URLs separately, regardless of whether it's a file or a directory. When the same content is rendered at both of these URLs, your website serves duplicate content which can negatively impact search engine optimization (SEO). When explicitly configured, Static Web Apps applies a set of URL normalization and redirect rules that help improve your website's performance and SEO.
-The following normalization and redirect rules will apply for each of the available configurations:
+The following normalization and redirect rules apply for each of the available configurations:
### Always
static-web-apps Custom Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/custom-domain-azure-dns.md
Now that your domain is configured for Azure to manage the DNS, you can now link
1. Open the [Azure portal](https://portal.azure.com).
-1. Navigate to your static web app.
+1. Go to your static web app.
1. From the *Overview* window, copy the generated **URL** of your site and set it aside in a text editor for future use.
Now that your domain is configured for Azure to manage the DNS, you can now link
1. Return to the DNS zone you created in the Azure portal.
-1. Select the **+ Record set button** button.
+2. Select **+ Record set**.
-1. Enter the following values in the *Add record set* window.
+3. Enter the following values in the *Add record set* window.
| Setting | Property | |||
Now that your domain is configured for Azure to manage the DNS, you can now link
| TTL unit | Keep default value. | | Alias | Paste in the Static Web Apps generated URL you set aside in a previous step. Make sure to remove the `https://` prefix from your URL. |
-1. Select the **OK** button.
+4. Select **OK**.
### Configure static web app custom domain
Now that your domain is configured for Azure to manage the DNS, you can now link
1. Under *Settings*, select **Custom domains**.
-1. Select the **+ Add** button.
+2. Select **+ Add**.
-1. In the *Enter domain* tab, enter your domain name prefixed with **www**.
+3. In the *Enter domain* tab, enter your domain name prefixed with **www**.
For instance, if your domain name is `example.com`, enter `www.example.com` into this box.
-1. Select the **Next** button.
+4. Select **Next**.
-1. In the *Validate + Configure* tab, enter the following values.
+5. In the *Validate + Configure* tab, enter the following values.
| Setting | Value | ||| | Domain name | This value should match the domain name you entered in the previous step (with the `www` subdomain). | | Hostname record type | Select **CNAME**. |
-1. Select the **Add** button.
+6. Select **Add**.
If you get an error saying that the action is invalid, wait 5 minutes and try again.
-1. Open a new browser tab and navigate to your domain with the `www` subdomain.
+7. Open a new browser tab and go to your domain with the `www` subdomain.
After the DNS records are updated, you should see your static web app in the browser. Also, inspect the location to verify that your site is served securely using `https`.
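For reference, the CNAME record and the custom domain registration can also be scripted. This sketch uses placeholder names; the CNAME target is the generated default hostname you copied earlier, without the `https://` prefix.

```azurecli
# Point the www subdomain at the static web app's generated hostname
az network dns record-set cname set-record \
    --resource-group <dns-resource-group> \
    --zone-name example.com \
    --record-set-name www \
    --cname <generated-hostname>.azurestaticapps.net

# Add the custom domain to the static web app
az staticwebapp hostname set \
    --name <static-web-app-name> \
    --hostname www.example.com
```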
static-web-apps Custom Domain External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/custom-domain-external.md
This guide demonstrates how to configure your domain name with the `www` subdoma
## Get static web app URL
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com).
-1. Navigate to your static web app.
+1. Go to your static web app.
1. From the *Overview* window, copy the generated **URL** of your site and set it aside in a text editor for future use.
Domain registrars are the services that allow you to purchase and manage domain
1. Open a new browser tab and sign in to your domain registrar account.
-1. Navigate to your domain name's DNS configuration settings.
+1. Go to your domain name's DNS configuration settings.
1. Add a new `CNAME` record with the following values.
Domain registrars are the services that allow you to purchase and manage domain
1. Under *Settings*, select **Custom domains**.
-1. Select the **+ Add** button.
+2. Select **+ Add**.
-1. In the *Enter domain* tab, enter your domain name prefixed with **www**.
+3. In the *Enter domain* tab, enter your domain name prefixed with **www**.
For instance, if your domain name is `example.com`, enter `www.example.com` into this box.
-1. Select the **Next** button.
+4. Select **Next**.
-1. In the *Validate + Configure* tab, enter the following values.
+5. In the *Validate + Configure* tab, enter the following values.
| Setting | Value | ||| | Domain name | This value should match the domain name you entered in the previous step (with the `www` subdomain). | | Hostname record type | Select **CNAME**. |
-1. Select the **Add** button.
+6. Select **Add**.
Your `CNAME` record is being created and the DNS settings are being updated. Since DNS settings need to propagate, this process can take up to an hour or longer to complete.
-1. Once the domain settings are in effect, open a new browser tab and navigate to your domain with the `www` subdomain.
+7. Once the domain settings are in effect, open a new browser tab and go to your domain with the `www` subdomain.
After the DNS records are updated, you should see your static web app in the browser. Also, inspect the location to verify that your site is served securely using `https`.
static-web-apps Deploy Blazor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-blazor.md
Title: 'Tutorial: Building a static web app with Blazor in Azure Static Web Apps'
+ Title: 'Build an Azure Static Web Apps website with Blazor'
description: Learn to build an Azure Static Web Apps website with Blazor. + Previously updated : 11/08/2021 Last updated : 10/11/2022
-# Tutorial: Building a static web app with Blazor in Azure Static Web Apps
-
-Azure Static Web Apps publishes a website to a production environment by building apps from a GitHub repository. In this tutorial, you deploy a web application to Azure Static Web apps using the Azure portal.
-
-If you don't have an Azure subscription, [create a free trial account](https://azure.microsoft.com/free).
+# Build an Azure Static Web Apps website with Blazor
+Azure Static Web Apps publishes a website to a production environment by building apps from a GitHub repository, supported by a serverless backend. The following tutorial shows how to deploy a C# Blazor WebAssembly app that displays weather data returned by a serverless API.
## Prerequisites - [GitHub](https://github.com) account-- [Azure](https://portal.azure.com) account-
-## Application overview
-
-Azure Static Web Apps allows you to create static web applications supported by a serverless backend. The following tutorial demonstrates how to deploy C# Blazor WebAssembly application that display weather data returned by a serverless API.
--
-The app featured in this tutorial is made up from three different Visual Studio projects:
--- **Api**: The C# Azure Functions application which implements the API endpoint that provides weather information to the Blazor WebAssembly app. The **WeatherForecastFunction** returns an array of `WeatherForecast` objects.--- **Client**: The front-end Blazor WebAssembly project. A [fallback route](#fallback-route) is implemented to ensure client-side routing is functional.--- **Shared**: Holds common classes referenced by both the Api and Client projects which allows data to flow from API endpoint to the front-end web app. The [`WeatherForecast`](https://github.com/staticwebdev/blazor-starter/blob/main/Shared/WeatherForecast.cs) class is shared among both apps.-
-Together, these projects make up the parts required create a Blazor WebAssembly application running in the browser supported by an Azure Functions API backend.
-
-## Fallback route
-
-The application exposes URLs like _/counter_ and _/fetchdata_ which map to specific routes of the application. Since this app is implemented as a single page application, each route is served the _index.html_ file. To ensure that requests for any path return _index.html_ a [fallback route](./configuration.md#fallback-routes) is implemented in the _staticwebapp.config.json_ file found in the Client project's root folder.
-
-```json
-{
- "navigationFallback": {
- "rewrite": "/https://docsupdatetracker.net/index.html"
- }
-}
-```
+- [Azure](https://portal.azure.com) account. If you don't have an Azure subscription, [create a free trial account](https://azure.microsoft.com/free).
-The above configuration ensures that requests to any route in the app returns the _index.html_ page.
+## 1. Create a repository
-## Create a repository
+This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app that you can deploy to Azure Static Web Apps.
-This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app that can be deployed to Azure Static Web Apps.
+1. Make sure you're signed in to GitHub and go to the following location to create a new repository:
+ [https://github.com/staticwebdev/blazor-starter/generate](https://github.com/login?return_to=/staticwebdev/blazor-starter/generate)
+2. Name your repository **my-first-static-blazor-app**.
-1. Make sure you're signed in to GitHub and navigate to the following location to create a new repository:
- - [https://github.com/staticwebdev/blazor-starter/generate](https://github.com/login?return_to=/staticwebdev/blazor-starter/generate)
-1. Name your repository **my-first-static-blazor-app**.
-
-## Create a static web app
+## 2. Create a static web app
Now that the repository is created, create a static web app from the Azure portal.
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com).
1. Select **Create a Resource**. 1. Search for **Static Web Apps**.
-1. Select **Static Web Apps**.
+1. Select **Static Web App**.
1. Select **Create**. 1. On the _Basics_ tab, enter the following values.
Now that the repository is created, create a static web app from the Azure porta
| _Region for Azure Functions API and staging environments_ | Select a region closest to you. | | _Source_ | **GitHub** |
-1. Select **Sign in with GitHub** and authenticate with GitHub.
-
-1. Enter the following GitHub values.
+5. Select **Sign in with GitHub** and authenticate with GitHub, if you're prompted.
+6. Enter the following GitHub values.
| Property | Value | | | |
Now that the repository is created, create a static web app from the Azure porta
| _Repository_ | Select **my-first-static-blazor-app**. | | _Branch_ | Select **main**. |
-1. In the _Build Details_ section, select **Blazor** from the _Build Presets_ drop-down and the following values are populated.
+7. In the _Build Details_ section, select **Blazor** from the _Build Presets_ drop-down and the following values are populated.
| Property | Value | Description | | | | | | App location | **Client** | Folder containing the Blazor WebAssembly app | | API location | **Api** | Folder containing the Azure Functions app | | Output location | **wwwroot** | Folder in the build output containing the published Blazor WebAssembly application |
-
-### Review and create
-1. Select the **Review + Create** button to verify the details are all correct.
+8. Select **Review + Create** to verify the details are all correct.
-1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions for deployment.
+9. Select **Create** to start the creation of the static web app and provision a GitHub Actions workflow for deployment.
-1. Once the deployment completes click, **Go to resource**.
+10. Once the deployment completes, select **Go to resource**.
-1. Select **Go to resource**.
+11. Select **Go to resource**.
:::image type="content" source="media/deploy-blazor/resource-button.png" alt-text="Go to resource button":::
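
If you prefer scripting this step instead of clicking through the portal, the Azure CLI can create a comparable resource. The following is only a rough sketch: the resource group, region, and GitHub URL are placeholders for your own values, and the flags assume a current `az staticwebapp` command group.

```bash
# Sketch: create the static web app from the CLI (values below are placeholders)
az staticwebapp create \
  --name my-first-static-blazor-app \
  --resource-group my-blazor-group \
  --location "eastus2" \
  --source https://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/my-first-static-blazor-app \
  --branch main \
  --app-location "Client" \
  --api-location "Api" \
  --output-location "wwwroot" \
  --login-with-github
```
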
-## View the website
+## 3. View the website
There are two aspects to deploying a static app. The first provisions the underlying Azure resources that make up your app. The second is a GitHub Actions workflow that builds and publishes your application.
-Before you can navigate to your new static web app, the deployment build must first finish running.
+Before you can go to your new static web app, the deployment build must first finish running.
The Static Web Apps overview window displays a series of links that help you interact with your web app.
+1. Select the banner that says, _Click here to check the status of your GitHub Actions runs_ to see the GitHub Actions running against your repository. Once you verify the deployment job is complete, then you can go to your website via the generated URL.
-1. Clicking on the banner that says, _Click here to check the status of your GitHub Actions runs_ takes you to the GitHub Actions running against your repository. Once you verify the deployment job is complete, then you can navigate to your website via the generated URL.
+ :::image type="content" source="./media/deploy-blazor/overview-window.png" alt-text="Screenshot showing overview window.":::
2. Once the GitHub Actions workflow is complete, you can select the _URL_ link to open the website in a new tab.
+ :::image type="content" source="media/deploy-blazor/my-first-static-blazor-app.png" alt-text="Screenshot of Static Web Apps Blazor webpage.":::
+## 4. Understand the application overview
+
+Together, the following projects make up the parts required to create a Blazor WebAssembly application running in the browser supported by an Azure Functions API backend.
+
+|Visual Studio project |Description |
+|||
+|API | The C# Azure Functions application implements the API endpoint that provides weather information to the Blazor WebAssembly app. The **WeatherForecastFunction** returns an array of `WeatherForecast` objects. |
+|Client |The front-end Blazor WebAssembly project. A [fallback route](#fallback-route) is implemented to ensure client-side routing is functional. |
+|Shared | Holds common classes referenced by both the Api and Client projects, which allow data to flow from API endpoint to the front-end web app. The [`WeatherForecast`](https://github.com/staticwebdev/blazor-starter/blob/main/Shared/WeatherForecast.cs) class is shared among both apps. |
+
+**Blazor static web app**
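
Once the deployment finishes, you can confirm the Api project responds by requesting the weather endpoint directly. This is only a quick check: the `/api/WeatherForecast` route name is assumed from the starter template, and `<YOUR-SITE>` is the generated hostname shown on the _Overview_ page.

```bash
# Quick check that the Azure Functions API answers through the static web app
curl https://<YOUR-SITE>.azurestaticapps.net/api/WeatherForecast
```
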
+
+### Fallback route
+
+The app exposes URLs like _/counter_ and _/fetchdata_, which map to specific routes of the app. Since this app is implemented as a single-page application, each route is served the _index.html_ file. To ensure that requests for any path return _index.html_, a [fallback route](./configuration.md#fallback-routes) gets implemented in the _staticwebapp.config.json_ file found in the client project's root folder.
+
+```json
+{
+ "navigationFallback": {
+    "rewrite": "/index.html"
+ }
+}
+```
+
+The JSON configuration ensures that requests to any route in the app return the _index.html_ page.
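
If the app later serves files that shouldn't be rewritten, the fallback also supports an exclusion list. The following is a minimal sketch; the excluded paths are examples rather than part of the starter template.

```json
{
  "navigationFallback": {
    "rewrite": "/index.html",
    "exclude": ["/images/*.{png,jpg,gif}", "/css/*"]
  }
}
```
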
+ ## Clean up resources
-If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance through the following steps:
+If you're not going to use this application, you can delete the Azure Static Web Apps instance through the following steps:
1. Open the [Azure portal](https://portal.azure.com).
-1. Search for **my-blazor-group** from the top search bar.
-1. Select on the group name.
-1. Select on the **Delete** button.
-1. Select **Yes** to confirm the delete action.
+2. Search for **my-blazor-group** from the top search bar.
+3. Select the group name.
+4. Select **Delete**.
+5. Select **Yes** to confirm the delete action.
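
You can also do the same clean-up from the command line. The following sketch uses the Azure CLI and assumes the **my-blazor-group** resource group name from this tutorial; deleting the group removes the static web app and anything else it contains.

```bash
# Deletes the resource group and every resource in it; prompts for confirmation
az group delete --name my-blazor-group
```
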
## Next steps > [!div class="nextstepaction"]
-> [Authentication and authorization](./authentication-authorization.md)
+> [Authenticate and authorize](./authentication-authorization.md)
+
+## Related articles
+
+- [Set up authentication and authorization](authentication-authorization.md)
+- [Configure app settings](application-settings.md)
+- [Enable monitoring](monitor.md)
+- [Azure CLI](https://github.com/Azure/static-web-apps-cli)
static-web-apps Deploy Nextjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs.md
Previously updated : 10/11/2022 Last updated : 10/12/2022
Rather than using the Next.js CLI to create your app, you can use a starter repo
To begin, create a new repository under your GitHub account from a template repository.
-1. Go to [https://github.com/staticwebdev/nextjs-starter/generate](https://github.com/login?return_to=/staticwebdev/nextjs-starter/generate).
-2. Name the repository **nextjs-starter**.
-3. Clone the new repo to your machine. Make sure to replace `<YOUR_GITHUB_ACCOUNT_NAME>` with your account name.
+1. Go to [https://github.com/staticwebdev/nextjs-starter/generate](https://github.com/login?return_to=/staticwebdev/nextjs-starter/generate)
+1. Name the repository **nextjs-starter**
+1. Next, clone the new repo to your machine. Make sure to replace `<YOUR_GITHUB_ACCOUNT_NAME>` with your account name.
```bash git clone http://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/nextjs-starter ```
-4. Go to the newly cloned Next.js app.
+1. Go to the newly cloned Next.js app.
```bash cd nextjs-starter ```
-5. Install dependencies.
+1. Install dependencies.
```bash npm install ```
-6. Start Next.js app in development.
+1. Start Next.js app in development.
```bash npm run dev ```
-7. Go to `http://localhost:3000` to open the app, where you should see the following website open in your preferred browser.
+1. Go to `http://localhost:3000` to open the app, where you should see the following website open in your preferred browser:
:::image type="content" source="media/deploy-nextjs/start-nextjs-app.png" alt-text="Start Next.js app":::
The following steps show how to link your app to Azure Static Web Apps. Once in
## 3. Review and create
-1. Select the **Review + Create** button to verify the details are all correct.
+1. Select **Review + Create** to verify the details are all correct.
1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions workflow for deployment. 1. Once the deployment completes, select **Go to resource**.
-2. On the _Overview_ window, select the *URL* link to open your deployed app.
+1. On the _Overview_ window, select the *URL* link to open your deployed application.
If the website doesn't load immediately, then the build is still running. Once the workflow is complete, you can refresh the browser to view your web app.
-To check the status of the Actions workflow, go to the *Actions* dashboard for your repository:
+To check the status of the Actions workflow, go to the Actions dashboard for your repository:
```url https://github.com/<YOUR_GITHUB_USERNAME>/nextjs-starter/actions
Return to the terminal and run the following command `git pull origin main`.
If you're not going to continue to use this app, you can delete the Azure Static Web Apps instance through the following steps. 1. Open the [Azure portal](https://portal.azure.com).
-2. Search for **my-nextjs-group** from the top search bar.
-3. Select on the group name.
-4. Select on the **Delete** button.
-5. Select **Yes** to confirm the delete action.
+1. Search for **my-nextjs-group** from the top search bar.
+1. Select on the group name.
+1. Select **Delete**.
+1. Select **Yes** to confirm the delete action.
## Next steps
static-web-apps Deploy Nuxtjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nuxtjs.md
In this tutorial, you learn to deploy a [Nuxt 3](https://v3.nuxtjs.org/) applica
You can set up a new Nuxt project using `npx nuxi init nuxt-app`. Instead of using a new project, this tutorial uses an existing repository set up to demonstrate how to deploy a Nuxt 3 site with universal rendering on Azure Static Web Apps.
-1. Navigate to [http://github.com/staticwebdev/nuxt-3-starter/generate](https://github.com/login?return_to=/staticwebdev/nuxt-3-starter/generate).
-1. Name the repository **nuxt-3-starter**.
+1. Create a new repository under your GitHub account from a template repository.
+1. Go to [http://github.com/staticwebdev/nuxtjs-starter/generate](https://github.com/login?return_to=/staticwebdev/nuxtjs-starter/generate)
+1. Name the repository **nuxtjs-starter**.
1. Next, clone the new repo to your machine. Make sure to replace <YOUR_GITHUB_ACCOUNT_NAME> with your account name. ```bash git clone http://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/nuxt-3-starter ```
-1. Navigate to the newly cloned Nuxt.js app:
+1. Go to the newly cloned Nuxt.js app:
```bash cd nuxt-3-starter
You can set up a new Nuxt project using `npx nuxi init nuxt-app`. Instead of usi
npm run dev -- -o ```
-Navigate to `http://localhost:3000` to open the app, where you should see the following website open in your preferred browser. Select the buttons to invoke server and API routes.
+Go to `http://localhost:3000` to open the app, where you should see the following website open in your preferred browser:
:::image type="content" source="media/deploy-nuxtjs/nuxt-3-app.png" alt-text="Start Nuxt.js app":::
-## Deploy your Nuxt 3 site
+When you select a framework/library, you should see a details page about the selected item:
-The following steps show how to create an Azure Static Web Apps resource and configure it to deploy your app from GitHub.
+
+## Generate a static website from Nuxt.js build
+
+When you build a Nuxt.js site using `npm run build`, the app is built as a traditional web app, not a static site. To generate a static site, use the following application configuration.
+
+1. Update the _package.json_'s build script to only generate a static site using the `nuxt generate` command:
+
+ ```json
+ "scripts": {
+ "dev": "nuxt dev",
+ "build": "nuxt generate"
+ },
+ ```
+
+ Now with this command in place, Static Web Apps runs the `build` script every time you push a commit.
+
+2. Generate a static site:
+
+ ```bash
+ npm run build
+ ```
+
+   Nuxt.js generates the static site and copies it into a _dist_ folder at the root of your working directory.
+
+ > [!NOTE]
+ > This folder is listed in the _.gitignore_ file because it should be generated by CI/CD when you deploy.
+
+## Deploy your static website
+
+The following steps show how to link the app you just pushed to GitHub to Azure Static Web Apps. Once in Azure, you can deploy the application to a production environment.
### Create an Azure Static Web Apps resource
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com).
1. Select **Create a Resource**. 1. Search for **Static Web Apps**. 1. Select **Static Web Apps**.
The following steps show how to create an Azure Static Web Apps resource and con
1. Select **Review + Create** to verify the details are all correct.
-1. Select **Create** to start the creation of the static web app and provision a GitHub Actions for deployment.
+2. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions workflow for deployment.
-1. Once the deployment completes, select **Go to resource**.
+3. Once the deployment completes, select **Go to resource**.
-1. On the _Overview_ window, select the *URL* link to open your deployed application.
+4. On the _Overview_ window, select the *URL* link to open your deployed application.
-If the website does not immediately load, then the background GitHub Actions workflow is still running. Once the workflow is complete you can then refresh the browser to view your web app.
+If the website doesn't immediately load, then the background GitHub Actions workflow is still running. Once the workflow is complete, you can refresh the browser to view your web app.
You can check the status of the Actions workflows by navigating to the Actions for your repository:
https://github.com/<YOUR_GITHUB_USERNAME>/nuxt-3-starter/actions
When you created the app, Azure Static Web Apps created a GitHub Actions workflow file in your repository. Return to the terminal and run the following command to pull the commit containing the new file.
-```bash
-git pull
-```
+## Configure dynamic routes
+
+Go to the newly-deployed site and select one of the framework or library logos. Instead of getting a details page, you get a 404 error page.
++
+The reason is that when Nuxt.js generated the static site, it only did so for the home page. Nuxt.js can generate equivalent static `.html` files for every `.vue` page file, but there's an exception.
+
+If the page is a dynamic page, for example `_id.vue`, Nuxt.js doesn't have enough information to generate static HTML for it. You have to explicitly provide the possible paths for the dynamic routes.
+
+## Generate static pages from dynamic routes
+
+1. Update the _nuxt.config.js_ file so that Nuxt.js uses a list of all available data to generate static pages for each framework/library:
+
+ ```javascript
+ import { projects } from "./utils/projectsData";
+
+ export default {
+ mode: "universal",
+
+ //...truncated
+
+ generate: {
+ async routes() {
+ const paths = [];
+
+ projects.forEach(project => {
+ paths.push(`/project/${project.slug}`);
+ });
+
+ return paths;
+ }
+ }
+ };
+ ```
+
+ > [!NOTE]
+ > `routes` is an async function, so you can make a request to an API in this function and use the returned list to generate the paths.
Make changes to the app by updating the code and pushing it to GitHub. GitHub Actions automatically builds and deploys the app.
static-web-apps Deployment Token Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deployment-token-management.md
Normally, you don't need to worry about the deployment token, but the following
## Reset a deployment token
-1. Click on **Manage deployment token** link on the _Overview_ page of your Azure Static Web Apps site.
+1. Select **Manage deployment token** on the _Overview_ page of your Azure Static Web Apps site.
:::image type="content" source="./media/deployment-token-management/manage-deployment-token-button.png" alt-text="Managing deployment token":::
-1. Click on the **Reset token** button.
+2. Select **Reset token**.
:::image type="content" source="./media/deployment-token-management/manage-deployment-token.png" alt-text="Resetting deployment token":::
-1. After displaying a new token in the _Deployment token_ field, copy the token by clicking **Copy to clipboard** icon.
+3. After displaying a new token in the _Deployment token_ field, copy the token by selecting **Copy to clipboard**.
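
The deployment token is also available from the command line, which is useful when you script the rest of this process. The following sketch assumes the current `az staticwebapp` command group and that the token is exposed as `properties.apiKey`:

```bash
# Print the current deployment token for the site
az staticwebapp secrets list \
  --name <YOUR-STATIC-WEB-APP-NAME> \
  --query "properties.apiKey" \
  --output tsv
```
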
## Update a secret in the GitHub repository To keep automated deployment running, after resetting a token you need to set the new value in the corresponding GitHub repository.
-1. Navigate to your project's repository on GitHub, and click on the **Settings** tab.
-1. Click on the **Secrets** menu item. You will find a secret generated during Static Web App provisioning named _AZURE_STATIC_WEB_APPS_API_TOKEN_... in the _Repository secrets_ section.
+1. Go to your project's repository on GitHub, and select the **Settings** tab.
+2. Select **Secrets** from the menu. Find the secret generated during Static Web App provisioning named _AZURE_STATIC_WEB_APPS_API_TOKEN_... in the _Repository secrets_ section.
:::image type="content" source="./media/deployment-token-management/github-repo-secrets.png" alt-text="Listing repository secrets"::: > [!NOTE]
- > If you created the Azure Static Web Apps site against multiple branches of this repository, you will see multiple _AZURE_STATIC_WEB_APPS_API_TOKEN_... secrets in this list. Select the correct one by matching the file name listed in the _Edit workflow_ field on the _Overview_ tab of the Static Web Apps site.
+ > If you created the Azure Static Web Apps site against multiple branches of this repository, you see multiple _AZURE_STATIC_WEB_APPS_API_TOKEN_... secrets in this list. Select the correct one by matching the file name listed in the _Edit workflow_ field on the _Overview_ tab of the Static Web Apps site.
-1. Click on the **Update** button to open the editor.
-1. **Paste the value** of the deployment token to the _Value_ field.
-1. Click **Update secret**.
+3. Select **Update**.
+4. **Paste the value** of the deployment token to the _Value_ field.
+5. Select **Update secret**.
:::image type="content" source="./media/deployment-token-management/github-update-secret.png" alt-text="Updating repository secret":::
static-web-apps Enterprise Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/enterprise-edge.md
A manual setup gives you full control over the CDN configuration including the c
# [Azure portal](#tab/azure-portal)
-1. Navigate to your static web app in the Azure portal.
+1. Go to your static web app in the Azure portal.
1. Select **Enterprise-grade edge** in the left menu.
static-web-apps Front Door Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/front-door-manual.md
In this tutorial, you learn how to:
## Copy web app URL
-1. Navigate to the Azure portal.
+1. Go to the Azure portal.
1. Open the static web app that you want to apply Azure Front Door.
In this tutorial, you learn how to:
When creating an Azure Front Door profile, you must select an origin from the same subscription as the selected Front Door.
-1. Navigate to the Azure portal home.
+1. Go to the Azure home screen.
1. Select **Create a resource**.
When creating an Azure Front Door profile, you must select an origin from the sa
1. Select the **Quick create** option.
-1. Select the **Continue to create a Front Door** button.
+1. Select **Continue to create a front door**.
1. In the *Basics* tab, enter the following values:
When creating an Azure Front Door profile, you must select an origin from the sa
1. Select **Review + create**.
- The validation process may take a moment to complete before you can continue.
- 1. Select **Create**. The creation process may take a few minutes to complete.
Add the following settings to disable Front Door's caching policies from trying
1. Enter **/.auth** in the textbox.
-1. Select the **Update** button.
+1. Select **Update**.
+
+1. Select the **No transform** option from the *Case transform* dropdown.
### Add an action
Add the following settings to disable Front Door's caching policies from trying
1. Select **Disabled** in the *Caching* dropdown.
-1. Select the **Save** button.
+2. Select **Save**.
### Associate rule to an endpoint
Now that the rule is created, you apply the rule to a Front Door endpoint.
1. Select the endpoint name to which you want to apply the caching rule.
-1. Select the **Next** button.
+2. Select **Next**.
-1. Select the **Associate** button.
+3. Select **Associate**.
## Copy Front Door ID
Open the [staticwebapp.config.json](configuration.md) file for your site and mak
} ```
- First, configure your app to only allow traffic from your Front Door instance. In every backend request, Front Door automatically adds an `X-Azure-FDID` header that contains your Front Door instance ID. By configuring your static web app to require this header, it will restrict traffic exclusively to your Front Door instance. In the `forwardingGateway` section in your configuration file, add the `requiredHeaders` section and define the `X-Azure-FDID` header. Replace `<YOUR-FRONT-DOOR-ID>` with the *Front Door ID* you set aside earlier.
+ First, configure your app to only allow traffic from your Front Door instance. In every backend request, Front Door automatically adds an `X-Azure-FDID` header that contains your Front Door instance ID. By configuring your static web app to require this header, it restricts traffic exclusively to your Front Door instance. In the `forwardingGateway` section in your configuration file, add the `requiredHeaders` section and define the `X-Azure-FDID` header. Replace `<YOUR-FRONT-DOOR-ID>` with the *Front Door ID* you set aside earlier.
Next, add the Azure Front Door hostname (not the Azure Static Web Apps hostname) into the `allowedForwardedHosts` array. If you have custom domains configured in your Front Door instance, also include them in this list. In this example, replace `my-sitename.azurefd.net` with the Azure Front Door hostname for your site.
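
   Putting the two settings together, the relevant portion of _staticwebapp.config.json_ looks roughly like the following sketch; the ID and hostname are placeholders for your own values.

   ```json
   {
     "forwardingGateway": {
       "requiredHeaders": {
         "X-Azure-FDID": "<YOUR-FRONT-DOOR-ID>"
       },
       "allowedForwardedHosts": [
         "my-sitename.azurefd.net"
       ]
     }
   }
   ```
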
-1. For all secured routes in your app, disable Azure Front Door caching by adding `"Cache-Control": "no-store"` to the route header definition.
+2. For all secured routes in your app, disable Azure Front Door caching by adding `"Cache-Control": "no-store"` to the route header definition.
```json {
static-web-apps Functions Bring Your Own https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/functions-bring-your-own.md
Title: Bring your own functions to Azure Static Web Apps description: Use an existing Azure Functions app with your Azure Static Web Apps site.+ Previously updated : 07/27/2022 Last updated : 10/13/2022 # Bring your own functions to Azure Static Web Apps
-Azure Static Web Apps APIs are supported by two possible configurations: managed functions and bring your own functions. See the [overview](apis-functions.md) for details between the two configurations.
+Azure Static Web Apps APIs are supported by two possible configurations: managed functions and bring your own functions. For more information on the differences in configurations, see the [overview](apis-functions.md).
This article demonstrates how to link an existing Azure Functions app to an Azure Static Web Apps resource.
Once linked, you can access that same endpoint through the `api` path from your
https://red-sea-123.azurestaticapps.net/api/getProducts ```
- Both endpoint URLs point to the same function.
+Both endpoint URLs point to the same function.
## Link an existing Azure Functions app
Before you associate an existing Functions app, you first need to adjust to conf
| Subscription | Select your Azure subscription name. | | Resource name | Select the Azure Functions app name. |
-1. Select the **Link** button.
+2. Select **Link**.
The Azure Functions app is now mapped to the `/api` route of your static web app. > [!IMPORTANT]
-> Make sure to set the `api_location` value to an empty string (`""`) in the [workflow configuration](./build-configuration.md) file before you link an existing Functions application.
+> Make sure to set the `api_location` value to an empty string (`""`) in the [workflow configuration](./build-configuration.md) file before you link an existing Functions application. Also, calls assume that the external function app retains the default `api` route prefix. Many apps remove this prefix in the *host.json*. Make sure the prefix is in place in the configuration, otherwise the call fails.
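
For reference, the prefix is controlled by the `routePrefix` setting in the function app's *host.json*. A minimal sketch of the section that keeps the default `api` prefix in place:

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": "api"
    }
  }
}
```
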
## Deployment
static-web-apps Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-cli.md
Now that the repository is generated from the template, you can deploy the app a
As you execute this command, the CLI starts the GitHub interactive log in experience. Look for a line in your console that resembles the following message.
- > Please navigate to `https://github.com/login/device` and enter the user code 329B-3945 to activate and retrieve your GitHub personal access token.
+ > Go to `https://github.com/login/device` and enter the user code 329B-3945 to activate and retrieve your GitHub personal access token.
-1. Navigate to **https://github.com/login/device**.
+1. Go to **https://github.com/login/device**.
1. Enter the user code as displayed in your console's message.
-1. Select the **Continue** button.
+2. Select **Continue**.
-1. Select the **Authorize AzureAppServiceCLI** button.
+3. Select **Authorize AzureAppServiceCLI**.
## View the website There are two aspects to deploying a static app. The first operation creates the underlying Azure resources that make up your app. The second is a workflow that builds and publishes your application.
-Before you can navigate to your new static site, the deployment build must first finish running.
+Before you can go to your new static site, the deployment build must first finish running.
1. Return to your console window and run the following command to list the URLs associated with your app.
Before you can navigate to your new static site, the deployment build must first
--query "defaultHostname" ```
- Copy the URL into your browser to navigate to your website.
+ Copy the URL into your browser to go to your website.
## Clean up resources
static-web-apps Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-portal.md
Azure Static Web Apps publishes a website to a production environment by buildin
This article uses a GitHub repository to make it easy for you to get started. The repository features a starter app used to deploy using Azure Static Web Apps. 1. Sign in to Azure DevOps.
-1. Select the **New repository** button.
-1. In the *Create new project* window, expand the **Advanced** button and make the following selections:
+2. Select **New repository**.
+3. In the *Create new project* window, expand the **Advanced** menu and make the following selections:
| Setting | Value | |--|--|
This article uses a GitHub repository to make it easy for you to get started. Th
| Version control | Select **Git**. | | Work item process | Select the option that best suits your development methods. |
-1. Select the **Create** button.
-1. Select the **Repos** menu item.
-1. Select the **Files** menu item.
-1. Under the *Import repository* card, select the **Import** button.
-1. Copy a repository URL for the framework of your choice, and paste it into the *Clone URL* box.
+4. Select **Create**.
+5. Select the **Repos** menu item.
+6. Select the **Files** menu item.
+7. Under the *Import repository* card, select **Import**.
+8. Copy a repository URL for the framework of your choice, and paste it into the *Clone URL* box.
# [No Framework](#tab/vanilla-javascript)
This article uses a GitHub repository to make it easy for you to get started. Th
-1. Select the **Import** button and wait for the import process to complete.
+9. Select **Import** and wait for the import process to complete.
::: zone-end
This article uses a GitHub repository to make it easy for you to get started. Th
Now that the repository is created, you can create a static web app from the Azure portal.
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com).
1. Select **Create a Resource**. 1. Search for **Static Web Apps**. 1. Select **Static Web Apps**.
In the _Basics_ section, begin by configuring your new app and linking it to a G
| Azure Functions and staging details | Select a region closest to you. | | Source | Select **GitHub**. |
-Select the **Sign-in with GitHub** button and authenticate with GitHub.
+Select **Sign-in with GitHub** and authenticate with GitHub.
After you sign in with GitHub, enter the repository information.
After you sign in with GitHub, enter the repository information.
> [!NOTE] > If you don't see any repositories: > - You may need to authorize Azure Static Web Apps in GitHub. Browse to your GitHub repository and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**.
-> - You may need to authorize Azure Static Web Apps in your Azure DevOps organization. You must be an owner of the organization to grant the permissions. Request third-party application access via via OAuth. For more information, see [Authorize access to REST APIs with OAuth 2.0](https://learn.microsoft.com/azure/devops/integrate/get-started/authentication/oauth?view=azure-devops#2-authorize-your-app).
+> - You may need to authorize Azure Static Web Apps in your Azure DevOps organization. You must be an owner of the organization to grant the permissions. Request third-party application access via OAuth. For more information, see [Authorize access to REST APIs with OAuth 2.0](/azure/devops/integrate/get-started/authentication/oauth).
::: zone-end
Select **Go to resource**.
There are two aspects to deploying a static app. The first creates the underlying Azure resources that make up your app. The second is a workflow that builds and publishes your application.
-Before you can navigate to your new static site, the deployment build must first finish running.
+Before you can go to your new static site, the deployment build must first finish running.
The Static Web Apps *Overview* window displays a series of links that help you interact with your web app.
The Static Web Apps *Overview* window displays a series of links that help you i
:::image type="content" source="./media/getting-started/overview-window.png" alt-text="The Azure Static Web Apps overview window.":::
-1. Select the banner that says, _Click here to check the status of your GitHub Actions runs_, which takes you to the GitHub Actions and runs against your repository. Once you verify the deployment job is complete, then you can navigate to your website via the generated URL.
+1. Selecting the banner that says, _Select here to check the status of your GitHub Actions runs_, takes you to the GitHub Actions running against your repository. Once you verify the deployment job is complete, you can go to your website via the generated URL.
-2. Once GitHub Actions workflow is complete, select on the _URL_ link to open the website in new tab.
+2. Once the GitHub Actions workflow is complete, you can select the _URL_ link to open the website in a new tab.
::: zone-end ::: zone pivot="azure-devops"
-Once the workflow is complete, select the _URL_ link to open the website in new tab.
+Once the workflow is complete, you can select the _URL_ link to open the website in a new tab.
::: zone-end
If you're not going to continue to use this application, you can delete the Azur
1. Open the [Azure portal](https://portal.azure.com). 1. Search for **my-first-web-static-app** from the top search bar. 1. Select the app name.
-2. Select **Delete**.
-3. Select **Yes** to confirm the delete action. This action may take a few moments to complete.
+1. Select **Delete**.
+1. Select **Yes** to confirm the delete action (this action may take a few moments to complete).
## Next steps
static-web-apps Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/getting-started.md
If you don't already have the [Azure Static Web Apps extension for Visual Studio
1. Select **View** > **Extensions**. 1. In the **Search Extensions in Marketplace**, type **Azure Static Web Apps**. 1. Select **Install** for **Azure Static Web Apps**.
-1. The extension will install into Visual Studio Code.
+2. The extension installs into Visual Studio Code.
## Create a static web app
If you don't already have the [Azure Static Web Apps extension for Visual Studio
:::image type="content" source="media/getting-started/extension-azure-logo.png" alt-text="Azure Logo"::: > [!NOTE]
- > You are required to sign in to Azure and GitHub in Visual Studio Code to continue. If you are not already authenticated, the extension will prompt you to sign in to both services during the creation process.
+ > You are required to sign in to Azure and GitHub in Visual Studio Code to continue. If you are not already authenticated, the extension prompts you to sign in to both services during the creation process.
-1. Select <kbd>F1</kbd> to open the Visual Studio Code command palette.
+2. Select <kbd>F1</kbd> to open the Visual Studio Code command palette.
-1. Enter **Create static web app** in the command box.
+3. Enter **Create static web app** in the command box.
-1. Select *Azure Static Web Apps: Create static web app...* and select **Enter**.
+4. Select *Azure Static Web Apps: Create static web app...* and select **Enter**.
# [No Framework](#tab/vanilla-javascript)
If you don't already have the [Azure Static Web Apps extension for Visual Studio
-1. Enter the settings values for that match your framework preset choice.
+5. Enter the settings values that match your framework preset choice.
# [No Framework](#tab/vanilla-javascript)
If you don't already have the [Azure Static Web Apps extension for Visual Studio
-1. Once the app is created, a confirmation notification is shown in Visual Studio Code.
+6. Once the app is created, a confirmation notification is shown in Visual Studio Code.
:::image type="content" source="media/getting-started/extension-confirmation.png" alt-text="Created confirmation":::
If you don't already have the [Azure Static Web Apps extension for Visual Studio
Once the deployment is complete, you can navigate directly to your website.
-1. To view the website in the browser, right-click on the project in the Static Web Apps extension, and select **Browse Site**.
+7. To view the website in the browser, right-click the project in the Static Web Apps extension, and select **Browse Site**.
:::image type="content" source="media/getting-started/extension-browse-site.png" alt-text="Browse site":::
If you don't already have the [Azure Static Web Apps extension for Visual Studio
If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance through the extension.
-In the Visual Studio Code Explorer window, return to the _Resources_ section and under _Static Web Apps_, right-click on **my-first-static-web-app** and select **Delete**.
+In the Visual Studio Code Explorer window, return to the _Resources_ section and under _Static Web Apps_, right-click **my-first-static-web-app** and select **Delete**.
## Next steps
static-web-apps Gitlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/gitlab.md
In this tutorial, you learn to:
This article uses a GitHub repository as the source to import code into a GitLab repository.
-1. Sign in to your GitLab account and navigate to [https://gitlab.com/projects/new#import_project](https://gitlab.com/projects/new#import_project)
-1. Select the **Repo by URL** button.
-1. In the *Git repository URL* box, enter the repository URL for your choice of framework.
+1. Sign in to your GitLab account and go to [https://gitlab.com/projects/new#import_project](https://gitlab.com/projects/new#import_project)
+2. Select **Repo by URL**.
+3. In the *Git repository URL* box, enter the repository URL for your choice of framework.
# [No Framework](#tab/vanilla-javascript)
This article uses a GitHub repository as the source to import code into a GitLab
-1. In the *Project slug* box, enter **my-first-static-web-app**.
-1. Select the **Create project** button and wait a moment while your repository is set up.
+4. In the *Project slug* box, enter **my-first-static-web-app**.
+5. Select **Create project** and wait a moment while your repository is set up.
## Create a static web app Now that the repository is created, you can create a static web app from the Azure portal.
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com).
1. Select **Create a Resource**. 1. Search for **Static Web Apps**. 1. Select **Static Web Apps**.
Now that the repository is created, you can create a static web app from the Azu
1. Select **Review + create**. 1. Select **Create**.
-1. Select the **Go to resource** button.
-1. Select the **Manage deployment token** button.
-1. Copy the deployment token value and set it aside in an editor for later use.
-1. Select the **Close** button on the *Manage deployment token* window.
+2. Select **Go to resource**.
+3. Select **Manage deployment token**.
+4. Copy the deployment token value and set it aside in an editor for later use.
+5. Select **Close** on the *Manage deployment token* window.
## Create the pipeline task in GitLab
Next you add a workflow task responsible for building and deploying your site as
### Add deployment token
-1. Navigate to the repository in GitLab.
+1. Go to the repository in GitLab.
1. Select **Settings**. 1. Select **CI/CD**.
-1. Next to the *Variables* section, select the **Expand** button.
-1. Select the **Add variable** button.
-1. In the *Key* box, enter **DEPLOYMENT_TOKEN**.
-1. In the *Value* box, paste in the deployment token value you set aside in a previous step.
-1. Select the **Add variable** button.
+2. Next to the *Variables* section, select **Expand**.
+3. Select **Add variable**.
+4. In the *Key* box, enter **DEPLOYMENT_TOKEN**.
+5. In the *Value* box, paste in the deployment token value you set aside in a previous step.
+6. Select **Add variable**.
### Add file
Next you add a workflow task responsible for building and deploying your site as
| `OUTPUT_PATH` | Location of the build output folder relative to the `APP_PATH`. | If your application source code is located at `$CI_PROJECT_DIR/app`, and the build script outputs files to the `$CI_PROJECT_DIR/app/build` folder, then set `$CI_PROJECT_DIR/app/build` as the `OUTPUT_PATH` value. | No | | `API_TOKEN` | API token for deployment. | `API_TOKEN: $DEPLOYMENT_TOKEN` | Yes |
-1. Select the **Commit changes** button.
-1. Select the **CI/CD** then **Pipelines** menu items to view the progress of your deployment.
+2. Select **Commit changes**.
+3. Select the **CI/CD** then **Pipelines** menu items to view the progress of your deployment.
Once the deployment is complete, you can view your website.
Once the deployment is complete, you can view your website.
There are two aspects to deploying a static app. The first step creates the underlying Azure resources that make up your app. The second is a GitLab workflow that builds and publishes your application.
-Before you can navigate to your new static site, the deployment build must first finish running.
+Before you can go to your new static site, the deployment build must first finish running.
The Static Web Apps overview window displays a series of links that help you interact with your web app. 1. Return to your static web app in the Azure portal.
-1. Navigate to the **Overview** window.
-1. Select the link under the *URL* label. Your website will load in a new tab.
+1. Go to the **Overview** window.
+2. Select the link under the *URL* label. Your website loads in a new tab.
## Clean up resources If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance and all the associated services by removing the resource group. 1. Select the **static-web-apps-gitlab** resource group from the *Overview* section.
-1. Select the **Delete resource group** button at the top of the resource group *Overview*.
-1. Enter the resource group name **static-web-apps-gitlab** in the *Are you sure you want to delete "static-web-apps-gitlab"?* confirmation dialog.
-1. Select **Delete**.
+2. Select **Delete resource group** at the top of the resource group *Overview*.
+3. Enter the resource group name **static-web-apps-gitlab** in the *Are you sure you want to delete "static-web-apps-gitlab"?* confirmation dialog.
+4. Select **Delete**.
The process to delete the resource group may take a few minutes to complete. ## Next steps > [!div class="nextstepaction"]
-> [Add an API](add-api.md)
+> [Add an API](add-api.md)
static-web-apps Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/key-vault-secrets.md
Key Vault integration is not available for:
1. Select the **System assigned** tab.
-1. Under the _Status_ label, select the **On** button.
+2. Under the _Status_ label, select **On**.
-1. Select the **Save** button.
+3. Select **Save**.
:::image type="content" source="media/key-vault-secrets/azure-static-web-apps-enable-managed-identity.png" alt-text="Add system-assigned identity":::
-1. When the confirmation dialog appears, select the **Yes** button.
+4. When the confirmation dialog appears, select **Yes**.
:::image type="content" source="media/key-vault-secrets/azure-static-web-apps-enable-managed-identity-confirmation.png" alt-text="Confirm identity assignment.":::
You can now add an access policy to allow your static web app to read Key Vault
1. Select list item that matches your application name.
-1. Select the **Select** button.
+2. Select **Select**.
-1. Select the **Add** button.
+3. Select **Add**.
-1. Select the **Save** button.
+4. Select **Save**.
:::image type="content" source="media/key-vault-secrets/azure-static-web-apps-key-vault-save-policy.png" alt-text="Save Key Vault access policy":::
The access policy is now saved to Key Vault. Next, access the secret's URI to us
1. Select your desired secret version from the list.
-1. Select the **copy button** at end of _Secret Identifier_ text box to copy the secret URI value to the clipboard.
+2. Select **copy** at the end of the _Secret Identifier_ text box to copy the secret URI value to the clipboard.
-1. Paste this value into a text editor for later use.
+3. Paste this value into a text editor for later use.
## Add application setting
The access policy is now saved to Key Vault. Next, access the secret's URI to us
1. Under the _Settings_ menu, select **Configuration**.
-1. Under the _Application settings_ section, select the **Add** button.
+2. Under the _Application settings_ section, select **Add**.
-1. Enter a name in the text box for the _Name_ field.
+3. Enter a name in the text box for the _Name_ field.
-1. Determine the secret value in text box for the _Value_ field.
+4. Determine the secret value in the text box for the _Value_ field.
The secret value is a composite of a few different values. The following template shows how the final string is built.
The access policy is now saved to Key Vault. Next, access the secret's URI to us
Use the following steps to build the full secret value.
-1. Copy the template from above and paste it into a text editor.
+5. Copy the template from above and paste it into a text editor.
-1. Replace `<YOUR-KEY-VAULT-SECRET-URI>` with the Key Vault URI value you set aside earlier.
+6. Replace `<YOUR-KEY-VAULT-SECRET-URI>` with the Key Vault URI value you set aside earlier.
-1. Copy the new full string value.
+7. Copy the new full string value.
-1. Paste the value into the text box for the _Value_ field.
+8. Paste the value into the text box for the _Value_ field.
-1. Select the **OK** button.
+9. Select **OK**.
-1. Select the **Save** button at the top of the _Application settings_ toolbar.
+10. Select **Save** at the top of the _Application settings_ toolbar.
:::image type="content" source="media/key-vault-secrets/azure-static-web-apps-application-settings-save.png" alt-text="Save application settings":::
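
For context, the finished application setting value is a Key Vault reference built around the secret URI you copied earlier. It generally takes a form like the following sketch, where the vault name, secret name, and version are placeholders:

```
@Microsoft.KeyVault(SecretUri=https://<YOUR-VAULT-NAME>.vault.azure.net/secrets/<SECRET-NAME>/<VERSION>)
```
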
static-web-apps Local Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/local-development.md
The following chart shows how requests are handled locally.
:::image type="content" source="media/local-development/cli-conceptual.png" alt-text="Azure Static Web App CLI request and response flow"::: > [!IMPORTANT]
-> Navigate to `http://localhost:4280` to access the application served by the CLI.
+> Go to `http://localhost:4280` to access the application served by the CLI.
- **Requests** made to port `4280` are forwarded to the appropriate server depending on the type of request.
Open a terminal to the root folder of your existing Azure Static Web Apps site.
swa start ```
-1. Navigate to `http://localhost:4280` to view the app in the browser.
+1. Go to `http://localhost:4280` to view the app in the browser.
### Other ways to start the CLI
Open a terminal to the root folder of your existing Azure Static Web Apps site.
The Static Web Apps CLI emulates the [security flow](./authentication-authorization.md) implemented in Azure. When a user logs in, you can define a fake identity profile returned to the app.
-For instance, when you try to navigate to `/.auth/login/github`, a page is returned that allows you to define an identity profile.
+For instance, when you try to go to `/.auth/login/github`, a page is returned that allows you to define an identity profile.
> [!NOTE] > The emulator works with any security provider, not just GitHub.
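
After you define a profile on that page, your app receives the same shape of identity data it gets in Azure. Locally, the `/.auth/me` response looks roughly like the following sketch; the values are whatever you entered in the emulator's form.

```json
{
  "clientPrincipal": {
    "identityProvider": "github",
    "userId": "d75b260a64504067bfc5b2905e3b8182",
    "userDetails": "username",
    "userRoles": ["anonymous", "authenticated"]
  }
}
```
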
static-web-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/monitor.md
Use the following steps to view traces in your application.
1. Hover your mouse over any card in the _Queries_ window.
-1. Select the **Load Editor** button.
+2. Select **Load Editor**.
-1. Replace the generated query with the word `traces`.
+3. Replace the generated query with the word `traces`.
-1. Select the **Run** button.
+4. Select **Run**.
:::image type="content" source="media/monitoring/azure-static-web-apps-application-insights-traces.png" alt-text="View Application Insights traces":::
static-web-apps Named Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/named-environments.md
steps:
> [!NOTE] > The `...` denotes code skipped for clarity.
-In this example, changes to all branches will be deployed to the `release` named preview environment.
+In this example, changes to all branches get deployed to the `release` named preview environment.
## Next Steps
static-web-apps Password Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/password-protection.md
An existing static web app in the Standard plan.
1. Enter the same password in **Confirm visitor password**.
-1. Select the **Save** button.
+2. Select **Save**.
-When visitors first navigate to a protected environment, they're prompted to enter the password before they can view the site.
+When visitors first go to a protected environment, they're prompted to enter the password before they can view the site.
:::image type="content" source="media/password-protection/password-prompt.png" alt-text="Password prompt":::
static-web-apps Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/plans.md
See the [quotas guide](quotas.md) for limitation details.
You can move between Free or Standard plans via the Azure portal.
-1. Navigate to your Static Web Apps resource in the Azure portal.
+1. Go to your Static Web Apps resource in the Azure portal.
1. Under the _Settings_ menu, select **Hosting plan**.
static-web-apps Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/private-endpoint.md
Configuring Static Web Apps with a private endpoint allows you to use a private
> [!NOTE] > Placing your application behind a private endpoint means your app is only available in the region where your VNet is located. As a result, your application is no longer available across multiple points of presence.
-If your app has a private endpoint enabled, the server will respond with a `403` status code if the request comes from a public IP address. This behavior applies to both the production environment as well as any staging environments. The only way to reach the app is to use the private endpoint deployed within your VNet.
+If your app has a private endpoint enabled, the server responds with a `403` status code if the request comes from a public IP address. This behavior applies to both the production environment as well as any staging environments. The only way to reach the app is to use the private endpoint deployed within your VNet.
-The default DNS resolution of the static web app still exists and routes to a public IP address. The private endpoint will expose 2 IP Addresses within your VNet, one for the production environment and one for any staging environments. To ensure your client is able to reach the app correctly, make sure your client resolves the hostname of the app to the appropriate IP address of the private endpoint. This is required for the default hostname as well as any custom domains configured for the static web app. This resolution is done automatically if you select a private DNS zone when creating the private endpoint (see example below) and is the recommended solution.
+The default DNS resolution of the static web app still exists and routes to a public IP address. The private endpoint exposes two IP addresses within your VNet, one for the production environment and one for any staging environments. To ensure your client is able to reach the app correctly, make sure your client resolves the hostname of the app to the appropriate IP address of the private endpoint. This is required for the default hostname as well as any custom domains configured for the static web app. This resolution is done automatically if you select a private DNS zone when creating the private endpoint (see example below) and is the recommended solution.
If you are connecting from on-prem or do not wish to use a private DNS zone, manually configure the DNS records for your application so that requests are routed to the appropriate IP address of the private endpoint. You can find more information on private endpoint DNS resolution [here](../private-link/private-endpoint-dns.md).
In this section, you create a private endpoint for your static web app.
1. Select the **Private Endpoints** option from the side menu.
-1. Click the **Add** button.
+2. Select **Add**.
-1. In the "Add Private Endpoint" dialog, enter this information:
+3. In the "Add Private Endpoint" dialog, enter this information:
| Setting | Value | | - | -- |
In this section, you create a private endpoint for your static web app.
:::image type="content" source="media/create-private-link-dialog.png" alt-text="./media/create-private-link-dialog.png":::
-1. Select **Ok**.
+4. Select **Ok**.
## Testing your private endpoint
-Since your application is no longer publicly available, the only way to access it is from inside of your virtual network. To test, set up a virtual machine inside of your virtual network and navigate to your site.
+Since your application is no longer publicly available, the only way to access it is from inside of your virtual network. To test, set up a virtual machine inside of your virtual network and go to your site.
## Next steps
static-web-apps Publish Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-azure-resource-manager.md
One of the parameters in the ARM template is `repositoryToken`, which allows the
This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app used to deploy using Azure Static Web Apps.
-1. Navigate to the following location to create a new repository:
+1. Go to the following location to create a new repository:
1. [https://github.com/staticwebdev/vanilla-basic/generate](https://github.com/login?return_to=/staticwebdev/vanilla-basic/generate) 1. Name your repository **myfirstswadeployment**
static-web-apps Publish Gatsby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-gatsby.md
Create a Gatsby app using the Gatsby Command Line Interface (CLI):
npx gatsby new static-web-app ```
-1. Navigate to the newly created app
+1. Go to the newly created app
```bash cd static-web-app
The following steps show you how to create a new static site app and deploy it t
### Create the application
-1. Navigate to the [Azure portal](https://portal.azure.com)
+1. Go to the [Azure portal](https://portal.azure.com)
1. Select **Create a Resource** 1. Search for **Static Web Apps** 1. Select **Static Web Apps**
The following steps show you how to create a new static site app and deploy it t
### Review and create
-1. Select the **Review + Create** button to verify the details are all correct.
+1. Select **Review + Create** to verify the details are all correct.
-1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions for deployment.
+2. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions workflow for deployment.
-1. Once the deployment completes click, **Go to resource**.
+3. Once the deployment completes, select **Go to resource**.
-1. On the resource screen, click the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions to complete.
+4. On the resource screen, select the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions to complete.
:::image type="content" source="./media/publish-gatsby/deployed-app.png" alt-text="Deployed application":::
static-web-apps Publish Hugo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-hugo.md
Create a Hugo app using the Hugo Command Line Interface (CLI):
hugo new site static-app ```
-1. Navigate to the newly created app.
+1. Go to the newly created app.
```bash cd static-app
The following steps show you how to create a new static site app and deploy it t
### Create the application
-1. Navigate to the [Azure portal](https://portal.azure.com)
+1. Go to the [Azure portal](https://portal.azure.com)
1. Select **Create a Resource** 1. Search for **Static Web Apps** 1. Select **Static Web Apps**
The following steps show you how to create a new static site app and deploy it t
### Review and create
-1. Select the **Review + Create** button to verify the details are all correct.
+1. Select **Review + Create** to verify the details are all correct.
-1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions for deployment.
+2. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions workflow for deployment.
-1. Once the deployment completes click, **Go to resource**.
+3. Once the deployment completes, select **Go to resource**.
-1. On the resource screen, click the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions to complete.
+4. On the resource screen, select the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions to complete.
:::image type="content" source="./media/publish-hugo/deployed-app.png" alt-text="Deployed application":::
Update your workflow file to [fetch your full Git history](https://github.com/ac
fetch-depth: 0 ```
-Fetching the full history increases the build time of your GitHub Actions workflow, but your `.Lastmod` and `.GitInfo` variables will be accurate and available for each of your content files.
+Fetching the full history increases the build time of your GitHub Actions workflow, but your `.Lastmod` and `.GitInfo` variables are accurate and available for each of your content files.
## Clean up resources
static-web-apps Publish Jekyll https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-jekyll.md
Create a Jekyll app using the Jekyll Command Line Interface (CLI):
jekyll new static-app ```
-1. Navigate to the newly created app.
+1. Go to the newly created app.
```bash cd static-app
The following steps show you how to create a new static site app and deploy it t
### Create the application
-1. Navigate to the [Azure portal](https://portal.azure.com)
+1. Go to the [Azure portal](https://portal.azure.com)
1. Select **Create a Resource** 1. Search for **Static Web Apps** 1. Select **Static Web Apps**
The following steps show you how to create a new static site app and deploy it t
### Review and create
-1. Select the **Review + Create** button to verify the details are all correct.
+1. Select **Review + Create** to verify the details are all correct.
-1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions for deployment.
+2. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions workflow for deployment.
-1. Once the deployment completes click, **Go to resource**.
+3. Once the deployment completes, select **Go to resource**.
-1. On the resource screen, click the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions to complete.
+4. On the resource screen, select the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions to complete.
:::image type="content" source="./media/publish-jekyll/deployed-app.png" alt-text="Deployed application":::
static-web-apps Publish Vuepress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-vuepress.md
The following steps show you how to create a new static site app and deploy it t
### Create the application
-1. Navigate to the [Azure portal](https://portal.azure.com)
+1. Go to the [Azure portal](https://portal.azure.com)
1. Select **Create a Resource**
1. Search for **Static Web Apps**
1. Select **Static Web Apps**
The following steps show you how to create a new static site app and deploy it t
### Review and create
-1. Select the **Review + Create** button to verify the details are all correct.
+1. Select **Review + Create** to verify the details are all correct.
-1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions for deployment.
+2. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions workflow for deployment.
-1. Once the deployment completes click, **Go to resource**.
+3. Once the deployment completes, select **Go to resource**.
-1. On the resource screen, click the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions to complete.
+4. On the resource screen, select the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions to complete.
:::image type="content" source="./media/publish-vuepress/deployed-app.png" alt-text="Deployed application":::
static-web-apps Review Publish Pull Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/review-publish-pull-requests.md
There are many benefits of using pre-production environments. For example, you c
Begin by making a change in your repository. You can do it directly on GitHub as shown in the following steps.
-1. Navigate to your project's repository on GitHub, then click on the **Branch** button to create a new branch.
+1. Go to your project's repository on GitHub, then select **Branch** to create a new branch.
:::image type="content" source="./media/review-publish-pull-requests/create-branch.png" alt-text="Create new branch using GitHub interface":::
- Type a branch name and click on **Create branch**.
+ Type a branch name and select **Create branch**.
-1. Go to your _app_ folder and change some text content. For example, you can change a title or paragraph. Once you found the file you want to edit, click on **Edit** to make the change.
+2. Go to your _app_ folder and change some text content. For example, you can change a title or paragraph. Once you've found the file you want to edit, select **Edit** to make the change.
:::image type="content" source="./media/review-publish-pull-requests/edit-file.png" alt-text="Edit file button in GitHub interface":::
-1. After you make the changes, click on **Commit changes** to commit your changes to the branch.
+3. After you make the changes, select **Commit changes** to commit your changes to the branch.
:::image type="content" source="./media/review-publish-pull-requests/commit-changes.png" alt-text="Commit changes button in GitHub interface":::
Next, create a pull request from this change.
:::image type="content" source="./media/review-publish-pull-requests/tab.png" alt-text="Pull request tab in a GitHub repository":::
-1. Click on the **Compare & pull request** button of your branch.
+2. Select **Compare & pull request** on your branch.
-1. You can optionally fill in some details about your changes, then click on **Create pull request**.
+3. You can optionally fill in some details about your changes, then select **Create pull request**.
:::image type="content" source="./media/review-publish-pull-requests/open.png" alt-text="Pull request creation in GitHub":::
You can assign reviewers and add comments to discuss your changes if needed.
After the pull request is created, the [GitHub Actions](https://github.com/features/actions) deployment workflow runs and deploys your changes to a pre-production environment.
-Once the workflow has completed building and deploying your app, the GitHub bot adds a comment to your pull request which contains the URL of the pre-production environment. You can click on this link to see your staged changes.
+Once the workflow has completed building and deploying your app, the GitHub bot adds a comment to your pull request that contains the URL of the pre-production environment. You can select this link to see your staged changes.
:::image type="content" source="./media/review-publish-pull-requests/bot-comment.png" alt-text="Pull request comment with the pre-production URL":::
-Click on the generated URL to see the changes.
+Select the generated URL to see the changes.
If you take a closer look at the URL, you can see that it's composed like this: `https://<SUBDOMAIN-PULL_REQUEST_ID>.<AZURE_REGION>.azurestaticapps.net`.
This URL can be referenced in the rest of your workflow to run your tests agains
Once changes are approved, you can publish your changes to production by merging the pull request.
-Click on **Merge pull request**:
+Select **Merge pull request**:
:::image type="content" source="./media/review-publish-pull-requests/merge.png" alt-text="Merge pull request button in GitHub interface":::
static-web-apps Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/troubleshooting.md
npm run build:azure # if specified in package.json
Any errors raised by this process are logged in the GitHub workflow run.
-1. Navigate to the GitHub repository for your static web app.
+1. Go to the GitHub repository for your static web app.
1. Select **Actions**.
> [!NOTE]
- > Any failed workflow runs will be displayed with a red *X* rather than a green check mark
+ > Any failed workflow runs display with a red *X* rather than a green check mark.
-1. Select the link for the workflow run you wish to validate.
-1. Select **Build and Deploy Job** to open the details of the deployment
-1. Select **Build And Deploy** to display the log
+2. Select the link for the workflow run you wish to validate.
+3. Select **Build and Deploy Job** to open the details of the deployment.
+4. Select **Build And Deploy** to display the log.
![Screenshot of the deployment log for a static web app](./media/troubleshooting/build-deploy-log.png)
-1. Review the logs and any error messages.
+5. Review the logs and any error messages.
> [!NOTE] > Some warning error messages may display in red, such as notes about *.oryx_prod_node_modules* and *workspace*. These are part of the normal deployment process.
If you see one of the following error messages in the error log, it's an indicat
| | |
|App Directory Location: '/*folder*' is invalid. Could not detect this directory. | Verify your workflow reflects your repository structure. |
| The app build failed to produce artifact folder: '*folder*'. | Ensure the `folder` property is configured correctly in your workflow file. |
-| Either no Api directory was specified, or the specified directory was not found. | Azure Functions will not be created as the workflow doesn't define a value for the `api` folder. |
+| Either no Api directory was specified, or the specified directory was not found. | No Azure Functions app is created because the workflow doesn't define a value for the `api` folder. |
There are three folder locations specified in the workflow. Ensure these settings match both your project and any tools that transform your source code before deployment.

| Configuration setting | Description |
| | |
-| `app_location` | The root location of the source code to be deployed. This setting will typically be */* or the location of the JavaScript and HTML for your project. |
+| `app_location` | The root location of the source code to be deployed. This setting is typically */* or the location of the JavaScript and HTML for your project. |
| `output_location` | Name of the folder created by any build process from a bundler such as webpack. This folder needs to be created by the build process and must be a subdirectory under the `app_location`. |
| `api_location` | The root location of your Azure Functions application hosted by Azure Static Web Apps. This points to the root folder of all Azure Functions for your project, typically *api*. |
Use [Application Insights](../azure-monitor/app/app-insights-overview.md) to fin
1. Select the Application Insights instance, which has the same name as your static web app (if created using the steps above).
1. Under *Investigate*, select **Failures**.
1. Scroll down to *Drill into* on the right.
-1. In the lower right corner, under *Drill into*, a button will display the number of recently failed operations.
+2. In the lower right corner, under *Drill into*, a button displays the number of recently failed operations.
![Screenshot of the failures screen](./media/troubleshooting/app-insights-errors.png)
-1. Select the button that says **x Operations** to open a panel displaying the recent failed operations.
+3. Select **x Operations** to open a panel displaying the recent failed operations.
![Screenshot of the operations screen](./media/troubleshooting/app-insights-operations.png)
-1. Explore an error by selecting one from the list.
+4. Explore an error by selecting one from the list.
![Screenshot of the error details screen](./media/troubleshooting/app-insights-details.png)
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support le
> > To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
+Here's a video that tells you more about it.
+
+> [!VIDEO https://www.youtube-nocookie.com/embed/5cSo3GqSTWY]
+ Azure allows secure data transfer to Blob Storage accounts using the Azure Blob service REST API, Azure SDKs, and tools such as AzCopy. However, legacy workloads often use traditional file transfer protocols such as SFTP. You could update custom applications to use the REST API and Azure SDKs, but only by making significant code changes. Prior to the release of this feature, if you wanted to use SFTP to transfer data to Azure Blob Storage, you had to either purchase a third-party product or orchestrate your own solution. For custom solutions, you had to create virtual machines (VMs) in Azure to host an SFTP server, and then update, patch, manage, scale, and maintain a complex architecture.
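With SFTP support enabled and a local user configured on the storage account, a standard SFTP client can connect directly to Blob Storage. The following is a minimal sketch; the account name, local user name, and file name are placeholders, and it assumes the documented `<account>.<user>@<account>.blob.core.windows.net` connection format.

```bash
# Connect to the storage account's blob endpoint over SFTP (placeholder names)
sftp mystorageaccount.myuser@mystorageaccount.blob.core.windows.net

# From the interactive prompt, upload a local file into a container
# sftp> put ./example.txt
```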
storage Elastic San Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect.md
Before you can connect to a volume, you'll need to get **StorageTargetIQN**, **S
Run the following command to get these values:

```azurecli
-az elastic-san volume-group list -e $sanName -g $resourceGroupName -v $searchedVolumeGroup -n $searchedVolume
+az elastic-san volume list -e $sanName -g $resourceGroupName -v $searchedVolumeGroup -n $searchedVolume
```

You should see a list of output that looks like the following:
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
New-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSANName $sanName
# [Azure CLI](#tab/azure-cli)

```azurecli
-az elastic-san volume-group create --elastic-san-name $sanName -g $resourceGroupName -n volumeGroupName
+az elastic-san volume-group create --elastic-san-name $sanName -g $resourceGroupName -n $volumeGroupName
```
New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -Volu
Replace `$volumeName` with the name you'd like the volume to use, then run the following script:

```azurecli
-az elastic-san volume-group create --elastic-san-name $sanName -g $resourceGroupName -v volumeGroupName -n $volumeName --size-gib 2000
+az elastic-san volume create --elastic-san-name $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib 2000
```
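For illustration, the shell variables referenced in the preceding commands could be defined as follows before running them; all of the values are placeholders.

```azurecli
sanName="myElasticSan"
resourceGroupName="myResourceGroup"
volumeGroupName="myVolumeGroup"
volumeName="myVolume"
```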
storage Elastic San Expand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-expand.md
Update-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName
# [Azure CLI](#tab/azure-cli)

```azurecli
-az elastic-san update -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib $newVolumeSize
+az elastic-san volume update -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib $newVolumeSize
```
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
This error occurs when the Azure file share is inaccessible because of a storage
| **Error string** | ECS_E_SYNC_METADATA_WRITE_LOCK_TIMEOUT |
| **Remediation required** | No |
-This error usually resolves itself, and can occur if there are:
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83044 |
+| **HRESULT (decimal)** | -2134364092 |
+| **Error string** | ECS_E_SYNC_METADATA_WRITE_LOCK_TIMEOUT_SERVICEUNAVAILABLE |
+| **Remediation required** | No |
+
+These errors usually resolve themselves and can occur if there are:
* A high number of file changes across the servers in the sync group.
* A large number of errors on individual files and directories.
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Before you can mount the Azure file share, make sure you've gone through the fol
Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
-Always mount Azure file shares using.file.core.windows.net, even if you set up a private endpoint for your share. Using CNAME for file share mount isn't supported for identity-based authentication (AD DS or Azure AD DS).
+Always mount Azure file shares using file.core.windows.net, even if you set up a private endpoint for your share. Using CNAME for file share mount isn't supported for identity-based authentication.
```powershell $connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
Title: Use Azure AD Domain Services to authorize access to file data over SMB
-description: Learn how to enable identity-based authentication over Server Message Block (SMB) for Azure Files through Azure Active Directory Domain Services. Your domain-joined Windows virtual machines (VMs) can then access Azure file shares by using Azure AD credentials.
+ Title: Use Azure Active Directory Domain Services (Azure AD DS) to authorize user access to Azure Files over SMB
+description: Learn how to enable identity-based authentication over Server Message Block (SMB) for Azure Files through Azure Active Directory Domain Services (Azure AD DS). Your domain-joined Windows virtual machines (VMs) can then access Azure file shares by using Azure AD credentials.
Previously updated : 08/31/2022 Last updated : 10/13/2022 -+ # Enable Azure Active Directory Domain Services authentication on Azure Files
-[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) using three different methods: on-premises Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS), and Azure Active Directory (Azure AD) Kerberos for hybrid identities (preview). We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring Azure AD DS for authentication with Azure file shares.
+This article focuses on enabling and configuring Azure AD DS for authentication with Azure file shares.
-If you are new to Azure file shares, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.
+We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the AD source you choose.
+
+If you're new to Azure Files, we recommend reading our [planning guide](storage-files-planning.md) before reading this article.
> [!NOTE] > Azure Files supports Kerberos authentication with Azure AD DS with RC4-HMAC and AES-256 encryption. We recommend using AES-256.
If you are new to Azure file shares, we recommend reading our [planning guide](s
## Prerequisites
-Before you enable Azure AD over SMB for Azure file shares, make sure you have completed the following prerequisites:
+Before you enable Azure AD DS over SMB for Azure file shares, make sure you've completed the following prerequisites:
1. **Select or create an Azure AD tenant.**
- You can use a new or existing tenant for Azure AD authentication over SMB. The tenant and the file share that you want to access must be associated with the same subscription.
+ You can use a new or existing tenant. The tenant and the file share that you want to access must be associated with the same subscription.
To create a new Azure AD tenant, you can [Add an Azure AD tenant and an Azure AD subscription](/windows/client-management/mdm/add-an-azure-ad-tenant-and-azure-ad-subscription). If you have an existing Azure AD tenant but want to create a new tenant for use with Azure file shares, see [Create an Azure Active Directory tenant](/rest/api/datacatalog/create-an-azure-active-directory-tenant). 1. **Enable Azure AD Domain Services on the Azure AD tenant.**
- To support authentication with Azure AD credentials, you must enable Azure AD Domain Services for your Azure AD tenant. If you aren't the administrator of the Azure AD tenant, contact the administrator and follow the step-by-step guidance to [Enable Azure Active Directory Domain Services using the Azure portal](../../active-directory-domain-services/tutorial-create-instance.md).
+ To support authentication with Azure AD credentials, you must enable Azure AD DS for your Azure AD tenant. If you aren't the administrator of the Azure AD tenant, contact the administrator and follow the step-by-step guidance to [Enable Azure Active Directory Domain Services using the Azure portal](../../active-directory-domain-services/tutorial-create-instance.md).
It typically takes about 15 minutes for an Azure AD DS deployment to complete. Verify that the health status of Azure AD DS shows **Running**, with password hash synchronization enabled, before proceeding to the next step.
Azure Files authentication with Azure AD DS is available in [all Azure Public, G
## Overview of the workflow
-Before you enable Azure AD DS Authentication over SMB for Azure file shares, verify that your Azure AD and Azure Storage environments are properly configured. We recommend that you walk through the [prerequisites](#prerequisites) to make sure you've completed all the required steps.
+Before you enable Azure AD DS authentication over SMB for Azure file shares, verify that your Azure AD and Azure Storage environments are properly configured. We recommend that you walk through the [prerequisites](#prerequisites) to make sure you've completed all the required steps.
Next, do the following things to grant access to Azure Files resources with Azure AD credentials: 1. Enable Azure AD DS authentication over SMB for your storage account to register the storage account with the associated Azure AD DS deployment.
-2. Assign access permissions for a share to an Azure AD identity (a user, group, or service principal).
-3. Configure NTFS permissions over SMB for directories and files.
+2. Assign share-level permissions to an Azure AD identity (a user, group, or service principal).
+3. Connect to your Azure file share using a storage account key and configure Windows access control lists (ACLs) for directories and files.
4. Mount an Azure file share from a domain-joined VM. The following diagram illustrates the end-to-end workflow for enabling Azure AD DS authentication over SMB for Azure Files.
The following diagram illustrates the end-to-end workflow for enabling Azure AD
To enable Azure AD DS authentication over SMB for Azure Files, you can set a property on storage accounts by using the Azure portal, Azure PowerShell, or Azure CLI. Setting this property implicitly "domain joins" the storage account with the associated Azure AD DS deployment. Azure AD DS authentication over SMB is then enabled for all new and existing file shares in the storage account.
-Keep in mind that you can enable Azure AD DS authentication over SMB only after you have successfully deployed Azure AD DS to your Azure AD tenant. For more information, see the [prerequisites](#prerequisites).
+Keep in mind that you can enable Azure AD DS authentication over SMB only after you've successfully deployed Azure AD DS to your Azure AD tenant. For more information, see the [prerequisites](#prerequisites).
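If you prefer scripting to the portal steps that follow, a minimal Azure CLI sketch is shown here. It assumes the `--enable-files-aadds` flag and placeholder resource names; use the tabs below for the full steps for each option.

```azurecli
# Enable Azure AD DS authentication over SMB on the storage account (placeholder names)
az storage account update \
    --name mystorageaccount \
    --resource-group myResourceGroup \
    --enable-files-aadds true
```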
# [Portal](#tab/azure-portal)
Get-ADUser $userObject -properties KerberosEncryptionType
[!INCLUDE [storage-files-aad-permissions-and-mounting](../../../includes/storage-files-aad-permissions-and-mounting.md)]
-You have now successfully enabled Azure AD DS authentication over SMB and assigned a custom role that provides access to an Azure file share with an Azure AD identity. To grant additional users access to your file share, follow the instructions in the [Assign access permissions](#assign-access-permissions-to-an-identity) to use an identity and [Configure NTFS permissions over SMB sections](#configure-ntfs-permissions-over-smb).
+You've now successfully enabled Azure AD DS authentication over SMB and assigned a custom role that provides access to an Azure file share with an Azure AD identity. To grant additional users access to your file share, follow the instructions in [Assign share-level permissions to an identity](#assign-share-level-permissions-to-an-identity) and [Configure Windows ACLs](#configure-windows-acls).
## Next steps
-For more information about Azure Files and how to use Azure AD over SMB, see these resources:
+For more information about identity-based authentication for Azure Files, see these resources:
- [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md) - [FAQ](storage-files-faq.md)
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 10/12/2022 Last updated : 10/13/2022 # Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files (preview)+
+This article focuses on enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring Windows access control lists (ACLs) and permissions might require line-of-sight to the domain controller.
> [!IMPORTANT] > Azure Files authentication with Azure Active Directory Kerberos is currently in public preview. > This preview version is provided without a service level agreement, and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-For more information on all supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information about Azure Active Directory (AD) Kerberos, see [Deep dive: How Azure AD Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889).
-
-[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) using the Kerberos authentication protocol through the following three methods:
--- On-premises Active Directory Domain Services (AD DS)-- Azure Active Directory Domain Services (Azure AD DS)-- Azure Active Directory Kerberos (Azure AD) for hybrid user identities only-
-This article focuses on the last method: enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring access control lists (ACLs) and permissions might require line-of-sight to the domain controller.
-
-> [!NOTE]
-> Your Azure Storage account can't authenticate with both Azure AD and a second method like AD DS or Azure AD DS. You can only use one authentication method. If you've already chosen another authentication method for your storage account, you must disable it before enabling Azure AD Kerberos.
+For more information on supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information about Azure AD Kerberos, see [Deep dive: How Azure AD Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889).
## Applies to | File share type | SMB | NFS |
This article focuses on the last method: enabling and configuring Azure AD for a
Before you enable Azure AD over SMB for Azure file shares, make sure you've completed the following prerequisites.
+> [!NOTE]
+> Your Azure storage account can't authenticate with both Azure AD and a second method like AD DS or Azure AD DS. You can only use one AD source. If you've already chosen another AD source for your storage account, you must disable it before enabling Azure AD Kerberos.
+ The Azure AD Kerberos functionality for hybrid identities is only available on the following operating systems: - Windows 11 Enterprise single or multi-session.
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
If end users are accessing the Azure file share using Active Directory (AD) or A
Validate that permissions are configured correctly: -- **Active Directory (AD)** see [Assign share-level permissions to an identity](./storage-files-identity-ad-ds-assign-permissions.md).
+- **Active Directory Domain Services (AD DS)** see [Assign share-level permissions to an identity](./storage-files-identity-ad-ds-assign-permissions.md).
Share-level permission assignments are supported for groups and users that have been synced from Active Directory Domain Services (AD DS) to Azure Active Directory (Azure AD) using Azure AD Connect. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups.-- **Azure Active Directory Domain Services (Azure AD DS)** see [Assign access permissions to an identity](./storage-files-identity-auth-active-directory-domain-service-enable.md?tabs=azure-portal#assign-access-permissions-to-an-identity).
+- **Azure Active Directory Domain Services (Azure AD DS)** see [Assign share-level permissions to an identity](./storage-files-identity-auth-active-directory-domain-service-enable.md?tabs=azure-portal#assign-share-level-permissions-to-an-identity).
<a id="error53-67-87"></a> ## Error 53, Error 67, or Error 87 when you mount or unmount an Azure file share
storage Isv File Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services.md
# Running ISV file services in Azure Azure offers various options for storing file data. Azure native services are:-- [Azure Files](https://azure.microsoft.com/services/storage/files/) ΓÇô Fully managed file shares in the cloud that are accessible via the industry-standard SMB and NFS protocols. Azure files offer two different types (standard and premium) with different performance characteristics.
+- [Azure Files](https://azure.microsoft.com/services/storage/files/) – Fully managed file shares in the cloud that are accessible via the industry-standard SMB and NFS protocols. Azure Files offers two different types (standard and premium) with different performance characteristics.
- [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) – Fully managed file shares in the cloud designed to meet the performance requirements for enterprise line-of-business applications. Azure NetApp Files offers multiple service levels with different performance limitations (standard, premium, and ultra).
- [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/) – Large-scale object storage platform for storing unstructured data. Azure Blob Storage offers two different types (standard and premium) with different performance characteristics.
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
Last updated 10/12/2022
You can process your real-time data streams in Azure Event Hubs by using Azure Stream Analytics. The no-code editor allows you to develop a Stream Analytics job without writing a single line of code. In minutes, you can develop and run a job that tackles many scenarios, including: -- Filtering and ingesting to Azure Synapse SQL.-- Capturing your Event Hubs data in Parquet format in Azure Data Lake Storage Gen2.-- Materializing data in Azure Cosmos DB.
+- [Filtering and ingesting to Azure Synapse SQL](./filter-ingest-synapse-sql.md)
+- [Capturing your Event Hubs data in Parquet format in Azure Data Lake Storage Gen2](./capture-event-hub-data-parquet.md)
+- [Materializing data in Azure Cosmos DB](./no-code-materialize-cosmos-db.md)
+- [Filter and ingest to Azure Data Lake Storage Gen2](./filter-ingest-data-lake-storage-gen2.md)
+- [Enrich data and ingest to event hub](./no-code-enrich-event-hub-data.md)
+- [Transform and store data to Azure SQL database](./no-code-transform-filter-ingest-sql.md)
+- [Filter and ingest to Azure Data Explorer](./no-code-filter-ingest-data-explorer.md)
The experience provides a canvas that allows you to connect to input sources to quickly see your streaming data. Then you can transform it before writing to your destination of choice in Azure.
The **Manage fields** transformation allows you to add, remove, or rename fields
:::image type="content" source="./media/no-code-stream-processing/manage-field-transformation.png" alt-text="Screenshot that shows selections for managing fields." lightbox="./media/no-code-stream-processing/manage-field-transformation.png" :::
-You can also add new field with the **Build-in Functions** to aggregate the data from upstream. Currently, the build-in functions we support are some functions in **String Functions**, **Data and Time Functions**, **Mathematical Functions**. To learn more about the definitions of these functions, see [Built-in Functions (Azure Stream Analytics)](/stream-analytics-query/built-in-functions-azure-stream-analytics.md).
+You can also add new fields with the **Built-in Functions** to aggregate data from upstream. Currently, the supported built-in functions include some of the **String Functions**, **Date and Time Functions**, and **Mathematical Functions**. To learn more about the definitions of these functions, see [Built-in Functions (Azure Stream Analytics)](/stream-analytics-query/built-in-functions-azure-stream-analytics.md).
:::image type="content" source="./media/no-code-stream-processing/build-in-functions-managed-fields.png" alt-text="Screenshot that shows the build-in functions." lightbox="./media/no-code-stream-processing/build-in-functions-managed-fields.png" :::
Under the **Outputs** section on the ribbon, select **CosmosDB** as the output f
When you're connecting to Azure Cosmos DB, if you select **Managed Identity** as the authentication mode, then the Contributor role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for Azure Cosmos DB, see [Use managed identities to access Azure Cosmos DB from an Azure Stream Analytics job (preview)](cosmos-db-managed-identity.md).
-Managed identities authentication method is also supported in the Azure Cosmos DB output in no-code editor which has the same benefit as it is in above ADLS Gen2 output.
+Managed identity authentication is also supported for the Azure Cosmos DB output in the no-code editor, and it has the same benefits as described for the ADLS Gen2 output above.
### Azure SQL Database
For more information about Azure SQL Database output for a Stream Analytics job,
With the real-time data coming through event hub to ASA, no-code editor can transform, enrich the data and then output the data to another event hub as well. You can choose the **Event Hub** output when you configure your Azure Stream Analytics job.
-To configure Event Hub as output, select **Event Hub** under the Outputs section on the ribbon. Then fill in the needed information to connect your event hub that you want to write data to.
+To configure Event Hubs as output, select **Event Hub** under the Outputs section on the ribbon. Then fill in the needed information to connect to the event hub that you want to write data to.
-For more information about Event Hub output for a Stream Analytics job, see [Event Hubs output from Azure Stream Analytics](./event-hubs-output.md).
+For more information about Event Hubs output for a Stream Analytics job, see [Event Hubs output from Azure Stream Analytics](./event-hubs-output.md).
### Azure Data Explorer
After you add and set up any steps in the diagram view, you can test their behav
:::image type="content" source="./media/no-code-stream-processing/get-static-preview.png" alt-text="Screenshot that shows the button for getting a static preview." lightbox="./media/no-code-stream-processing/get-static-preview.png" :::
-After you do, the Stream Analytics job evaluates all transformations and outputs to make sure they're configured correctly. Stream Analytics then displays the results in the static data preview, as shown in the following image.
+After you do, the Stream Analytics job evaluates all transformations and outputs to make sure they're configured correctly. Stream Analytics then displays the results in the static data preview, as shown in the following image.
:::image type="content" source="./media/no-code-stream-processing/refresh-static-preview.png" alt-text="Screenshot that shows the Data Preview tab, where you can refresh the static preview." lightbox="./media/no-code-stream-processing/refresh-static-preview.png" :::
Learn how to use the no-code editor to address common scenarios by using predefi
- [Materialize data to Azure Cosmos DB](no-code-materialize-cosmos-db.md) - [Transform and store data to SQL database](no-code-transform-filter-ingest-sql.md) - [Filter and store data to Azure Data Explorer](no-code-filter-ingest-data-explorer.md)-- [Enrich data and ingest to Event Hub](no-code-enrich-event-hub-data.md)
+- [Enrich data and ingest to Event Hubs](no-code-enrich-event-hub-data.md)
stream-analytics Stream Analytics Twitter Sentiment Analysis Trends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-twitter-sentiment-analysis-trends.md
Previously updated : 03/16/2021 Last updated : 10/03/2022 # Social media analysis with Azure Stream Analytics
-This article teaches you how to build a social media sentiment analysis solution by bringing real-time Twitter events into Azure Event Hubs. You write an Azure Stream Analytics query to analyze the data and store the results for later use or create a [Power BI](https://powerbi.com/) dashboard to provide insights in real-time.
+This article teaches you how to build a social media sentiment analysis solution by bringing real-time Twitter events into Azure Event Hubs and then analyzing them using Stream Analytics. You write an Azure Stream Analytics query to analyze the data and store results for later use or create a [Power BI](https://powerbi.com/) dashboard to provide insights in real-time.
Social media analytics tools help organizations understand trending topics. Trending topics are subjects and attitudes that have a high volume of posts on social media. Sentiment analysis, which is also called *opinion mining*, uses social media analytics tools to determine attitudes toward a product or idea.
Real-time Twitter trend analysis is a great example of an analytics tool because
## Scenario: Social media sentiment analysis in real time
-A company that has a news media website is interested in gaining an advantage over its competitors by featuring site content that is immediately relevant to its readers. The company uses social media analysis on topics that are relevant to readers by doing real-time sentiment analysis of Twitter data.
+A company that has a news media website is interested in gaining an advantage over its competitors by featuring site content that's immediately relevant to its readers. The company uses social media analysis on topics that are relevant to readers by doing real-time sentiment analysis of Twitter data.
To identify trending topics in real time on Twitter, the company needs real-time analytics about the tweet volume and sentiment for key topics. ## Prerequisites
-In this how-to guide, you use a client application that connects to Twitter and looks for tweets that have certain hashtags (which you can set). To run the application and analyze the tweets using Azure Streaming Analytics, you must have the following:
+In this how-to guide, you use a client application that connects to Twitter and looks for tweets that have certain hashtags (which you can set). The following list gives you prerequisites for running the application and analyzing the tweets using Azure Stream Analytics.
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
In this how-to guide, you use a client application that connects to Twitter and
* Install the [.NET Core CLI](/dotnet/core/tools/?tabs=netcore2x) version 2.1.0.
-Below is the solution architecture you are going to implement.
+Here's the solution architecture you're going to implement.
![A diagram showing different pieces of services and applications used to build the solution.](./media/stream-analytics-twitter-sentiment-analysis-trends/solution-diagram.png "Solution Diagram") ## Create an event hub for streaming input
-The sample application generates events and pushes them to an Azure event hub. Azure Event Hubs are the preferred method of event ingestion for Stream Analytics. For more information, see the [Azure Event Hubs documentation](../event-hubs/event-hubs-about.md).
+The sample application generates events and pushes them to an event hub. Azure Event Hubs is the preferred method of event ingestion for Stream Analytics. For more information, see the [Azure Event Hubs documentation](../event-hubs/event-hubs-about.md).
-### Create an event hub namespace and event hub
-In this section, you create an event hub namespace and add an event hub to that namespace. Event hub namespaces are used to logically group related event bus instances.
-1. Log in to the Azure portal and select **Create a resource**. Then. search for **Event Hubs** and select **Create**.
+### Create an Event Hubs namespace and event hub
-2. On the **Create Namespace** page, enter a namespace name. You can use any name for the namespace, but the name must be valid for a URL, and it must be unique across Azure.
-
-3. Select a pricing tier and subscription, and create or choose a resource group. Then, choose a location and select **Create**.
-
-4. When the namespace has finished deploying, navigate to your resource group and find the event hub namespace in your list of Azure resources.
-
-5. From the new namespace, select **+&nbsp;Event Hub**.
+Follow instructions from [Quickstart: Create an event hub using Azure portal](../event-hubs/event-hubs-create.md) to create an Event Hubs namespace and an event hub named **socialtwitter-eh**. You can use a different name. If you do, make a note of it, because you need the name later. You don't need to set any other options for the event hub.
-6. Name the new event hub *socialtwitter-eh*. You can use a different name. If you do, make a note of it, because you need the name later. You don't need to set any other options for the event hub.
-
-7. Select **Create**.
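If you'd rather create these resources from the command line than in the portal, a minimal Azure CLI sketch might look like the following; the namespace, resource group, and region are placeholders.

```azurecli
# Create an Event Hubs namespace and the socialtwitter-eh event hub (placeholder names)
az eventhubs namespace create --resource-group myResourceGroup --name my-twitter-namespace --location eastus --sku Standard
az eventhubs eventhub create --resource-group myResourceGroup --namespace-name my-twitter-namespace --name socialtwitter-eh
```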
### Grant access to the event hub

Before a process can send data to an event hub, the event hub needs a policy that allows access. The access policy produces a connection string that includes authorization information.
-1. In the navigation bar on the left side of your event hubs namespace, select **Event Hubs**, which is located in the **Entities** section. Then, select the event hub you just created.
+1. In the navigation bar on the left side of your Event Hubs namespace, select **Event Hubs**, which is located in the **Entities** section. Then, select the event hub you just created.
2. In the navigation bar on the left side, select **Shared access policies** located under **Settings**.
>[!NOTE]
- >There is a Shared access policies option under for the event hub namespace and for the event hub. Make sure you're working in the context of your event hub, not the overall event hub namespace.
+ >There is a **Shared access policies** option for both the namespace and the event hub. Make sure you're working in the context of your event hub, not the namespace.
-3. From the access policy page, select **+ Add**. Then enter *socialtwitter-access* for the **Policy name** and check the **Manage** checkbox.
+3. On the **Shared access policies** page, select **+ Add** on the command bar. Then enter *socialtwitter-access* for the **Policy name** and select the **Manage** checkbox.
4. Select **Create**.
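The same policy can also be created from the command line. Here's a sketch that assumes the placeholder namespace used earlier:

```azurecli
# Create a shared access policy with Manage, Send, and Listen rights on the event hub
az eventhubs eventhub authorization-rule create \
    --resource-group myResourceGroup \
    --namespace-name my-twitter-namespace \
    --eventhub-name socialtwitter-eh \
    --name socialtwitter-access \
    --rights Manage Send Listen
```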
Before a process can send data to an event hub, the event hub needs a policy tha
The connection string looks like this:
```
- Endpoint=sb://EVENTHUBS-NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=socialtwitter-access;SharedAccessKey=Gw2NFZw6r...FxKbXaC2op6a0ZsPkI=;EntityPath=socialtwitter-eh
+ Endpoint=sb://EVENTHUBS-NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=socialtwitter-access;SharedAccessKey=XXXXXXXXXXXXXXX;EntityPath=socialtwitter-eh
```

Notice that the connection string contains multiple key-value pairs, separated with semicolons: `Endpoint`, `SharedAccessKeyName`, `SharedAccessKey`, and `EntityPath`.
Before a process can send data to an event hub, the event hub needs a policy tha
The client application gets tweet events directly from Twitter. In order to do so, it needs permission to call the Twitter Streaming APIs. To configure that permission, you create an application in Twitter, which generates unique credentials (such as an OAuth token). You can then configure the client application to use these credentials when it makes API calls.

### Create a Twitter application
-If you do not already have a Twitter application that you can use for this how-to guide, you can create one. You must already have a Twitter account.
+If you don't already have a Twitter application that you can use for this how-to guide, you can create one. You must already have a Twitter account.
> [!NOTE] > The exact process in Twitter for creating an application and getting the keys, secrets, and token might change. If these instructions don't match what you see on the Twitter site, refer to the Twitter developer documentation.
Before the application runs, it requires certain information from you, like the
* Set `EventHubNameConnectionString` to the connection string.
* Set `EventHubName` to the event hub name (that is the value of the entity path).
-3. Open the command line and navigate to the directory where your TwitterClientCore app is located. Use the command `dotnet build` to build the project. Then use the command `dotnet run` to run the app. The app sends Tweets to your Event Hub.
+3. Open the command line and go to the directory where your TwitterClientCore app is located. Use the command `dotnet build` to build the project. Then use the command `dotnet run` to run the app. The app sends tweets to your event hub.
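For reference, the command sequence might look like the following; the folder name is a placeholder for wherever you placed the app.

```bash
# Build and run the client app from its project directory
cd TwitterClientCore
dotnet build
dotnet run
```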
## Create a Stream Analytics job
Now that tweet events are streaming in real time from Twitter, you can set up a
|||| |Input alias| *TwitterStream* | Enter an alias for the input. | |Subscription | \<Your subscription\> | Select the Azure subscription that you want to use. |
- |Event Hub namespace | *asa-twitter-eventhub* |
- |Event Hub name | *socialtwitter-eh* | Choose *Use existing*. Then select the Event Hub you created.|
- |Event compression type| GZip | The data compression type.|
+ |Event Hubs namespace | *asa-twitter-eventhub* |
+ |Event hub name | *socialtwitter-eh* | Choose *Use existing*. Then select the event hub you created.|
+ |Event compression type| Gzip | The data compression type.|
Leave the remaining default values and select **Save**.
To compare the number of mentions among topics, you can use a [Tumbling window](
FROM TwitterStream ```
-3. Event data from the messages should appear in the **Input preview** window below your query. Ensure the **View** is set to **JSON**. If you do not see any data, ensure that your data generator is sending events to your event hub, and that you've selected **GZip** as the compression type for the input.
+3. Event data from the messages should appear in the **Input preview** window below your query. Ensure the **View** is set to **JSON**. If you don't see any data, ensure that your data generator is sending events to your event hub, and that you've selected **Gzip** as the compression type for the input.
4. Select **Test query** and notice the results in the **Test results** window below your query.
To compare the number of mentions among topics, you can use a [Tumbling window](
## Create an output sink
-You have now defined an event stream, an event hub input to ingest events, and a query to perform a transformation over the stream. The last step is to define an output sink for the job.
+You've now defined an event stream, an event hub input to ingest events, and a query to perform a transformation over the stream. The last step is to define an output sink for the job.
In this how-to guide, you write the aggregated tweet events from the job query to Azure Blob storage. You can also push your results to Azure SQL Database, Azure Table storage, Event Hubs, or Power BI, depending on your application needs.
In this how-to guide, you write the aggregated tweet events from the job query t
1. Under the **Job Topology** section on the left navigation menu, select **Outputs**.
-2. In the **Outputs** page, click **+&nbsp;Add** and **Blob storage/Data Lake Storage Gen2**:
+2. In the **Outputs** page, select **+&nbsp;Add** and **Blob storage/Data Lake Storage Gen2**:
* **Output alias**: Use the name `TwitterStream-Output`.
* **Import options**: Select **Select storage from your subscriptions**.
In this how-to guide, you write the aggregated tweet events from the job query t
## Start the job
-A job input, query, and output are specified. You are ready to start the Stream Analytics job.
+A job input, query, and output are specified. You're ready to start the Stream Analytics job.
1. Make sure the TwitterClientCore application is running.
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
val logger = org.apache.log4j.LogManager.getLogger("com.contoso.LoggerExample")
logger.info("info message")
logger.warn("warn message")
logger.error("error message")
+//log exception
+try {
+  1/0
+} catch {
+  case e: Exception => logger.warn("Exception", e)
+}
+// run job for task level metrics
+val data = sc.parallelize(Seq(1,2,3,4)).toDF().count()
``` Example for PySpark:
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md
WITH ( {'column_name' 'column_type' [ 'column_ordinal' | 'json_path'] })
## Arguments
-You have two choices for input files that contain the target data for querying. Valid values are:
+You have three choices for input files that contain the target data for querying. Valid values are:
- 'CSV' - Includes any delimited text file with row/column separators. Any character can be used as a field separator, such as TSV: FIELDTERMINATOR = tab.
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
This is the list of known limitations for Azure Synapse Link for SQL.
```sql EXEC sys.sp_change_feed_disable_db
+### DateTime2(7) and Time(7) could cause snapshot hang
+* Applies To - Azure SQL Database
+* Issue - One of the preview limitations with the data types DateTime2(7) and Time(7) is the loss of precision (only 6 digits are supported). When certain database settings are turned on (`NUMERIC_ROUNDABORT`, `ANSI_WARNINGS`, and `ARITHABORT`), the snapshot process can hang, requiring a database failover to recover.
+* Resolution - To resolve this situation, take the following steps:
+1. Turn off all three database settings.
+ ```sql
+ ALTER DATABASE <logical_database_name> SET NUMERIC_ROUNDABORT OFF
+ ALTER DATABASE <logical_database_name> SET ANSI_WARNINGS OFF
+ ALTER DATABASE <logical_database_name> SET ARITHABORT OFF
+ ```
+1. Run the following query to verify that the settings are in fact turned off.
+ ```sql
+ SELECT name, is_numeric_roundabort_on, is_ansi_warnings_on, is_arithabort_on
+ FROM sys.databases
+ WHERE name = 'logical_database_name'
+ ```
+1. Open an Azure support ticket requesting a database failover. Alternatively, you could change the Service Level Objective (SLO) of your database instead of opening a ticket.
## Next steps
virtual-desktop Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor.md
Before you start using Azure Monitor for Azure Virtual Desktop, you'll need to s
Anyone monitoring Azure Monitor for Azure Virtual Desktop for your environment will also need the following read-access permissions: -- Read-access to the Azure resource groups that hold your Azure Virtual Desktop resources-- Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts-- Read access to the Log Analytics workspace or workspaces
+- Read-access to the Azure resource groups that hold your Azure Virtual Desktop resources.
+- Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts.
+- Read access to the Log Analytics workspace. If multiple Log Analytics workspaces are used, grant read access to each workspace so its data can be viewed. (A sample role-assignment command follows this list.)
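As an illustration, read access can be granted with a Reader role assignment at resource group scope; the user, subscription, and resource group in this sketch are placeholders.

```azurecli
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```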
->[!NOTE]
+> [!NOTE]
> Read access only lets admins view data. They'll need different permissions to manage resources in the Azure Virtual Desktop portal.

## Open Azure Monitor for Azure Virtual Desktop
virtual-machine-scale-sets Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/whats-new.md
+
+ Title: "What's new for Virtual Machine Scale Sets"
+description: Learn about what's new for Virtual Machine Scale Sets in Azure.
+++ Last updated : 10/12/2022++++
+# What's new for scale sets
+
+This article describes what's new for Virtual Machine Scale Sets in Azure.
++
+## Spot Priority Mix for Flexible scale sets
+++
+## Next steps
+
+For updates and announcements about Azure, see the [Microsoft Azure Blog](https://azure.microsoft.com/blog/).
+
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
Previously updated : 09/01/2021 Last updated : 09/28/2021
This article guides you through how to create an Azure [dedicated host](dedicate
- Not all Azure VM SKUs, regions and availability zones support ultra disks, for more information about this topic, see [Azure ultra disks](disks-enable-ultra-ssd.md). -- Currently ADH does not support ultra disks on following VM series LSv2, M, Mv2, Msv2, Mdsv2, NVv3, NVv4 (though these VMs support ultra disks on multi tenant VMs).
+- Currently, dedicated hosts don't support ultra disks on the following VM sizes: LSv2, M, Mv2, Msv2, Mdsv2, NVv3, NVv4 (ultra disks are supported on these sizes for multi-tenant VMs).
- The fault domain count of the virtual machine scale set can't exceed the fault domain count of the host group.
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Azure Premium SSD v2 is designed for IO-intense enterprise workloads that requir
## Prerequisites
+- [Sign up](https://aka.ms/PremiumSSDv2AccessRequest) for access to Premium SSD v2.
- Install either the latest [Azure CLI](/cli/azure/install-azure-cli) or the latest [Azure PowerShell module](/powershell/azure/install-az-ps).

## Determine region availability programmatically
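For example, a minimal Azure CLI sketch that lists the regions and zones offering Premium SSD v2; it assumes the disk SKU is reported as `PremiumV2_LRS`.

```azurecli
az vm list-skus --resource-type disks \
    --query "[?name=='PremiumV2_LRS'].{Region:locationInfo[0].location, Zones:locationInfo[0].zones}" \
    --output table
```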
virtual-machines Disks Enable Bursting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-bursting.md
Title: Enable on-demand disk bursting
description: Enable on-demand disk bursting on your managed disk. Previously updated : 11/09/2021 Last updated : 10/12/2022
Before you enable on-demand bursting, understand the following:
## Get started
-On-demand bursting can be enabled with either the Azure PowerShell module, the Azure CLI, or Azure Resource Manager templates. The following examples cover how to create a new disk with on-demand bursting enabled and enabling on-demand bursting on existing disks.
+On-demand bursting can be enabled with either the Azure portal, the Azure PowerShell module, the Azure CLI, or Azure Resource Manager templates. The following examples cover how to create a new disk with on-demand bursting enabled and enabling on-demand bursting on existing disks.
+
+# [Portal](#tab/azure-portal)
+
+A managed disk must be larger than 512 GiB to enable on-demand bursting.
+
+To enable on-demand bursting for an existing disk:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to your disk.
+1. Select **Configuration** and select **Enable on-demand bursting**.
+1. Select **Save**.
# [PowerShell](#tab/azure-powershell)
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
Last updated 05/18/2022--++
VM application versions are the deployable resource. Versions are defined with t
The download locations of the application package and the configuration files are: - Linux: `/var/lib/waagent/Microsoft.CPlat.Core.VMApplicationManagerLinux/<appname>/<app version> `-- Windows: `C:\Packages\Plugins\Microsoft.CPlat.Core.VMApplicationManagerWindows\1.0.4\Downloads\<appname>\<app version> `
+- Windows: `C:\Packages\Plugins\Microsoft.CPlat.Core.VMApplicationManagerWindows\1.0.9\Downloads\<appname>\<app version> `
The install/update/remove commands should be written assuming the application package and the configuration file are in the current directory.
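For example, a minimal Linux install command might look like the following sketch. The package name `myapp.tar.gz` and the target directory are hypothetical placeholders, not names used by the VM Applications feature itself.

```bash
# Hypothetical install command for a VM application version (Linux).
# The package file name and destination directory are placeholders; the package is
# assumed to already be in the current directory, as the doc describes.
mkdir -p /opt/myapp && tar -xzf myapp.tar.gz -C /opt/myapp
```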
virtual-machines Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/whats-new.md
+
+ Title: "What's new for virtual machines"
+description: Learn about what's new for virtual machines in Azure.
+++ Last updated : 10/12/2022++++
+# What's new for virtual machines
+
+This article describes what's new for virtual machines in Azure.
++
+## Spot Priority Mix for Flexible scale sets
++
+## Next steps
+
+For updates and announcements about Azure, see the [Microsoft Azure Blog](https://azure.microsoft.com/blog/).
virtual-machines Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide-storage.md
Azure Premium SSD v2 storage is a new version of premium storage that got introd
This type of storage targets DBMS workloads, storage traffic that requires submillisecond latency, and SLAs on IOPS and throughput. Premium SSD v2 disks are delivered with a default of 3,000 IOPS and 125 MBps throughput, plus the possibility to add more IOPS and throughput to individual disks. The pricing of the storage is structured so that adding more throughput or IOPS doesn't have a major influence on the price. Nevertheless, we leave it up to you to decide how the storage configuration for Premium SSD v2 should look. For a base start, read [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md).
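As a rough sketch of provisioning extra performance (all names and values below are placeholders, and the `PremiumV2_LRS` SKU name is an assumption), a Premium SSD v2 disk with higher IOPS and throughput targets might be created with the Azure CLI:

```azurecli
# Sketch: create a Premium SSD v2 disk with performance above the 3,000 IOPS / 125 MBps defaults.
# Resource group, disk name, zone, and performance targets are placeholders.
az disk create \
  --resource-group <resource-group> \
  --name <disk-name> \
  --size-gb 512 \
  --sku PremiumV2_LRS \
  --zone 1 \
  --disk-iops-read-write 5000 \
  --disk-mbps-read-write 300
```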
-For the actual regions, this new block storage type is available and the actual restrictions read the document [Premium SSD v2](../../disks-types.md#premium-ssd-v2-preview).
+For the regions where this new block storage type is available, and for the current restrictions, read the document [Premium SSD v2](../../disks-types.md#premium-ssd-v2).
The capability matrix for SAP workload looks like:
virtual-network Ip Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ip-services-overview.md
IP services are a collection of IP address related services that enable communic
IP services consist of: * Public IP addresses- * Public IP address prefixes-
+* Custom IP address prefixes (BYOIP)
* Private IP addresses- * Routing preference- * Routing preference unmetered ## Public IP addresses
Get started creating IP services resources:
- [Create a public IP address using the Azure portal](./create-public-ip-portal.md). - [Create a public IP address prefix using the Azure portal](./create-public-ip-prefix-portal.md). - [Configure a private IP address for a VM using the Azure portal](./virtual-networks-static-private-ip-arm-pportal.md).-- [Configure routing preference for a public IP address using the Azure portal](./routing-preference-portal.md).
+- [Configure routing preference for a public IP address using the Azure portal](./routing-preference-portal.md).
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md
IPv6 for Azure VNET is a foundational feature set which enables customers to hos
## Limitations The current IPv6 for Azure virtual network release has the following limitations:-- VPN gateways currently support IPv4 traffic only, but they still CAN be deployed in a Dual-stacked VNET.
+- VPN gateways currently support IPv4 traffic only, but they can still be deployed in a dual-stack VNet, by using Azure PowerShell and Azure CLI commands only (a gateway deployment sketch follows this list).
- Dual-stack configurations that use Floating IP can only be used with Public load balancers (not Internal load balancers) - Application Gateway v2 does not currently support IPv6. It can operate in a dual stack VNet using only IPv4, but the gateway subnet must be IPv4-only. Application Gateway v1 does not support dual stack VNets. - The Azure platform (AKS, etc.) does not support IPv6 communication for Containers.
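As a sketch of the gateway deployment mentioned in the first limitation above, the following Azure CLI command assumes a dual-stack virtual network, a gateway subnet, and a public IP address already exist; every name is a placeholder and the SKU is only an example.

```azurecli
# Sketch: deploy a VPN gateway into an existing dual-stack VNet with the CLI.
# All resource names are placeholders; the gateway itself still carries IPv4 traffic only.
az network vnet-gateway create \
  --resource-group <resource-group> \
  --name <gateway-name> \
  --vnet <dual-stack-vnet-name> \
  --public-ip-address <public-ip-name> \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw2
```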
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
description: See answers to frequently asked questions about Azure Virtual WAN n
Previously updated : 05/20/2022 Last updated : 10/13/2022 # Customer intent: As someone with a networking background, I want to read more details about Virtual WAN in a FAQ format.
There are two options to add DNS servers for the P2S clients. The first method i
### For User VPN (point-to-site)- how many clients are supported?
-The table below describes the number of concurrent connections and aggregate throughput of the Point-to-site VPN Gateway supported at different scale units.
+The table below describes the number of concurrent connections and aggregate throughput of the Point-to-site VPN gateway supported at different scale units.
Scale Unit | Gateway Instances | Supported Concurrent Connections | Aggregate Throughput| | - | | | |
All virtual WAN APIs are OpenAPI. You can go over the documentation [Virtual WAN
Virtual WAN partners automate IPsec connectivity to Azure VPN end points. If the Virtual WAN partner is an SD-WAN provider, then it's implied that the SD-WAN controller manages automation and IPsec connectivity to Azure VPN end points. If the SD-WAN device requires its own end point instead of Azure VPN for any proprietary SD-WAN functionality, you can deploy the SD-WAN end point in an Azure VNet and coexist with Azure Virtual WAN.
+Virtual WAN supports [BGP peering](create-bgp-peering-hub-portal.md) and also has the ability to [deploy NVAs into a virtual WAN hub](how-to-nva-hub.md).
+ ### How many VPN devices can connect to a single hub? Up to 1,000 connections are supported per virtual hub. Each connection consists of four links and each link connection supports two tunnels that are in an active-active configuration. The tunnels terminate in an Azure virtual hub VPN gateway. Links represent the physical ISP link at the branch/VPN device.
An Azure Virtual WAN connection is composed of 2 tunnels. A Virtual WAN VPN gate
The Gateway Reset button should be used if your on-premises devices are all working as expected, but the site-to-site VPN connection in Azure is in a Disconnected state. Virtual WAN VPN gateways are always deployed in an Active-Active state for high availability. This means there's always more than one instance deployed in a VPN gateway at any point of time. When the Gateway Reset button is used, it reboots the instances in the VPN gateway in a sequential manner so your connections aren't disrupted. There will be a brief gap as connections move from one instance to the other, but this gap should be less than a minute. Additionally, note that resetting the gateways won't change your Public IPs.
+This scenario applies only to S2S connections.
+ ### Can the on-premises VPN device connect to multiple hubs? Yes. Traffic flow, when commencing, is from the on-premises device to the closest Microsoft network edge, and then to the virtual hub.
Yes, you can connect your favorite network virtual appliance (NVA) VNet to the A
### Can I create a Network Virtual Appliance inside the virtual hub?
-A Network Virtual Appliance (NVA) can't be deployed inside a virtual hub. However, you can create it in a spoke VNet that is connected to the virtual hub and enable appropriate routing to direct traffic per your needs.
+A network virtual appliance (NVA) can be deployed inside a virtual hub. For steps, see [About NVAs in a Virtual WAN hub](about-nva-hub.md).
### Can a spoke VNet have a virtual network gateway?
When VPN sites connect into a hub, they do so with connections. Virtual WAN supp
Yes, NAT traversal (NAT-T) is supported. The Virtual WAN VPN gateway will NOT perform any NAT-like functionality on the inner packets to/from the IPsec tunnels. In this configuration, ensure the on-premises device initiates the IPsec tunnel.
-### I don't see the 20-Gbps setting for the virtual hub in portal. How do I configure that?
+### How can I configure a scale unit to a specific setting like 20-Gbps?
-Navigate to the VPN gateway inside a hub on the portal, then click on the scale unit to change it to the appropriate setting.
+Go to the VPN gateway inside a hub in the portal, then select the scale unit to change it to the appropriate setting.
### Does Virtual WAN allow the on-premises device to utilize multiple ISPs in parallel, or is it always a single VPN tunnel?
If a virtual hub learns the same route from multiple remote hubs, the order in w
1. Longest prefix match. 1. Local routes over interhub. 1. Static routes over BGP: This applies to the decision made by the virtual hub router. However, if the decision maker is the VPN gateway where a site advertises routes via BGP or provides static address prefixes, static routes may be preferred over BGP routes.
-1. ExpressRoute (ER) over VPN: ER is preferred over VPN when the context is a local hub. Transit connectivity between ExpressRoute circuits is only available through Global Reach. Therefore, in scenarios where ExpressRoute circuit is connected to one hub and there is another ExpressRoute circuit connected to a different hub with VPN connection, VPN may be preferred for inter-hub scenarios.
+1. ExpressRoute (ER) over VPN: ER is preferred over VPN when the context is a local hub. Transit connectivity between ExpressRoute circuits is only available through Global Reach. Therefore, in scenarios where an ExpressRoute circuit is connected to one hub and another ExpressRoute circuit is connected to a different hub with a VPN connection, VPN may be preferred for inter-hub scenarios. However, you can [configure virtual hub routing preference](howto-virtual-hub-routing-preference.md) to change the default preference.
1. AS path length (Virtual hubs prepend routes with the AS path 65520-65520 when advertising routes to each other). ### Does the Virtual WAN hub allow connectivity between ExpressRoute circuits?
When multiple ExpressRoute circuits are connected to a virtual hub, routing weig
### Does Virtual WAN prefer ExpressRoute over VPN for traffic egressing Azure?
-Yes. Virtual WAN prefers ExpressRoute over VPN for traffic egressing Azure.
+Yes. Virtual WAN prefers ExpressRoute over VPN for traffic egressing Azure. However, you can configure virtual hub routing preference to change the default preference. For steps, see [Configure virtual hub routing preference](howto-virtual-hub-routing-preference.md).
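If you script that preference change, one possible CLI sketch is shown below; the `--hub-routing-preference` parameter name and its values are assumptions, so verify them against the linked article before using it.

```azurecli
# Sketch: change the virtual hub routing preference (parameter name is an assumption).
# Possible values are expected to include ExpressRoute, VpnGateway, and ASPath.
az network vhub update \
  --resource-group <resource-group> \
  --name <hub-name> \
  --hub-routing-preference VpnGateway
```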
### When a Virtual WAN hub has an ExpressRoute circuit and a VPN site connected to it, what would cause a VPN connection route to be preferred over ExpressRoute?
For the point-to-site User VPN scenario with internet breakout via Azure Firewal
### What is the recommended API version to be used by scripts automating various Virtual WAN functionalities?
-A minimum version of 05-01-2020 (May 1 2020) is required.
+A minimum version of 05-01-2022 (May 1, 2022) is required.
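For illustration, a script can pin that API version explicitly when calling the REST API. The subscription placeholder below is hypothetical, and the request shape is an assumption based on the Microsoft.Network resource provider; only the version string comes from the requirement above.

```azurecli
# Sketch: list Virtual WANs in a subscription while pinning the 2022-05-01 API version.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Network/virtualWans?api-version=2022-05-01"
```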
### Are there any Virtual WAN limits?
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
description: Learn how to configure VPN clients for P2S configurations that use
Previously updated : 05/18/2022 Last updated : 10/12/2022
You can generate VPN client profile configuration files using PowerShell, or by
1. For next steps, depending on your P2S configuration, go to one of the following sections:
- * [IKEv2 and SSTP - native client steps](#ike)
+ * [IKEv2 and SSTP - native VPN client steps](#ike)
* [OpenVPN - OpenVPN client steps](#openvpn)
- * [OpenVPN - Azure VPN client steps](#azurevpn)
+ * [OpenVPN - Azure VPN Client steps](#azurevpn)
## <a name="ike"></a>IKEv2 and SSTP - native VPN client steps
-This section helps you configure the native VPN client on your Windows computer to connect to your VNet. This configuration doesn't require additional client software.
+This section helps you configure the native VPN client that's part of your Windows operating system to connect to your VNet. This configuration doesn't require additional client software.
### <a name="view-ike"></a>View config files
You can use the same VPN client configuration package on each Windows client com
## <a name="azurevpn"></a>OpenVPN - Azure VPN Client steps
-This section applies to certificate authentication configurations that are configured to use the OpenVPN tunnel type. The following steps help you download, install, and configure the Azure VPN client to connect to your VNet. To connect to your VNet, each client must have the following items:
+This section applies to certificate authentication configurations that use the OpenVPN tunnel type. The following steps help you download, install, and configure the Azure VPN Client to connect to your VNet. To connect to your VNet, each client must have the following items:
-* The Azure VPN client software is installed.
-* Azure VPN client profile is configured using the downloaded **azurevpnconfig.xml** configuration file.
+* The Azure VPN Client software is installed.
+* The Azure VPN Client profile is configured using the downloaded **azurevpnconfig.xml** configuration file.
* The client certificate is installed locally. ### <a name="view-azurevpn"></a>View config files
When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azu
### Configure the VPN client profile
-1. Open the Azure VPN client.
+1. Open the Azure VPN Client.
1. Select **+** on the bottom left of the page, then select **Import**.