Updates from: 06/16/2022 01:07:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
Content-type: application/json
{ "clientId": "231c70e8-8424-48ac-9b5d-5623b9e4ccf3", "step": "PreTokenApplicationClaims",
- "ui_locales":"en-US"
+ "ui_locales":"en-US",
"email": "johnsmith@fabrikam.onmicrosoft.com", "identities": [ {
active-directory-b2c Application Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md
Previously updated : 03/30/2022 Last updated : 06/14/2022
To see this scenario in action, try one of the web application sign in code samp
In addition to facilitating simple sign in, a web server application might also need to access a back-end web service. In this case, the web application can perform a slightly different [OpenID Connect flow](openid-connect.md) and acquire tokens by using authorization codes and refresh tokens. This scenario is depicted in the following [Web APIs section](#web-apis).

## Single-page applications

Many modern web applications are built as client-side single-page applications ("SPAs"). Developers write them by using JavaScript or a SPA framework such as Angular, Vue, or React. These applications run in a web browser and have different authentication characteristics than traditional server-side web applications. Azure AD B2C provides **two** options to enable single-page applications to sign in users and get tokens to access back-end services or web APIs:
Applications that are installed on devices, such as mobile and desktop applicati
In this flow, the application executes [policies](user-flow-overview.md) and receives an `authorization_code` from Azure AD after the user completes the policy. The `authorization_code` represents the application's permission to call back-end services on behalf of the user who is currently signed in. The application can then exchange the `authorization_code` in the background for an `access_token` and a `refresh_token`. The application can use the `access_token` to authenticate to a back-end web API in HTTP requests. It can also use the `refresh_token` to get a new `access_token` when an older one expires.
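For illustration, a minimal sketch of that background exchange, assuming a hypothetical tenant named `contoso`, a user flow `B2C_1_susi`, and placeholder client, code, and scope values:

```powershell
# Redeem an authorization code for an access token and a refresh token.
# All values below are placeholders for illustration only.
$body = @{
    grant_type   = 'authorization_code'
    client_id    = '<application (client) ID>'
    code         = '<authorization_code returned on the redirect>'
    redirect_uri = 'https://localhost/callback'
    scope        = 'openid offline_access <your API scope>'
}
$tokens = Invoke-RestMethod -Method Post -Body $body `
    -Uri 'https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_susi/oauth2/v2.0/token'
$tokens.access_token    # send to your web API as a bearer token
$tokens.refresh_token   # redeem later for a new access token
```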
-## Current limitations
-
-### Unsupported application types
-
-#### Daemons/server-side applications
+## Daemons/server-side applications
Applications that contain long-running processes or that operate without the presence of a user also need a way to access secured resources such as web APIs. These applications can authenticate and get tokens by using their identities (rather than a user's delegated identity) and by using the OAuth 2.0 client credentials flow. The client credentials flow isn't the same as the on-behalf-of flow, and the on-behalf-of flow shouldn't be used for server-to-server authentication.
-Although the OAuth 2.0 client credentials grant flow isn't currently directly supported by the Azure AD B2C authentication service, you can set up client credential flow using Azure AD and the Microsoft identity platform /token (https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token) endpoint for an application in your Azure AD B2C tenant. An Azure AD B2C tenant shares some functionality with Azure AD enterprise tenants.
-
-To set up client credential flow, see [Azure Active Directory v2.0 and the OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). A successful authentication results in the receipt of a token formatted so that it can be used by Azure AD as described in [Azure AD token reference](../active-directory/develop/id-tokens.md).
+The [OAuth 2.0 client credentials flow](./client-credentials-grant-flow.md) is currently in public preview. You can also set up the client credentials flow using Azure AD and the Microsoft identity platform /token endpoint (`https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token`) for a [Microsoft Graph application](microsoft-graph-get-started.md) or your own application. For more information, check out the [Azure AD token reference](../active-directory/develop/id-tokens.md) article.
-For instructions on registering a management application, see [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-get-started.md).
+## Unsupported application types
-#### Web API chains (on-behalf-of flow)
+### Web API chains (on-behalf-of flow)
Many architectures include a web API that needs to call another downstream web API, where both are secured by Azure AD B2C. This scenario is common in native clients that have a web API back-end that calls a Microsoft online service such as the Microsoft Graph API.
active-directory-b2c Client Credentials Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/client-credentials-grant-flow.md
+
+ Title: Set up OAuth 2.0 client credentials flow
+
+description: Learn how to set up the OAuth 2.0 client credentials flow in Azure Active Directory B2C.
+ Last updated : 06/15/2022
+zone_pivot_groups: b2c-policy-type
++
+# Set up OAuth 2.0 client credentials flow in Azure Active Directory B2C
++
+The OAuth 2.0 client credentials grant flow permits an app (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling a web resource, such as a REST API. This type of grant is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. These types of applications are often referred to as daemons or service accounts.
+
+In the client credentials flow, permissions are granted directly to the application itself by an administrator. When the app presents a token to a resource, the resource enforces that the app itself has authorization to perform the action, because no user is involved in the authentication. This article covers the steps needed to authorize an application to call an API, and how to get the tokens needed to call that API.
+
+## App registration overview
+
+To enable your app to sign in with client credentials and call a web API, you register two applications in the Azure AD B2C directory.
+
+- The **application** registration enables your app to sign in with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, which uniquely identifies your app. You also create a *client secret*, which your app uses to securely acquire the tokens.
+
+- The **web API** registration enables your app to call a secure web API. The registration includes the web API *scopes*. The scopes provide a way to manage permissions to protected resources, such as your web API. You then grant your application permissions to the web API scopes. When an access token is requested, your app specifies `.default` as the scope parameter of the request, and Azure AD B2C returns the web API scopes granted to your app.
+
+The app architecture and registrations are illustrated in the following diagram:
+
+![Diagram of a web app with web A P I call registrations and tokens.](./media/client-credentials-grant-flow/application-architecture.png)
+
+## Step 1. Register the web API app
+
+In this step, you register the web API (**App 2**) and define its scopes. Later, you'll grant your application (**App 1**) permission to those scopes. If you already have such an app registration, skip to the next step, [Step 1.1 Define web API roles (scopes)](#step-11-define-web-api-roles-scopes).
++
+### Step 1.1 Define web API roles (scopes)
+
+In this step, you configure the web API's **Application ID URI**, and then define **App roles**. The app *roles* are used as the OAuth 2.0 *scopes* and are defined on the application registration that represents your API. Your application uses the Application ID URI with the `.default` scope. To define app roles, follow these steps:
+
+1. Select the web API that you created, for example *my-api1*.
+1. Under **Manage**, select **Expose an API**.
+1. Next to **Application ID URI**, select the **Set** link. Replace the default value (GUID) with a unique name (for example, **api**), and then select **Save**.
+1. Copy the **Application ID URI**. The following screenshot shows how to copy the Application ID URI.
+
+ ![Screenshot shows how to copy the application I D.](./media/client-credentials-grant-flow/copy-application-id-uri.png)
+
+1. Under **Manage**, select **Manifest** to open the application manifest editor.
+In the editor, locate the `appRoles` setting, and define app roles that target `applications`. Each app role definition must have a globally unique identifier (GUID) for its `id` value. You can generate a new GUID by running the `New-Guid` command in PowerShell (see the sketch after these steps), or by using an [online GUID generator](https://www.bing.com/search?q=online+guid+generator). The `value` property of each app role definition appears in the token's scope (`scp`) claim, and it can't contain spaces. The following example demonstrates two app roles, read and write:
+
+ ```json
+ "appRoles": [
+ {
+ "allowedMemberTypes": ["Application"],
+ "displayName": "Read",
+ "id": "d6a15e20-f83c-4264-8e61-5082688e14c8",
+ "isEnabled": true,
+ "description": "Readers have the ability to read tasks.",
+ "value": "app.read"
+ },
+ {
+ "allowedMemberTypes": ["Application"],
+ "displayName": "Write",
+ "id": "204dc4ab-51e1-439f-8c7f-31a1ebf3c7b9",
+ "isEnabled": true,
+ "description": "Writers have the ability to create tasks.",
+ "value": "app.write"
+ }],
+ ```
+
+1. At the top of the page, select **Save** to save the manifest changes.
+
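+As referenced in the manifest step above, you can mint the `id` GUIDs for new app roles from PowerShell (a minimal sketch; any GUID generator works):
+
+```powershell
+# Generate one GUID per app role "id" property
+(New-Guid).Guid
+(New-Guid).Guid
+```
+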
+## Step 2. Register an application
+
+To enable your app to sign in with Azure AD B2C using the client credentials flow, register your application (**App 1**). To create the app registration, follow these steps:
+
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Enter a **Name** for the application. For example, *ClientCredentials_app*.
+1. Leave the other values as they are, and then select **Register**.
+1. Record the **Application (client) ID** for use in a later step.
+
+ ![Screenshot shows how to get the application I D.](./media/client-credentials-grant-flow/get-application-id.png)
+
+### Step 2.1 Create a client secret
+
+Create a client secret for the registered application. Your app uses the client secret to prove its identity when it requests tokens.
+
+1. Under **Manage**, select **Certificates & secrets**.
+1. Select **New client secret**.
+1. In the **Description** box, enter a description for the client secret (for example, *clientsecret1*).
+1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**.
+1. Record the secret's **Value**. You'll use this value for configuration in a later step.
+
+ ![Screenshot shows how to copy the application secret.](./media/client-credentials-grant-flow/copy-application-secret.png)
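+
+If you prefer to script this step, the Microsoft Graph PowerShell SDK can create the secret. A minimal sketch, assuming the `Microsoft.Graph.Applications` module is installed and you sign in with sufficient permissions; note that the command takes the app's directory **object ID**, not the client ID:
+
+```powershell
+# Connect with permission to manage application registrations
+Connect-MgGraph -Scopes 'Application.ReadWrite.All'
+
+# Add a password credential (client secret) to the registration
+$secret = Add-MgApplicationPassword -ApplicationId '<application object ID>' `
+    -PasswordCredential @{ displayName = 'clientsecret1' }
+
+# Record this value now; it can't be retrieved later
+$secret.SecretText
+```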
+
+### Step 2.2 Grant the app permissions for the web API
+
+To grant your app (**App 1**) permissions, follow these steps:
+
+1. Select **App registrations**, and then select the app that you created (**App 1**).
+1. Under **Manage**, select **API permissions**.
+1. Under **Configured permissions**, select **Add a permission**.
+1. Select the **My APIs** tab.
+1. Select the API (**App 2**) to which the web application should be granted access. For example, enter **my-api1**.
+1. Select **Application permissions**.
+1. Under **Permission**, expand **app**, and then select the scopes that you defined earlier (for example, **app.read** and **app.write**).
+
+ ![Screenshot shows how to grant the application A P I permissions.](./media/client-credentials-grant-flow/grant-application-permissions.png)
+
+1. Select **Add permissions**.
+1. Select **Grant admin consent for \<*your tenant name*>**.
+1. Select **Yes**.
+1. Select **Refresh**, and then verify that **Granted for ...** appears under **Status** for both scopes.
+
+## Step 3. Obtain an access token
+
+There are no specific actions required to enable the client credentials flow for user flows or custom policies. Both Azure AD B2C user flows and custom policies support the client credentials flow. If you haven't done so already, create a [user flow or a custom policy](add-sign-up-and-sign-in-policy.md). Then, use your favorite API development application to generate a token request. Construct a call like the following example, with the information in the table below as the body of the POST request:
+
+`https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy>/oauth2/v2.0/token`
+
+- Replace `<tenant-name>` with the [name](tenant-management.md#get-your-tenant-name) of your Azure AD B2C tenant, for example `contoso`, which makes the host `contoso.b2clogin.com`.
+- Replace `<policy>` with the full name of your user flow or custom policy. Note that all types of user flows and custom policies support the client credentials flow. You can use any user flow or custom policy you have, or create a new one, such as sign up or sign in.
+
+| Key | Value |
+| --- | --- |
+| grant_type | `client_credentials` |
+| client_id | The **Client ID** from the [Step 2 Register an application](#step-2-register-an-application). |
+| client_secret | The **Client secret** value from [Step 2.1 Create a client secret](#step-21-create-a-client-secret). |
+| scope | The **Application ID URI** from [Step 1.1 Define web API roles (scopes)](#step-11-define-web-api-roles-scopes) and `.default`. For example `https://contoso.onmicrosoft.com/api/.default`, or `https://contoso.onmicrosoft.com/12345678-0000-0000-0000-000000000000/.default`.|
+
+The actual POST request looks like the following example:
+
+```https
+POST /<tenant-name>.onmicrosoft.com/B2C_1A_SUSI/oauth2/v2.0/token HTTP/1.1
+Host: <tenant-name>.b2clogin.com
+Content-Type: application/x-www-form-urlencoded
+
+grant_type=client_credentials&client_id=33333333-0000-0000-0000-000000000000&client_secret=FyX7Q~DuPJ...&scope=https%3A%2F%2Fcontoso.onmicrosoft.com%2Fapi%2F.default
+```
+
+The response returns an access token, similar to the following example:
+
+```json
+{
+ "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IlBFcG5OZDlnUkNWWUc2dUs...",
+ "token_type": "Bearer",
+ "not_before": 1645172292,
+ "expires_in": 3600,
+ "expires_on": 1645175892,
+ "resource": "33333333-0000-0000-0000-000000000000"
+}
+```
+
+Learn about the returned [access token](tokens-overview.md) claims. The following table lists the claims that are related to the client credentials flow.
+
+| Claim | Description | Value |
+| -- | - | - |
+| `aud` | Identifies the intended recipient of the token. | The **Client ID** of the API. |
+| `sub` | The service principal associated with the application that initiated the request. | It's the service principal of the `client_id` of the authorization request. |
+| `azp` | Authorized party - the party to which the access token was issued. | The **Client ID** of the application that initiated the request. It's the same value you specified in the `client_id` of the authorization request. |
+| `scp` | The set of scopes exposed by your application API (space-delimited). | In the client credentials flow, the authorization request asks for the `.default` scope, while the token contains the list of scopes exposed by the API (and consented to by the app administrator). For example, `app.read app.write`. |
+
+### Step 3.1 Obtain an access token with script
+
+Use the following PowerShell script to test your configuration:
+
+```powershell
+$appId = "<client ID>"
+$secret = "<client secret>"
+$endpoint = "https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy>/oauth2/v2.0/token"
+$scope = "<Your API id uri>/.default"
+$body = "granttype=client_credentials&scope=" + $scope + "&client_id=" + $appId + "&client_secret=" + $secret
+
+$token = Invoke-RestMethod -Method Post -Uri $endpoint -Body $body
+```
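+
+If the call succeeds, `$token.access_token` holds the raw JWT. As a quick local check of the claims described above (your API should still validate the token's signature properly), this minimal sketch decodes the payload segment:
+
+```powershell
+# Decode the JWT payload (the second, Base64Url-encoded segment) to inspect claims
+$payload = $token.access_token.Split('.')[1].Replace('-', '+').Replace('_', '/')
+# Pad the Base64 string to a multiple of 4 characters
+switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
+[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json
+```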
+
+Use the following cURL script to test your configuration:
+
+```bash
+curl --location --request POST 'https://<your-tenant>.b2clogin.com/<your-tenant>.onmicrosoft.com/<policy>/oauth2/v2.0/token' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials' \
+--data-urlencode 'client_id=<client ID>' \
+--data-urlencode 'client_secret=<client secret>' \
+--data-urlencode 'scope=<Your API id uri>/.default'
+```
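+
+With a valid access token (for example, `$token` from the PowerShell script above), call your protected web API by sending the token as a bearer token in the `Authorization` header. A minimal sketch, using a placeholder API URL:
+
+```powershell
+# Call the protected web API with the access token (URL is a placeholder)
+$headers = @{ Authorization = "Bearer $($token.access_token)" }
+Invoke-RestMethod -Method Get -Uri 'https://<your-api-host>/tasks' -Headers $headers
+```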
+
+## Step 4. Customize the token
+
+Custom policies provide a way to extend the token issuance process. To customize the user journey of the OAuth 2.0 client credentials flow, follow the [guidance on how to configure a client credentials user journey](https://github.com/azure-ad-b2c/samples/tree/master/policies/client_credentials_flow). Then, in the `JwtIssuer` technical profile, add the `ClientCredentialsUserJourneyId` metadata item with a reference to the user journey you created.
+
+The following example shows how to add the `ClientCredentialsUserJourneyId` to the token issuer technical profile.
+
+```xml
+<TechnicalProfile Id="JwtIssuer">
+ <Metadata>
+ <Item Key="ClientCredentialsUserJourneyId">ClientCredentialsJourney</Item>
+ </Metadata>
+</TechnicalProfile>
+```
+
+The following example shows a client credentials user journey. The first and the last orchestration steps are required.
+
+```xml
+<UserJourneys>
+ <UserJourney Id="ClientCredentialsJourney">
+ <OrchestrationSteps>
+ <!-- [Required] Do the client credentials -->
+ <OrchestrationStep Order="1" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="ClientCredSetupExchange" TechnicalProfileReferenceId="ClientCredentials_Setup" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <!-- [Optional] Call a REST API or claims transformation -->
+ <OrchestrationStep Order="2" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="TokenAugmentation" TechnicalProfileReferenceId="TokenAugmentation" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <!-- [Required] Issue the access token -->
+ <OrchestrationStep Order="3" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
+ </OrchestrationSteps>
+ </UserJourney>
+</UserJourneys>
+```
+
+## Next steps
+
+Learn how to [set up a resource owner password credentials flow in Azure AD B2C](add-ropc-policy.md).
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 12/09/2021 Last updated : 06/15/2022
The following table summarizes the OAuth 2.0 and OpenId Connect application auth
||::|::||
| [Authorization code](authorization-code-flow.md) | GA | GA | Allows users to sign in to web applications. The web application receives an authorization code. The authorization code is redeemed to acquire a token to call web APIs.|
| [Authorization code with PKCE](authorization-code-flow.md)| GA | GA | Allows users to sign in to mobile and single-page applications. The application receives an authorization code using proof key for code exchange (PKCE). The authorization code is redeemed to acquire a token to call web APIs. |
-[Client credentials grant](https://tools.ietf.org/html/rfc6749#section-4.4)| GA | GA | Allows access web-hosted resources by using the identity of an application. Commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. <br /> <br /> To use this feature in an Azure AD B2C tenant, use the Azure AD endpoint of your Azure AD B2C tenant. For more information, see [OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). This flow doesn't use your Azure AD B2C [user flow or custom policy](user-flow-overview.md) settings. |
+[Client credentials flow](client-credentials-grant-flow.md)| Preview | Preview | Allows access to web-hosted resources by using the identity of an application. Commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. |
[Device authorization grant](https://tools.ietf.org/html/rfc8628)| NA | NA | Allows users to sign in to input-constrained devices such as a smart TV, IoT device, or printer. |
[Implicit flow](implicit-flow-single-page-application.md) | GA | GA | Allows users to sign in to single-page applications. The app gets tokens directly without performing a back-end server credential exchange.|
[On-behalf-of](../active-directory/develop/v2-oauth2-on-behalf-of-flow.md)| NA | NA | An application invokes a service or web API, which in turn needs to call another service or web API. <br /> <br /> For the middle-tier service to make authenticated requests to the downstream service, pass a *client credential* token in the authorization header. Optionally, you can include a custom header with the Azure AD B2C user's token. |
active-directory-domain-services Administration Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/administration-concepts.md
Previously updated : 06/01/2021 Last updated : 06/15/2022
active-directory-domain-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md
Previously updated : 04/28/2021 Last updated : 06/15/2022
active-directory-domain-services Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scenarios.md
Previously updated : 08/14/2020 Last updated : 06/15/2022
active-directory-domain-services Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md
Previously updated : 03/07/2022 Last updated : 06/15/2022
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md
Previously updated : 10/11/2021 Last updated : 06/15/2022
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Previously updated : 07/20/2020 Last updated : 06/14/2022 -+
Smart lockout is always on, for all Azure AD customers, with these default setti
Using smart lockout doesn't guarantee that a genuine user is never locked out. When smart lockout locks a user account, we try our best to not lock out the genuine user. The lockout service attempts to ensure that bad actors can't gain access to a genuine user account. The following considerations apply:
-* Each Azure AD data center tracks lockout independently. A user has (*threshold_limit * datacenter_count*) number of attempts, if the user hits each data center.
* Smart lockout uses familiar versus unfamiliar location to differentiate between a bad actor and the genuine user. Familiar and unfamiliar locations each have a separate lockout counter.
+* Due to the geo-distributed nature of the Azure AD authentication service, there may be slight variance in the total number of failed sign-in attempts before a user gets locked out. For example, if the lockout threshold is set to 10, up to 12 total failed sign-in attempts may occur before the account is locked out.
Smart lockout can be integrated with hybrid deployments that use password hash sync or pass-through authentication to protect on-premises Active Directory Domain Services (AD DS) accounts from being locked out by attackers. By setting smart lockout policies in Azure AD appropriately, attacks can be filtered out before they reach on-premises AD DS.
When the smart lockout threshold is triggered, you will get the following messag
*Your account is temporarily locked to prevent unauthorized use. Try again later, and if you still have trouble, contact your admin.*
-When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more than your defined lockout threshold number of attempts to cause a lockout. A user has a maximum of (*threshold_limit * datacenter_count*) number of bad attempts before being completely locked out.
Smart lockout tracks the last three bad password hashes to avoid incrementing the lockout counter for the same password. If someone enters the same bad password multiple times, this behavior won't cause the account to lock out.

## Default protections

In addition to Smart lockout, Azure AD also protects against attacks by analyzing signals including IP traffic and identifying anomalous behavior. Azure AD will block these malicious sign-ins by default and return [AADSTS50053 - IdsLocked error code](../develop/reference-aadsts-error-codes.md), regardless of the password validity.

## Next steps
-To customize the experience further, you can [configure custom banned passwords for Azure AD password protection](tutorial-configure-custom-password-protection.md).
-
-To help users reset or change their password from a web browser, you can [configure Azure AD self-service password reset](tutorial-enable-sspr.md).
+- To customize the experience further, you can [configure custom banned passwords for Azure AD password protection](tutorial-configure-custom-password-protection.md).
+- To help users reset or change their password from a web browser, you can [configure Azure AD self-service password reset](tutorial-enable-sspr.md).
active-directory All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/all-reports.md
description: View a list and description of all system reports available in Perm
--+ Last updated 02/23/2022
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
description: Frequently asked questions (FAQs) about Permissions Management.
--+ Last updated 04/20/2022
active-directory How To Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md
description: How to attach and detach permissions for groups, users, and service
--+ Last updated 02/23/2022
active-directory How To Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md
description: How to attach and detach permissions for users, roles, and groups f
--+ Last updated 02/23/2022
active-directory How To Audit Trail Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md
description: How to generate an on-demand report from a query in the **Audit** d
--+ Last updated 02/23/2022
active-directory How To Clone Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md
description: How to clone a role/policy in the Just Enough Permissions (JEP) Con
--+ Last updated 02/23/2022
active-directory How To Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md
description: How to create and view activity alerts and alert triggers in Permis
--+ Last updated 02/23/2022
active-directory How To Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md
description: How to create or approve a request for permissions in the Remediati
--+ Last updated 02/23/2022
active-directory How To Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md
description: How to create a custom query in the Audit dashboard in Permissions
--+ Last updated 02/23/2022
active-directory How To Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md
description: How to select group-based permissions settings in Permissions Manag
--+ Last updated 02/23/2022
active-directory How To Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-role-policy.md
description: How to create a role/policy in the Remediation dashboard in Permiss
--+ Last updated 02/23/2022
active-directory How To Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-rule.md
description: How to create a rule in the Autopilot dashboard in Permissions Mana
--+ Last updated 02/23/2022
active-directory How To Delete Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md
description: How to delete a role/policy in the Just Enough Permissions (JEP) Co
--+ Last updated 02/23/2022
active-directory How To Modify Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md
description: How to modify a role/policy in the Remediation dashboard in Permiss
--+ Last updated 02/23/2022
active-directory How To Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-notifications-rule.md
description: How to view notification settings for a rule in the Autopilot dash
--+ Last updated 02/23/2022
active-directory How To Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-recommendations-rule.md
description: How to generate, view, and apply rule recommendations in the Autopi
--+ Last updated 02/23/2022
active-directory How To Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md
description: How to revoke access to high-risk and unused tasks or assign read-o
--+ Last updated 02/23/2022
active-directory How To View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-view-role-policy.md
description: How to view and filter information about roles/ policies in the Rem
--+ Last updated 02/23/2022
active-directory Integration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/integration-api.md
description: How to view the Permissions Management API integration settings and
--+ Last updated 02/23/2022
active-directory Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/multi-cloud-glossary.md
description: Permissions Management glossary
--+ Last updated 02/23/2022
active-directory Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md
description: How to add an account/ subscription/ project to Permissions Managem
--+ Last updated 02/23/2022
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
description: How to onboard an Amazon Web Services (AWS) account on Permissions
--+ Last updated 04/20/2022
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
description: How to onboard a Microsoft Azure subscription on Permissions Management.
--+ Last updated 04/20/2022
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
description: How to enable or disable the controller in Permissions Management a
--+ Last updated 02/23/2022
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
description: How to enable Permissions Management in your organization.
--+ Last updated 04/20/2022
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
description: How to onboard a Google Cloud Platform (GCP) project on Permissions
--+ Last updated 04/20/2022
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
description: An introduction to Permissions Management.
--+ Last updated 04/20/2022
active-directory Product Account Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-explorer.md
Title: View roles and identities that can access account information from an ext
description: How to view information about identities that can access accounts from an external account in Permissions Management. -+ Last updated 02/23/2022
active-directory Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-settings.md
Title: View personal and organization information in Permissions Management
description: How to view personal and organization information in the Account settings dashboard in Permissions Management. -+ Last updated 02/23/2022
active-directory Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-audit-trail.md
description: How to filter and query user activity in Permissions Management.
--+ Last updated 02/23/2022
active-directory Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-dashboard.md
description: How to view data about the activity in your authorization system in
--+ Last updated 02/23/2022
active-directory Product Data Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-inventory.md
description: How to display an inventory of created resources and licenses for y
--+ Last updated 02/23/2022
active-directory Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-sources.md
description: How to view and configure settings for collecting data from your au
--+ Last updated 02/23/2022
active-directory Product Define Permission Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-define-permission-levels.md
description: How to define and manage users, roles, and access levels in Permiss
--+ Last updated 02/23/2022
active-directory Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-integrations.md
description: View integration information about an authorization system in Permi
--+ Last updated 02/23/2022
active-directory Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permission-analytics.md
description: How to create and view permission analytics triggers in the Permiss
--+ Last updated 02/23/2022
active-directory Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md
description: How to generate and download the Permissions analytics report in Pe
--+ Last updated 02/23/2022
active-directory Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-reports.md
description: How to view system reports in the Reports dashboard in Permissions
--+ Last updated 02/23/2022
active-directory Product Rule Based Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md
description: How to create and view rule-based anomalies and anomaly triggers in
--+ Last updated 02/23/2022
active-directory Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-statistical-anomalies.md
description: How to create and view statistical anomalies and anomaly triggers i
--+ Last updated 02/23/2022
active-directory Report Create Custom Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-create-custom-report.md
description: How to create, view, and share a custom report in the Permissions M
--+ Last updated 02/23/2022
active-directory Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-view-system-report.md
description: How to generate and view a system report in the Permissions Managem
--+ Last updated 02/23/2022
active-directory Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/training-videos.md
description: Permissions Management training videos.
--+ Last updated 04/20/2022
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/troubleshoot.md
description: Troubleshoot issues with Permissions Management
--+ Last updated 02/23/2022
active-directory Ui Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-audit-trail.md
description: How to use queries to see how users access information in an author
--+ Last updated 02/23/2022
active-directory Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-autopilot.md
description: How to view rules in the Autopilot dashboard in Permissions Managem
--+ Last updated 02/23/2022
active-directory Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md
description: How to view statistics and data about your authorization system in
--+ Last updated 02/23/2022
active-directory Ui Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-remediation.md
description: How to view existing roles/policies and requests for permission in
--+ Last updated 02/23/2022
active-directory Ui Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-tasks.md
description: How to view information about active and completed tasks in the Act
--+ Last updated 02/23/2022
active-directory Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-triggers.md
description: How to view information about activity triggers in the Activity tri
--+ Last updated 02/23/2022
active-directory Ui User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-user-management.md
description: How to manage users and groups in the User management dashboard in
--+ Last updated 02/23/2022
active-directory Usage Analytics Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-access-keys.md
description: How to view analytic information about access keys in Permissions
--+ Last updated 02/23/2022
active-directory Usage Analytics Active Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-resources.md
description: How to view usage analytics about active resources in Permissions M
--+ Last updated 02/23/2022
active-directory Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-tasks.md
description: How to view analytic information about active tasks in Permissions
--+ Last updated 02/23/2022
active-directory Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-groups.md
description: How to view analytic information about groups in Permissions Manage
--+ Last updated 02/23/2022
active-directory Usage Analytics Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-home.md
description: How to use the Analytics dashboard in Permissions Management to vie
--+ Last updated 02/23/2022
active-directory Usage Analytics Serverless Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-serverless-functions.md
description: How to view analytic information about serverless functions in Perm
--+ Last updated 02/23/2022
active-directory Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-users.md
description: How to view analytic information about users in Permissions Managem
--+ Last updated 02/23/2022
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
A complete list of all services included can be found in the article [Apps inclu
The Microsoft Azure Management application includes multiple services.

- Azure portal
+ - Microsoft Entra admin center
- Azure Resource Manager provider
- Classic deployment model APIs
- Azure PowerShell
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 05/02/2022 Last updated : 06/14/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on May 2nd, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on June 14th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Dynamics 365 Business Central for IWs | PROJECT_MADEIRA_PREVIEW_IW_SKU | 6a4a1628-9b9a-424d-bed5-4118f0ede3fd | PROJECT_MADEIRA_PREVIEW_IW (3f2afeed-6fb5-4bf9-998f-f2912133aead)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Business Central for IWs (3f2afeed-6fb5-4bf9-998f-f2912133aead)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Business Central Premium | DYN365_BUSCENTRAL_PREMIUM | f991cecc-3f91-4cd0-a9a8-bf1c8167e029 | DYN365_BUSCENTRAL_PREMIUM (8e9002c0-a1d8-4465-b952-817d2948e6e2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Dynamics 365 Business Central Premium (8e9002c0-a1d8-4465-b952-817d2948e6e2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | Dynamics 365 Customer Engagement Plan | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | D365_CSI_EMBED_CE (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>D365_ProjectOperations (69f07c66-bee4-4222-b051-195095efee5b)<br/>D365_ProjectOperationsCDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Forms_Pro_CE (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_FOR_PROJECT_OPERATIONS (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 Customer Service Insights for CE Plan (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>Dynamics 365 P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>Dynamics 365 Project Operations (69f07c66-bee4-4222-b051-195095efee5b)<br/>Dynamics 365 Project Operations CDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Microsoft Dynamics 365 Customer Voice for Customer Engagement Plan (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>Microsoft Social Engagement Enterprise (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Dynamics 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>Project for Project Operations (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
-| Dynamics 365 Customer Insights Viral | DYN365_CUSTOMER_INSIGHTS_VIRAL | 036c2481-aa8a-47cd-ab43-324f0c157c2d | CDS_CUSTOMER_INSIGHTS_TRIAL (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>DYN365_CUSTOMER_INSIGHTS_ENGAGEMENT_INSIGHTS_BASE_TRIAL (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>DYN365_CUSTOMER_INSIGHTS_VIRAL (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Forms_Pro_Customer_Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) | Common Data Service for Customer Insights Trial (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>Dynamics 365 Customer Insights Engagement Insights Viral (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>Dynamics 365 Customer Insights Viral Plan (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) |
+| Dynamics 365 Customer Insights vTrial | DYN365_CUSTOMER_INSIGHTS_VIRAL | 036c2481-aa8a-47cd-ab43-324f0c157c2d | CDS_CUSTOMER_INSIGHTS_TRIAL (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>DYN365_CUSTOMER_INSIGHTS_ENGAGEMENT_INSIGHTS_BASE_TRIAL (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>DYN365_CUSTOMER_INSIGHTS_VIRAL (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Forms_Pro_Customer_Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) | Common Data Service for Customer Insights Trial (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>Dynamics 365 Customer Insights Engagement Insights Viral (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>Dynamics 365 Customer Insights Viral Plan (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) |
| Dynamics 365 Customer Service Enterprise Viral Trial | Dynamics_365_Customer_Service_Enterprise_viral_trial | 1e615a51-59db-4807-9957-aa83c3657351 | CUSTOMER_VOICE_DYN365_VIRAL_TRIAL (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>DYN365_CS_MESSAGING_VIRAL_TRIAL (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>DYN365_CS_ENTERPRISE_VIRAL_TRIAL (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>DYNB365_CSI_VIRAL_TRIAL (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>DYN365_CS_VOICE_VIRAL_TRIAL (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_APPS_DYN365_VIRAL_TRIAL (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb) | Customer Voice for Dynamics 365 vTrial (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Dynamics 365 Customer Service Digital Messaging vTrial (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>Dynamics 365 Customer Service Enterprise vTrial (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>Dynamics 365 Customer Service Insights vTrial (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>Dynamics 365 Customer Service Voice vTrial (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb) | | Dynamics 365 Customer Service Insights Trial | DYN365_AI_SERVICE_INSIGHTS | 61e6bd70-fbdb-4deb-82ea-912842f39431 | DYN365_AI_SERVICE_INSIGHTS (4ade5aa6-5959-4d2c-bf0a-f4c9e2cc00f2) |Dynamics 365 AI for Customer Service Trial (4ade5aa6-5959-4d2c-bf0a-f4c9e2cc00f2) | | Dynamics 365 Customer Voice Trial | FORMS_PRO | bc946dac-7877-4271-b2f7-99d2db13cd2c | DYN365_CDS_FORMS_PRO (363430d1-e3f7-43bc-b07b-767b6bb95e4b)<br/>FORMS_PRO (17efdd9f-c22c-4ad8-b48e-3b1f3ee1dc9a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>FLOW_FORMS_PRO (57a0746c-87b8-4405-9397-df365a9db793) | Common Data Service (363430d1-e3f7-43bc-b07b-767b6bb95e4b)<br/>Dynamics 365 Customer Voice (17efdd9f-c22c-4ad8-b48e-3b1f3ee1dc9a)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Power Automate for Dynamics 365 Customer Voice (57a0746c-87b8-4405-9397-df365a9db793) | | Dynamics 365 Customer Service Professional | DYN365_CUSTOMER_SERVICE_PRO | 1439b6e2-5d59-4873-8c59-d60e2a196e92 | DYN365_CUSTOMER_SERVICE_PRO (6929f657-b31b-4947-b4ce-5066c3214f54)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_CUSTOMER_SERVICE_PRO (c507b04c-a905-4940-ada6-918891e6d3ad)<br/>FLOW_CUSTOMER_SERVICE_PRO (0368fc9c-3721-437f-8b7d-3d0f888cdefc)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Customer Service Pro (6929f657-b31b-4947-b4ce-5066c3214f54)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Customer Service Pro (c507b04c-a905-4940-ada6-918891e6d3ad)<br/>Power Automate for Customer Service Pro (0368fc9c-3721-437f-8b7d-3d0f888cdefc)<br/>Project Online Essentials 
(1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
+| Dynamics 365 Customer Voice | DYN365_CUSTOMER_VOICE_BASE | 359ea3e6-8130-4a57-9f8f-ad897a0342f1 | Customer_Voice_Base (296820fe-dce5-40f4-a4f2-e14b8feef383)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Customer Voice Base Plan (296820fe-dce5-40f4-a4f2-e14b8feef383)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Customer Voice Additional Responses | Forms_Pro_AddOn | 446a86f8-a0cb-4095-83b3-d100eb050e3d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Forms_Pro_AddOn (90a816f6-de5f-49fd-963c-df490d73b7b5) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics 365 Customer Voice Add-on (90a816f6-de5f-49fd-963c-df490d73b7b5) | | Dynamics 365 Customer Voice Additional Responses | DYN365_CUSTOMER_VOICE_ADDON | 65f71586-ade3-4ce1-afc0-1b452eaf3782 | CUSTOMER_VOICE_ADDON (e6e35e2d-2e7f-4e71-bc6f-2f40ed062f5d)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics Customer Voice Add-On (e6e35e2d-2e7f-4e71-bc6f-2f40ed062f5d)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Customer Voice USL | Forms_Pro_USL | e2ae107b-a571-426f-9367-6d4c8f1390ba | CDS_FORM_PRO_USL (e9830cfd-e65d-49dc-84fb-7d56b9aa2c89)<br/>Forms_Pro_USL (3ca0766a-643e-4304-af20-37f02726339b)<br/>FLOW_FORMS_PRO (57a0746c-87b8-4405-9397-df365a9db793) | Common Data Service (e9830cfd-e65d-49dc-84fb-7d56b9aa2c89)<br/>Microsoft Dynamics 365 Customer Voice USL (3ca0766a-643e-4304-af20-37f02726339b)<br/>Power Automate for Dynamics 365 Customer Voice (57a0746c-87b8-4405-9397-df365a9db793) | | Dynamics 365 Enterprise Edition - Additional Portal (Qualified Offer) | CRM_ONLINE_PORTAL | a4bfb28e-becc-41b0-a454-ac680dc258d3 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRM_ONLINE_PORTAL (1d4e9cb1-708d-449c-9f71-943aa8ed1d6a) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online - Portal Add-On (1d4e9cb1-708d-449c-9f71-943aa8ed1d6a) | | Dynamics 365 Field Service Viral Trial | Dynamics_365_Field_Service_Enterprise_viral_trial | 29fcd665-d8d1-4f34-8eed-3811e3fca7b3 | CUSTOMER_VOICE_DYN365_VIRAL_TRIAL (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>DYN365_FS_ENTERPRISE_VIRAL_TRIAL (20d1455b-72b2-4725-8354-a177845ab77d)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_APPS_DYN365_VIRAL_TRIAL (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb) | Customer Voice for Dynamics 365 vTrial (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>Dynamics 365 Field Service Enterprise vTrial (20d1455b-72b2-4725-8354-a177845ab77d)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb) | | Dynamics 365 Finance | DYN365_FINANCE | 55c9eb4e-c746-45b4-b255-9ab6b19d5c62 | DYN365_CDS_FINANCE (e95d7060-d4d9-400a-a2bd-a244bf0b609e)<br/>DYN365_REGULATORY_SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>D365_Finance (9f0e1b4e-9b33-4300-b451-b2c662cd4ff7)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba) | Common Data Service for Dynamics 365 Finance (e95d7060-d4d9-400a-a2bd-a244bf0b609e)<br/>Dynamics 365 for Finance and Operations, Enterprise edition - Regulatory Service (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics 365 for Finance (9f0e1b4e-9b33-4300-b451-b2c662cd4ff7)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba) |
+| Dynamics 365 for Case Management Enterprise Edition | DYN365_ENTERPRISE_CASE_MANAGEMENT | d39fb075-21ae-42d0-af80-22a2599749e0 | DYN365_ENTERPRISE_CASE_MANAGEMENT (2822a3a1-9b8f-4432-8989-e11669a60dc8)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba) | Dynamics 365 for Case Management (2822a3a1-9b8f-4432-8989-e11669a60dc8)<br/>Microsoft Social Engagement (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba) |
| Dynamics 365 for Customer Service Enterprise Edition | DYN365_ENTERPRISE_CUSTOMER_SERVICE | 749742bf-0d37-4158-a120-33567104deeb | D365_CSI_EMBED_CSEnterprise (5b1e5982-0e88-47bb-a95e-ae6085eda612)<br/>DYN365_ENTERPRISE_CUSTOMER_SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Forms_Pro_Service (67bf4812-f90b-4db9-97e7-c0bbbf7b2d09)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 Customer Service Insights for CS Enterprise (5b1e5982-0e88-47bb-a95e-ae6085eda612)<br/>Dynamics 365 for Customer Service (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics 365 Customer Voice for Customer Service Enterprise (67bf4812-f90b-4db9-97e7-c0bbbf7b2d09)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>Retired - Microsoft Social Engagement (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) | | DYNAMICS 365 FOR FINANCIALS BUSINESS EDITION | DYN365_FINANCIALS_BUSINESS_SKU | cc13a803-544e-4464-b4e4-6d6169a138fa | DYN365_FINANCIALS_BUSINESS (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR FINANCIALS (920656a2-7dd8-4c83-97b6-a356414dbd36) |
+| Dynamics 365 for Marketing Business Edition | DYN365_BUSINESS_MARKETING | 238e2f8d-e429-4035-94db-6926be4ffe7b | DYN365_BUSINESS_Marketing (393a0c96-9ba1-4af0-8975-fa2f853a25ac)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Marketing (393a0c96-9ba1-4af0-8975-fa2f853a25ac)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| DYNAMICS 365 FOR SALES AND CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES_CUSTOMERSERVICE | 8edc2cf8-6438-4fa9-b6e3-aa1660c640cc | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR SALES ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES | 1e1a282c-9c54-43a2-9310-98ef728faace | DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | Dynamics 365 For Sales Professional | D365_SALES_PRO | be9f9771-1c64-4618-9907-244325141096 | DYN365_SALES_PRO (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_SALES_PRO (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>FLOW_SALES_PRO (f944d685-f762-4371-806d-a1f48e5bea13)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Sales Professional (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Sales Pro (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>Power Automate for Sales Pro (f944d685-f762-4371-806d-a1f48e5bea13)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| DYNAMICS 365 TALENT: ONBOARD | DYNAMICS_365_ONBOARDING_SKU | b56e7ccc-d5c7-421f-a23b-5c18bdbad7c0 | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics_365_Talent_Onboard (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | COMMON DATA SERVICE (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | | DYNAMICS 365 TEAM MEMBERS | DYN365_TEAM_MEMBERS | 7ac9fe77-66b7-4e5e-9e46-10eed1cff547 | DYNAMICS_365_FOR_RETAIL_TEAM_MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYN365_ENTERPRISE_TALENT_ATTRACT_TEAMMEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_ENTERPRISE_TALENT_ONBOARD_TEAMMEMBER (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS_365_FOR_TALENT_TEAM_MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYN365_TEAM_MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYNAMICS 365 TEAM MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72) | | DYNAMICS 365 UNF OPS PLAN ENT EDITION | Dynamics_365_for_Operations | ccba3cfe-71ef-423a-bd87-b6df3dce59a9 | DDYN365_CDS_DYN_P2 (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYN365_TALENT_ENTERPRISE (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>Dynamics_365_for_Operations (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>Dynamics_365_for_Retail (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS_365_HIRING_FREE_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa) | COMMON DATA SERVICE (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYNAMICS 365 FOR TALENT (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>DYNAMICS 365 FOR OPERATIONS (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>DYNAMICS 365 FOR RETAIL (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS 365 HIRING FREE PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW FOR DYNAMICS 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa) |
+| Enterprise Mobility + Security A3 for Faculty | EMS_EDU_FACULTY | aedfac18-56b8-45e3-969b-53edb4ba4952 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>AAD_EDU (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Azure Active Directory for Education (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Windows Store Service (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) |
| ENTERPRISE MOBILITY + SECURITY E3 | EMS | efccb6f7-5641-4e0e-bd10-b4976e1bf68e | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | ENTERPRISE MOBILITY + SECURITY E5 | EMSPREMIUM | b05e124f-c7cc-45a0-a6aa-8cf78c946968 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>AZURE INFORMATION PROTECTION PREMIUM P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT CLOUD APP SECURITY (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR IDENTITY (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Enterprise Mobility + Security G3 GCC | EMS_GOV | c793db86-5237-494e-9b11-dcd4877c2c8c | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT 365 APPS FOR BUSINESS | O365_BUSINESS | cdd28e44-67e3-425e-be4c-737fab2899d3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | MICROSOFT 365 APPS FOR BUSINESS | SMB_BUSINESS | b214fe43-f5a3-4703-beeb-fa97188220fc | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | MICROSOFT 365 APPS FOR ENTERPRISE | OFFICESUBSCRIPTION | c2273bd0-dff7-4215-9ef5-2c7bcfb06425 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
+| Microsoft 365 Apps for enterprise (device) | OFFICE_PROPLUS_DEVICE1 | ea4c5ec8-50e3-4193-89b9-50da5bd4cdc7 | OFFICE_PROPLUS_DEVICE (3c994f28-87d5-4273-b07a-eb6190852599) | Microsoft 365 Apps for Enterprise (Device) (3c994f28-87d5-4273-b07a-eb6190852599) |
| Microsoft 365 Apps for Faculty | OFFICESUBSCRIPTION_FACULTY | 12b8c807-2e20-48fc-b453-542b6ee9d171 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OneDrive for Business (Plan 1) (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91) |
+| Microsoft 365 Apps for Students | OFFICESUBSCRIPTION_STUDENT | c32f9321-a627-406d-a114-1f9c81aaafac | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OneDrive for Business (Plan 1) (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122) |
| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOC | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) | | MICROSOFT 365 BUSINESS BASIC | O365_BUSINESS_ESSENTIALS | 3b555118-da6a-4418-894f-7df1e2096870 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | MICROSOFT 365 BUSINESS BASIC | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT DEFENDER FOR ENDPOINT | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Endpoint P1 | DEFENDER_ENDPOINT_P1 | 16a55f2f-ff35-4cd5-9146-fb784e3761a5 | Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4) | MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4) | | Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
+| Microsoft Defender for Office 365 (Plan 1) Faculty | ATP_ENTERPRISE_FACULTY | 26ad4b5c-b686-462e-84b9-d7c22b46837f | ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) |
| MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) | | Microsoft Defender for Office 365 (Plan 1) GCC | ATP_ENTERPRISE_GOV | d0d1ca43-b81a-4f51-81e5-a5b1ad7bb005 | ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516) | Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Intune Device | INTUNE_A_D | 2b317a4a-77a6-4188-9437-b68a77b4e2c6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | MICROSOFT INTUNE DEVICE FOR GOVERNMENT | INTUNE_A_D_GOV | 2c21e77a-e0d6-4570-b38a-7ff2dc17d2ca | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Power Apps Plan 2 Trial | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | Common Data Service ΓÇô VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Free (50e68c76-46c6-4674-81f9-75456511b170)<br/>Flow P2 Viral (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>PowerApps Trial (d5368ca3-357e-4acb-9c21-8495fb025d1f) |
-| Microsoft Power Apps for Developer | POWERAPPS_DEV | 5b631642-bd26-49fe-bd20-1daaa972ef80 | DYN365_CDS_DEV_VIRAL (d8c638e2-9508-40e3-9877-feb87603837b)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DEV_VIRAL (c7ce3f26-564d-4d3a-878d-d8ab868c85fe)<br/>POWERAPPS_DEV_VIRAL (a2729df7-25f8-4e63-984b-8a8484121554) | Common Data Service - DEV VIRAL (d8c638e2-9508-40e3-9877-feb87603837b)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Developer (c7ce3f26-564d-4d3a-878d-d8ab868c85fe)<br/>PowerApps for Developer (a2729df7-25f8-4e63-984b-8a8484121554) |
| MICROSOFT POWER AUTOMATE PLAN 2 | FLOW_P2 | 4755df59-3f73-41ab-a249-596ad72b5504 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | | MICROSOFT INTUNE SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/> | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/> EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/> MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
+| Microsoft PowerApps for Developer | POWERAPPS_DEV | 5b631642-bd26-49fe-bd20-1daaa972ef80 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>DYN365_CDS_DEV_VIRAL (d8c638e2-9508-40e3-9877-feb87603837b)<br/>FLOW_DEV_VIRAL (c7ce3f26-564d-4d3a-878d-d8ab868c85fe)<br/>POWERAPPS_DEV_VIRAL (a2729df7-25f8-4e63-984b-8a8484121554) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Common Data Service (d8c638e2-9508-40e3-9877-feb87603837b)<br/>Flow for Developer (c7ce3f26-564d-4d3a-878d-d8ab868c85fe)<br/>PowerApps for Developer (a2729df7-25f8-4e63-984b-8a8484121554) |
| Microsoft Power Apps Plan 2 (Qualified Offer) | POWERFLOW_P2 | ddfae3e3-fcb2-4174-8ebd-3023cb213c8b | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_P2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> Power Apps (Plan 2) (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | | MICROSOFT STREAM | STREAM | 1f2f344a-700d-42c9-9427-5cea1d5d7ba6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFTSTREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT STREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | | Microsoft Stream Plan 2 | STREAM_P2 | ec156933-b85b-4c50-84ec-c9e5603709ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_P2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Plan 2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| POWERAPPS AND LOGIC FLOWS | POWERAPPS_INDIVIDUAL_USER | 87bbbc60-4754-4998-8c88-227dca264858 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERFLOWSFREE (0b4346bb-8dc3-4079-9dfc-513696f56039)<br/>POWERVIDEOSFREE (2c4ec2dc-c62d-4167-a966-52a3e6374015)<br/>POWERAPPSFREE (e61a2945-1d4e-4523-b6e7-30ba39d20f32) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>LOGIC FLOWS (0b4346bb-8dc3-4079-9dfc-513696f56039)<br/>MICROSOFT POWER VIDEOS BASIC (2c4ec2dc-c62d-4167-a966-52a3e6374015)<br/>MICROSOFT POWERAPPS (e61a2945-1d4e-4523-b6e7-30ba39d20f32) | | PowerApps per app baseline access | POWERAPPS_PER_APP_IW | bf666882-9c9b-4b2e-aa2f-4789b0a52ba2 | CDS_PER_APP_IWTRIAL (94a669d1-84d5-4e54-8462-53b0ae2c8be5)<br/>Flow_Per_APP_IWTRIAL (dd14867e-8d31-4779-a595-304405f5ad39)<br/>POWERAPPS_PER_APP_IWTRIAL (35122886-cef5-44a3-ab36-97134eabd9ba) | CDS Per app baseline access (94a669d1-84d5-4e54-8462-53b0ae2c8be5)<br/>Flow per app baseline access (dd14867e-8d31-4779-a595-304405f5ad39)<br/>PowerApps per app baseline access (35122886-cef5-44a3-ab36-97134eabd9ba) | | Power Apps per app plan | POWERAPPS_PER_APP | a8ad7d2b-b8cf-49d6-b25a-69094a0be206 | CDS_PER_APP (9f2f00ad-21ae-4ceb-994b-d8bc7be90999)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_PER_APP (b4f657ff-d83e-4053-909d-baa2b595ec97)<br/>Flow_Per_APP (c539fa36-a64e-479a-82e1-e40ff2aa83ee) | CDS PowerApps per app plan (9f2f00ad-21ae-4ceb-994b-d8bc7be90999)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps per App Plan (b4f657ff-d83e-4053-909d-baa2b595ec97)<br/>Power Automate for Power Apps per App Plan (c539fa36-a64e-479a-82e1-e40ff2aa83ee) |
+| Power Apps per app plan (1 app or portal) | POWERAPPS_PER_APP_NEW | b4d7b828-e8dc-4518-91f9-e123ae48440d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CDSAICAPACITY_PERAPP (5d7a2e9a-4ee5-4f1c-bc9f-abc481bf39d8)<br/>DATAVERSE_POWERAPPS_PER_APP_NEW (6f0e9100-ff66-41ce-96fc-3d8b7ad26887)<br/>POWERAPPS_PER_APP_NEW (14f8dac2-0784-4daa-9cb2-6d670b088d64)<br/>Flow_Per_APP (c539fa36-a64e-479a-82e1-e40ff2aa83ee) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>AI Builder capacity Per App add-on (5d7a2e9a-4ee5-4f1c-bc9f-abc481bf39d8)<br/>Dataverse for Power Apps per app (6f0e9100-ff66-41ce-96fc-3d8b7ad26887)<br/>Power Apps per app (14f8dac2-0784-4daa-9cb2-6d670b088d64)<br/>Power Automate for Power Apps per App Plan (c539fa36-a64e-479a-82e1-e40ff2aa83ee) |
| Power Apps per user plan | POWERAPPS_PER_USER | b30411f5-fea1-4a59-9ad9-3db7c7ead579 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_PER_USER (ea2cf03b-ac60-46ae-9c1d-eeaeb63cec86)<br/>Flow_PowerApps_PerUser (dc789ed8-0170-4b65-a415-eb77d5bb350a) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps per User Plan (ea2cf03b-ac60-46ae-9c1d-eeaeb63cec86)<br/>Power Automate for Power Apps per User Plan (dc789ed8-0170-4b65-a415-eb77d5bb350a) | | Power Apps per user plan for Government | POWERAPPS_PER_USER_GCC | 8e4c6baa-f2ff-4884-9c38-93785d0d7ba1 | CDSAICAPACITY_PERUSER (91f50f7b-2204-4803-acac-5cf5668b8b39)<br/>CDSAICAPACITY_PERUSER_NEW (74d93933-6f22-436e-9441-66d205435abb)<br/>DYN365_CDS_P2_GOV (37396c73-2203-48e6-8be1-d882dae53275)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PER_USER_GCC (8f55b472-f8bf-40a9-be30-e29919d4ddfe)<br/>Flow_PowerApps_PerUser_GCC (8e3eb3bd-bc99-4221-81b8-8b8bc882e128) | AI Builder capacity Per User add-on (91f50f7b-2204-4803-acac-5cf5668b8b39)<br/>AI Builder capacity Per User add-on (74d93933-6f22-436e-9441-66d205435abb)<br/>Common Data Service for Government (37396c73-2203-48e6-8be1-d882dae53275)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps per User Plan for Government (8f55b472-f8bf-40a9-be30-e29919d4ddfe)<br/>Power Automate for Power Apps per User Plan for GCC (8e3eb3bd-bc99-4221-81b8-8b8bc882e128) | | Power Apps Plan 1 for Government | POWERAPPS_P1_GOV | eca22b68-b31f-4e9c-a20c-4d40287bc5dd | DYN365_CDS_P1_GOV (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>FLOW_P1_GOV (774da41c-a8b3-47c1-8322-b9c1ab68be9f)<br/>POWERAPPS_P1_GOV (5ce719f1-169f-4021-8a64-7d24dcaec15f) | Common Data Service for Government (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Automate (Plan 1) for Government (774da41c-a8b3-47c1-8322-b9c1ab68be9f)<br/>PowerApps Plan 1 for Government (5ce719f1-169f-4021-8a64-7d24dcaec15f) |
+| Power Apps Portals login capacity add-on Tier 2 (10 unit min) | POWERAPPS_PORTALS_LOGIN_T2 | 57f3babd-73ce-40de-bcb2-dadbfbfff9f7 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CDS_POWERAPPS_PORTALS_LOGIN (32ad3a4e-2272-43b4-88d0-80d284258208)<br/>POWERAPPS_PORTALS_LOGIN (084747ad-b095-4a57-b41f-061d84d69f6f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Common Data Service Power Apps Portals Login Capacity (32ad3a4e-2272-43b4-88d0-80d284258208)<br/>Power Apps Portals Login Capacity Add-On (084747ad-b095-4a57-b41f-061d84d69f6f) |
| Power Apps Portals login capacity add-on Tier 2 (10 unit min) for Government | POWERAPPS_PORTALS_LOGIN_T2_GCC | 26c903d5-d385-4cb1-b650-8d81a643b3c4 | CDS_POWERAPPS_PORTALS_LOGIN_GCC (0f7b9a29-7990-44ff-9d05-a76be778f410)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PORTALS_LOGIN_GCC (bea6aef1-f52d-4cce-ae09-bed96c4b1811) | Common Data Service Power Apps Portals Login Capacity for GCC (0f7b9a29-7990-44ff-9d05-a76be778f410)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps Portals Login Capacity Add-On for Government (bea6aef1-f52d-4cce-ae09-bed96c4b1811) | | Power Apps Portals page view capacity add-on for Government | POWERAPPS_PORTALS_PAGEVIEW_GCC | 15a64d3e-5b99-4c4b-ae8f-aa6da264bfe7 | CDS_POWERAPPS_PORTALS_PAGEVIEW_GCC (352257a9-db78-4217-a29d-8b8d4705b014)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PORTALS_PAGEVIEW_GCC (483d5646-7724-46ac-ad71-c78b7f099d8d) | CDS PowerApps Portals page view capacity add-on for GCC (352257a9-db78-4217-a29d-8b8d4705b014)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps Portals Page View Capacity Add-On for Government (483d5646-7724-46ac-ad71-c78b7f099d8d) | | Power Automate per flow plan | FLOW_BUSINESS_PROCESS | b3a42176-0a8c-4c3f-ba4e-f2b37fe5be6b | CDS_Flow_Business_Process (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_BUSINESS_PROCESS (7e017b61-a6e0-4bdc-861a-932846591f6e) | Common data service for Flow per business process plan (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per business process plan (7e017b61-a6e0-4bdc-861a-932846591f6e) |
The following service plans cannot be assigned together:
| CRMPLAN1 | 119cf168-b6cf-41fb-b82e-7fee7bae5814 | | CRMPLAN2 | bf36ca64-95c6-4918-9275-eb9f4ce2c04f | | CRMSTANDARD | f9646fb2-e3b2-4309-95de-dc4833737456 |
-| DYN365_ENTERPRISE_CUSTOMER_SERVICE | 99340b49-fb81-4b1e-976b-8f2ae8e9394f |
-| DYN365_ENTERPRISE_P1 | d56f3deb-50d8-465a-bedb-f079817ccac1 |
| DYN365_ENTERPRISE_P1_IW | 056a5f80-b4e0-4983-a8be-7ad254a113c9 | | DYN365_ENTERPRISE_SALES | 2da8e897-7791-486b-b08f-cc63c8129df7 | | DYN365_ENTERPRISE_TEAM_MEMBERS | 6a54b05e-4fab-40e7-9828-428db3b336fa |
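A quick way to cross-check any of these GUIDs against what a tenant actually holds is the Microsoft Graph `subscribedSkus` endpoint, which returns every purchased SKU together with its `skuPartNumber`, `skuId`, and nested service plans. A minimal sketch using the Azure CLI's generic `az rest` command (this assumes you're signed in with permission to read the directory):

```bash
# List each purchased SKU with its part number, SKU ID, and service plan names.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/subscribedSkus" \
  --query "value[].{skuPartNumber:skuPartNumber, skuId:skuId, plans:servicePlans[].servicePlanName}"
```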
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
The most frequent scenarios for application deletion are:
| Microsoft 365 Groups| *All properties are maintained*, including ObjectID, group memberships, licenses, and application assignments. | | Application registration| *All properties are maintained.* (See more information after this table.) |
-When you delete an application, the application registration by default enters the soft-delete state. To understand the relationship between application registrations and service principals, see [Apps and service principals in Azure AD - Microsoft identity platform](/azure/active-directory/develop/app-objects-and-service-principals).
+When you delete an application, the application registration by default enters the soft-delete state. To understand the relationship between application registrations and service principals, see [Apps and service principals in Azure AD - Microsoft identity platform](../develop/app-objects-and-service-principals.md).
## Recover from soft deletion
For more information on how to restore soft-deleted Microsoft 365 Groups, see th
### Applications
-Applications have two objects: the application registration and the service principal. For more information on the differences between the registration and the service principal, see [Apps and service principals in Azure AD](/azure/active-directory/develop/app-objects-and-service-principals).
+Applications have two objects: the application registration and the service principal. For more information on the differences between the registration and the service principal, see [Apps and service principals in Azure AD](../develop/app-objects-and-service-principals.md).
To restore an application from the Azure portal, select **App registrations** > **Deleted applications**. Select the application registration to restore, and then select **Restore app registration**.
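The restore can also be scripted: soft-deleted app registrations surface in the Microsoft Graph `directory/deletedItems` collection while they remain restorable. A minimal sketch with `az rest` (the object ID in the second call is a placeholder you'd copy from the first call's output):

```bash
# List application objects that are currently in the soft-delete state.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.application"

# Restore a specific application registration by its object ID (placeholder shown).
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/directory/deletedItems/{object-id}/restore"
```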
For more information on how to avoid unwanted deletions, see the following topic
* Business continuity and disaster planning * Document known good states
-* Monitoring and data retention
+* Monitoring and data retention
active-directory Access Reviews Application Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-application-preparation.md
The integration patterns listed above are applicable to third party SaaS applica
Now that you have identified the integration pattern for the application, check that the application as represented in Azure AD is ready for review. 1. In the Azure portal, click **Azure Active Directory**, click **Enterprise Applications**, and check whether your application is on the [list of enterprise applications](../manage-apps/view-applications-portal.md) in your Azure AD tenant.
-1. If the application is not already listed, then check if the application is available the [application gallery](../manage-apps/overview-application-gallery.md) for applications that can be integrated for federated SSO or provisioning. If it is in the gallery, then use the [tutorials](../saas-apps/tutorial-list.md) to configure the application for federation, and if it supports provisioning, also [configure the application](/azure/active-directory/app-provisioning/configure-automatic-user-provisioning-portal) for provisioning.
+1. If the application is not already listed, then check if the application is available in the [application gallery](../manage-apps/overview-application-gallery.md) for applications that can be integrated for federated SSO or provisioning. If it is in the gallery, then use the [tutorials](../saas-apps/tutorial-list.md) to configure the application for federation, and if it supports provisioning, also [configure the application](../app-provisioning/configure-automatic-user-provisioning-portal.md) for provisioning.
1. Once the application is in the list of enterprise applications in your tenant, select the application from the list. 1. Change to the **Properties** tab. Verify that the **User assignment required?** option is set to **Yes**. If it's set to **No**, all users in your directory, including external identities, can access the application, and you can't review access to the application.
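The **User assignment required?** toggle corresponds to the `appRoleAssignmentRequired` property on the application's service principal, so you can also check or fix it programmatically. A sketch with `az rest` (the service principal object ID is a placeholder):

```bash
# Read the current setting (placeholder service principal object ID).
az rest --method get \
  --url 'https://graph.microsoft.com/v1.0/servicePrincipals/{sp-object-id}?$select=appRoleAssignmentRequired'

# Require assignment so that access to the application can be reviewed.
az rest --method patch \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/{sp-object-id}" \
  --body '{"appRoleAssignmentRequired": true}'
```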
Once the reviews have started, you can monitor their progress, and update the ap
## Next steps * [Plan an Azure Active Directory access reviews deployment](deploy-access-reviews.md)
-* [Create an access review of a group or application](create-access-review.md)
+* [Create an access review of a group or application](create-access-review.md)
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
Previously updated : 01/21/2022 Last updated : 06/15/2022
# Migrate to cloud authentication using staged rollout
-Staged rollout allows you to selectively test groups of users with cloud authentication capabilities like Azure AD Multi-Factor Authentication (MFA), Conditional Access, Identity Protection for leaked credentials, Identity Governance, and others, before cutting over your domains. This article discusses how to make the switch. Before you begin the staged rollout, however, you should consider the implications if one or more of the following conditions is true:
+Staged rollout allows you to selectively test groups of users with cloud authentication capabilities like Azure AD Multi-Factor Authentication (MFA), Conditional Access, Identity Protection for leaked credentials, Identity Governance, and others, before cutting over your domains. This article discusses how to make the switch. Before you begin the staged rollout, however, you should consider the implications if one or more of the following conditions is true:
- You're currently using an on-premises Multi-Factor Authentication server. - You're using smart cards for authentication.
For an overview of the feature, view this "Azure Active Directory: What is stage
- You have an Azure Active Directory (Azure AD) tenant with federated domains. -- You have decided to move to either of two options:
- - **Option A** - *password hash synchronization (sync)*. For more information, see [What is password hash sync](whatis-phs.md)
- - **Option B** - *pass-through authentication*. For more information, see [What is pass-through authentication](how-to-connect-pta.md)
+- You have decided to move to one of the following options:
+ - **Password hash synchronization (sync)**. For more information, see [What is password hash sync](whatis-phs.md)
+ - **Pass-through authentication**. For more information, see [What is pass-through authentication](how-to-connect-pta.md)
+ - **Azure AD Certificate-based authentication (CBA) settings**. For more information, see [Overview of Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md)
For both options, we recommend enabling single sign-on (SSO) to achieve a silent sign-in experience. For Windows 7 or 8.1 domain-joined devices, we recommend using seamless SSO. For more information, see [What is seamless SSO](how-to-connect-sso.md).
To roll out a specific feature (*pass-through authentication*, *password hash sy
### Enable a staged rollout of a specific feature on your tenant
-You can roll out one of these options:
+You can roll out these options:
-- **Option A** - *password hash sync* + *seamless SSO*
-- **Option B** - *pass-through authentication* + *seamless SSO*
-- **Not supported** - *password hash sync* + *pass-through authentication* + *seamless SSO*
+- **Password hash sync** + **Seamless SSO**
+- **Pass-through authentication** + **Seamless SSO**
+- **Not supported** - **Password hash sync** + **Pass-through authentication** + **Seamless SSO**
+- **Certificate-based authentication settings**
Do the following:
2. Select the **Enable staged rollout for managed user sign-in** link.
- For example, if you want to enable *Option A*, slide the **Password Hash Sync** and **Seamless single sign-on** controls to **On**, as shown in the following images.
+ For example, if you want to enable **Password Hash Sync** and **Seamless single sign-on**, slide both controls to **On**.
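If you'd rather script the rollout than use the portal, the toggles above correspond to Microsoft Graph beta `featureRolloutPolicy` objects whose `feature` is `passwordHashSync`, `passthroughAuthentication`, or `seamlessSso`. The following is only a sketch against the beta endpoint, whose shape may change; the policy and group IDs are placeholders:

```bash
# Create a staged rollout policy for password hash sync (beta API; subject to change).
az rest --method post \
  --url "https://graph.microsoft.com/beta/policies/featureRolloutPolicies" \
  --body '{"displayName": "PHS staged rollout", "feature": "passwordHashSync", "isEnabled": true}'

# Add a pilot group to the policy (placeholder policy and group IDs).
az rest --method post \
  --url 'https://graph.microsoft.com/beta/policies/featureRolloutPolicies/{policy-id}/appliesTo/$ref' \
  --body '{"@odata.id": "https://graph.microsoft.com/beta/directoryObjects/{group-id}"}'
```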
active-directory Overview Service Health Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md
na Previously updated : 06/13/2022 Last updated : 06/15/2022
Most of the built-in admin roles will have access to see these notifications. Fo
## What you should know
-Service Health events allow the addition of alerts and notifications to be applied to subscription events. Currently, this isn't yet supported with tenant events, but will be coming soon. Until notifications are supported, both tenant events and subscription events to all subscriptions in an impacted tenant will be issued in the event of an outage.
+Service Health events allow the addition of alerts and notifications to be applied to subscription events. Currently, this isn't supported with tenant events, but support is coming soon.
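For the subscription events that are supported today, a Service Health notification is an activity log alert whose condition is `category=ServiceHealth`. A minimal Azure CLI sketch (the alert name, resource group, subscription ID, and action group path are all placeholders):

```bash
# Notify an action group whenever a Service Health event is logged for the subscription.
az monitor activity-log alert create \
  --name service-health-alert \
  --resource-group my-rg \
  --scope "/subscriptions/{subscription-id}" \
  --condition category=ServiceHealth \
  --action-group "/subscriptions/{subscription-id}/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/{action-group}"
```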
active-directory Velpic Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/velpic-provisioning-tutorial.md
The objective of this tutorial is to show you the steps you need to perform in V
The scenario outlined in this tutorial assumes that you already have the following items: * An Azure Active Directory tenant
-* A Velpic tenant with the [Enterprise plan](https://www.velpic.com/pricing.html) or better enabled
+* A Velpic tenant with the Enterprise plan or better enabled
* A user account in Velpic with Admin permissions ## Assigning users to Velpic
For more information on how to read the Azure AD provisioning logs, see [Reporti
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) drivers for Azure Files on Azure Ku
description: Learn how to use the Container Storage Interface (CSI) drivers for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/01/2021 Last updated : 06/15/2022 # Use Azure Files Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)
-The Azure Files Container Storage Interface (CSI) driver is a [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md)-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Files shares.
+The Azure Files Container Storage Interface (CSI) driver is a [CSI specification][csi-specification]-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Files shares.
-The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, AKS now can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes without having to touch the core Kubernetes code and wait for its release cycles.
+The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, AKS can now write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes. Using CSI drivers in AKS avoids having to touch the core Kubernetes code and wait for its release cycles.
-To create an AKS cluster with CSI driver support, see [Enable CSI drivers for Azure disks and Azure Files on AKS](csi-storage-drivers.md).
+To create an AKS cluster with CSI driver support, see [Enable CSI drivers on AKS][csi-drivers-overview].
> [!NOTE] > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
-## Azure File CSI driver new features
-Besides original in-tree driver features, Azure File CSI driver already provides following new features:
-- NFS 4.1
-- Private endpoint
-- support creating large mount of file shares in parallel
+## Azure Files CSI driver new features
+
+In addition to the original in-tree driver features, Azure Files CSI driver supports the following new features:
+
+- Network File System (NFS) version 4.1
+- [Private endpoint][private-endpoint-overview]
+- Creating large numbers of file shares in parallel
## Use a persistent volume with Azure Files
-A [persistent volume (PV)](concepts-storage.md#persistent-volumes) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or NFS protocol. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share](azure-files-volume.md).
+A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share][azure-files-pvc-manual].
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
A storage class is used to define how an Azure Files share is created. A storage
> [!NOTE] > Azure Files supports Azure Premium Storage. The minimum premium file share is 100 GB.
-When you use storage CSI drivers on AKS, there are two additional built-in `StorageClasses` that use the Azure Files CSI storage drivers. The additional CSI storage classes are created with the cluster alongside the in-tree default storage classes.
+When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that use the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
- `azurefile-csi`: Uses Azure Standard Storage to create an Azure Files share.
- `azurefile-csi-premium`: Uses Azure Premium Storage to create an Azure Files share.
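A quick way to confirm both classes are present on a cluster (assuming `kubectl` is already pointed at it):

```bash
# Both CSI file classes should be listed alongside the in-tree defaults.
kubectl get storageclass azurefile-csi azurefile-csi-premium
```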
-The reclaim policy on both storage classes ensures that the underlying Azure Files share is deleted when the respective PV is deleted. The storage classes also configure the file shares to be expandable, you just need to edit the persistent volume claim (PVC) with the new size.
+The reclaim policy on both storage classes ensures that the underlying Azure Files share is deleted when the respective PV is deleted. The storage classes also configure the file shares to be expandable; you just need to edit the [persistent volume claim][persistent-volume-claim-overview] (PVC) with the new size.
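For example, growing a share is a one-line edit to the claim. A sketch using `kubectl patch` (the claim name matches the `pvc-azurefile` example created later in this article):

```bash
# Raise the claim's storage request; the CSI driver expands the backing file share.
kubectl patch pvc pvc-azurefile --type merge \
  -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
```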
-To use these storage classes, create a [PVC](concepts-storage.md#persistent-volume-claims) and respective pod that references and uses them. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create an Azure Files share for the desired SKU and size. When you create a pod definition, the PVC is specified to request the desired storage.
+To use these storage classes, create a PVC and respective pod that references and uses them. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create an Azure Files share for the desired SKU and size. When you create a pod definition, the PVC is specified to request the desired storage.
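As an illustration, a minimal claim against the built-in `azurefile-csi` class could look like the following sketch (the claim name and size are illustrative, not required values):

```bash
# Apply an illustrative PVC that dynamically provisions an Azure Files share.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 100Gi
EOF
```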
-Create an [example PVC and pod that prints the current date into an `outfile`](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/statefulset.yaml) with the [kubectl apply][kubectl-apply] command:
+Create an [example PVC and pod that prints the current date into an `outfile`](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/statefulset.yaml) by running the [kubectl apply][kubectl-apply] commands:
-```console
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/pvc-azurefile-csi.yaml
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/nginx-pod-azurefile.yaml
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/pvc-azurefile-csi.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/nginx-pod-azurefile.yaml
+```
+The output of the commands resembles the following example:
+
+```bash
persistentvolumeclaim/pvc-azurefile created
pod/nginx-azurefile created
```

After the pod is in the running state, you can validate that the file share is correctly mounted by running the following command and verifying the output contains the `outfile`:
-```console
-$ kubectl exec nginx-azurefile -- ls -l /mnt/azurefile
+```bash
+kubectl exec nginx-azurefile -- ls -l /mnt/azurefile
+```
+
+The output of the command resembles the following example:
+```bash
total 29
-rwxrwxrwx 1 root root 29348 Aug 31 21:59 outfile
```
parameters:
  skuName: Standard_LRS
```
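For reference, a fuller sketch of what `azure-file-sc.yaml` might look like; the mount options shown are illustrative SMB defaults, not authoritative values:

```bash
# A hypothetical azure-file-sc.yaml; the mount options are illustrative.
cat <<'EOF' > azure-file-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile
provisioner: file.csi.azure.com
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
EOF
```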
-Create the storage class with the [kubectl apply][kubectl-apply] command:
+Create the storage class by running the [kubectl apply][kubectl-apply] command:
-```console
+```bash
kubectl apply -f azure-file-sc.yaml
+```
+
+The output of the command resembles the following example:
+```bash
storageclass.storage.k8s.io/my-azurefile created
```

The Azure Files CSI driver supports creating [snapshots of persistent volumes](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html) and the underlying file shares.

> [!NOTE]
-> This driver only supports snapshot creation, restore from snapshot is not supported by this driver, snapshot could be restored from Azure portal or CLI. To get the snapshot created, you can go to Azure Portal -> access the Storage Account -> File shares -> access the file share associated -> Snapshots. There you can click on it and restore.
+> This driver supports only snapshot creation; restore from a snapshot is not supported by the driver. Snapshots can be restored from the Azure portal or CLI. For more information about creating and restoring a snapshot, see [Overview of share snapshots for Azure Files][share-snapshots-overview].
Create a [volume snapshot class](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/snapshot/volumesnapshotclass-azurefile.yaml) with the [kubectl apply][kubectl-apply] command:
-```console
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/snapshot/volumesnapshotclass-azurefile.yaml
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/snapshot/volumesnapshotclass-azurefile.yaml
+```
+
+The output of the command resembles the following example:
+```bash
volumesnapshotclass.snapshot.storage.k8s.io/csi-azurefile-vsc created
```

Create a [volume snapshot](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/snapshot/volumesnapshot-azurefile.yaml) from the PVC [we dynamically created at the beginning of this tutorial](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes), `pvc-azurefile`.

```bash
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/snapshot/volumesnapshot-azurefile.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/snapshot/volumesnapshot-azurefile.yaml
+```
+The output of the command resembles the following example:
+
+```bash
volumesnapshot.snapshot.storage.k8s.io/azurefile-volume-snapshot created
```
-Verify the snapshot was created correctly:
+Verify the snapshot was created correctly by running the following command:
```bash
-$ kubectl describe volumesnapshot azurefile-volume-snapshot
+kubectl describe volumesnapshot azurefile-volume-snapshot
+```
+
+The output of the command resembles the following example:
+```bash
Name:         azurefile-volume-snapshot
Namespace:    default
Labels:       <none>
...
```
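For reference, hedged sketches of the snapshot class and the snapshot created above; the names mirror the command output shown earlier, and the remaining fields are illustrative:

```bash
# Hypothetical recreations of the snapshot objects used above; the names come
# from the command output shown earlier, the other fields are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-azurefile-vsc
driver: file.csi.azure.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: azurefile-volume-snapshot
spec:
  volumeSnapshotClassName: csi-azurefile-vsc
  source:
    persistentVolumeClaimName: pvc-azurefile
EOF
```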
You can request a larger volume for a PVC. Edit the PVC object, and specify a la
In AKS, the built-in `azurefile-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes). The PVC requested a 100Gi file share. We can confirm that by running:
-```console
-$ kubectl exec -it nginx-azurefile -- df -h /mnt/azurefile
+```bash
+kubectl exec -it nginx-azurefile -- df -h /mnt/azurefile
+```
+The output of the command resembles the following example:
+
+```bash
Filesystem                                                                                 Size  Used Avail Use% Mounted on
//f149b5a219bd34caeb07de9.file.core.windows.net/pvc-5e5d9980-da38-492b-8581-17e3cad01770   100G  128K  100G   1% /mnt/azurefile
```

Expand the PVC by increasing the `spec.resources.requests.storage` field:
-```console
-$ kubectl patch pvc pvc-azurefile --type merge --patch '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
+```bash
+kubectl patch pvc pvc-azurefile --type merge --patch '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
+```
+
+The output of the command resembles the following example:
+```bash
persistentvolumeclaim/pvc-azurefile patched
```

Verify that both the PVC and the file system inside the pod show the new size:
-```console
-$ kubectl get pvc pvc-azurefile
+```bash
+kubectl get pvc pvc-azurefile
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
pvc-azurefile   Bound    pvc-5e5d9980-da38-492b-8581-17e3cad01770   200Gi      RWX            azurefile-csi   64m
-$ kubectl exec -it nginx-azurefile -- df -h /mnt/azurefile
+kubectl exec -it nginx-azurefile -- df -h /mnt/azurefile
Filesystem                                                                                 Size  Used Avail Use% Mounted on
//f149b5a219bd34caeb07de9.file.core.windows.net/pvc-5e5d9980-da38-492b-8581-17e3cad01770   200G  128K  200G   1% /mnt/azurefile
```
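To keep a share off the public endpoint, a custom storage class can ask the driver to provision the share behind a private endpoint. A minimal sketch of what `private-azure-file-sc.yaml` might contain, assuming the driver's `networkEndpointType` parameter; all values are illustrative:

```bash
# A hypothetical storage class that asks the driver to create the share behind
# a private endpoint; the networkEndpointType parameter and values are assumptions.
cat <<'EOF' > private-azure-file-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: private-azurefile-csi
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  protocol: smb
  networkEndpointType: privateEndpoint
  skuName: Premium_LRS
EOF
```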
Create the storage class by using the [kubectl apply][kubectl-apply] command:
```console
kubectl apply -f private-azure-file-sc.yaml
+```
+
+The output of the command resembles the following example:
+```bash
storageclass.storage.k8s.io/private-azurefile-csi created
```
kubectl apply -f private-pvc.yaml
## NFS file shares
-[Azure Files supports the NFS v4.1 protocol](../storage/files/storage-files-how-to-create-nfs-shares.md). NFS 4.1 support for Azure Files provides you with a fully managed NFS file system as a service built on a highly available and highly durable distributed resilient storage platform.
+[Azure Files supports the NFS v4.1 protocol](../storage/files/storage-files-how-to-create-nfs-shares.md). NFS version 4.1 support for Azure Files provides you with a fully managed NFS file system as a service built on a highly available and highly durable distributed resilient storage platform.
This option is optimized for random access workloads with in-place data updates and provides full POSIX file system support. This section shows you how to use NFS shares with the Azure File CSI driver on an AKS cluster.
-> [!NOTE]
-> Make sure cluster `Control plane` identity(with name `AKS Cluster Name`) has `Contributor` permission on vnet resource group.
+### Prerequisites
+
+- Your AKS cluster's service principal or managed identity must be assigned the Contributor role on the storage account.
+- Your AKS cluster's *Control plane* identity (that is, your AKS cluster name) must be added to the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role in the resource group hosting the VNet, as shown in the sketch after this list.
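For example, a hedged sketch of granting that role with the Azure CLI; the principal ID, subscription ID, and resource group name are placeholders:

```bash
# Grant the cluster's control plane identity Contributor on the VNet resource group.
# All identifiers below are placeholders for your own values.
az role assignment create \
  --assignee <control-plane-identity-principal-id> \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<vnet-resource-group>"
```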
### Create NFS file share storage class
-Save a `nfs-sc.yaml` file with the manifest below editing the respective placeholders.
+Create a file named `nfs-sc.yaml` and copy the manifest below, editing the placeholders as needed.
```yml
apiVersion: storage.k8s.io/v1
...
mountOptions:
```
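For reference, a fuller sketch of `nfs-sc.yaml`, assuming the `file.csi.azure.com` provisioner and the `protocol: nfs` parameter; the mount option is illustrative:

```bash
# A hypothetical nfs-sc.yaml; the nconnect mount option is illustrative.
cat <<'EOF' > nfs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-nfs
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  protocol: nfs
mountOptions:
  - nconnect=4
EOF
```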
After editing and saving the file, create the storage class with the [kubectl apply][kubectl-apply] command:
-```console
-$ kubectl apply -f nfs-sc.yaml
+```bash
+kubectl apply -f nfs-sc.yaml
+```
+
+The output of the command resembles the following example:
+```bash
storageclass.storage.k8s.io/azurefile-csi-nfs created
```
You can deploy an example [stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/nfs/statefulset.yaml) that saves timestamps into a file `data.txt` by running the [kubectl apply][kubectl-apply] command:
-```console
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/nfs/statefulset.yaml
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/nfs/statefulset.yaml
+```
+The output of the command resembles the following example:
+
+```bash
statefulset.apps/statefulset-azurefile created
```
-Validate the contents of the volume by running:
+Validate the contents of the volume by running the following command:
-```console
-$ kubectl exec -it statefulset-azurefile-0 -- df -h
+```bash
+kubectl exec -it statefulset-azurefile-0 -- df -h
+```
+The output of the command resembles the following example:
+
+```bash
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sda1        29G   11G   19G  37% /etc/hosts
accountname.file.core.windows.net:/accountname/pvc-fa72ec43-ae64-42e4-a8a2-55660
```

> [!NOTE]
-> Note that since NFS file share is in Premium account, the minimum file share size is 100GB. If you create a PVC with a small storage size, you might encounter an error "failed to create file share ... size (5)...".
+> Because the NFS file share is in a Premium account, the minimum file share size is 100 GB. If you create a PVC with a smaller storage size, you might encounter an error similar to the following: *failed to create file share ... size (5)...*.
## Windows containers
-The Azure Files CSI driver also supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers quickstart](./learn/quick-windows-container-deploy-cli.md) to add a Windows node pool.
+The Azure Files CSI driver also supports Windows nodes and containers. To use Windows containers, follow the [Windows containers quickstart](./learn/quick-windows-container-deploy-cli.md) to add a Windows node pool.
-After you have a Windows node pool, use the built-in storage classes like `azurefile-csi` or create custom ones. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into a file `data.txt` by deploying the following command with the [kubectl apply][kubectl-apply] command:
+After you have a Windows node pool, use the built-in storage classes like `azurefile-csi` or create a custom one. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into a file `data.txt` by running the [kubectl apply][kubectl-apply] command:
-```console
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/windows/statefulset.yaml
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/windows/statefulset.yaml
+```
+
+The output of the command resembles the following example:
+```bash
statefulset.apps/busybox-azurefile created
```
-Validate the contents of the volume by running:
+Validate the contents of the volume by running the following [kubectl exec][kubectl-exec] command:
-```console
-$ kubectl exec -it busybox-azurefile-0 -- cat c:\\mnt\\azurefile\\data.txt # on Linux/MacOS Bash
-$ kubectl exec -it busybox-azurefile-0 -- cat c:\mnt\azurefile\data.txt # on Windows Powershell/CMD
+```bash
+kubectl exec -it busybox-azurefile-0 -- cat c:\\mnt\\azurefile\\data.txt # on Linux/MacOS Bash
+kubectl exec -it busybox-azurefile-0 -- cat c:\mnt\azurefile\data.txt # on Windows Powershell/CMD
+```
+The output of the commands resembles the following example:
+
+```bash
2020-08-27 22:11:01Z
2020-08-27 22:11:02Z
2020-08-27 22:11:04Z
```
$ kubectl exec -it busybox-azurefile-0 -- cat c:\mnt\azurefile\data.txt # on Win
[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
+[nfs-overview]:/windows-server/storage/nfs/nfs-overview
+[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
+[csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md
<!-- LINKS - internal -->
+[csi-drivers-overview]: csi-storage-drivers.md
+[persistent-volume-claim-overview]: concepts-storage.md#persistent-volume-claims
[azure-disk-volume]: azure-disk-volume.md
[azure-files-pvc]: azure-files-dynamic-pv.md
+[azure-files-pvc-manual]: azure-files-volume.md
[premium-storage]: ../virtual-machines/disks-types.md
[az-disk-list]: /cli/azure/disk#az_disk_list
[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
$ kubectl exec -it busybox-azurefile-0 -- cat c:\mnt\azurefile\data.txt # on Win
[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
[storage-skus]: ../storage/common/storage-redundancy.md
[use-tags]: use-tags.md
+[private-endpoint-overview]: ../private-link/private-endpoint-overview.md
+[persistent-volume]: concepts-storage.md#persistent-volumes
+[share-snapshots-overview]: ../storage/files/storage-snapshots-files.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
The following questions and answers apply to the **Azure CNI network configurati
* *Can I assign Pod subnets from a different VNet altogether?*
- The pod subnet should be from the same VNet as the cluster.
+ No, the pod subnet should be from the same VNet as the cluster.
* *Can some node pools in a cluster use the traditional CNI while others use the new CNI?*
aks Control Kubeconfig Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-kubeconfig-access.md
For enhanced security on access to AKS clusters, [integrate Azure Active Directo
<!-- LINKS - internal -->
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: /azure/aks/learn/quick-kubernetes-deploy-powershell
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[azure-rbac]: ../role-based-access-control/overview.md
For enhanced security on access to AKS clusters, [integrate Azure Active Directo
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
[az-role-assignment-delete]: /cli/azure/role/assignment#az_role_assignment_delete
[aad-integration]: ./azure-ad-integration-cli.md
-[az-ad-group-show]: /cli/azure/ad/group#az_ad_group_show
+[az-ad-group-show]: /cli/azure/ad/group#az_ad_group_show
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[az-provider-register]: /cli/azure/provider#az-provider-register
[sample-application]: ./quickstart-dapr.md
[k8s-version-support-policy]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy
-[arc-k8s-cluster]: /azure/azure-arc/kubernetes/quickstart-connect-cluster
+[arc-k8s-cluster]: ../azure-arc/kubernetes/quickstart-connect-cluster.md
[update-extension]: ./cluster-extensions.md#update-extension-instance
[install-cli]: /cli/azure/install-azure-cli
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[dapr-oss-support]: https://docs.dapr.io/operations/support/support-release-policy/
[dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions
[dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/
-[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
+[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
helm create azure-vote-front
Update *azure-vote-front/Chart.yaml* to add a dependency for the *redis* chart from the `https://charts.bitnami.com/bitnami` chart repository and update `appVersion` to `v1`. For example:
+> [!NOTE]
+> The container image versions shown in this guide have been tested to work with this example but may not be the latest version available.
```yml
apiVersion: v2
name: azure-vote-front
```
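A fuller sketch of the updated `Chart.yaml` follows; the chart metadata and the redis dependency version are illustrative, not authoritative:

```bash
# A hypothetical updated Chart.yaml; version numbers are illustrative.
cat <<'EOF' > azure-vote-front/Chart.yaml
apiVersion: v2
name: azure-vote-front
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: v1
dependencies:
  - name: redis
    version: 14.7.1
    repository: https://charts.bitnami.com/bitnami
EOF
```

After saving the file, running `helm dependency update azure-vote-front` pulls the redis chart into the chart's `charts/` directory.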
aks Servicemesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/servicemesh-about.md
Before you select a service mesh, ensure that you understand your requirements a
## Next steps
-As a next step, explore Open Service Mesh (OSM) on Azure Kubernetes Service (AKS):
+Open Service Mesh (OSM) is a supported service mesh that runs on Azure Kubernetes Service (AKS):
> [!div class="nextstepaction"] > [Learn more about OSM ...][osm-about]
-You can also explore the following service meshes on Azure Kubernetes Service (AKS) via the comprehensive project documentation available for each of them:
+There are also service meshes provided by open-source projects and third parties that are commonly used with AKS. These open-source and third-party service meshes are not covered by the [AKS support policy][aks-support-policy].
- [Istio][istio]
- [Linkerd][linkerd]
- [Consul Connect][consul]
-If you'd like to understand more about the service mesh landscape, the broader set of available service meshes, tooling, and compliance, then explore:
+For more details on the service mesh landscape, see [Layer 5's Service Mesh Landscape][service-mesh-landscape].
-- [Layer 5's Service Mesh Landscape][service-mesh-landscape]-
-You may also want to explore the various service mesh standardization efforts:
+For more details on service mesh standardization efforts, see:
- [Service Mesh Interface (SMI)][smi]
- [Service Mesh Federation][smf]
You may also want to explore the various service mesh standardization efforts:
<!-- LINKS - internal -->
[osm-about]: ./open-service-mesh-about.md
+[aks-support-policy]: support-policies.md
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
If the `ProvisioningState` shows `Starting` that means your cluster hasn't fully
<!-- LINKS - internal -->
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: /azure/aks/learn/quick-kubernetes-deploy-powershell
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
If the `ProvisioningState` shows `Starting` that means your cluster hasn't fully
[kubernetes-walkthrough-powershell]: kubernetes-walkthrough-powershell.md
[stop-azakscluster]: /powershell/module/az.aks/stop-azakscluster
[get-azakscluster]: /powershell/module/az.aks/get-azakscluster
-[start-azakscluster]: /powershell/module/az.aks/start-azakscluster
+[start-azakscluster]: /powershell/module/az.aks/start-azakscluster
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
kubectl apply -f load-balancer-service.yaml
If your service is using a dynamic or static public IP address, you can use the service annotation `service.beta.kubernetes.io/azure-dns-label-name` to set a public-facing DNS label. This publishes a fully qualified domain name for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so it's recommended to use a sufficiently qualified label.
-Azure will then automatically append a default subnet, such as `<location>.cloudapp.azure.com` (where location is the region you selected), to the name you provide, to create the fully qualified DNS name. For example:
+Azure will then automatically append a default suffix, such as `<location>.cloudapp.azure.com` (where location is the region you selected), to the name you provide, to create the fully qualified DNS name. For example:
```yaml
apiVersion: v1
```
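For reference, a minimal sketch of a `LoadBalancer` service that uses this annotation; the service name, selector, and DNS label value are illustrative:

```bash
# A hypothetical service manifest using the azure-dns-label-name annotation;
# the names and the label value are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: my-unique-service-label
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: azure-vote-front
EOF
```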
For additional control over the network traffic to your applications, you may wa
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli
-[ip-sku]: ../virtual-network/ip-services/public-ip-addresses.md#sku
+[ip-sku]: ../virtual-network/ip-services/public-ip-addresses.md#sku
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
AKS uses several managed identities for built-in services and add-ons.
## Create an AKS cluster using a managed identity

> [!NOTE]
-> AKS will create a system-assigned kubelet identity in the Node resource group if you do not [specify your own kubelet managed identity](#Use a pre-created kubelet managed identity).
+> AKS will create a system-assigned kubelet identity in the Node resource group if you do not [specify your own kubelet managed identity][Use a pre-created kubelet managed identity].
You can create an AKS cluster using a system-assigned managed identity by running the following CLI command.
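A hedged sketch of that command, using placeholder resource group and cluster names:

```bash
# Create an AKS cluster with a system-assigned managed identity;
# the resource group and cluster names are placeholders.
az aks create \
  --resource-group myResourceGroup \
  --name myManagedCluster \
  --enable-managed-identity \
  --generate-ssh-keys
```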
Use [Azure Resource Manager templates ][aks-arm-template] to create a managed id
[aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters

<!-- LINKS - internal -->
+[install-azure-cli]: /cli/azure/install-azure-cli
[az-identity-create]: /cli/azure/identity#az_identity_create
[az-identity-list]: /cli/azure/identity#az_identity_list
[az-feature-list]: /cli/azure/feature#az_feature_list
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
Use the `get-authorization-context` policy to get the authorization context of a
The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
-If `identity-type=jwt` is configured, a JWT token is required to be validated. The audience of this token must be https://azure-api.net/authorization-manager.
+If `identity-type=jwt` is configured, a JWT token is required to be validated. The audience of this token must be `https://azure-api.net/authorization-manager`.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
This policy can be used in the following policy [sections](./api-management-howt
## <a name="log-to-eventhub"></a> Log to event hub
-The `log-to-eventhub` policy sends messages in the specified format to an event hub defined by a Logger entity. As its name implies, the policy is used for saving selected request or response context information for online or offline analysis.
+The `log-to-eventhub` policy sends messages in the specified format to an event hub defined by a Logger entity. As its name implies, the policy is used for saving selected request or response context information for online or offline analysis.
+The policy is not affected by Application Insights sampling. All invocations of the policy will be logged.
> [!NOTE]
> For a step-by-step guide on configuring an event hub and logging events, see [How to log API Management events with Azure Event Hubs](./api-management-howto-log-event-hubs.md).
The `trace` policy adds a custom trace into the API Inspector output, Applicatio
- The policy adds a custom trace to the [API Inspector](./api-management-howto-api-inspector.md) output when tracing is triggered, i.e. `Ocp-Apim-Trace` request header is present and set to true and `Ocp-Apim-Subscription-Key` request header is present and holds a valid key that allows tracing.
- The policy creates a [Trace](../azure-monitor/app/data-model-trace-telemetry.md) telemetry in Application Insights, when [Application Insights integration](./api-management-howto-app-insights.md) is enabled and the `severity` specified in the policy is equal to or greater than the `verbosity` specified in the diagnostic setting.
- The policy adds a property in the log entry when [Resource Logs](./api-management-howto-use-azure-monitor.md#activity-logs) is enabled and the severity level specified in the policy is at or higher than the verbosity level specified in the diagnostic setting.
+- The policy is not affected by Application Insights sampling. All invocations of the policy will be logged.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
By combining API Management provisioned in an internal virtual network with the
For architectural guidance, see:

* **Basic enterprise integration**: [Reference architecture](/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
-* **API Management landing zone accelerator**: [Reference architecture](/azure/architecture/example-scenario/integration/app-gateway-internal-api-management-function?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) and [design guidance](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/land?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
+* **API Management landing zone accelerator**: [Reference architecture](/azure/architecture/example-scenario/integration/app-gateway-internal-api-management-function?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) and [design guidance](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
> [!NOTE]
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md
This article provides a reference for API Management policies used to transform
## <a name="TransformationPolicies"></a> Transformation policies -- [Convert JSON to XML](api-management-transformation-policies.md#ConvertJSONtoXML) - Converts request or response body from JSON to XML.
+- [Convert JSON to XML](#ConvertJSONtoXML) - Converts request or response body from JSON to XML.
-- [Convert XML to JSON](api-management-transformation-policies.md#ConvertXMLtoJSON) - Converts request or response body from XML to JSON.
+- [Convert XML to JSON](#ConvertXMLtoJSON) - Converts request or response body from XML to JSON.
-- [Find and replace string in body](api-management-transformation-policies.md#Findandreplacestringinbody) - Finds a request or response substring and replaces it with a different substring.
+- [Find and replace string in body](#Findandreplacestringinbody) - Finds a request or response substring and replaces it with a different substring.
-- [Mask URLs in content](api-management-transformation-policies.md#MaskURLSContent) - Re-writes (masks) links in the response body so that they point to the equivalent link via the gateway.
+- [Mask URLs in content](#MaskURLSContent) - Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway.
-- [Set backend service](api-management-transformation-policies.md#SetBackendService) - Changes the backend service for an incoming request.
+- [Set backend service](#SetBackendService) - Changes the backend service for an incoming request.
-- [Set body](api-management-transformation-policies.md#SetBody) - Sets the message body for incoming and outgoing requests.
+- [Set body](#SetBody) - Sets the message body for incoming and outgoing requests.
-- [Set HTTP header](api-management-transformation-policies.md#SetHTTPheader) - Assigns a value to an existing response and/or request header or adds a new response and/or request header.
+- [Set HTTP header](#SetHTTPheader) - Assigns a value to an existing response and/or request header or adds a new response and/or request header.
-- [Set query string parameter](api-management-transformation-policies.md#SetQueryStringParameter) - Adds, replaces value of, or deletes request query string parameter.
+- [Set query string parameter](#SetQueryStringParameter) - Adds, replaces value of, or deletes request query string parameter.
-- [Rewrite URL](api-management-transformation-policies.md#RewriteURL) - Converts a request URL from its public form to the form expected by the web service.
+- [Rewrite URL](#RewriteURL) - Converts a request URL from its public form to the form expected by the web service.
-- [Transform XML using an XSLT](api-management-transformation-policies.md#XSLTransform) - Applies an XSL transformation to XML in the request or response body.
+- [Transform XML using an XSLT](#XSLTransform) - Applies an XSL transformation to XML in the request or response body.
## <a name="ConvertJSONtoXML"></a> Convert JSON to XML The `json-to-xml` policy converts a request or response body from JSON to XML.
This article provides a reference for API Management policies used to transform
- **Policy scopes:** all scopes

## <a name="MaskURLSContent"></a> Mask URLs in content
- The `redirect-content-urls` policy re-writes (masks) links in the response body so that they point to the equivalent link via the gateway. Use in the outbound section to re-write response body links to make them point to the gateway. Use in the inbound section for an opposite effect.
+ The `redirect-content-urls` policy rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. Use in the outbound section to rewrite response body links to make them point to the gateway. Use in the inbound section for an opposite effect.
> [!NOTE]
> This policy does not change any header values such as `Location` headers. To change header values, use the [set-header](api-management-transformation-policies.md#SetHTTPheader) policy.
Initially the backend service base URL is derived from the API settings. So the
When the [<choose\>](api-management-advanced-policies.md#choose) policy statement is applied the backend service base URL may change again either to `http://contoso.com/api/8.2` or `http://contoso.com/api/9.1`, depending on the value of the version request query parameter. For example, if the value is `"2013-15"` the final request URL becomes `http://contoso.com/api/8.2/partners/15?version=2013-05&subscription-key=abcdef`.
-If further transformation of the request is desired, other [Transformation policies](api-management-transformation-policies.md#TransformationPolicies) can be used. For example, to remove the version query parameter now that the request is being routed to a version specific backend, the [Set query string parameter](api-management-transformation-policies.md#SetQueryStringParameter) policy can be used to remove the now redundant version attribute.
+If further transformation of the request is desired, other [Transformation policies](api-management-transformation-policies.md#TransformationPolicies) can be used. For example, to remove the version query parameter now that the request is being routed to a version specific backend, the [Set query string parameter](api-management-transformation-policies.md#SetQueryStringParameter) policy can be used to remove the now redundant version attribute.
### Example
In this example the policy routes the request to a service fabric backend, using
|sf-replica-type|Only applicable when the backend is a Service Fabric service and is specified using 'backend-id'. Controls if the request should go to the primary or secondary replica of a partition. |No|N/A|
|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution.|No|N/A|
|sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. |No|N/A|
-|sf-listener-name|Only applicable when the backend is a Service Fabric service and is specified using 'backend-id'. Service Fabric Reliable Services allows you to create multiple listeners in a service. This attribute is used to select a specific listener when a backend Reliable Service has more than one listener. If this attribute is not specified, API Management will attempt to use a listener without a name. A listener without a name is typical for Reliable Services that have only one listener. |No|N/A|
+|sf-listener-name|Only applicable when the backend is a Service Fabric service and is specified using 'backend-id'. Service Fabric Reliable Services allows you to create multiple listeners in a service. This attribute is used to select a specific listener when a backend Reliable Service has more than one listener. If this attribute isn't specified, API Management will attempt to use a listener without a name. A listener without a name is typical for Reliable Services that have only one listener. |No|N/A|
### Usage

This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
In this example the policy routes the request to a service fabric backend, using
<set-body>Hello world!</set-body>
```
-#### Example accessing the body as a string. Note that we are preserving the original request body so that we can access it later in the pipeline.
+#### Example accessing the body as a string
+
+We are preserving the original request body so that we can access it later in the pipeline.
```xml
<set-body>
In this example the policy routes the request to a service fabric backend, using
</set-body>
```
-#### Example accessing the body as a JObject. Note that since we are not reserving the original request body, accessing it later in the pipeline will result in an exception.
+#### Example accessing the body as a JObject
+
+Since we are not preserving the original request body, accessing it later in the pipeline will result in an exception.
```xml
<set-body>
In this example the policy routes the request to a service fabric backend, using
```

### Using Liquid templates with set body
-The `set-body` policy can be configured to use the [Liquid](https://shopify.github.io/liquid/basics/introduction/) templating language to transform the body of a request or response. This can be very effective if you need to completely reshape the format of your message.
+The `set-body` policy can be configured to use the [Liquid](https://shopify.github.io/liquid/basics/introduction/) templating language to transform the body of a request or response. This can be effective if you need to completely reshape the format of your message.
> [!IMPORTANT]
> The implementation of Liquid used in the `set-body` policy is configured in 'C# mode'. This is particularly important when doing things such as filtering. As an example, using a date filter requires the use of Pascal casing and C# date formatting e.g.:
The `set-body` policy can be configured to use the [Liquid](https://shopify.gith
> [!IMPORTANT]
> In order to correctly bind to an XML body using the Liquid template, use a `set-header` policy to set Content-Type to either application/xml, text/xml (or any type ending with +xml); for a JSON body, it must be application/json, text/json (or any type ending with +json).
+#### Supported Liquid filters
+
+The following Liquid filters are supported in the `set-body` policy. For filter examples, see the [Liquid documentation](https://shopify.github.io/liquid/).
+
+> [!NOTE]
+> The policy requires Pascal casing for Liquid filter names (for example, "AtLeast" instead of "at_least").
+>
+* Abs
+* Append
+* AtLeast
+* AtMost
+* Capitalize
+* Compact
+* Currency
+* Date
+* Default
+* DividedBy
+* Downcase
+* Escape
+* First
+* H
+* Join
+* Last
+* Lstrip
+* Map
+* Minus
+* Modulo
+* NewlineToBr
+* Plus
+* Prepend
+* Remove
+* RemoveFirst
+* Replace
+* ReplaceFirst
+* Round
+* Rstrip
+* Size
+* Slice
+* Sort
+* Split
+* Strip
+* StripHtml
+* StripNewlines
+* Times
+* Truncate
+* TruncateWords
+* Uniq
+* Upcase
+* UrlDecode
+* UrlEncode
#### Convert JSON to SOAP using a Liquid template

```xml
<set-body template="liquid">
api-management Authorizations How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to.md
Four steps are needed to set up an authorization with the authorization code gra
|Setting |Value |
|---|---|
|**Display name** | *github* |
- |**Web service URL** | https://api.github.com/users/ |
+ |**Web service URL** | https://api.github.com/users |
|**API URL suffix** | *github* |

2. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
Four steps are needed to set up an authorization with the authorization code gra
## Next steps
-Learn more about [access restriction policies](api-management-access-restriction-policies.md).
+Learn more about [access restriction policies](api-management-access-restriction-policies.md).
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
For details, tools, and code samples to implement the DevOps approach described
For architectural guidance, see:
-* **API Management landing zone accelerator**: [Reference architecture](/azure/architecture/example-scenario/integration/app-gateway-internal-api-management-function?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) and [design guidance](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/land?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
+* **API Management landing zone accelerator**: [Reference architecture](/azure/architecture/example-scenario/integration/app-gateway-internal-api-management-function?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) and [design guidance](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
## The problem
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
More information about this threat: [API6:2019 Mass assignment](https://github.c
### Recommendations
-* External API interfaces should be decoupled from the internal data implementation. Avoid binding API contracts directly to data contracts in backend services. Review the API design frequently, and deprecate and remove legacy properties using [versioning](/api-management-versions.md) in API Management.
+* External API interfaces should be decoupled from the internal data implementation. Avoid binding API contracts directly to data contracts in backend services. Review the API design frequently, and deprecate and remove legacy properties using [versioning](api-management-versions.md) in API Management.
* Precisely define XML and JSON contracts in the API schema and use [validate content](validation-policies.md#validate-content) and [validate parameters](validation-policies.md#validate-parameters) policies to block requests and responses with undocumented properties. Blocking requests with undocumented properties mitigates attacks, while blocking responses with undocumented properties makes it harder to reverse-engineer potential attack vectors.
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
ms.devlang: azurecli
# Configure an App Service app
-This article explains how to configure common settings for web apps, mobile back end, or API app.
+This article explains how to configure common settings for web apps, mobile back end, or API app. For Azure Functions, see [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md).
## Configure app settings
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
Azure CLI has a command [`az webapp up`](/cli/azure/webapp#az_webapp_up) that wi
In the terminal, deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az_webapp_up) command:

```azurecli
-az webapp up --runtime "php|8.0" --os-type=linux
+az webapp up --runtime "PHP:8.0" --os-type=linux
```

- If the `az` command isn't recognized, be sure you have <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> installed.
Browse to the deployed application in your web browser at the URL `http://<app-n
1. Save your changes, then redeploy the app using the [az webapp up](/cli/azure/webapp#az-webapp-up) command again with these arguments:

    ```azurecli
- az webapp up --runtime "php|8.0" --os-type=linux
+ az webapp up --runtime "PHP:8.0" --os-type=linux
    ```

1. Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page.
app-service Quickstart Python Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python-portal.md
Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
## Next steps

> [!div class="nextstepaction"]
-> [Tutorial: Python (Django) web app with PostgreSQL](/azure/app-service/tutorial-python-postgresql-app)
+> [Tutorial: Python (Django) web app with PostgreSQL](./tutorial-python-postgresql-app.md)
> [!div class="nextstepaction"] > [Configure Python app](configure-language-python.md)
Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
> [Add user sign-in to a Python web app](../active-directory/develop/quickstart-v2-python-webapp.md)

> [!div class="nextstepaction"]
-> [Tutorial: Run Python app in custom container](tutorial-custom-container.md)
+> [Tutorial: Run Python app in custom container](tutorial-custom-container.md)
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
APACHE_RUN_GROUP | RUN sed -i 's!User ${APACHE_RUN_GROUP}!Group www-data!g' /etc
|-|-|-|
| `WEBSITE_DNS_SERVER` | IP address of primary DNS server for outgoing connections (such as to a back-end service). The default DNS server for App Service is Azure DNS, whose IP address is `168.63.129.16`. If your app uses [VNet integration](./overview-vnet-integration.md) or is in an [App Service environment](environment/intro.md), it inherits the DNS server configuration from the VNet by default. | `10.0.0.1` |
| `WEBSITE_DNS_ALT_SERVER` | IP address of fallback DNS server for outgoing connections. See `WEBSITE_DNS_SERVER`. | |
+| `WEBSITE_ENABLE_DNS_CACHE` | Allows successful DNS resolutions to be cached. By default, expired DNS cache entries are flushed and, in addition, the entire cache is flushed every 4.5 minutes. | |
<!-- DOMAIN_OWNERSHIP_VERIFICATION_IDENTIFIERS
applied-ai-services How To Cache Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-cache-token.md
This article demonstrates how to cache the authentication token in order to impr
Import the **Microsoft.IdentityModel.Clients.ActiveDirectory** NuGet package, which is used to acquire a token. Next, use the following code to acquire an `AuthenticationResult`, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).

> [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](../../active-directory/develop/msal-migration.md) for more details.
```csharp
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
> [!NOTE]
> It is **not** possible to use the Automation Account's User Managed Identity on a Hybrid Runbook Worker; it must be the Automation Account's System Managed Identity.
-2. Use the VM Managed Identity for both the Azure VM or Arc-enabled server running as a Hybrid Runbook Worker.
- Here, you can use either the **VMΓÇÖs User-assigned Managed Identity** or the **VMΓÇÖs System-assigned Managed Identity**.
+2. For an Azure VM running as a Hybrid Runbook Worker, use the **VM Managed Identity**. In this case, you can use either the VM's User-assigned Managed Identity **OR** the VM's System-assigned Managed Identity.
> [!NOTE]
- > This will **Not** work in an Automation Account which has been configured with an Automation account Managed Identity. As soon as the Automation account Managed Identity is enabled, you can't use the VM Managed Identity. The only available option is to use the Automation Account **System-Assigned Managed Identity** as mentioned in option 1.
-
- **To use a VM's system-assigned managed identity**:
+ > This will **NOT** work in an Automation Account which has been configured with an Automation account Managed Identity. As soon as the Automation account Managed Identity is enabled, it is no longer possible to use the VM Managed Identity; only the Automation Account System-Assigned Managed Identity can be used, as mentioned in option 1 above.
+ Use any **one** of the following managed identities:
+
+ # [VM's system-assigned managed identity](#tab/sa-mi)
+
1. [Configure](/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm) a System Managed Identity for the VM.
1. Grant this identity the [required permissions](/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm#grant-your-vm-access-to-a-resource-group-in-resource-manager) within the subscription to perform its tasks.
1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As Account and perform the associated account management.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
# Get all VM names from the subscription
Get-AzVM -DefaultProfile $AzureContext | Select Name
```
+
+ # [VM's user-assigned managed identity](#tab/ua-mi)
- **To use a VM's user-assigned managed identity**:
1. [Configure](/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#user-assigned-managed-identity) a User Managed Identity for the VM.
1. Grant this identity the [required permissions](/active-directory/managed-identities-azure-resources/howto-assign-access-portal) within the Subscription to perform its tasks.
1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` and `AccountID` parameters to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
# Get all VM names from the subscription
Get-AzVM -DefaultProfile $AzureContext | Select Name
```
+
> [!NOTE]
> You can find the client Id of the user-assigned managed identity in the Azure portal.
> :::image type="content" source="./media/automation-hrw-run-runbooks/managed-identities-client-id-inline.png" alt-text="Screenshot of client id in Managed Identities." lightbox="./media/automation-hrw-run-runbooks/managed-identities-client-id-expanded.png":::
+**An Arc-enabled server running as a Hybrid Runbook Worker** already has a built-in System Managed Identity assigned to it which can be used for authentication.
+
+1. You can grant this Managed Identity access to resources in your subscription in the Access control (IAM) blade for the resource by adding the appropriate role assignment.
+
+ :::image type="content" source="./media/automation-hrw-run-runbooks/access-control-add-role-assignment.png" alt-text="Screenshot of how to select managed identities.":::
+
+2. Add the Azure Arc Managed Identity to your chosen role as required.
+
+ :::image type="content" source="./media/automation-hrw-run-runbooks/select-managed-identities-inline.png" alt-text="Screenshot of how to add role assignment in the Access control blade." lightbox="./media/automation-hrw-run-runbooks/select-managed-identities-expanded.png":::
+
+> [!NOTE]
+> This will **NOT** work in an Automation Account which has been configured with an Automation account Managed Identity. As soon as the Automation account Managed Identity is enabled, it is no longer possible to use the Arc Managed Identity; **only** the Automation Account System-Assigned Managed Identity can be used, as mentioned in option 1 above.
>[!NOTE]
>By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence?view=azps-7.3.2&preserve-view=true).
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation
**Azure Arc-enabled VMs**

```powershell
-New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 0.1 -Settings $settings -NoWait
+New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 0.1 -Setting $settings -NoWait
```

# [Linux](#tab/linux)
Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation
**Azure Arc-enabled VMs**

```powershell
-New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForLinux -TypeHandlerVersion 0.1 -Settings $settings -NoWait
+New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForLinux -TypeHandlerVersion 0.1 -Setting $settings -NoWait
```
To create a hybrid worker group in the Azure portal, follow these steps:
1. Select **Next** to advance to the **Hybrid workers** tab. You can select Azure virtual machines or Azure Arc-enabled servers to be added to this Hybrid worker group. If you don't select any machines, an empty Hybrid worker group will be created. You can still add machines later.
- :::image type="content" source="./media/extension-based-hybrid-runbook-worker-install/basics-tab-portal.png" alt-text="Screenshot showing to entering name and credentials in basics tab.":::
+ :::image type="content" source="./media/extension-based-hybrid-runbook-worker-install/basics-tab-portal.png" alt-text="Screenshot showing to enter name and credentials in basics tab.":::
1. Select **Add machines** to go to the **Add machines as hybrid worker** page. You'll only see machines that aren't part of any other hybrid worker group.
automation Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/managed-identity.md
This article discusses solutions to problems that you might encounter when you use a managed identity with your Automation account. For general information about using managed identity with Automation accounts, see [Azure Automation account authentication overview](../automation-security-overview.md#managed-identities).
+## Scenario: Runbook fails with "this.Client.SubscriptionId cannot be null." error message
+
+### Issue
+
+Your runbook, which uses a managed identity (`Connect-AzAccount -Identity`) to manage Azure objects, fails and logs the following error: `this.Client.SubscriptionId cannot be null.`
+
+```error
+get-azvm : 'this.Client.SubscriptionId' cannot be null.
+At line:5 char:1
++ get-azvm
++ ~~~~~~~~
+    + CategoryInfo          : CloseError: (:) [Get-AzVM], ValidationException
+    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Compute.GetAzureVMCommand
+```
+
+### Cause
+
+This can happen when the Managed Identity (or other account used in the runbook) has not been granted any permissions to access the subscription.
+
+### Resolution
+Grant the Managed Identity (or other account used in the runbook) an appropriate role membership in the subscription. [Learn more](../enable-managed-identity-for-automation.md#assign-role-to-a-system-assigned-managed-identity)
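For example, a hedged sketch of assigning a role at subscription scope with the Azure CLI; the principal ID and subscription ID are placeholders, and a more narrowly scoped role may be preferable:

```bash
# Assign a role to the identity at subscription scope; values are placeholders.
az role assignment create \
  --assignee <managed-identity-principal-id> \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>"
```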
## Scenario: Fail to get MSI token for account

### Issue
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
When you receive errors during runbook execution in Azure Automation, you can us
If you're running your runbooks on a Hybrid Runbook Worker instead of in Azure Automation, you might need to [troubleshoot the hybrid worker itself](hybrid-runbook-worker.md).
+## <a name="runbook-fails-no-permission"></a>Scenario: Runbook fails with "this.Client.SubscriptionId cannot be null." error message
+
+### Issue
+
+Your runbook, which uses a managed identity (`Connect-AzAccount -Identity`) to manage Azure objects, fails and logs the following error: `this.Client.SubscriptionId cannot be null.`
+
+```error
+get-azvm : 'this.Client.SubscriptionId' cannot be null.
+At line:5 char:1
++ get-azvm
++ ~~~~~~~~
+    + CategoryInfo          : CloseError: (:) [Get-AzVM], ValidationException
+    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Compute.GetAzureVMCommand
+```
+
+### Cause
+
+This can happen when the Managed Identity (or other account used in the runbook) has not been granted any permissions to access the subscription.
+
+### Resolution
+Grant the Managed Identity (or other account used in the runbook) an appropriate role membership in the subscription. [Learn more](../enable-managed-identity-for-automation.md#assign-role-to-a-system-assigned-managed-identity)
## Scenario: Access blocked to Azure Storage, or Azure Key Vault, or Azure SQL

This scenario uses [Azure Storage](../../storage/common/storage-network-security.md) as an example; however, the information is equally applicable to [Azure Key Vault](../../key-vault/general/network-security.md) and [Azure SQL](/azure/azure-sql/database/firewall-configure).
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
For more information about resolving common issues, see the open source troubles
## Next steps

- Want to try things out? Get started quickly with an [Azure Arc Jumpstart scenario](https://aka.ms/arc-jumpstart-akv-secrets-provider) using Cluster API.
-- Learn more about [Azure Key Vault](/azure/key-vault/general/overview).
+- Learn more about [Azure Key Vault](../../key-vault/general/overview.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Azure Arc-enabled System Center Virtual Machine Manager allows you to manage you
Arc-enabled System Center VMM allows you to:

- Perform various VM lifecycle operations, such as start, stop, pause, and delete, on VMM managed VMs directly from Azure.
-- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview).
+- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control (RBAC)](../../role-based-access-control/overview.md).
- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you a single pane view for your infrastructure across both environments.
- Discover and onboard existing SCVMM managed VMs to Azure.

## How does it work?
-To Arc-enable a System Center VMM management server, deploy [Azure Arc resource bridge](/azure/azure-arc/resource-bridge/overview) (preview) in the VMM environment. Arc resource bridge is a virtual appliance that connects VMM management server to Azure. Azure Arc resource bridge (preview) enables you to represent the SCVMM resources (clouds, VMs, templates etc.) in Azure and do various operations on them.
+To Arc-enable a System Center VMM management server, deploy [Azure Arc resource bridge](../resource-bridge/overview.md) (preview) in the VMM environment. Arc resource bridge is a virtual appliance that connects the VMM management server to Azure. Azure Arc resource bridge (preview) enables you to represent the SCVMM resources (clouds, VMs, templates, and so on) in Azure and perform various operations on them.
## Architecture
Azure Arc-enabled SCVMM (preview) is currently supported in the following region
## Next steps
-[See how to create a Azure Arc VM](create-virtual-machine.md)
+[See how to create an Azure Arc VM](create-virtual-machine.md)
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Persistence writes Redis data into an Azure Storage account that you own and man
## Set up data persistence
-1. To create a premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. You can create caches in the Azure portal. Y You can also create them using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache).
+1. To create a premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. You can also create caches using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache).
- :::image type="content" source="media/cache-private-link/1-create-resource.png" alt-text="Create resource.":::
+ :::image type="content" source="media/cache-how-to-premium-persistence/create-resource.png" alt-text="Screenshot that shows a form to create an Azure Cache for Redis resource.":::
2. On the **Create a resource** page, select **Databases** and then select **Azure Cache for Redis**.
- :::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis.":::
+ :::image type="content" source="media/cache-how-to-premium-persistence/select-cache.png" alt-text="Screenshot showing Azure Cache for Redis selected as a new database type.":::
3. On the **New Redis Cache** page, configure the settings for your new premium cache.
Persistence writes Redis data into an Azure Storage account that you own and man
4. Select the **Networking** tab or select the **Networking** button at the bottom of the page.
-5. In the **Networking** tab, select your connectivity method. For premium cache instances, you connect either publicly, via Public IP addresses or service endpoints. You connect privately using a private endpoint.
+5. In the **Networking** tab, select your connectivity method. For premium cache instances, you can connect publicly, via public IP addresses or service endpoints, or privately, by using a private endpoint.
6. Select the **Next: Advanced** tab or select the **Next: Advanced** button on the bottom of the page.
All RDB persistence backups, except for the most recent one, are automatically d
### When should I use a second storage account?
-Use a second storage account for AOF persistence when you believe you have higher than expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits.
+Use a second storage account for AOF persistence when you believe you have higher-than-expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits.
### Does AOF persistence affect throughput, latency, or performance of my cache?
azure-functions Analyze Telemetry Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/analyze-telemetry-data.md
Azure Functions integrates with Application Insights to better enable you to mon
By default, the data collected from your function app is stored in Application Insights. In the [Azure portal](https://portal.azure.com), Application Insights provides an extensive set of visualizations of your telemetry data. You can drill into error logs and query events and metrics. This article provides basic examples of how to view and query your collected data. To learn more about exploring your function app data in Application Insights, see [What is Application Insights?](../azure-monitor/app/app-insights-overview.md).
+To be able to view Application Insights data from a function app, you must have at least Contributor role permissions on the function app. You also need the [Monitoring Reader permission](../azure-monitor/roles-permissions-security.md#monitoring-reader) on the Application Insights instance. You have these permissions by default for any function app and Application Insights instance that you create.
+To learn more about data retention and potential storage costs, see [Data collection, retention, and storage in Application Insights](../azure-monitor/app/data-retention-privacy.md).

## Viewing telemetry in Monitor tab
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
The following sample *host.json* file for version 2.x+ has all possible options
"managedDependency": { "enabled": true },
- "retry": {
- "strategy": "fixedDelay",
- "maxRetryCount": 5,
- "delayInterval": "00:00:05"
- },
"singleton": { "lockPeriod": "00:00:15", "listenerLockPeriod": "00:01:00",
Managed dependency is a feature that is currently only supported with PowerShell
Configuration settings can be found in [Storage queue triggers and bindings](functions-bindings-storage-queue.md#host-json).
-## retry
-
-Controls the [retry policy](./functions-bindings-error-pages.md#retry-policies) options for all executions in the app.
-
-```json
-{
- "retry": {
- "strategy": "fixedDelay",
- "maxRetryCount": 2,
- "delayInterval": "00:00:03"
- }
-}
-```
-
-|Property |Default | Description |
-||||
-|strategy|null|Required. The retry strategy to use. Valid values are `fixedDelay` or `exponentialBackoff`.|
-|maxRetryCount|null|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|delayInterval|null|The delay that's used between retries with a `fixedDelay` strategy.|
-|minimumInterval|null|The minimum retry delay when using `exponentialBackoff` strategy.|
-|maximumInterval|null|The maximum retry delay when using `exponentialBackoff` strategy.|
-
## sendGrid

Configuration settings can be found in [SendGrid triggers and bindings](functions-bindings-sendgrid.md#host-json).
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
As with any application or service, the goal is run your function app with the l
Functions supports built-in [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Azure roles supported by Functions are [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Owner](../role-based-access-control/built-in-roles.md#owner), and [Reader](../role-based-access-control/built-in-roles.md#reader).
-Permissions are effective at the function app level. The Contributor role is required to perform most function app-level tasks. Only the Owner role can delete a function app.
+Permissions are effective at the function app level. The Contributor role is required to perform most function app-level tasks. You also need the Contributor role along with the [Monitoring Reader permission](../azure-monitor/roles-permissions-security.md#monitoring-reader) to be able to view log data in Application Insights. Only the Owner role can delete a function app.
#### Organize functions by privilege
azure-functions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/manage.md
## Azure dashboard
-Start/Stop VMs v2 includes a [dashboard](../../azure-monitor/best-practices-analysis.md#azure-dashboards) to help you understand the management scope and recent operations against your VMs. It is a quick and easy way to verify the status of each operation that's performed on your Azure VMs. The visualization in each tile is based on a Log query and to see the query, select the **Open in logs blade** option in the right-hand corner of the tile. This opens the [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md#starting-log-analytics) tool in the Azure portal, and from here you can evaluate the query and modify to support your needs, such as custom [log alerts](../../azure-monitor/alerts/alerts-log.md), a custom [workbook](../../azure-monitor/visualize/workbooks-overview.md), etc.
+Start/Stop VMs v2 includes a [dashboard](../../azure-monitor/best-practices-analysis.md#azure-dashboards) to help you understand the management scope and recent operations against your VMs. It's a quick and easy way to verify the status of each operation that's performed on your Azure VMs. The visualization in each tile is based on a log query. To see the query, select the **Open in logs blade** option in the right-hand corner of the tile. This opens the [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md#start-log-analytics) tool in the Azure portal, where you can evaluate the query and modify it to support your needs, such as custom [log alerts](../../azure-monitor/alerts/alerts-log.md), a custom [workbook](../../azure-monitor/visualize/workbooks-overview.md), and so on.
The log data each tile in the dashboard displays is refreshed every hour, with a manual refresh option on demand by clicking the **Refresh** icon on a given visualization, or by refreshing the full dashboard.
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
New-AzMapsAccount -ResourceGroupName your-Resource-Group -Name name-of-maps-acco
### Use Azure CLI to create an Azure Maps account with a global region
-The Azure CLI command [az maps account create](/cli/azure/maps/account?view=azure-cli-latest#az-maps-account-create) doesn't have a location property, but defaults to "global", making it useful for creating an Azure Maps account with a global region setting for use with the Geofence API async event.
+The Azure CLI command [az maps account create](/cli/azure/maps/account?view=azure-cli-latest&preserve-view=true#az-maps-account-create) doesn't have a location property, but defaults to "global", making it useful for creating an Azure Maps account with a global region setting for use with the Geofence API async event.
## Upload geofencing GeoJSON data
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
2. On your virtual machine, verify the existence of the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.latest.xml`. If this file doesn't exist:
   - The virtual machine may not be associated with a DCR. See step 3.
   - The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable it.
- - IMDS service is not running/accessible from the virtual machine. [Check if you can access IMDS from the machine](/azure/virtual-machines/windows/instance-metadata-service?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'.
+ - IMDS service is not running/accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'.
   - AMA cannot access IMDS. Check if you see IMDS errors in the `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
3. Open the Azure portal > select your data collection rule > open the **Configuration** : **Resources** blade from the left menu. You should see the virtual machine listed here.
4. If it isn't listed, select **Add** and select your virtual machine from the resource picker. Repeat across all DCRs.
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
| Windows 10, 11 desktops, workstations | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer |
| Windows 10, 11 laptops | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer. The installer works on laptops, but the agent is **not optimized yet** for battery or network consumption |
| Virtual machines, scale sets | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using the Azure extension framework |
-| On-premise servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using Azure extension framework, provided for on-premises by installing Arc agent |
+| On-premises servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using Azure extension framework, provided for on-premises by installing Arc agent |
## Prerequisites
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
The following table is a reference to the programmatic interfaces for both class
| Deployment script type | Classic alerts | New metric alerts |
| -- | -- | -- |
| REST API | [microsoft.insights/alertrules](/rest/api/monitor/alertrules) | [microsoft.insights/metricalerts](/rest/api/monitor/metricalerts) |
-|Azure CLI | [az monitor alert](/cli/monitor/alert) | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
+|Azure CLI | [az monitor alert](/cli/azure/monitor/alert) | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
|PowerShell | [Reference](/powershell/module/az.monitor/add-azmetricalertrule) | [Reference](/powershell/module/az.monitor/add-azmetricalertrulev2) |
| Azure Resource Manager template | [For classic alerts](./alerts-enable-template.md) | [For new metric alerts](./alerts-metric-create-templates.md) |
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
Try the following steps to resolve the problem:
1. Try running the query in Azure Monitor Logs, and fix any syntax issues.
2. If your query syntax is valid, check the connection to the service.
   - Flush the DNS cache on your local machine by opening a command prompt and running the following command: `ipconfig /flushdns`. Then check again. If you still get the same error message, try the next step.
- - Copy and paste this URL into the browser: [https://api.loganalytics.io/v1/version](https://api.loganalytics.io/v1/version). If you get an error, contact your IT administrator to allow the IP addresses associated with **api.loganalytics.io** listed [here](../app/ip-addresses.md#application-insights--log-analytics-apis).
+ - Copy and paste this URL into the browser: [https://api.loganalytics.io/v1/version](https://api.loganalytics.io/v1/version). If you get an error, contact your IT administrator to allow the IP addresses associated with **api.loganalytics.io** listed [here](../app/ip-addresses.md#application-insights-and-log-analytics-apis).
## Next steps
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
After you've prepped your ITSM tool, complete these steps to create a connection
1. Specify the connection settings for the ITSM product that you're using:
   - [ServiceNow](./itsmc-connections-servicenow.md)
- - [System Center Service Manager](/azure/azure-monitor/alerts/itsmc-connections)
+ - [System Center Service Manager](./itsmc-connections.md)
> [!NOTE]
> By default, ITSMC refreshes the connection's configuration data once every 24 hours. To refresh your connection's data instantly to reflect any edits or template updates that you make, select the **Sync** button on your connection's pane:
When you create or edit an Azure alert rule, use an action group, which has an I
## Next steps
-* [Troubleshoot problems in ITSMC](./itsmc-resync-servicenow.md)
+* [Troubleshoot problems in ITSMC](./itsmc-resync-servicenow.md)
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs
-description: Insert a few lines of code in your device or desktop app, webpage, or service, to track usage and diagnose issues.
+description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues.
Last updated 05/11/2020 ms.devlang: csharp, java, javascript, vb
# Application Insights API for custom events and metrics
-Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Azure Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics, and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
+Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

## API summary
-The core API is uniform across all platforms, apart from a few variations like `GetMetric`(.NET only).
+The core API is uniform across all platforms, apart from a few variations like `GetMetric` (.NET only).
| Method | Used for |
| --- | --- |
| [`TrackPageView`](#page-views) | Pages, screens, blades, or forms. |
| [`TrackEvent`](#trackevent) | User actions and other events. Used to track user behavior or to monitor performance. |
-| [`GetMetric`](#getmetric) |Zero and multi-dimensional metrics, centrally configured aggregation, C# only. |
+| [`GetMetric`](#getmetric) |Zero and multidimensional metrics, centrally configured aggregation, C# only. |
| [`TrackMetric`](#trackmetric) | Performance measurements such as queue lengths not related to specific events. |
| [`TrackException`](#trackexception) | Logging exceptions for diagnosis. Trace where they occur in relation to other events and examine stack traces. |
| [`TrackRequest`](#trackrequest) | Logging the frequency and duration of server requests for performance analysis. |
If you don't have a reference on Application Insights SDK yet:
Get an instance of `TelemetryClient` (except in JavaScript in webpages):
-For [ASP.NET Core](asp-net-core.md) apps and [Non HTTP/Worker for .NET/.NET Core](worker-service.md#how-can-i-track-telemetry-thats-not-automatically-collected) apps, it is recommended to get an instance of `TelemetryClient` from the dependency injection container as explained in their respective documentation.
+For [ASP.NET Core](asp-net-core.md) apps and [Non-HTTP/Worker for .NET/.NET Core](worker-service.md#how-can-i-track-telemetry-thats-not-automatically-collected) apps, get an instance of `TelemetryClient` from the dependency injection container as explained in their respective documentation.
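As a minimal sketch of that pattern, assuming Application Insights is registered at startup with `services.AddApplicationInsightsTelemetry()` (the controller and event names here are illustrative, not from the SDK):

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    private readonly TelemetryClient _telemetryClient;

    // TelemetryClient is resolved from the dependency injection container.
    public HomeController(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    public IActionResult Index()
    {
        // Use the injected instance for custom telemetry.
        _telemetryClient.TrackEvent("HomePageRequested");
        return View();
    }
}
```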
-If you use AzureFunctions v2+ or Azure WebJobs v3+ - follow [this document](../../azure-functions/functions-monitoring.md).
+If you use Azure Functions v2+ or Azure WebJobs v3+, see [Monitor Azure Functions](../../azure-functions/functions-monitoring.md).
*C#*
If you use AzureFunctions v2+ or Azure WebJobs v3+ - follow [this document](../.
private TelemetryClient telemetry = new TelemetryClient();
```
-For anyone seeing this method is obsolete messages please visit [microsoft/ApplicationInsights-dotnet#1152](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1152) for further details.
+If you see a message that tells you this method is obsolete, see [microsoft/ApplicationInsights-dotnet#1152](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1152) for more information.
*Visual Basic*
private TelemetryClient telemetry = new TelemetryClient();
var telemetry = applicationInsights.defaultClient;
```
-TelemetryClient is thread-safe.
+`TelemetryClient` is thread safe.
-For ASP.NET and Java projects, incoming HTTP Requests are automatically captured. You might want to create additional instances of TelemetryClient for other module of your app. For instance, you may have one TelemetryClient instance in your middleware class to report business logic events. You can set properties such as UserId and DeviceId to identify the machine. This information is attached to all events that the instance sends.
+For ASP.NET and Java projects, incoming HTTP requests are automatically captured. You might want to create more instances of `TelemetryClient` for other modules of your app. For example, you might have one `TelemetryClient` instance in your middleware class to report business logic events. You can set properties such as `UserId` and `DeviceId` to identify the machine. This information is attached to all events that the instance sends.
*C#*
telemetry.getContext().getUser().setId("...");
telemetry.getContext().getDevice().setId("...");
```
-In Node.js projects, you can use `new applicationInsights.TelemetryClient(instrumentationKey?)` to create a new instance, but this is recommended only for scenarios that require isolated configuration from the singleton `defaultClient`.
+In Node.js projects, you can use `new applicationInsights.TelemetryClient(instrumentationKey?)` to create a new instance. We recommend this approach only for scenarios that require isolated configuration from the singleton `defaultClient`.
## TrackEvent
-In Application Insights, a *custom event* is a data point that you can display in [Metrics Explorer](../essentials/metrics-charts.md) as an aggregated count, and in [Diagnostic Search](./diagnostic-search.md) as individual occurrences. (It isn't related to MVC or other framework "events.")
+In Application Insights, a *custom event* is a data point that you can display in [Metrics Explorer](../essentials/metrics-charts.md) as an aggregated count and in [Diagnostic Search](./diagnostic-search.md) as individual occurrences. (It isn't related to MVC or other framework "events.")
-Insert `TrackEvent` calls in your code to count various events. How often users choose a particular feature, how often they achieve particular goals, or maybe how often they make particular types of mistakes.
+Insert `TrackEvent` calls in your code to count various events. For example, you might want to track how often users choose a particular feature. Or you might want to know how often they achieve certain goals or make specific types of mistakes.
For example, in a game app, send an event whenever a user wins the game:
telemetry.trackEvent("WinGame");
telemetry.trackEvent({name: "WinGame"});
```
-### Custom events in Analytics
+### Custom events in Log Analytics
-The telemetry is available in the `customEvents` table in [Application Insights Logs tab](../logs/log-query-overview.md) or [Usage Experience](usage-overview.md). Events may come from `trackEvent(..)` or [Click Analytics Auto-collection Plugin](javascript-click-analytics-plugin.md).
+The telemetry is available in the `customEvents` table on the [Application Insights Logs tab](../logs/log-query-overview.md) or [usage experience](usage-overview.md). Events might come from `trackEvent(..)` or the [Click Analytics Auto-collection plug-in](javascript-click-analytics-plugin.md).
-If [sampling](./sampling.md) is in operation, the itemCount property shows a value greater than 1. For example itemCount==10 means that of 10 calls to trackEvent(), the sampling process only transmitted one of them. To get a correct count of custom events, you should therefore use code such as `customEvents | summarize sum(itemCount)`.
+If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackEvent()`, the sampling process transmitted only one of them. To get a correct count of custom events, use code such as `customEvents | summarize sum(itemCount)`.
## GetMetric
-To learn how to effectively use the GetMetric() call to capture locally pre-aggregated metrics for .NET and .NET Core applications visit the [GetMetric](./get-metric.md) documentation.
+To learn how to effectively use the `GetMetric()` call to capture locally pre-aggregated metrics for .NET and .NET Core applications, see [Custom metric collection in .NET and .NET Core](./get-metric.md).
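As a brief, hedged sketch of the pattern (the metric and dimension names are illustrative):

```csharp
using Microsoft.ApplicationInsights;

var telemetryClient = new TelemetryClient();

// Zero-dimensional metric: values are pre-aggregated locally and
// sent once per aggregation interval instead of per call.
telemetryClient.GetMetric("ComputationDurationMs").TrackValue(42.0);

// One-dimensional metric: each dimension value gets its own series.
var requestsByEndpoint = telemetryClient.GetMetric("Requests", "Endpoint");
requestsByEndpoint.TrackValue(1, "/home");
requestsByEndpoint.TrackValue(1, "/checkout");
```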
## TrackMetric

> [!NOTE]
-> Microsoft.ApplicationInsights.TelemetryClient.TrackMetric is not the preferred method for sending metrics. Metrics should always be pre-aggregated across a time period before being sent. Use one of the GetMetric(..) overloads to get a metric object for accessing SDK pre-aggregation capabilities. If you are implementing your own pre-aggregation logic, you can use the TrackMetric() method to send the resulting aggregates. If your application requires sending a separate telemetry item at every occasion without aggregation across time, you likely have a use case for event telemetry; see TelemetryClient.TrackEvent
-(Microsoft.ApplicationInsights.DataContracts.EventTelemetry).
+> `Microsoft.ApplicationInsights.TelemetryClient.TrackMetric` isn't the preferred method for sending metrics. Metrics should always be pre-aggregated across a time period before being sent. Use one of the `GetMetric(..)` overloads to get a metric object for accessing SDK pre-aggregation capabilities.
+>
+> If you're implementing your own pre-aggregation logic, you can use the `TrackMetric()` method to send the resulting aggregates. If your application requires sending a separate telemetry item on every occasion without aggregation across time, you likely have a use case for event telemetry. See `TelemetryClient.TrackEvent(Microsoft.ApplicationInsights.DataContracts.EventTelemetry)`.
-Application Insights can chart metrics that are not attached to particular events. For example, you could monitor a queue length at regular intervals. With metrics, the individual measurements are of less interest than the variations and trends, and so statistical charts are useful.
+Application Insights can chart metrics that aren't attached to particular events. For example, you could monitor a queue length at regular intervals. With metrics, the individual measurements are of less interest than the variations and trends, and so statistical charts are useful.
-In order to send metrics to Application Insights, you can use the `TrackMetric(..)` API. There are two ways to send a metric:
+To send metrics to Application Insights, you can use the `TrackMetric(..)` API. There are two ways to send a metric:
-* Single value. Every time you perform a measurement in your application, you send the corresponding value to Application Insights. For example, assume that you have a metric describing the number of items in a container. During a particular time period, you first put three items into the container and then you remove two items. Accordingly, you would call `TrackMetric` twice: first passing the value `3` and then the value `-2`. Application Insights stores both values on your behalf.
+* **Single value**. Every time you perform a measurement in your application, you send the corresponding value to Application Insights.
-* Aggregation. When working with metrics, every single measurement is rarely of interest. Instead a summary of what happened during a particular time period is important. Such a summary is called _aggregation_. In the above example, the aggregate metric sum for that time period is `1` and the count of the metric values is `2`. When using the aggregation approach, you only invoke `TrackMetric` once per time period and send the aggregate values. This is the recommended approach since it can significantly reduce the cost and performance overhead by sending fewer data points to Application Insights, while still collecting all relevant information.
+ For example, assume you have a metric that describes the number of items in a container. During a particular time period, you first put three items into the container and then you remove two items. Accordingly, you would call `TrackMetric` twice. First, you would pass the value `3` and then pass the value `-2`. Application Insights stores both values for you.
-### Examples
+* **Aggregation**. When you work with metrics, every single measurement is rarely of interest. Instead, a summary of what happened during a particular time period is important. Such a summary is called _aggregation_.
-#### Single values
+ In the preceding example, the aggregate metric sum for that time period is `1` and the count of the metric values is `2`. When you use the aggregation approach, you invoke `TrackMetric` only once per time period and send the aggregate values. We recommend this approach because it can significantly reduce the cost and performance overhead by sending fewer data points to Application Insights, while still collecting all relevant information.
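As a hedged illustration of the aggregation approach for the container example above (the metric name `queueItems` is an assumption):

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

var telemetry = new TelemetryClient();

// Aggregate locally: +3 items added, -2 items removed during the period.
var aggregate = new MetricTelemetry
{
    Name = "queueItems", // illustrative metric name
    Sum = 1,             // 3 + (-2)
    Count = 2,           // two measurements in the period
    Min = -2,
    Max = 3
};

// Send one pre-aggregated data point per time period.
telemetry.TrackMetric(aggregate);
```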
+
+### Single value examples
To send a single metric value:
telemetry.trackMetric("queueLength", 42.0);
telemetry.trackMetric({name: "queueLength", value: 42.0});
```
-### Custom metrics in Analytics
+### Custom metrics in Log Analytics
The telemetry is available in the `customMetrics` table in [Application Insights Analytics](../logs/log-query-overview.md). Each row represents a call to `trackMetric(..)` in your app.
-* `valueSum` - This is the sum of the measurements. To get the mean value, divide by `valueCount`.
-* `valueCount` - The number of measurements that were aggregated into this `trackMetric(..)` call.
+* `valueSum`: The sum of the measurements. To get the mean value, divide by `valueCount`.
+* `valueCount`: The number of measurements that were aggregated into this `trackMetric(..)` call.
## Page views
-In a device or webpage app, page view telemetry is sent by default when each screen or page is loaded. But you can change that to track page views at additional or different times. For example, in an app that displays tabs or blades, you might want to track a page whenever the user opens a new blade.
+In a device or webpage app, page view telemetry is sent by default when each screen or page is loaded. But you can change the default to track page views at more or different times. For example, in an app that displays tabs or blades, you might want to track a page whenever the user opens a new blade.
-User and session data is sent as properties along with page views, so the user and session charts come alive when there is page view telemetry.
+User and session data is sent as properties along with page views, so the user and session charts come alive when there's page view telemetry.
### Custom page views
appInsights.trackPageView("tab1", "http://fabrikam.com/page1.htm");
### Timing page views
-By default, the times reported as **Page view load time** are measured from when the browser sends the request, until the browser's page load event is called.
+By default, the times reported as **Page view load time** are measured from when the browser sends the request until the browser's page load event is called.
Instead, you can either:
The name that you use as the first parameter associates the start and stop calls
The resulting page load durations displayed in Metrics Explorer are derived from the interval between the start and stop calls. It's up to you what interval you actually time.
-### Page telemetry in Analytics
+### Page telemetry in Log Analytics
-In [Analytics](../logs/log-query-overview.md) two tables show data from browser operations:
+In [Log Analytics](../logs/log-query-overview.md), two tables show data from browser operations:
-* The `pageViews` table contains data about the URL and page title
-* The `browserTimings` table contains data about client performance, such as the time taken to process the incoming data
+* `pageViews`: Contains data about the URL and page title.
+* `browserTimings`: Contains data about client performance like the time taken to process the incoming data.
To find how long the browser takes to process different pages:
browserTimings
| summarize avg(networkDuration), avg(processingDuration), avg(totalDuration) by name
```
-To discover the popularities of different browsers:
+To discover the popularity of different browsers:
```kusto
pageViews
pageViews
## TrackRequest
-The server SDK uses TrackRequest to log HTTP requests.
+The server SDK uses `TrackRequest` to log HTTP requests.
You can also call it yourself if you want to simulate requests in a context where you don't have the web service module running.
-However, the recommended way to send request telemetry is where the request acts as an <a href="#operation-context">operation context</a>.
+The recommended way to send request telemetry is where the request acts as an <a href="#operation-context">operation context</a>.
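If you do call it yourself, a rough sketch might look like the following (the request name and timing values are illustrative):

```csharp
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

var telemetryClient = new TelemetryClient();
var stopwatch = Stopwatch.StartNew();

// ... do the work that represents the simulated request ...

stopwatch.Stop();

// Manually record a request: name, start time, duration, response code, success.
telemetryClient.TrackRequest("SimulatedRequest",
    DateTimeOffset.UtcNow - stopwatch.Elapsed,
    stopwatch.Elapsed, "200", success: true);
```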
## Operation context
-You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. In [Search](./diagnostic-search.md) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request using its operation ID.
+You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. In [Search](./diagnostic-search.md) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request by using its operation ID.
-See [Telemetry correlation in Application Insights](./correlation.md) for more details on correlation.
+For more information on correlation, see [Telemetry correlation in Application Insights](./correlation.md).
-When tracking telemetry manually, the easiest way to ensure telemetry correlation by using this pattern:
+When you track telemetry manually, the easiest way to ensure telemetry correlation is by using this pattern:
*C#*
using (var operation = telemetryClient.StartOperation<RequestTelemetry>("operati
} // When operation is disposed, telemetry item is sent.
```
-Along with setting an operation context, `StartOperation` creates a telemetry item of the type that you specify. It sends the telemetry item when you dispose the operation, or if you explicitly call `StopOperation`. If you use `RequestTelemetry` as the telemetry type, its duration is set to the timed interval between start and stop.
+Along with setting an operation context, `StartOperation` creates a telemetry item of the type that you specify. It sends the telemetry item when you dispose of the operation or if you explicitly call `StopOperation`. If you use `RequestTelemetry` as the telemetry type, its duration is set to the timed interval between start and stop.
-Telemetry items reported within a scope of operation become 'children' of such operation. Operation contexts could be nested.
+Telemetry items reported within the scope of an operation become children of that operation. Operation contexts can be nested.
-In Search, the operation context is used to create the **Related Items** list:
+In **Search**, the operation context is used to create the **Related Items** list.
-![Related items](./media/api-custom-events-metrics/21.png)
+![Screenshot that shows the Related Items list.](./media/api-custom-events-metrics/21.png)
-See [Track custom operations with Application Insights .NET SDK](./custom-operations-tracking.md) for more information on custom operations tracking.
+For more information on custom operations tracking, see [Track custom operations with Application Insights .NET SDK](./custom-operations-tracking.md).
-### Requests in Analytics
+### Requests in Log Analytics
In [Application Insights Analytics](../logs/log-query-overview.md), requests show up in the `requests` table.
-If [sampling](./sampling.md) is in operation, the itemCount property will show a value greater than 1. For example itemCount==10 means that of 10 calls to trackRequest(), the sampling process only transmitted one of them. To get a correct count of requests and average duration segmented by request names, use code such as:
+If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackRequest()`, the sampling process transmitted only one of them. To get a correct count of requests and average duration segmented by request names, use code such as:
```kusto
requests
catch (ex)
}
```
-The SDKs catch many exceptions automatically, so you don't always have to call TrackException explicitly.
+The SDKs catch many exceptions automatically, so you don't always have to call `TrackException` explicitly:
-* ASP.NET: [Write code to catch exceptions](./asp-net-exceptions.md).
-* Java EE: [Exceptions are caught automatically](./java-in-process-agent.md).
-* JavaScript: Exceptions are caught automatically. If you want to disable automatic collection, add a line to the code snippet that you insert in your webpages:
+* **ASP.NET**: [Write code to catch exceptions](./asp-net-exceptions.md).
+* **Java EE**: [Exceptions are caught automatically](./java-in-process-agent.md).
+* **JavaScript**: Exceptions are caught automatically. If you want to disable automatic collection, add a line to the code snippet that you insert in your webpages:
```javascript
({
The SDKs catch many exceptions automatically, so you don't always have to call T
})
```
-### Exceptions in Analytics
+### Exceptions in Log Analytics
In [Application Insights Analytics](../logs/log-query-overview.md), exceptions show up in the `exceptions` table.
-If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than 1. For example itemCount==10 means that of 10 calls to trackException(), the sampling process only transmitted one of them. To get a correct count of exceptions segmented by type of exception, use code such as:
+If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackException()`, the sampling process transmitted only one of them. To get a correct count of exceptions segmented by type of exception, use code such as:
```kusto
exceptions
| summarize sum(itemCount) by type
```
-Most of the important stack information is already extracted into separate variables, but you can pull apart the `details` structure to get more. Since this structure is dynamic, you should cast the result to the type you expect. For example:
+Most of the important stack information is already extracted into separate variables, but you can pull apart the `details` structure to get more. Because this structure is dynamic, you should cast the result to the type you expect. For example:
```kusto
exceptions
exceptions
## TrackTrace
-Use TrackTrace to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You can send chunks of diagnostic data and inspect them in [Diagnostic Search](./diagnostic-search.md).
+Use `TrackTrace` to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You can send chunks of diagnostic data and inspect them in [Diagnostic Search](./diagnostic-search.md).
-In .NET [Log adapters](./asp-net-trace-logs.md) use this API to send third-party logs to the portal.
+In .NET, [log adapters](./asp-net-trace-logs.md) use this API to send third-party logs to the portal.
-In Java, the [Application Insights Java agent](java-in-process-agent.md) auto-collects and sends logs to the portal.
+In Java, the [Application Insights Java agent](java-in-process-agent.md) autocollects and sends logs to the portal.
*C#*
Log a diagnostic event such as entering or leaving a method.
Parameter | Description
--- | ---
`message` | Diagnostic data. Can be much longer than a name.
-`properties` | Map of string to string: Additional data used to [filter exceptions](#properties) in the portal. Defaults to empty.
-`severityLevel` | Supported values: [SeverityLevel.ts](https://github.com/microsoft/ApplicationInsights-JS/blob/17ef50442f73fd02a758fbd74134933d92607ecf/shared/AppInsightsCommon/src/Interfaces/Contracts/Generated/SeverityLevel.ts)
+`properties` | Map of string to string. More data used to [filter exceptions](#properties) in the portal. Defaults to empty.
+`severityLevel` | Supported values: [SeverityLevel.ts](https://github.com/microsoft/ApplicationInsights-JS/blob/17ef50442f73fd02a758fbd74134933d92607ecf/shared/AppInsightsCommon/src/Interfaces/Contracts/Generated/SeverityLevel.ts).
-You can search on message content, but (unlike property values) you can't filter on it.
+You can search on message content, but unlike property values, you can't filter on it.
-The size limit on `message` is much higher than the limit on properties.
-An advantage of TrackTrace is that you can put relatively long data in the message. For example, you can encode POST data there.
+The size limit on `message` is much higher than the limit on properties. An advantage of `TrackTrace` is that you can put relatively long data in the message. For example, you can encode POST data there.
-In addition, you can add a severity level to your message. And, like other telemetry, you can add property values to help you filter or search for different sets of traces. For example:
+You can also add a severity level to your message. And, like other telemetry, you can add property values to help you filter or search for different sets of traces. For example:
*C#*
telemetry.trackTrace("Slow Database response", SeverityLevel.Warning, properties
In [Search](./diagnostic-search.md), you can then easily filter out all the messages of a particular severity level that relate to a particular database.
-### Traces in Analytics
+### Traces in Log Analytics
-In [Application Insights Analytics](../logs/log-query-overview.md), calls to TrackTrace show up in the `traces` table.
+In [Application Insights Analytics](../logs/log-query-overview.md), calls to `TrackTrace` show up in the `traces` table.
-If [sampling](./sampling.md) is in operation, the itemCount property shows a value greater than 1. For example itemCount==10 means that of 10 calls to `trackTrace()`, the sampling process only transmitted one of them. To get a correct count of trace calls, you should use therefore code such as `traces | summarize sum(itemCount)`.
+If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackTrace()`, the sampling process transmitted only one of them. To get a correct count of trace calls, use code such as `traces | summarize sum(itemCount)`.
## TrackDependency
-Use the TrackDependency call to track the response times and success rates of calls to an external piece of code. The results appear in the dependency charts in the portal. The code snippet below needs to be added wherever a dependency call is made.
+Use the `TrackDependency` call to track the response times and success rates of calls to an external piece of code. The results appear in the dependency charts in the portal. The following code snippet must be added wherever a dependency call is made.
> [!NOTE]
-> For .NET and .NET Core you can alternatively use the `TelemetryClient.StartOperation` (extension) method that fills the `DependencyTelemetry` properties that are needed for correlation and some other properties like the start time and duration so you don't need to create a custom timer as with the examples below. For more information consult this article's [section on outgoing dependency tracking](./custom-operations-tracking.md#outgoing-dependencies-tracking).
+> For .NET and .NET Core, you can alternatively use the `TelemetryClient.StartOperation` (extension) method that fills the `DependencyTelemetry` properties that are needed for correlation and some other properties like the start time and duration, so you don't need to create a custom timer as with the following examples. For more information, see the section on outgoing dependency tracking in [Track custom operations with Application Insights .NET SDK](./custom-operations-tracking.md#outgoing-dependencies-tracking).
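As a rough sketch of that `StartOperation` alternative (the operation name and the `Type`/`Target` values are illustrative):

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

var telemetryClient = new TelemetryClient();

// StartOperation fills in correlation IDs, the start time, and the duration.
using (var operation = telemetryClient.StartOperation<DependencyTelemetry>("QueryInventoryDb"))
{
    operation.Telemetry.Type = "SQL";            // kind of dependency
    operation.Telemetry.Target = "inventorydb";  // server or resource called
    try
    {
        // ... call the external dependency here ...
        operation.Telemetry.Success = true;
    }
    catch (Exception)
    {
        operation.Telemetry.Success = false;
        throw;
    }
} // Telemetry is sent when the operation is disposed.
```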
*C#*
finally
}
```
-Remember that the server SDKs include a [dependency module](./asp-net-dependencies.md) that discovers and tracks certain dependency calls automatically--for example, to databases and REST APIs. You have to install an agent on your server to make the module work.
+Remember that the server SDKs include a [dependency module](./asp-net-dependencies.md) that discovers and tracks certain dependency calls automatically, for example, to databases and REST APIs. You have to install an agent on your server to make the module work.
-In Java, many dependency calls can be automatically tracked using the
+In Java, many dependency calls can be automatically tracked by using the
[Application Insights Java agent](java-in-process-agent.md). You use this call if you want to track calls that the automated tracking doesn't catch.

To turn off the standard dependency-tracking module in C#, edit [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) and delete the reference to `DependencyCollector.DependencyTrackingTelemetryModule`. For Java, see
-[suppressing specific auto-collected telemetry](./java-standalone-config.md#suppressing-specific-auto-collected-telemetry).
+[Suppressing specific autocollected telemetry](./java-standalone-config.md#suppressing-specific-auto-collected-telemetry).
-### Dependencies in Analytics
+### Dependencies in Log Analytics
-In [Application Insights Analytics](../logs/log-query-overview.md), trackDependency calls show up in the `dependencies` table.
+In [Application Insights Analytics](../logs/log-query-overview.md), `trackDependency` calls show up in the `dependencies` table.
-If [sampling](./sampling.md) is in operation, the itemCount property shows a value greater than 1. For example itemCount==10 means that of 10 calls to trackDependency(), the sampling process only transmitted one of them. To get a correct count of dependencies segmented by target component, use code such as:
+If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackDependency()`, the sampling process transmitted only one of them. To get a correct count of dependencies segmented by target component, use code such as:
```kusto
dependencies
dependencies
## Flushing data
-Normally, the SDK sends data at fixed intervals (typically 30 secs) or whenever buffer is full (typically 500 items). However, in some cases, you might want to flush the buffer--for example, if you are using the SDK in an application that shuts down.
+Normally, the SDK sends data at fixed intervals, typically 30 seconds, or whenever the buffer is full, which is typically 500 items. In some cases, you might want to flush the buffer. An example is if you're using the SDK in an application that shuts down.
*.NET*
-When using Flush(), we recommend this [pattern](./console.md#full-example):
+When you use `Flush()`, we recommend this [pattern](./console.md#full-example):
```csharp
telemetry.Flush();
telemetry.Flush();
System.Threading.Thread.Sleep(5000);
```
-When using FlushAsync(), we recommend this pattern:
+When you use `FlushAsync()`, we recommend this pattern:
```csharp
await telemetryClient.FlushAsync();
```
-We recommend always flushing as part of the application shutdown to guarantee that telemetry is not lost.
+We recommend always flushing as part of the application shutdown to guarantee that telemetry isn't lost.
*Java*
The function is asynchronous for the [server telemetry channel](https://www.nuge
## Authenticated users
-In a web app, users are (by default) [identified by cookies](./usage-segmentation.md#the-users-sessions-and-events-segmentation-tool). A user might be counted more than once if they access your app from a different machine or browser, or if they delete cookies.
+In a web app, users are [identified by cookies](./usage-segmentation.md#the-users-sessions-and-events-segmentation-tool) by default. A user might be counted more than once if they access your app from a different machine or browser, or if they delete cookies.
If users sign in to your app, you can get a more accurate count by setting the authenticated user ID in the browser code:
It isn't necessary to use the user's actual sign-in name. It only has to be an I
The user ID is also set in a session cookie and sent to the server. If the server SDK is installed, the authenticated user ID is sent as part of the context properties of both client and server telemetry. You can then filter and search on it.
-If your app groups users into accounts, you can also pass an identifier for the account (with the same character restrictions).
+If your app groups users into accounts, you can also pass an identifier for the account. The same character restrictions apply.
```javascript
appInsights.setAuthenticatedUserContext(validatedId, accountId);
appInsights.setAuthenticatedUserContext(validatedId, accountId);
In [Metrics Explorer](../essentials/metrics-charts.md), you can create a chart that counts **Users, Authenticated**, and **User accounts**.
-You can also [Search](./diagnostic-search.md) for client data points with specific user names and accounts.
+You can also [search](./diagnostic-search.md) for client data points with specific user names and accounts.
> [!NOTE]
-> The [EnableAuthenticationTrackingJavaScript property in the ApplicationInsightsServiceOptions class](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) in the .NET Core SDK simplifies the JavaScript configuration needed to inject the username as the Auth Id for each trace sent by the Application Insights JavaScript SDK. When this property is set to true, the username from the user in the ASP.NET Core is printed along with [client-side telemetry](asp-net-core.md#enable-client-side-telemetry-for-web-applications), so adding `appInsights.setAuthenticatedUserContext` manually wouldn't be needed anymore, as it is already injected by the SDK for ASP.NET Core. The Auth Id will also be sent to the server where the SDK in .NET Core will identify it and use it for any server-side telemetry, as described in the [JavaScript API reference](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). However, for JavaScript applications that don't work in the same way as ASP.NET Core MVC (such as SPA web apps), you would still need to add `appInsights.setAuthenticatedUserContext` manually.
+> The [EnableAuthenticationTrackingJavaScript property in the ApplicationInsightsServiceOptions class](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) in the .NET Core SDK simplifies the JavaScript configuration needed to inject the user name as the Auth ID for each trace sent by the Application Insights JavaScript SDK.
+>
+>When this property is set to `true`, the user name from the user in the ASP.NET Core is printed along with [client-side telemetry](asp-net-core.md#enable-client-side-telemetry-for-web-applications). For this reason, adding `appInsights.setAuthenticatedUserContext` manually wouldn't be needed anymore because it's already injected by the SDK for ASP.NET Core. The Auth ID will also be sent to the server where the SDK in .NET Core will identify it and use it for any server-side telemetry, as described in the [JavaScript API reference](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext).
+>
+>For JavaScript applications that don't work in the same way as ASP.NET Core MVC, such as SPA web apps, you would still need to add `appInsights.setAuthenticatedUserContext` manually.
-## <a name="properties"></a>Filtering, searching, and segmenting your data by using properties
+## <a name="properties"></a>Filter, search, and segment your data by using properties
-You can attach properties and measurements to your events (and also to metrics, page views, exceptions, and other telemetry data).
+You can attach properties and measurements to your events, metrics, page views, exceptions, and other telemetry data.
*Properties* are string values that you can use to filter your telemetry in the usage reports. For example, if your app provides several games, you can attach the name of the game to each event so that you can see which games are more popular.
-There's a limit of 8192 on the string length. (If you want to send large chunks of data, use the message parameter of TrackTrace.)
+There's a limit of 8,192 on the string length. If you want to send large chunks of data, use the message parameter of `TrackTrace`.
-*Metrics* are numeric values that can be presented graphically. For example, you might want to see if there's a gradual increase in the scores that your gamers achieve. The graphs can be segmented by the properties that are sent with the event, so that you can get separate or stacked graphs for different games.
+*Metrics* are numeric values that can be presented graphically. For example, you might want to see if there's a gradual increase in the scores that your gamers achieve. The graphs can be segmented by the properties that are sent with the event so that you can get separate or stacked graphs for different games.
-For metric values to be correctly displayed, they should be greater than or equal to 0.
+Metric values should be greater than or equal to 0 to display correctly.
There are some [limits on the number of properties, property values, and metrics](#limits) that you can use.
telemetry.trackEvent("WinGame", properties, metrics);
```
> [!NOTE]
-> Take care not to log personally identifiable information in properties.
+> Make sure you don't log personally identifiable information in properties.
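For reference, here's a C# sketch of the pattern that the `trackEvent` fragment above comes from; the property and metric names and values are illustrative:

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();

// Attach illustrative properties (strings) and metrics (numbers) to one event.
var properties = new Dictionary<string, string> { { "game", "TicTacToe" } };
var metrics = new Dictionary<string, double> { { "score", 42 } };

telemetry.TrackEvent("WinGame", properties, metrics);
```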
### Alternative way to set properties and metrics
telemetry.TrackEvent(event);
```
> [!WARNING]
-> Don't reuse the same telemetry item instance (`event` in this example) to call Track*() multiple times. This may cause telemetry to be sent with incorrect configuration.
+> Don't reuse the same telemetry item instance (`event` in this example) to call `Track*()` multiple times. This practice might cause telemetry to be sent with incorrect configuration.
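In C#, a sketch of this alternative pattern populates an `EventTelemetry` instance and sends it exactly once; the name `winGame` stands in for the reserved word `event` that the warning refers to, and the values are illustrative:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

var telemetry = new TelemetryClient();

// Build the event object up front, then send it exactly once.
var winGame = new EventTelemetry("WinGame");
winGame.Properties["game"] = "TicTacToe"; // illustrative property
winGame.Metrics["score"] = 42;            // illustrative metric

telemetry.TrackEvent(winGame);
```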
-### Custom measurements and properties in Analytics
+### Custom measurements and properties in Log Analytics
-In [Analytics](../logs/log-query-overview.md), custom metrics and properties show in the `customMeasurements` and `customDimensions` attributes of each telemetry record.
+In [Log Analytics](../logs/log-query-overview.md), custom metrics and properties show in the `customMeasurements` and `customDimensions` attributes of each telemetry record.
-For example, if you have added a property named "game" to your request telemetry, this query counts the occurrences of different values of "game", and show the average of the custom metric "score":
+For example, if you add a property named "game" to your request telemetry, this query counts the occurrences of different values of "game" and shows the average of the custom metric "score":
```kusto
requests
| summarize sum(itemCount), avg(todouble(customMeasurements.score)) by tostring(customDimensions.game)
```
Notice that:
-* When you extract a value from the customDimensions or customMeasurements JSON, it has dynamic type, and so you must cast it `tostring` or `todouble`.
-* To take account of the possibility of [sampling](./sampling.md), you should use `sum(itemCount)`, not `count()`.
+* When you extract a value from the `customDimensions` or `customMeasurements` JSON, it has dynamic type, so you must cast it by using `tostring` or `todouble`.
+* To take account of the possibility of [sampling](./sampling.md), use `sum(itemCount)`, not `count()`.
## <a name="timed"></a> Timing events
-Sometimes you want to chart how long it takes to perform an action. For example, you might want to know how long users take to consider choices in a game. You can use the measurement parameter for this.
+Sometimes you want to chart how long it takes to perform an action. For example, you might want to know how long users take to consider choices in a game. To obtain this information, use the measurement parameter.
*C#*
telemetry.trackEvent("SignalProcessed", properties, metrics);
## <a name="defaults"></a>Default properties for custom telemetry
-If you want to set default property values for some of the custom events that you write, you can set them in a TelemetryClient instance. They are attached to every telemetry item that's sent from that client.
+If you want to set default property values for some of the custom events that you write, set them in a `TelemetryClient` instance. They're attached to every telemetry item that's sent from that client.
*C#*
Individual telemetry calls can override the default values in their property dic
*To add properties to all telemetry*, including the data from standard collection modules, [implement `ITelemetryInitializer`](./api-filtering-sampling.md#add-properties).
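As a C# sketch of both approaches, assuming SDK 2.7 or later where per-client defaults live in `Context.GlobalProperties` (older versions expose `Context.Properties` instead); the property name and value are illustrative:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Per-client defaults: attached to everything this client sends.
var gameTelemetry = new TelemetryClient();
gameTelemetry.Context.GlobalProperties["Game"] = "TicTacToe";
gameTelemetry.TrackEvent("WinGame"); // carries the Game property automatically

// An initializer stamps all telemetry, including the standard modules' data.
// Register it in code or in ApplicationInsights.config.
public class GamePropertyInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is ISupportProperties item &&
            !item.Properties.ContainsKey("Game"))
        {
            item.Properties["Game"] = "TicTacToe";
        }
    }
}
```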
-## Sampling, filtering, and processing telemetry
+## Sample, filter, and process telemetry
You can write code to process your telemetry before it's sent from the SDK. The processing includes data that's sent from the standard telemetry modules, such as HTTP request collection and dependency collection.
You can write code to process your telemetry before it's sent from the SDK. The
[Filtering](./api-filtering-sampling.md#filtering) can modify or discard telemetry before it's sent from the SDK by implementing `ITelemetryProcessor`. You control what is sent or discarded, but you have to account for the effect on your metrics. Depending on how you discard items, you might lose the ability to navigate between related items.
-[Sampling](./api-filtering-sampling.md) is a packaged solution to reduce the volume of data that's sent from your app to the portal. It does so without affecting the displayed metrics. And it does so without affecting your ability to diagnose problems by navigating between related items such as exceptions, requests, and page views.
+[Sampling](./api-filtering-sampling.md) is a packaged solution to reduce the volume of data that's sent from your app to the portal. It does so without affecting the displayed metrics. And it does so without affecting your ability to diagnose problems by navigating between related items like exceptions, requests, and page views.
-[Learn more](./api-filtering-sampling.md).
+To learn more, see [Filter and preprocess telemetry in the Application Insights SDK](./api-filtering-sampling.md).
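As a sketch of the `ITelemetryProcessor` filtering pattern mentioned above; the class name and the 100-millisecond threshold are illustrative:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Drops fast dependency calls and forwards everything else down the chain.
public class FastDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public FastDependencyFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is DependencyTelemetry dependency &&
            dependency.Duration.TotalMilliseconds < 100)
        {
            return; // Discard the item; remember the effect on your metrics.
        }

        _next.Process(item);
    }
}
```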
-## Disabling telemetry
+## Disable telemetry
To *dynamically stop and start* the collection and transmission of telemetry:
TelemetryConfiguration.Active.DisableTelemetry = true;
telemetry.getConfiguration().setTrackingDisabled(true);
```
-To *disable selected standard collectors*--for example, performance counters, HTTP requests, or dependencies--delete or comment out the relevant lines in [ApplicationInsights.config](./configuration-with-applicationinsights-config.md). You can do this, for example, if you want to send your own TrackRequest data.
+To *disable selected standard collectors*, for example, performance counters, HTTP requests, or dependencies, delete or comment out the relevant lines in [ApplicationInsights.config](./configuration-with-applicationinsights-config.md). You might do this if you want to send your own `TrackRequest` data.
*Node.js*
To *disable selected standard collectors*--for example, performance counters, HT
telemetry.config.disableAppInsights = true;
```
-To *disable selected standard collectors*--for example, performance counters, HTTP requests, or dependencies--at initialization time, chain configuration methods to your SDK initialization code:
+To *disable selected standard collectors* (for example, performance counters, HTTP requests, or dependencies) at initialization time, chain configuration methods to your SDK initialization code.
```javascript
applicationInsights.setup()
    .setAutoCollectRequests(false)
    .setAutoCollectPerformance(false)
    .setAutoCollectExceptions(false)
    .setAutoCollectDependencies(false)
    .start();
```
-To disable these collectors after initialization, use the Configuration object: `applicationInsights.Configuration.setAutoCollectRequests(false)`
+To disable these collectors after initialization, use the Configuration object: `applicationInsights.Configuration.setAutoCollectRequests(false)`.
## <a name="debug"></a>Developer mode
-During debugging, it's useful to have your telemetry expedited through the pipeline so that you can see results immediately. You also get additional messages that help you trace any problems with the telemetry. Switch it off in production, because it may slow down your app.
+During debugging, it's useful to have your telemetry expedited through the pipeline so that you can see results immediately. You also get other messages that help you trace any problems with the telemetry. Switch it off in production because it might slow down your app.
*C#*
TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = true;
*Node.js*
-For Node.js, you can enable developer mode by enabling internal logging via `setInternalLogging` and setting `maxBatchSize` to 0, which causes your telemetry to be sent as soon as it is collected.
+For Node.js, you can enable developer mode by enabling internal logging via `setInternalLogging` and setting `maxBatchSize` to `0`, which causes your telemetry to be sent as soon as it's collected.
```js applicationInsights.setup("ikey")
applicationInsights.setup("ikey")
applicationInsights.defaultClient.config.maxBatchSize = 0; ```
-## <a name="ikey"></a> Setting the instrumentation key for selected custom telemetry
+## <a name="ikey"></a> Set the instrumentation key for selected custom telemetry
*C#*
telemetry.InstrumentationKey = "my key";
To avoid mixing up telemetry from development, test, and production environments, you can [create separate Application Insights resources](./create-new-resource.md) and change their keys, depending on the environment.
-Instead of getting the instrumentation key from the configuration file, you can set it in your code. Set the key in an initialization method, such as global.aspx.cs in an ASP.NET service:
+Instead of getting the instrumentation key from the configuration file, you can set it in your code. Set the key in an initialization method, such as `Global.asax.cs` in an ASP.NET service:
*C#*
protected void Application_Start()
appInsights.config.instrumentationKey = myKey;
```
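For the C# case shown truncated above, a sketch of setting the key during ASP.NET startup might look like the following; `AiKey` is a hypothetical appSettings entry, and `TelemetryConfiguration.Active` is the classic SDK's global configuration:

```csharp
using System.Configuration;
using Microsoft.ApplicationInsights.Extensibility;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Pick the Application Insights resource for the current environment.
        // "AiKey" is a hypothetical appSettings entry.
        TelemetryConfiguration.Active.InstrumentationKey =
            ConfigurationManager.AppSettings["AiKey"];
    }
}
```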
-In webpages, you might want to set it from the web server's state, rather than coding it literally into the script. For example, in a webpage generated in an ASP.NET app:
+In webpages, you might want to set it from the web server's state instead of coding it literally into the script. For example, in a webpage generated in an ASP.NET app:
*JavaScript in Razor*
var appInsights = window.appInsights || function(config){ ...
## TelemetryContext
-TelemetryClient has a Context property, which contains values that are sent along with all telemetry data. They are normally set by the standard telemetry modules, but you can also set them yourself. For example:
+`TelemetryClient` has a `Context` property, which contains values that are sent along with all telemetry data. They're normally set by the standard telemetry modules, but you can also set them yourself. For example:
```csharp
telemetry.Context.Operation.Name = "MyOperationName";
```
-If you set any of these values yourself, consider removing the relevant line from [ApplicationInsights.config](./configuration-with-applicationinsights-config.md), so that your values and the standard values don't get confused.
+If you set any of these values yourself, consider removing the relevant line from [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) so that your values and the standard values don't get confused.
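As a C# sketch that sets a few of the context fields described in the following list; all values are illustrative:

```csharp
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();

// These values ride along with every item this client sends.
telemetry.Context.Operation.Name = "MyOperationName";
telemetry.Context.Device.Id = "device-01";      // illustrative value
telemetry.Context.User.Id = "user@contoso.com"; // illustrative value
telemetry.Context.Session.Id = "session-01";    // illustrative value
```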
* **Component**: The app and its version.
-* **Device**: Data about the device where the app is running. (In web apps, this is the server or client device that the telemetry is sent from.)
-* **InstrumentationKey**: The Application Insights resource in Azure where the telemetry appears. It's usually picked up from ApplicationInsights.config.
+* **Device**: Data about the device where the app is running. In web apps, it's the server or client device that the telemetry is sent from.
+* **InstrumentationKey**: The Application Insights resource in Azure where the telemetry appears. It's usually picked up from `ApplicationInsights.config`.
* **Location**: The geographic location of the device.
-* **Operation**: In web apps, the current HTTP request. In other app types, you can set this to group events together.
- * **ID**: A generated value that correlates different events, so that when you inspect any event in Diagnostic Search, you can find related items.
+* **Operation**: In web apps, the current HTTP request. In other app types, you can set this value to group events together.
+ * **ID**: A generated value that correlates different events so that when you inspect any event in Diagnostic Search, you can find related items.
* **Name**: An identifier, usually the URL of the HTTP request.
- * **SyntheticSource**: If not null or empty, a string that indicates that the source of the request has been identified as a robot or web test. By default, it is excluded from calculations in Metrics Explorer.
-* **Session**: The user's session. The ID is set to a generated value, which is changed when the user has not been active for a while.
+ * **SyntheticSource**: If not null or empty, a string that indicates that the source of the request has been identified as a robot or web test. By default, it's excluded from calculations in Metrics Explorer.
+* **Session**: The user's session. The ID is set to a generated value, which is changed when the user hasn't been active for a while.
* **User**: User information.
## Limits
To determine how long data is kept, see [Data retention and privacy](./data-rete
## Questions
-* *What exceptions might Track_() calls throw?*
+* What exceptions might `Track*()` calls throw?
- None. You don't need to wrap them in try-catch clauses. If the SDK encounters problems, it will log messages in the debug console output and--if the messages get through--in Diagnostic Search.
-* *Is there a REST API to get data from the portal?*
+ None. You don't need to wrap them in try-catch clauses. If the SDK encounters problems, it will log messages in the debug console output and, if the messages get through, in Diagnostic Search.
+* Is there a REST API to get data from the portal?
- Yes, the [data access API](https://dev.applicationinsights.io/). Other ways to extract data include [export from Analytics to Power BI](./export-power-bi.md) and [continuous export](./export-telemetry.md).
+ Yes, the [data access API](https://dev.applicationinsights.io/). Other ways to extract data include [export from Log Analytics to Power BI](./export-power-bi.md) and [continuous export](./export-telemetry.md).
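  A minimal C# sketch of calling the data access API; the app ID and API key are placeholders that come from your resource's API Access pane:

  ```csharp
  using System;
  using System.Net.Http;

  using var client = new HttpClient();
  client.DefaultRequestHeaders.Add("x-api-key", "<your-api-key>");

  // Run a small query against the query endpoint and print the raw JSON.
  var url = "https://api.applicationinsights.io/v1/apps/<app-id>/query" +
            "?query=" + Uri.EscapeDataString("requests | take 5");
  Console.WriteLine(await client.GetStringAsync(url));
  ```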
## <a name="next"></a>Next steps
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor
-description: Server firewall exceptions required by Application Insights
+description: This article discusses server firewall exceptions that are required by Application Insights.
Last updated 01/27/2020
# IP addresses used by Azure Monitor
-[Azure Monitor](../overview.md) uses a number of IP addresses. Azure Monitor is made up of core platform metrics and log in addition to Log Analytics and Application Insights. You might need to know these addresses if the app or infrastructure that you are monitoring is hosted behind a firewall.
+[Azure Monitor](../overview.md) uses several IP addresses. Azure Monitor is made up of core platform metrics and logs in addition to Log Analytics and Application Insights. You might need to know IP addresses if the app or infrastructure that you're monitoring is hosted behind a firewall.
> [!NOTE]
-> Although these addresses are static, it's possible that we will need to change them from time to time. All Application Insights traffic represents outbound traffic with the exception of availability monitoring and webhooks which require inbound firewall rules.
+> Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic with the exception of availability monitoring and webhooks, which require inbound firewall rules.
-> [!TIP]
-> You can use Azure [network service tags](../../virtual-network/service-tags-overview.md) to manage access if you are using Azure Network Security Groups. If you are managing access for hybrid/on premises resources you can download the equivalent IP address lists as [JSON files](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) which are updated each week. To cover all the exceptions in this article you would need to use the service tags: `ActionGroup`, `ApplicationInsightsAvailability`, and `AzureMonitor`.
-
-Alternatively, you can subscribe to this page as a RSS feed by adding https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-monitor/app/ip-addresses.md to your favorite RSS/ATOM reader to get notified of the latest changes.
+You can use Azure [network service tags](../../virtual-network/service-tags-overview.md) to manage access if you're using Azure network security groups. If you're managing access for hybrid/on-premises resources, you can download the equivalent IP address lists as [JSON files](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files), which are updated each week. To cover all the exceptions in this article, use the service tags `ActionGroup`, `ApplicationInsightsAvailability`, and `AzureMonitor`.
+Alternatively, you can subscribe to this page as an RSS feed by adding https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-monitor/app/ip-addresses.md to your favorite RSS/ATOM reader to get notified of the latest changes.
## Outgoing ports
-You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK and/or Status Monitor to send data to the portal:
+You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or Status Monitor to send data to the portal.
| Purpose | URL | IP | Ports |
|---|---|---|---|
You need to open some outgoing ports in your server's firewall to allow the Appl
## Status Monitor
-Status Monitor Configuration - needed only when making changes.
+Status Monitor configuration is needed only when you're making changes.
| Purpose | URL | IP | Ports |
|---|---|---|---|
Status Monitor Configuration - needed only when making changes.
## Availability tests
-This is the list of addresses from which [availability web tests](./monitor-web-app-availability.md) are run. If you want to run web tests on your app, but your web server is restricted to serving specific clients, then you will have to permit incoming traffic from our availability test servers.
-
+This is the list of addresses from which [availability web tests](./monitor-web-app-availability.md) are run. If you want to run web tests on your app but your web server is restricted to serving specific clients, you'll have to permit incoming traffic from our availability test servers.
> [!NOTE]
-> For resources located inside private virtual networks that cannot allow direct inbound communication with the availability test agents in public Azure, the only option is to [create and host your own custom availability tests](availability-azure-functions.md).
+> For resources located inside private virtual networks that can't allow direct inbound communication with the availability test agents in public Azure, the only option is to [create and host your own custom availability tests](availability-azure-functions.md).
### Service tag
-If you are using Azure Network Security Groups, simply add an **inbound port rule** to allow traffic from Application Insights availability tests by selecting **Service Tag** as the **Source** and **ApplicationInsightsAvailability** as the **Source service tag**.
+If you're using Azure network security groups, add an *inbound port rule* to allow traffic from Application Insights availability tests. Select **Service Tag** as the **Source** and **ApplicationInsightsAvailability** as the **Source service tag**.
>[!div class="mx-imgBorder"]
->![Under settings select Inbound security rules and then select add at the top of the tab ](./media/ip-addresses/add-inbound-security-rule.png)
+>![Screenshot that shows selecting Inbound security rules and then selecting Add.](./media/ip-addresses/add-inbound-security-rule.png)
>[!div class="mx-imgBorder"]
->![Add inbound security rule tab](./media/ip-addresses/add-inbound-security-rule2.png)
+>![Screenshot that shows the Add inbound security rule tab.](./media/ip-addresses/add-inbound-security-rule2.png)
-Open ports 80 (http) and 443 (https) for incoming traffic from these addresses (IP addresses are grouped by location):
+Open port 80 (HTTP) and port 443 (HTTPS) for incoming traffic from these addresses. IP addresses are grouped by location.
-### IP Addresses
+### IP addresses
-If you're looking for the actual IP addresses so you can add them to the list of allowed IP's in your firewall, please download the JSON file describing Azure IP Ranges. These files contain the most up-to-date information. For Azure public cloud, you may also look up the IP address ranges by location using the table below.
+If you're looking for the actual IP addresses so that you can add them to the list of allowed IPs in your firewall, download the JSON file that describes Azure IP ranges. These files contain the most up-to-date information. For the Azure public cloud, you can also look up the IP address ranges by location by using the following table.
-After downloading the appropriate file, open it using your favorite text editor and search for "ApplicationInsightsAvailability" to go straight to the section of the file describing the service tag for availability tests.
+After you download the appropriate file, open it by using your favorite text editor. Search for **ApplicationInsightsAvailability** to go straight to the section of the file that describes the service tag for availability tests.
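If you'd rather pull the prefixes out programmatically, here's a C# sketch; it assumes the downloaded file's current schema, in which a top-level `values` array holds entries with `name` and `properties.addressPrefixes`:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text.Json;

// Parse the downloaded service-tags file (the file name is illustrative).
var doc = JsonDocument.Parse(File.ReadAllText("ServiceTags_Public.json"));

// Collect the address prefixes for the availability-test service tag.
var prefixes = doc.RootElement.GetProperty("values").EnumerateArray()
    .Where(v => v.GetProperty("name").GetString() == "ApplicationInsightsAvailability")
    .SelectMany(v => v.GetProperty("properties").GetProperty("addressPrefixes")
        .EnumerateArray().Select(p => p.GetString()));

foreach (var prefix in prefixes)
{
    Console.WriteLine(prefix);
}
```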
> [!NOTE]
-> These addresses are listed using Classless Inter-Domain Routing (CIDR) notation. This means that an entry like `51.144.56.112/28` is equivalent to 16 IPs starting at `51.144.56.112` and ending at `51.144.56.127`.
+> These addresses are listed by using Classless Interdomain Routing (CIDR) notation. As an example, an entry like `51.144.56.112/28` is equivalent to 16 IPs that start at `51.144.56.112` and end at `51.144.56.127`.
+
+#### Azure public cloud
+
+Download [public cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=56519).
+
+#### Azure US Government cloud
-#### Azure Public Cloud
-Download [Public Cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=56519).
+Download [US Government cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57063).
-#### Azure US Government Cloud
-Download [Government Cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57063).
+#### Azure China cloud
-#### Azure China Cloud
-Download [China Cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57062).
+Download [China cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57062).
-#### Addresses grouped by location (Azure Public Cloud)
+#### Addresses grouped by location (Azure public cloud)
```
Australia East
East US
```
### Discovery API
-You may also want to [programmatically retrieve](../../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) the current list of service tags together with IP address range details.
-## Application Insights & Log Analytics APIs
+You might also want to [programmatically retrieve](../../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) the current list of service tags together with IP address range details.
+
+## Application Insights and Log Analytics APIs
| Purpose | URI | IP | Ports |
|---|---|---|---|
| API |`api.applicationinsights.io`<br/>`api1.applicationinsights.io`<br/>`api2.applicationinsights.io`<br/>`api3.applicationinsights.io`<br/>`api4.applicationinsights.io`<br/>`api5.applicationinsights.io`<br/>`dev.applicationinsights.io`<br/>`dev.applicationinsights.microsoft.com`<br/>`dev.aisvc.visualstudio.com`<br/>`www.applicationinsights.io`<br/>`www.applicationinsights.microsoft.com`<br/>`www.aisvc.visualstudio.com`<br/>`api.loganalytics.io`<br/>`*.api.loganalytics.io`<br/>`dev.loganalytics.io`<br>`docs.loganalytics.io`<br/>`www.loganalytics.io` |20.37.52.188 <br/> 20.37.53.231 <br/> 20.36.47.130 <br/> 20.40.124.0 <br/> 20.43.99.158 <br/> 20.43.98.234 <br/> 13.70.127.61 <br/> 40.81.58.225 <br/> 20.40.160.120 <br/> 23.101.225.155 <br/> 52.139.8.32 <br/> 13.88.230.43 <br/> 52.230.224.237 <br/> 52.242.230.209 <br/> 52.173.249.138 <br/> 52.229.218.221 <br/> 52.229.225.6 <br/> 23.100.94.221 <br/> 52.188.179.229 <br/> 52.226.151.250 <br/> 52.150.36.187 <br/> 40.121.135.131 <br/> 20.44.73.196 <br/> 20.41.49.208 <br/> 40.70.23.205 <br/> 20.40.137.91 <br/> 20.40.140.212 <br/> 40.89.189.61 <br/> 52.155.118.97 <br/> 52.156.40.142 <br/> 23.102.66.132 <br/> 52.231.111.52 <br/> 52.231.108.46 <br/> 52.231.64.72 <br/> 52.162.87.50 <br/> 23.100.228.32 <br/> 40.127.144.141 <br/> 52.155.162.238 <br/> 137.116.226.81 <br/> 52.185.215.171 <br/> 40.119.4.128 <br/> 52.171.56.178 <br/> 20.43.152.45 <br/> 20.44.192.217 <br/> 13.67.77.233 <br/> 51.104.255.249 <br/> 51.104.252.13 <br/> 51.143.165.22 <br/> 13.78.151.158 <br/> 51.105.248.23 <br/> 40.74.36.208 <br/> 40.74.59.40 <br/> 13.93.233.49 <br/> 52.247.202.90 |80,443 |
| Azure Pipeline annotations extension | aigs1.aisvc.visualstudio.com |dynamic|443 |
-## Application Insights Analytics
+## Application Insights analytics
| Purpose | URI | IP | Ports |
|---|---|---|---|
-| Analytics Portal | analytics.applicationinsights.io | dynamic | 80,443 |
+| Analytics portal | analytics.applicationinsights.io | dynamic | 80,443 |
| CDN | applicationanalytics.azureedge.net | dynamic | 80,443 |
| Media CDN | applicationanalyticsmedia.azureedge.net | dynamic | 80,443 |
-Note: *.applicationinsights.io domain is owned by Application Insights team.
+The *.applicationinsights.io domain is owned by the Application Insights team.
-## Log Analytics Portal
+## Log Analytics portal
| Purpose | URI | IP | Ports |
|---|---|---|---|
| Portal | portal.loganalytics.io | dynamic | 80,443 |
| CDN | applicationanalytics.azureedge.net | dynamic | 80,443 |
-Note: *.loganalytics.io domain is owned by the Log Analytics team.
+The *.loganalytics.io domain is owned by the Log Analytics team.
-## Application Insights Azure portal Extension
+## Application Insights Azure portal extension
| Purpose | URI | IP | Ports |
|---|---|---|---|
-| Application Insights Extension | stamp2.app.insightsportal.visualstudio.com | dynamic | 80,443 |
-| Application Insights Extension CDN | insightsportal-prod2-cdn.aisvc.visualstudio.com<br/>insightsportal-prod2-asiae-cdn.aisvc.visualstudio.com<br/>insightsportal-cdn-aimon.applicationinsights.io | dynamic | 80,443 |
+| Application Insights extension | stamp2.app.insightsportal.visualstudio.com | dynamic | 80,443 |
+| Application Insights extension CDN | insightsportal-prod2-cdn.aisvc.visualstudio.com<br/>insightsportal-prod2-asiae-cdn.aisvc.visualstudio.com<br/>insightsportal-cdn-aimon.applicationinsights.io | dynamic | 80,443 |
## Application Insights SDKs
Note: *.loganalytics.io domain is owned by the Log Analytics team.
| Purpose | URI | IP | Ports |
|---|---|---|---|
| Application Insights JS SDK CDN | az416426.vo.msecnd.net<br/>js.monitor.azure.com | dynamic | 80,443 |
-## Action Group webhooks
+## Action group webhooks
+
+You can query the list of IP addresses used by action groups by using the [Get-AzNetworkServiceTag PowerShell command](/powershell/module/az.network/Get-AzNetworkServiceTag).
-You can query the list of IP addresses used by Action Groups using the [Get-AzNetworkServiceTag PowerShell command](/powershell/module/az.network/Get-AzNetworkServiceTag).
+### Action group service tag
-### Action Groups Service Tag
-Managing changes to Source IP addresses can be quite time consuming. Using **Service Tags** eliminates the need to update your configuration. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the IP addresses and automatically updates the service tag as addresses change, eliminating the need to update network security rules for an Action Group.
+Managing changes to source IP addresses can be time-consuming. Using *service tags* eliminates the need to update your configuration. A service tag represents a group of IP address prefixes from a specific Azure service. Microsoft manages the IP addresses and automatically updates the service tag as addresses change, which eliminates the need to update network security rules for an action group.
-1. In the Azure portal under Azure Services search for *Network Security Group*.
-2. Click on **Add** and create a Network Security Group.
+1. In the Azure portal under **Azure Services**, search for **Network Security Group**.
+1. Select **Add** and create a network security group:
- 1. Add the Resource Group Name and then enter *Instance Details*.
- 1. Click on **Review + Create** and then click *Create*.
+ 1. Add the resource group name, and then enter **Instance details** information.
+ 1. Select **Review + Create**, and then select **Create**.
- :::image type="content" source="../alerts/media/action-groups/action-group-create-security-group.png" alt-text="Example on how to create a Network Security Group."border="true":::
+ :::image type="content" source="../alerts/media/action-groups/action-group-create-security-group.png" alt-text="Screenshot that shows how to create a network security group." border="true":::
-3. Go to Resource Group and then click on *Network Security Group* you have created.
+1. Go to **Resource Group**, and then select the network security group you created:
- 1. Select *Inbound Security Rules*.
- 1. Click on **Add**.
+ 1. Select **Inbound security rules**.
+ 1. Select **Add**.
- :::image type="content" source="../alerts/media/action-groups/action-group-add-service-tag.png" alt-text="Example on how to add a service tag." border="true":::
+ :::image type="content" source="../alerts/media/action-groups/action-group-add-service-tag.png" alt-text="Screenshot that shows how to add inbound security rules." border="true":::
-4. A new window will open in right pane.
- 1. Select Source: **Service Tag**
- 1. Source Service Tag: **ActionGroup**
- 1. Click **Add**.
-
- :::image type="content" source="../alerts/media/action-groups/action-group-service-tag.png" alt-text="Example on how to add service tag." border="true":::
+1. A new window opens in the right pane:
+ 1. Under **Source**, select **Service Tag**.
+ 1. Under **Source service tag**, select **ActionGroup**.
+ 1. Select **Add**.
+
+ :::image type="content" source="../alerts/media/action-groups/action-group-service-tag.png" alt-text="Screenshot that shows how to add a service tag." border="true":::
## Profiler
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
Microsoft announces feature deprecations or breaking changes at least three years in advance and strives to provide a seamless process for migration to the replacement experience.
-The [Microsoft Azure SDK lifecycle policy](https://docs.microsoft.com/lifecycle/faq/azure) is followed when features are enhanced in a new SDK or before an SDK is designated as legacy. Microsoft strives to retain legacy SDK functionality, but newer features may not be available with older versions.
+The [Microsoft Azure SDK lifecycle policy](/lifecycle/faq/azure) is followed when features are enhanced in a new SDK or before an SDK is designated as legacy. Microsoft strives to retain legacy SDK functionality, but newer features may not be available with older versions.
> [!NOTE]
> Diagnostic tools often provide better insight into the root cause of a problem when the latest stable SDK version is used.
Support engineers are expected to provide SDK update guidance according to the f
|---|---|---|
|Stable and less than one year old | Newer supported stable version | **UPDATE RECOMMENDED** |
|Stable and more than one year old | Newer supported stable version | **UPDATE REQUIRED** |
-|Unsupported ([support policy](https://docs.microsoft.com/lifecycle/faq/azure)) | Any supported version | **UPDATE REQUIRED** |
+|Unsupported ([support policy](/lifecycle/faq/azure)) | Any supported version | **UPDATE REQUIRED** |
|Preview | Stable version | **UPDATE REQUIRED** |
|Preview | Older stable version | **UPDATE RECOMMENDED** |
|Preview | Newer preview version, no older stable version | **UPDATE RECOMMENDED** |
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Before using Activity log insights, you'll have to [enable sending logs to your
### How does Activity log insights work?
-Activity logs you send to a [Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview) are stored in a table called AzureActivity.
+Activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) are stored in a table called AzureActivity.
-Activity log insights are a curated [Log Analytics workbook](/azure/azure-monitor/visualize/workbooks-overview) with dashboards that visualize the data in the AzureActivity table. For example, which administrators deleted, updated or created resources, and whether the activities failed or succeeded.
+Activity log insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the AzureActivity table. For example, you can see which administrators deleted, updated, or created resources, and whether the activities failed or succeeded.
:::image type="content" source="media/activity-log/activity-logs-insights-main-screen.png" lightbox= "media/activity-log/activity-logs-insights-main-screen.png" alt-text="A screenshot showing Azure Activity logs insights dashboards.":::
To view Activity log insights on a resource level:
1. At the top of the **Activity Logs Insights** page, select:
    1. A time range for which to view data from the **TimeRange** dropdown.
- * **Azure Activity Log Entries** shows the count of Activity log records in each [activity log category](/azure/azure-monitor/essentials/activity-log#categories).
+ * **Azure Activity Log Entries** shows the count of Activity log records in each activity log category.
:::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot of Azure Activity Logs by Category Value":::
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
Title: Overview of Log Analytics in Azure Monitor
-description: Describes Log Analytics which is a tool in the Azure portal used to edit and run log queries for analyzing data in Azure Monitor Logs.
+description: This overview describes Log Analytics, which is a tool in the Azure portal used to edit and run log queries for analyzing data in Azure Monitor logs.
Last updated 10/04/2020 # Overview of Log Analytics in Azure Monitor
-Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You may write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze them. Or you may write a more advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend. Whether you work with the results of your queries interactively or use them with other Azure Monitor features such as log query alerts or workbooks, Log Analytics is the tool that you're going to use write and test them.
+Log Analytics is a tool in the Azure portal that's used to edit and run log queries with data in Azure Monitor Logs.
+
+You might write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze them. Or you might write a more advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend.
+
+Whether you work with the results of your queries interactively or use them with other Azure Monitor features, such as log query alerts or workbooks, Log Analytics is the tool that you'll use to write and test them.
> [!TIP]
-> This article provides a description of Log Analytics and each of its features. If you want to jump right into a tutorial, see [Log Analytics tutorial](./log-analytics-tutorial.md).
+> This article describes Log Analytics and its features. If you want to jump right into a tutorial, see [Log Analytics tutorial](./log-analytics-tutorial.md).
+## Start Log Analytics
+To start Log Analytics in the Azure portal, on the **Azure Monitor** menu, select **Logs**. You'll also see this option on the menu for most Azure resources. No matter where you start Log Analytics, the tool is the same. But the menu you use to start Log Analytics determines the data that's available.
-## Starting Log Analytics
-Start Log Analytics from **Logs** in the **Azure Monitor** menu in the Azure portal. You'll also see this option in the menu for most Azure resources. Regardless of where you start it from, it will be the same Log Analytics tool. The menu you use to start Log Analytics determines the data that will be available though. If you start it from the **Azure Monitor** menu or the **Log Analytics workspaces** menu, you'll have access to all of the records in a workspace. If you select **Logs** from another type of resource, then your data will be limited to log data for that resource. See [Log query scope and time range in Azure Monitor Log Analytics](./scope.md) for details.
+If you start Log Analytics from the **Azure Monitor** menu or the **Log Analytics workspaces** menu, you'll have access to all the records in a workspace. If you select **Logs** from another type of resource, your data will be limited to log data for that resource. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](./scope.md).
-[![Start Log Analytics](media/log-analytics-overview/start-log-analytics.png)](media/log-analytics-overview/start-log-analytics.png#lightbox)
+[![Screenshot that shows starting Log Analytics.](media/log-analytics-overview/start-log-analytics.png)](media/log-analytics-overview/start-log-analytics.png#lightbox)
-When you start Log Analytics, the first thing you'll see is a dialog box with [example queries](../logs/queries.md). These are categorized by solution, and you can browse or search for queries that match your particular requirements. You may be able to find one that does exactly what you need, or load one to the editor and modify it as required. Browsing through example queries is actually a great way to learn how to write your own queries.
+When you start Log Analytics, a dialog appears that contains [example queries](../logs/queries.md). The queries are categorized by solution. Browse or search for queries that match your requirements. You might find one that does exactly what you need. You can also load one to the editor and modify it as required. Browsing through example queries is a good way to learn how to write your own queries.
-Of course if you want to start with an empty script and write it yourself, you can close the example queries. Just click the **Queries** at the top of the screen if you want to get them back.
+If you want to start with an empty script and write it yourself, close the example queries. Select **Queries** at the top of the screen to get them back.
## Log Analytics interface
-The following image identifies the different components of Log Analytics.
-[![Log Analytics](media/log-analytics-overview/log-analytics.png)](media/log-analytics-overview/log-analytics.png#lightbox)
+The following image identifies four Log Analytics components.
+
+[![Screenshot that shows the Log Analytics interface with four features identified.](media/log-analytics-overview/log-analytics.png)](media/log-analytics-overview/log-analytics.png#lightbox)
-### 1. Top action bar
-Controls for working with the query in the query window.
+### Top action bar
+
+The top bar has controls for working with a query in the query window.
| Option | Description |
|:|:|
-| Scope | Specifies the scope of data used for the query. This could be all data in a Log Analytics workspace or data for a particular resource across multiple workspaces. See [Query scope](./scope.md). |
-| Run button | Click to run the selected query in the query window. You can also press shift+enter to run a query. |
-| Time picker | Select the time range for the data available to the query. This is overridden if you include a time filter in the query. See [Log query scope and time range in Azure Monitor Log Analytics](./scope.md). |
-| Save button | Save the query to the Query Explorer for the workspace. |
+| Scope | Specifies the scope of data used for the query. This could be all the data in a Log Analytics workspace or data for a particular resource across multiple workspaces. See [Query scope](./scope.md). |
+| Run button | Run the selected query in the query window. You can also select **Shift+Enter** to run a query. |
+| Time picker | Select the time range for the data available to the query. This action is overridden if you include a time filter in the query. See [Log query scope and time range in Azure Monitor Log Analytics](./scope.md). |
+| Save button | Save the query to **Query Explorer** for the workspace. |
| Copy button | Copy a link to the query, the query text, or the query results to the clipboard. |
| New alert rule button | Create a new tab with an empty query. |
| Export button | Export the results of the query to a CSV file or the query to Power Query Formula Language format for use with Power BI. |
| Pin to button | Pin the results of the query to an Azure dashboard or add them to an Azure workbook. |
| Format query button | Arrange the selected text for readability. |
-| Example queries button | Open the example queries dialog box that is displayed when you first open Log Analytics. |
-| Query Explorer button | Open **Query Explorer** which provides access to saved queries in the workspace. |
+| Example queries button | Open the example queries dialog that appears when you first open Log Analytics. |
+| Query Explorer button | Open **Query Explorer**, which provides access to saved queries in the workspace. |
+### Left sidebar
-### 2. Sidebar
-Lists of tables in the workspace, sample queries, and filter options for the current query.
+The sidebar on the left lists tables in the workspace, sample queries, and filter options for the current query.
| Tab | Description |
|:|:|
-| Tables | Lists the tables that are part of the selected scope. Select **Group by** to change the grouping of the tables. Hover over a table name to display a dialog box with a description of the table and options to view its documentation and to preview its data. Expand a table to view its columns. Double-click on a table or column name to add it to the query. |
-| Queries | List of example queries that you can open in the query window. This is the same list that's displayed when you open Log Analytics. Select **Group by** to change the grouping of the queries. Double-click on a query to add it to the query window or hover over it for other options. |
-| Filter | Creates filter options based on the results of a query. After you a run a query, columns will be displayed with different values from the results. Select one or more values and then click **Apply & Run** to add a **where** command to the query and run it again. |
+| Tables | Lists the tables that are part of the selected scope. Select **Group by** to change the grouping of the tables. Hover over a table name to display a dialog with a description of the table and options to view its documentation and preview its data. Expand a table to view its columns. Double-click a table or column name to add it to the query. |
+| Queries | List of example queries that you can open in the query window. This list is the same one that appears when you open Log Analytics. Select **Group by** to change the grouping of the queries. Double-click a query to add it to the query window or hover over it for other options. |
+| Filter | Creates filter options based on the results of a query. After you run a query, columns appear with different values from the results. Select one or more values, and then select **Apply & Run** to add a **where** command to the query and run it again. |
+
+### Query window
+
+The query window is where you edit your query. IntelliSense is used for KQL commands and color coding enhances readability. Select **+** at the top of the window to open another tab.
-### 3. Query window
-The query window is where you edit your query. This includes intellisense for KQL commands and color coding to enhance readability. Click the **+** at the top of the window to open another tab.
+A single window can include multiple queries. A query can't include any blank lines, so you can separate multiple queries in a window with one or more blank lines. The current query is the one with the cursor positioned anywhere in it.
-As single window can include multiple queries. A query cannot include any blank lines, so you can separate multiple queries in a window with one or more blank lines. The current query is the one with the cursor positioned anywhere in it.
+To run the current query, select the **Run** button or select **Shift+Enter**.
-To run the current query, click the **Run** button or press Shift+Enter.
+### Results window
-### 4. Results window
-The results of the query are displayed in the results window. By default, the results are displayed as a table. To display as a chart, either select **Chart** in the results window, or add a **render** command to your query.
+The results of a query appear in the results window. By default, the results are displayed as a table. To display the results as a chart, select **Chart** in the results window. You can also add a **render** command to your query.
#### Results view
-Displays query results in a table organized by columns and rows. Click to the left of a row to expand its values. Click on the **Columns** dropdown to change the list of columns. Sort the results by clicking on a column name. Filter the results by clicking the funnel next to a column name. Clear the filters and reset the sorting by running the query again.
-Select **Group columns** to display the grouping bar above the query results. Group the results by any column by dragging it to the bar. Create nested groups in the results by adding additional columns.
+The results view displays query results in a table organized by columns and rows. Click to the left of a row to expand its values. Select the **Columns** dropdown to change the list of columns. Sort the results by selecting a column name. Filter the results by selecting the funnel next to a column name. Clear the filters and reset the sorting by running the query again.
+
+Select **Group columns** to display the grouping bar above the query results. Group the results by any column by dragging it to the bar. Create nested groups in the results by adding more columns.
#### Chart view
-Displays the results as one of multiple available chart types. You can specify the chart type in a **render** command in your query or select it from the **Visualization Type** dropdown.
+
+The chart view displays the results as one of multiple available chart types. You can specify the chart type in a **render** command in your query. You can also select it from the **Visualization Type** dropdown.
| Option | Description |
|:|:|
-| **Visualization Type** | Type of chart to display. |
-| **X-Axis** | Column in the results to use for the X-Axis
-| **Y-Axis** | Column in the results to use for the Y-Axis. This will typically be a numeric column. |
-| **Split by** | Column in the results that defines the series in the chart. A series is created for each value in the column. |
-| **Aggregation** | Type of aggregation to perform on the numeric values in the Y-Axis. |
+| Visualization type | Type of chart to display. |
+| X-axis | Column in the results to use for the x-axis. |
+| Y-axis | Column in the results to use for the y-axis. Typically, this is a numeric column. |
+| Split by | Column in the results that defines the series in the chart. A series is created for each value in the column. |
+| Aggregation | Type of aggregation to perform on the numeric values in the y-axis. |
## Relationship to Azure Data Explorer
-If you're already familiar with the Azure Data Explorer Web UI, then Log Analytics should look familiar. That's because it's built on top of Azure Data Explorer and uses the same Kusto Query Language (KQL). Log Analytics adds features specific to Azure Monitor such as filtering by time range and the ability to create an alert rule from a query. Both tools included an explorer that lets you scan through the structure of available tables, but the Azure Data Explorer Web UI primarily works with tables in Azure Data Explorer databases while Log Analytics works with tables in a Log Analytics workspace.
+
+If you've worked with the Azure Data Explorer web UI, Log Analytics should look familiar. That's because it's built on top of Azure Data Explorer and uses the same Kusto Query Language.
+
+Log Analytics adds features specific to Azure Monitor, such as filtering by time range and the ability to create an alert rule from a query. Both tools include an explorer that lets you scan through the structure of available tables. The Azure Data Explorer web UI primarily works with tables in Azure Data Explorer databases. Log Analytics works with tables in a Log Analytics workspace.
## Next steps+
- Walk through a [tutorial on using Log Analytics in the Azure portal](./log-analytics-tutorial.md).
- Walk through a [tutorial on writing queries](./get-started-queries.md).
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-tutorial.md
Last updated 06/28/2021
# Log Analytics tutorial
-Log Analytics is a tool in the Azure portal to edit and run log queries from data collected by Azure Monitor Logs and interactively analyze their results. You can use Log Analytics queries to retrieve records that match particular criteria, identify trends, analyze patterns, and provide various insights into your data.
-This tutorial walks you through the Log Analytics interface, gets you started with some basic queries, and shows you how you can work with the results. You'll learn the following:
+Log Analytics is a tool in the Azure portal to edit and run log queries from data collected by Azure Monitor Logs and interactively analyze their results. You can use Log Analytics queries to retrieve records that match particular criteria, identify trends, analyze patterns, and provide various insights into your data.
+
+This tutorial walks you through the Log Analytics interface, gets you started with some basic queries, and shows you how you can work with the results. You'll learn how to:
> [!div class="checklist"]
-> * Understand the log data schema
-> * Write and run simple queries, and modify the time range for queries
-> * Filter, sort, and group query results
-> * View, modify, and share visuals of query results
-> * Load, export, and copy queries and results
+> * Understand the log data schema.
+> * Write and run simple queries, and modify the time range for queries.
+> * Filter, sort, and group query results.
+> * View, modify, and share visuals of query results.
+> * Load, export, and copy queries and results.
> [!IMPORTANT]
-> In this tutorial, you'll use Log Analytics features to build one query and use another example query. When you're ready to learn the syntax of queries and start directly editing the query itself, read the [Kusto Query Language tutorial](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor). That tutorial walks through example queries that you can edit and run in Log Analytics. It uses several of the features that you'll learn in this tutorial.
-
+> In this tutorial, you'll use Log Analytics features to build one query and use another example query. When you're ready to learn the syntax of queries and start directly editing the query itself, read the [Kusto Query Language tutorial](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor). That tutorial walks you through example queries that you can edit and run in Log Analytics. It uses several of the features that you'll learn in this tutorial.
## Prerequisites+
This tutorial uses the [Log Analytics demo environment](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade), which includes plenty of sample data that supports the sample queries. You can also use your own Azure subscription, but you might not have data in the same tables.
## Open Log Analytics
-Open the [Log Analytics demo environment](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade), or select **Logs** from the Azure Monitor menu in your subscription. This step will set the initial scope to a Log Analytics workspace, so that your query will select from all data in that workspace. If you select **Logs** from an Azure resource's menu, the scope is set to only records from that resource. For details about the scope, see [Log query scope](./scope.md).
+
+Open the [Log Analytics demo environment](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade), or select **Logs** from the Azure Monitor menu in your subscription. This step sets the initial scope to a Log Analytics workspace so that your query selects from all data in that workspace. If you select **Logs** from an Azure resource's menu, the scope is set to only records from that resource. For details about the scope, see [Log query scope](./scope.md).
You can view the scope in the upper-left corner of the screen. If you're using your own environment, you'll see an option to select a different scope. This option isn't available in the demo environment. :::image type="content" source="media/log-analytics-tutorial/log-analytics-query-scope.png" alt-text="Screenshot that shows the Log Analytics scope for the demo." lightbox="media/log-analytics-tutorial/log-analytics-query-scope.png"::: ## View table information
-The left side of the screen includes the **Tables** tab, where you can inspect the tables that are available in the current scope. These tables are grouped by **Solution** by default, but you can change their grouping or filter them.
-Expand the **Log Management** solution and locate the **AppRequests** table. You can expand the table to view its schema, or hover over its name to show more information about it.
+The left side of the screen includes the **Tables** tab, where you can inspect the tables that are available in the current scope. These tables are grouped by **Solution** by default, but you can change their grouping or filter them.
+
+Expand the **Log Management** solution and locate the **AppRequests** table. You can expand the table to view its schema, or hover over its name to show more information about it.
:::image type="content" source="media/log-analytics-tutorial/table-details.png" alt-text="Screenshot that shows the Tables view." lightbox="media/log-analytics-tutorial/table-details.png":::
Select the link below **Useful links** to go to the table reference that documen
:::image type="content" source="media/log-analytics-tutorial/preview-data.png" alt-text="Screenshot that shows preview data for the AppRequests table." lightbox="media/log-analytics-tutorial/preview-data.png"::: ## Write a query+ Let's write a query by using the **AppRequests** table. Double-click its name to add it to the query window. You can also type directly in the window. You can even get IntelliSense that will help complete the names of tables in the current scope and Kusto Query Language (KQL) commands.
-This is the simplest query that we can write. It just returns all the records in a table. Run it by selecting the **Run** button or by selecting Shift+Enter with the cursor positioned anywhere in the query text.
+This is the simplest query that we can write. It just returns all the records in a table. Run it by selecting the **Run** button or by selecting **Shift+Enter** with the cursor positioned anywhere in the query text.
:::image type="content" source="media/log-analytics-tutorial/query-results.png" alt-text="Screenshot that shows query results." lightbox="media/log-analytics-tutorial/query-results.png":::
-You can see that we do have results. The number of records that the query has returned appears in the lower-right corner.
+You can see that we do have results. The number of records that the query has returned appears in the lower-right corner.
### Time range
-All queries return records generated within a set time range. By default, the query returns records generated in the last 24 hours.
+All queries return records generated within a set time range. By default, the query returns records generated in the last 24 hours.
-You can set a different time range using the [where operator](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor#filter-by-boolean-expression-where-1) in the query, or using the **Time range** dropdown list at the top of the screen.
+You can set a different time range by using the [where operator](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor#filter-by-boolean-expression-where-1) in the query. You can also use the **Time range** dropdown list at the top of the screen.
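As a sketch of the first option, here's the same time filter expressed in the query itself, reusing the `$workspaceId` placeholder from the earlier sketch:

```powershell
# Filter to the last 12 hours inside the query with a where clause,
# instead of (or in addition to) the portal's Time range picker.
$query = @"
AppRequests
| where TimeGenerated > ago(12h)
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
```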
-Let's change the time range of the query by selecting **Last 12 hours** from the **Time range** dropdown. Select **Run** to return the results.
+Let's change the time range of the query by selecting **Last 12 hours** from the **Time range** dropdown. Select **Run** to return the results.
> [!NOTE]
-> Changing the time range using the **Time range** dropdown does not change the query in the query editor.
+> Changing the time range by using the **Time range** dropdown doesn't change the query in the query editor.
:::image type="content" source="media/log-analytics-tutorial/query-time-range.png" alt-text="Screenshot that shows the time range." lightbox="media/log-analytics-tutorial/query-time-range.png"::: - ### Multiple query conditions
-Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select **Get Home/Index** under **Name**, and then select **Apply & Run**.
+Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select **Get Home/Index** under **Name**, and then select **Apply & Run**.
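A sketch of the resulting query with both conditions; the `"GET Home/Index"` literal is an assumed value mirroring the portal selection:

```powershell
# Two filter conditions: a time window plus a specific request name.
$query = @"
AppRequests
| where TimeGenerated > ago(12h)
| where Name == "GET Home/Index"
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
```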
## Analyze results

In addition to helping you write and run queries, Log Analytics provides features for working with the results. Start by expanding a record to view the values for all of its columns.

:::image type="content" source="media/log-analytics-tutorial/expand-query-search-result.png" alt-text="Screenshot that shows a record expanded in the search results." lightbox="media/log-analytics-tutorial/expand-query-search-result.png":::
-Select the name of any column to sort the results by that column. Select the filter icon next to it to provide a filter condition. This is similar to adding a filter condition to the query itself, except that this filter is cleared if the query is run again. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
+Select the name of any column to sort the results by that column. Select the filter icon next to it to provide a filter condition. This action is similar to adding a filter condition to the query itself, except that this filter is cleared if the query is run again. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
-For example, set a filter on the **DurationMs** column to limit the records to those that took more than **150** milliseconds.
+For example, set a filter on the **DurationMs** column to limit the records to those that took more than **150** milliseconds.
:::image type="content" source="media/log-analytics-tutorial/query-results-filter.png" alt-text="Screenshot that shows a query results filter." lightbox="media/log-analytics-tutorial/query-results-filter.png"::: ### Search through query results
-Let's search through the query results using the search box at the top right of the results pane.
+Let's search through the query results by using the search box at the top right of the results pane.
-Enter **Chicago** in the query results search box and select the arrows to find all instances of this string in your search results.
+Enter **Chicago** in the query results search box, and select the arrows to find all instances of this string in your search results.
### Reorganize and summarize data

To better visualize your data, you can reorganize and summarize the data in the query results based on your needs.
-Select **Columns** to the right of the results pane to open the **Columns** sidebar.
-
+Select **Columns** to the right of the results pane to open the **Columns** sidebar.
+
-In the sidebar, you'll see a list of all available columns. Drag the **Url** column into the **Row Group** section. Results are now organized by that column, and you can collapse each group to help you with your analysis. This is similar to adding a filter condition to the query, but instead of refetching data from the server, you're processing the data your original query returned. When you run the query again, Log Analytics retrieves data based on your original query. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
+In the sidebar, you'll see a list of all available columns. Drag the **Url** column into the **Row Groups** section. Results are now organized by that column, and you can collapse each group to help you with your analysis. This action is similar to adding a filter condition to the query, but instead of refetching data from the server, you're processing the data your original query returned. When you run the query again, Log Analytics retrieves data based on your original query. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
:::image type="content" source="media/log-analytics-tutorial/query-results-grouped.png" alt-text="Screenshot that shows query results grouped by URL." lightbox="media/log-analytics-tutorial/query-results-grouped.png":::+ ### Create a pivot table
-To analyze the performance of your pages, create a pivot table.
+To analyze the performance of your pages, create a pivot table.
-In the **Columns** sidebar, select **Pivot Mode**.
+In the **Columns** sidebar, select **Pivot Mode**.
Select **Url** and **DurationMs** to show the total duration of all calls to each URL.
-To view the maximum call duration to each URL, select **sum(DurationMs)** > **max**.
+To view the maximum call duration to each URL, select **sum(DurationMs)** > **max**.
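The pivot table is assembled client-side, but the same aggregation can be expressed in the query itself. A rough sketch, again reusing the `$workspaceId` placeholder:

```powershell
# Rough query-side equivalent of the pivot: total and maximum duration per URL,
# sorted by the longest maximum call duration.
$query = @"
AppRequests
| summarize TotalDurationMs = sum(DurationMs), MaxDurationMs = max(DurationMs) by Url
| sort by MaxDurationMs desc
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
```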
:::image type="content" source="media/log-analytics-tutorial/log-analytics-pivot-table.png" alt-text="Screenshot that shows how to turn on Pivot Mode and configure a pivot table based on the URL and DurationMS values." lightbox="media/log-analytics-tutorial/log-analytics-pivot-table.png"::: Now let's sort the results by longest maximum call duration by selecting the **max(DurationMs)** column in the results pane. ## Work with charts+ Let's look at a query that uses numerical data that we can view in a chart. Instead of building a query, we'll select an example query. Select **Queries** on the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have various queries in multiple categories. If you're using the demo environment, you might see only a single **Log Analytics workspaces** category. Expand that to view the queries in the category.
-Select the query called **Function Error rate** in the **Applications** category. This step adds the query to the query window. Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, so these are considered separate queries.
+Select the query called **Function Error rate** in the **Applications** category. This step adds the query to the query window. Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, so these are considered separate queries.
:::image type="content" source="media/log-analytics-tutorial/example-query.png" alt-text="Screenshot that shows a new query." lightbox="media/log-analytics-tutorial/example-query.png":::
The current query is the one that the cursor is positioned on. You can see that
:::image type="content" source="media/log-analytics-tutorial/example-query-output-table.png" alt-text="Screenshot that shows the query results table." lightbox="media/log-analytics-tutorial/example-query-output-table.png":::
-To view the results in a graph, select **Chart** on the results pane. Notice that there are various options for working with the chart, such as changing it to another type.
+To view the results in a graph, select **Chart** on the results pane. Notice that there are various options for working with the chart, such as changing it to another type.
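For orientation, here's a hypothetical time-series query in the spirit of an error-rate chart (the Function Error rate query itself isn't reproduced here). The portal honors the `render` hint; API clients receive the same data as a table:

```powershell
# Failed requests per hour, with a charting hint for clients that honor it.
$query = @"
AppRequests
| where Success == false
| summarize FailedCount = count() by bin(TimeGenerated, 1h)
| render timechart
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
```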
:::image type="content" source="media/log-analytics-tutorial/example-query-output-chart.png" alt-text="Screenshot that shows the query results chart." lightbox="media/log-analytics-tutorial/example-query-output-chart.png"::: - ## Next steps Now that you know how to use Log Analytics, complete the tutorial on using log queries:
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Title: Log Analytics workspace overview
-description: Overview of Log Analytics workspace which store data for Azure Monitor Logs.
+description: Overview of Log Analytics workspace, which stores data for Azure Monitor Logs.
Last updated 05/15/2022

# Log Analytics workspace overview
-A Log Analytics workspace is a unique environment for log data from Azure Monitor and other Azure services such as Microsoft Sentinel and Microsoft Defender for Cloud. Each workspace has its own data repository and configuration but may combine data from multiple services. This article provides an overview of concepts related to Log Analytics workspaces and provides links to other documentation for more details on each.
+
+A Log Analytics workspace is a unique environment for log data from Azure Monitor and other Azure services, such as Microsoft Sentinel and Microsoft Defender for Cloud. Each workspace has its own data repository and configuration but might combine data from multiple services. This article provides an overview of concepts related to Log Analytics workspaces and provides links to other documentation for more details on each.
> [!IMPORTANT]
-> You may see the term *Microsoft Sentinel workspace* used in [Microsoft Sentinel](../../sentinel/overview.md) documentation. This is the same Log Analytics workspace described in this article but enabled for Microsoft Sentinel. This subjects all data in the workspace to Sentinel pricing as described in [Cost](#cost) below.
+> You might see the term *Microsoft Sentinel workspace* used in [Microsoft Sentinel](../../sentinel/overview.md) documentation. This workspace is the same Log Analytics workspace described in this article, but it's enabled for Microsoft Sentinel. All data in the workspace is subject to Microsoft Sentinel pricing as described in the [Cost](#cost) section.
-You can use a single workspace for all your data collection, or you may create multiple workspaces based on a variety of requirements such as the geographic location of the data, access rights that define which users can access data, and configuration settings such as the pricing tier and data retention.
+You can use a single workspace for all your data collection. You can also create multiple workspaces based on requirements such as:
-To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Design a Log Analytics workspace configuration](./workspace-design.md).
+- The geographic location of the data.
+- Access rights that define which users can access data.
+- Configuration settings like pricing tiers and data retention.
+To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Design a Log Analytics workspace configuration](./workspace-design.md).
## Data structure

Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns. Rows of data provided by the data source share those columns. Log queries define columns of data to retrieve and provide output to different features of Azure Monitor and other services that use workspaces.

[![Diagram that shows the Azure Monitor Logs structure.](media/data-platform-logs/logs-structure.png)](media/data-platform-logs/logs-structure.png#lightbox)

## Cost
-There is no direct cost for creating or maintaining a workspace. You're charged for the data sent to it (data ingestion) and how long that data is stored (data retention). These costs may vary based on the data plan of each table as described in [Log data plans (preview)](#log-data-plans-preview).
-See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for detailed pricing and [Azure Monitor best practices - Cost management](../best-practices-cost.md) for guidance on reducing your costs. If you are using your Log Analytics workspace with services other than Azure Monitor, then see the documentation for those services for pricing information.
+There's no direct cost for creating or maintaining a workspace. You're charged for the data sent to it (data ingestion) and for how long that data is stored (data retention). These costs might vary based on the data plan of each table, as described in [Log data plans (preview)](#log-data-plans-preview).
+
+For information on pricing, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). For guidance on how to reduce your costs, see [Azure Monitor best practices - Cost management](../best-practices-cost.md). If you're using your Log Analytics workspace with services other than Azure Monitor, see the documentation for those services for pricing information.
## Log data plans (preview)
-By default, all tables in a workspace are **Analytics** tables, which are available to all features of Azure Monitor and any other services that use the workspace. You can configure certain tables as **Basic Logs (preview)** to reduce the cost of storing high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features.
-The following table gives a brief summary of the two plans. See [Configure Basic Logs in Azure Monitor (Preview)](basic-logs-configure.md) for more details on Basic Logs and how to configure them.
+By default, all tables in a workspace are **Analytics** tables, which are available to all features of Azure Monitor and any other services that use the workspace. You can configure certain tables as **Basic Logs (preview)** to reduce the cost of storing high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts. Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features.
-> [!NOTE]
-> Basic Logs are currently in public preview. You can currently work with Basic Logs tables in the Azure Portal and using a limited number of other components. The Basic Logs feature is not available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
+The following table summarizes the two plans. For more information on Basic Logs and how to configure them, see [Configure Basic Logs in Azure Monitor (preview)](basic-logs-configure.md).
-The following table summarizes the differences between the plans.
+> [!NOTE]
+> Basic Logs are in public preview. You can currently work with Basic Logs tables in the Azure portal and use a limited number of other components. The Basic Logs feature isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
| Category | Analytics Logs | Basic Logs |
|:|:|:|
| Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
-| Log queries | No additional cost. Full query capabilities. | Additional cost.<br>[Subset of query capabilities](basic-logs-query.md#limitations). |
+| Log queries | No extra cost. Full query capabilities. | Extra cost.<br>[Subset of query capabilities](basic-logs-query.md#limitations). |
| Retention | Configure retention from 30 days to 730 days. | Retention fixed at 8 days. |
| Alerts | Supported. | Not supported. |

## Ingestion-time transformations
-[Data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that define data coming into Azure Monitor can include transformations that allow you to filter and transform data before it's ingested into the workspace. Since all workflows don't yet support DCRs, each workspace can define ingestion-time transformations. This allows you filter or transform data before it's stored.
-[Ingestion-time transformations](ingestion-time-transformations.md) are defined for each table in a workspace and apply to all data sent to that table, even if sent from multiple sources. Ingestion-time transformations though only apply to workflows that don't already use a data collection rule. For example, [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) uses a data collection rule to define data collected from virtual machines. This data will not be subject to any ingestion-time transformations defined in the workspace.
+[Data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that define data coming into Azure Monitor can include transformations that allow you to filter and transform data before it's ingested into the workspace. Because not all workflows support DCRs yet, each workspace can also define ingestion-time transformations, which let you filter or transform data before it's stored.
-For example, you might have [diagnostic settings](../essentials/diagnostic-settings.md) that send [resource logs](../essentials/resource-logs.md) for different Azure resources to your workspace. You can create a transformation for the table that collects the resource logs that filters this data for only records that you want, saving you the ingestion cost for records you don't need. You may also want to extract important data from certain columns and store it in additional columns in the workspace to support simpler queries.
+[Ingestion-time transformations](ingestion-time-transformations.md) are defined for each table in a workspace and apply to all data sent to that table, even if sent from multiple sources. Ingestion-time transformations only apply to workflows that don't already use a DCR. For example, [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) uses a DCR to define data collected from virtual machines. This data won't be subject to any ingestion-time transformations defined in the workspace.
+For example, you might have [diagnostic settings](../essentials/diagnostic-settings.md) that send [resource logs](../essentials/resource-logs.md) for different Azure resources to your workspace. You can create a transformation for the table that collects the resource logs to filter this data for only the records you want. This method saves you the ingestion cost for records you don't need. You might also want to extract important data from certain columns and store it in other columns in the workspace to support simpler queries.
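For a concrete sense of what a transformation is: it's a KQL statement applied to the incoming stream, which is addressed as `source`. A hypothetical filter-and-project, held here as the query string you'd supply (the column names are assumptions, not from this article):

```powershell
# Hypothetical ingestion-time transformation KQL held in a string:
# drop verbose records and keep only the columns worth storing.
$transformKql = @"
source
| where SeverityLevel != 'Verbose'
| project TimeGenerated, ResourceId, OperationName, ResultDescription
"@
```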
## Data retention and archive

Data in each table in a [Log Analytics workspace](log-analytics-workspace-overview.md) is retained for a specified period of time, after which it's either removed or archived with a reduced retention fee. Set the retention time to balance your requirement for having data available with reducing your cost for data retention.

> [!NOTE]
> Archive is currently in public preview.
-To access archived data, you must first retrieve data from it in an Analytics Logs table using one of the following methods:
+To access archived data, you must first retrieve data from it in an Analytics Logs table by using one of the following methods:
| Method | Description |
|:|:|
-| [Search Jobs](search-jobs.md) | Retrieve data matching particular criteria. |
+| [Search jobs](search-jobs.md) | Retrieve data matching particular criteria. |
| [Restore](restore.md) | Retrieve data from a particular time range. |

## Permissions
-Permission to data in a Log Analytics workspace is defined by the [access control mode](manage-access.md#access-control-mode), which is a setting on each workspace. Users can either be given explicit access to the workspace using a [built-in or custom role](../roles-permissions-security.md), or you can allow access to data collected for Azure resources to users with access to those resources.
-See [Manage access to log data and workspaces in Azure Monitor](manage-access.md) for details on the different permission options and on configuring permissions.
+Permission to access data in a Log Analytics workspace is defined by the [access control mode](manage-access.md#access-control-mode), which is a setting on each workspace. You can give users explicit access to the workspace by using a [built-in or custom role](../roles-permissions-security.md). Or, you can allow access to data collected for Azure resources to users with access to those resources.
+
+See [Manage access to log data and workspaces in Azure Monitor](manage-access.md) for information on the different permission options and how to configure permissions.
## Next steps

-- [Create a new Log Analytics workspace](quick-create-workspace.md)
-- See Design a Log Analytics workspace configuration(workspace-design.md) for considerations on creating multiple workspaces.
-- [Learn about log queries to retrieve and analyze data from a Log Analytics workspace.](./log-query-overview.md)
+- [Create a new Log Analytics workspace](quick-create-workspace.md).
+- See [Design a Log Analytics workspace configuration](workspace-design.md) for considerations on creating multiple workspaces.
+- [Learn about log queries to retrieve and analyze data from a Log Analytics workspace](./log-query-overview.md).
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
Title: Manage access to Log Analytics workspaces
-description: You can manage access to data stored in a Log Analytics workspace in Azure Monitor using resource, workspace, or table-level permissions. This article details how to complete.
+description: This article explains how you can manage access to data stored in a Log Analytics workspace in Azure Monitor by using resource, workspace, or table-level permissions.
Last updated 03/22/2022
# Manage access to Log Analytics workspaces
- The data in a Log Analytics workspace that a user can access is determined by a combination of factors including settings on the workspace itself, the user's access to resources sending data to the workspace, and the method that the user accesses the workspace. This article describes how access is managed and how to perform any required configuration.
+
+ The data in a Log Analytics workspace that you can access is determined by a combination of the following factors:
+
+- The settings on the workspace itself.
+- The access to resources sending data to the workspace.
+- The method used to access the workspace.
+
+This article describes how access is managed and how to perform any required configuration.
## Overview
-The factors that define the data a user can access are briefly described in the following table. Each is further described in the sections below.
+
+The factors that define the data you can access are described in the following table. Each factor is further described in the sections that follow.
| Factor | Description |
|:|:|
-| [Access mode](#access-mode) | Method the user uses to access the workspace. Defines the scope of the data available and the access control mode that's applied. |
+| [Access mode](#access-mode) | Method used to access the workspace. Defines the scope of the data available and the access control mode that's applied. |
| [Access control mode](#access-control-mode) | Setting on the workspace that defines whether permissions are applied at the workspace or resource level. |
-| [Azure RBAC](#azure-rbac) | Permissions applied to individual or groups of users for the workspace or resource sending data to the workspace. Defines what data the user will have access to. |
-| [Table level Azure RBAC](#table-level-azure-rbac) | Optional permissions that defines specific data types in the workspace that a user can access. Apply to all users regardless of their access mode or access control mode. |
-
+| [Azure role-based access control (RBAC)](#azure-rbac) | Permissions applied to individuals or groups of users for the workspace or resource sending data to the workspace. Defines what data you have access to. |
+| [Table-level Azure RBAC](#table-level-azure-rbac) | Optional permissions that define specific data types in the workspace that you can access. Apply to all users regardless of their access mode or access control mode. |
## Access mode
-The *access mode* refers to how a user accesses a Log Analytics workspace and defines the data they can access during the current session. The mode is determined according to the [scope](scope.md) you select in Log Analytics.
-There are two access modes:
+The *access mode* refers to how you access a Log Analytics workspace and defines the data you can access during the current session. The mode is determined according to the [scope](scope.md) you select in Log Analytics.
-- **Workspace-context**: You can view all logs in the workspace that you have permission to. Queries in this mode are scoped to all data in all tables in the workspace. This is the access mode used when logs are accessed with the workspace as the scope, such as when you select **Logs** from the **Azure Monitor** menu in the Azure portal.
+There are two access modes:
+- **Workspace-context**: You can view all logs in the workspace for which you have permission. Queries in this mode are scoped to all data in all tables in the workspace. This access mode is used when logs are accessed with the workspace as the scope, such as when you select **Logs** on the **Azure Monitor** menu in the Azure portal.
+ - **Resource-context**: When you access the workspace for a particular resource, resource group, or subscription, such as when you select **Logs** from a resource menu in the Azure portal, you can view logs for only that resource in all tables that you have access to. Queries in this mode are scoped to only data associated with that resource. This mode also enables granular Azure RBAC. Workspaces use a resource-context log model where every log record emitted by an Azure resource is automatically associated with this resource.
-
-Records are only available in resource-context queries if they are associated with the relevant resource. You can check this association by running a query and verifying that the [_ResourceId](./log-standard-columns.md#_resourceid) column is populated.
+Records are only available in resource-context queries if they're associated with the relevant resource. To check this association, run a query and verify that the [_ResourceId](./log-standard-columns.md#_resourceid) column is populated.
There are known limitations with the following resources:

-- Computers outside of Azure. Resource-context is only supported with [Azure Arc for Servers](../../azure-arc/servers/index.yml).
-- Application Insights. Supported for resource-context only when using [Workspace-based Application Insights resource](../app/create-workspace-resource.md)
-- Service Fabric
-
+- **Computers outside of Azure**: Resource-context is only supported with [Azure Arc for servers](../../azure-arc/servers/index.yml).
+- **Application Insights**: Supported for resource-context only when using a [workspace-based Application Insights resource](../app/create-workspace-resource.md).
+- **Azure Service Fabric**
-### Comparing access modes
+### Compare access modes
The following table summarizes the access modes:

| Issue | Workspace-context | Resource-context |
|:|:|:|
| Who is each model intended for? | Central administration.<br>Administrators who need to configure data collection and users who need access to a wide variety of resources. Also currently required for users who need to access logs for resources outside of Azure. | Application teams.<br>Administrators of Azure resources being monitored. Allows them to focus on their resource without filtering. |
-| What does a user require to view logs? | Permissions to the workspace.<br>See **Workspace permissions** in [Manage access using workspace permissions](./manage-access.md#azure-rbac). | Read access to the resource.<br>See **Resource permissions** in [Manage access using Azure permissions](./manage-access.md#azure-rbac). Permissions can be inherited from the resource group or subscription or directly assigned to the resource. Permission to the logs for the resource will be automatically assigned. The user doesn't require access to the workspace.|
-| What is the scope of permissions? | Workspace.<br>Users with access to the workspace can query all logs in the workspace from tables that they have permissions to. See [Table access control](./manage-access.md#table-level-azure-rbac) | Azure resource.<br>User can query logs for specific resources, resource groups, or subscription they have access to in any workspace but can't query logs for other resources. |
-| How can user access logs? | Start **Logs** from **Azure Monitor** menu.<br><br>Start **Logs** from **Log Analytics workspaces**.<br><br>From Azure Monitor [Workbooks](../best-practices-analysis.md#workbooks). | Start **Logs** from the menu for the Azure resource. User will have access to data for that resource.<br><br>Start **Logs** from **Azure Monitor** menu. User will have access to data for all resources they have access to.<br><br>Start **Logs** from **Log Analytics workspaces**. User will have access to data for all resources they have access to.<br><br>From Azure Monitor [Workbooks](../best-practices-analysis.md#workbooks). |
+| What does a user require to view logs? | Permissions to the workspace.<br>See "Workspace permissions" in [Manage access using workspace permissions](./manage-access.md#azure-rbac). | Read access to the resource.<br>See "Resource permissions" in [Manage access using Azure permissions](./manage-access.md#azure-rbac). Permissions can be inherited from the resource group or subscription or directly assigned to the resource. Permission to the logs for the resource will be automatically assigned. The user doesn't require access to the workspace.|
+| What is the scope of permissions? | Workspace.<br>Users with access to the workspace can query all logs in the workspace from tables they have permissions to. See [Table access control](./manage-access.md#table-level-azure-rbac). | Azure resource.<br>Users can query logs for specific resources, resource groups, or subscriptions they have access to in any workspace, but they can't query logs for other resources. |
+| How can a user access logs? | On the **Azure Monitor** menu, select **Logs**.<br><br>Select **Logs** from **Log Analytics workspaces**.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#workbooks). | Select **Logs** on the menu for the Azure resource. Users will have access to data for that resource.<br><br>Select **Logs** on the **Azure Monitor** menu. Users will have access to data for all resources they have access to.<br><br>Select **Logs** from **Log Analytics workspaces**. Users will have access to data for all resources they have access to.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#workbooks). |
## Access control mode
-The *Access control mode* is a setting on each workspace that defines how permissions are determined for the workspace.
+The *access control mode* is a setting on each workspace that defines how permissions are determined for the workspace.
-* **Require workspace permissions**. This control mode does not allow granular Azure RBAC. For a user to access the workspace, they must be [granted permissions to the workspace](#azure-rbac) or to [specific tables](#table-level-azure-rbac).
+* **Require workspace permissions**. This control mode doesn't allow granular Azure RBAC. To access the workspace, the user must be [granted permissions to the workspace](#azure-rbac) or to [specific tables](#table-level-azure-rbac).
If a user accesses the workspace in [workspace-context mode](#access-mode), they have access to all data in any table they've been granted access to. If a user accesses the workspace in [resource-context mode](#access-mode), they have access to only data for that resource in any table they've been granted access to.
- This is the default setting for all workspaces created before March 2019.
+ This setting is the default for all workspaces created before March 2019.
-* **Use resource or workspace permissions**. This control mode allows granular Azure RBAC. Users can be granted access to only data associated with resources they can view by assigning Azure `read` permission.
+* **Use resource or workspace permissions**. This control mode allows granular Azure RBAC. Users can be granted access to only data associated with resources they can view by assigning Azure `read` permission.
When a user accesses the workspace in [workspace-context mode](#access-mode), workspace permissions apply. When a user accesses the workspace in [resource-context mode](#access-mode), only resource permissions are verified, and workspace permissions are ignored. Enable Azure RBAC for a user by removing them from workspace permissions and allowing their resource permissions to be recognized.
- This is the default setting for all workspaces created after March 2019.
+ This setting is the default for all workspaces created after March 2019.
> [!NOTE]
- > If a user has only resource permissions to the workspace, they are only able to access the workspace using resource-context mode assuming the workspace access mode is set to **Use resource or workspace permissions**.
+ > If a user has only resource permissions to the workspace, they can only access the workspace by using resource-context mode assuming the workspace access mode is set to **Use resource or workspace permissions**.
### Configure access control mode for a workspace

# [Azure portal](#tab/portal)

View the current workspace access control mode on the **Overview** page for the workspace in the **Log Analytics workspace** menu.
-![View workspace access control mode](media/manage-access/view-access-control-mode.png)
+![Screenshot that shows the workspace access control mode.](media/manage-access/view-access-control-mode.png)
-Change this setting from the **Properties** page of the workspace. Changing the setting will be disabled if you don't have permissions to configure the workspace.
+Change this setting on the **Properties** page of the workspace. If you don't have permissions to configure the workspace, changing the setting is disabled.
-![Change workspace access mode](media/manage-access/change-access-control-mode.png)
+![Screenshot that shows changing workspace access mode.](media/manage-access/change-access-control-mode.png)
# [PowerShell](#tab/powershell)
DefaultWorkspace38917: True
DefaultWorkspace21532: False
```
-A value of `False` means the workspace is configured with *workspace-context* access mode. A value of `True` means the workspace is configured with *resource-context* access mode.
+A value of `False` means the workspace is configured with *workspace-context* access mode. A value of `True` means the workspace is configured with *resource-context* access mode.
> [!NOTE]
-> If a workspace is returned without a boolean value and is blank, this also matches the results of a `False` value.
+> If a workspace is returned without a Boolean value and is blank, this result also matches the results of a `False` value.
> Use the following script to set the access control mode for a specific workspace to *resource-context* permission:
Set-AzResource -ResourceId $_.ResourceId -Properties $_.Properties -Force
# [Resource Manager](#tab/arm)
-To configure the access mode in an Azure Resource Manager template, set the **enableLogAccessUsingOnlyResourcePermissions** feature flag on the workspace to one of the following values.
+To configure the access mode in an Azure Resource Manager template, set the **enableLogAccessUsingOnlyResourcePermissions** feature flag on the workspace to one of the following values:
-* **false**: Set the workspace to *workspace-context* permissions. This is the default setting if the flag isn't set.
+* **false**: Set the workspace to *workspace-context* permissions. This setting is the default if the flag isn't set.
* **true**: Set the workspace to *resource-context* permissions.

## Azure RBAC
-Access to a workspace is managed using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md). To grant access to the Log Analytics workspace using Azure permissions, follow the steps in [assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+
+Access to a workspace is managed by using [Azure RBAC](../../role-based-access-control/role-assignments-portal.md). To grant access to the Log Analytics workspace by using Azure permissions, follow the steps in [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+ ### Workspace permissions
-Each workspace can have multiple accounts associated with it, and each account can have access to multiple workspaces. The following table lists the Azure permissions for different workspace actions:
-|Action |Azure Permissions Needed |Notes |
+Each workspace can have multiple accounts associated with it. Each account can have access to multiple workspaces. The following table lists the Azure permissions for different workspace actions:
+
+|Action |Azure permissions needed |Notes |
|-|-||
-| Change the pricing tier | `Microsoft.OperationalInsights/workspaces/*/write` |
-| Creating a workspace in the Azure portal | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/workspaces/*` |
-| View workspace basic properties and enter the workspace blade in the portal | `Microsoft.OperationalInsights/workspaces/read` |
-| Query logs using any interface | `Microsoft.OperationalInsights/workspaces/query/read` |
-| Access all log types using queries | `Microsoft.OperationalInsights/workspaces/query/*/read` |
-| Access a specific log table | `Microsoft.OperationalInsights/workspaces/query/<table_name>/read` |
-| Read the workspace keys to allow sending logs to this workspace | `Microsoft.OperationalInsights/workspaces/sharedKeys/action` |
-| Add and remove monitoring solutions | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/*` <br> `Microsoft.OperationsManagement/*` <br> `Microsoft.Automation/*` <br> `Microsoft.Resources/deployments/*/write`<br><br>These permissions need to be granted at resource group or subscription level. |
-| View data in the *Backup* and *Site Recovery* solution tiles | Administrator / Co-administrator<br><br>Accesses resources deployed using the classic deployment model |
+| Change the pricing tier. | `Microsoft.OperationalInsights/workspaces/*/write` |
+| Create a workspace in the Azure portal. | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/workspaces/*` |
+| View workspace basic properties and enter the workspace pane in the portal. | `Microsoft.OperationalInsights/workspaces/read` |
+| Query logs by using any interface. | `Microsoft.OperationalInsights/workspaces/query/read` |
+| Access all log types by using queries. | `Microsoft.OperationalInsights/workspaces/query/*/read` |
+| Access a specific log table. | `Microsoft.OperationalInsights/workspaces/query/<table_name>/read` |
+| Read the workspace keys to allow sending logs to this workspace. | `Microsoft.OperationalInsights/workspaces/sharedKeys/action` |
+| Add and remove monitoring solutions. | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/*` <br> `Microsoft.OperationsManagement/*` <br> `Microsoft.Automation/*` <br> `Microsoft.Resources/deployments/*/write`<br><br>These permissions need to be granted at resource group or subscription level. |
+| View data in the **Backup** and **Site Recovery** solution tiles. | Administrator/Co-administrator<br><br>Accesses resources deployed by using the classic deployment model. |
### Built-in roles

Assign users to these roles to give them access at different scopes:
-* Subscription - Access to all workspaces in the subscription
-* Resource Group - Access to all workspace in the resource group
-* Resource - Access to only the specified workspace
+* **Subscription**: Access to all workspaces in the subscription
+* **Resource group**: Access to all workspaces in the resource group
+* **Resource**: Access to only the specified workspace
Create assignments at the resource level (workspace) to ensure accurate access control. Use [custom roles](../../role-based-access-control/custom-roles.md) to create roles with the specific permissions needed.

> [!NOTE]
-> To add and remove users to a user role, you must to have `Microsoft.Authorization/*/Delete` and `Microsoft.Authorization/*/Write` permission.
-
+> To add users to or remove users from a role, you must have `Microsoft.Authorization/*/Delete` and `Microsoft.Authorization/*/Write` permissions.
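For example, a workspace-scoped assignment of one of the built-in roles described next might look like this sketch; the sign-in name and every ID in the scope are placeholders:

```powershell
# Assign the built-in Log Analytics Reader role at the scope of one workspace.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Log Analytics Reader" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```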
#### Log Analytics Reader
-Members of the *Log Analytics Reader* role can view all monitoring data and monitoring settings, including the configuration of Azure diagnostics on all Azure resources.
-Members of the *Log Analytics Reader* role can:
+Members of the Log Analytics Reader role can view all monitoring data and monitoring settings, including the configuration of Azure diagnostics on all Azure resources.
+
+Members of the Log Analytics Reader role can:
-- View and search all monitoring data
+- View and search all monitoring data.
- View monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources.
-*Log Analytics Reader* includes the following Azure actions:
+The Log Analytics Reader role includes the following Azure actions:
| Type | Permission | Description |
| - | - | -- |
-| Action | `*/read` | Ability to view all Azure resources and resource configuration.<br>Includes viewing:<br>- Virtual machine extension status<br>- Configuration of Azure diagnostics on resources<br>- All properties and settings of all resources.<br><br>For workspaces, allows full unrestricted permissions to read the workspace settings and query data. See more granular options above. |
-| Action | `Microsoft.Support/*` | Ability to open support cases |
-|Not Action | `Microsoft.OperationalInsights/workspaces/sharedKeys/read` | Prevents reading of workspace key required to use the data collection API and to install agents. This prevents the user from adding new resources to the workspace |
+| Action | `*/read` | Ability to view all Azure resources and resource configuration.<br>Includes viewing:<br>- Virtual machine extension status.<br>- Configuration of Azure diagnostics on resources.<br>- All properties and settings of all resources.<br><br>For workspaces, allows full unrestricted permissions to read the workspace settings and query data. See more granular options in the preceding list. |
+| Action | `Microsoft.Support/*` | Ability to open support cases. |
+|Not Action | `Microsoft.OperationalInsights/workspaces/sharedKeys/read` | Prevents reading of workspace key required to use the data collection API and to install agents. This prevents the user from adding new resources to the workspace. |
| Action | `Microsoft.OperationalInsights/workspaces/analytics/query/action` | Deprecated. |
| Action | `Microsoft.OperationalInsights/workspaces/search/action` | Deprecated. |

#### Log Analytics Contributor
-Members of the *Log Analytics Contributor* role can:
-- Read all monitoring data granted by the *Log Analytics Reader role*.
-- Edit monitoring settings for Azure resources, including
- - Adding the VM extension to VMs
- - Configuring Azure diagnostics on all Azure resources
-- Create and configure Automation accounts. Permission needs to be granted at the resource group or subscription level.
-- Add and remove management solutions. Permission needs to be granted at the resource group or subscription level.
-- Read storage account keys
-- Configure the collection of logs from Azure Storage
+Members of the Log Analytics Contributor role can:
+- Read all monitoring data granted by the Log Analytics Reader role.
+- Edit monitoring settings for Azure resources, including:
+ - Adding the VM extension to VMs.
+ - Configuring Azure diagnostics on all Azure resources.
+- Create and configure Automation accounts. Permission must be granted at the resource group or subscription level.
+- Add and remove management solutions. Permission must be granted at the resource group or subscription level.
+- Read storage account keys.
+- Configure the collection of logs from Azure Storage.
> [!WARNING] > You can use the permission to add a virtual machine extension to a virtual machine to gain full control over a virtual machine.
The Log Analytics Contributor role includes the following Azure actions:
| Permission | Description |
| - | -- |
-| `*/read` | Ability to view all Azure resources and resource configuration.<br><br>Includes viewing:<br>- Virtual machine extension status<br>- Configuration of Azure diagnostics on resources<br>- All properties and settings of all resources.<br><br>For workspaces, allows full unrestricted permissions to read the workspace settings and query data. See more granular options above. |
-| `Microsoft.Automation/automationAccounts/*` | Ability to create and configure Azure Automation accounts, including adding and editing runbooks |
-| `Microsoft.ClassicCompute/virtualMachines/extensions/*` <br> `Microsoft.Compute/virtualMachines/extensions/*` | Add, update and remove virtual machine extensions, including the Microsoft Monitoring Agent extension and the OMS Agent for Linux extension |
-| `Microsoft.ClassicStorage/storageAccounts/listKeys/action` <br> `Microsoft.Storage/storageAccounts/listKeys/action` | View the storage account key. Required to configure Log Analytics to read logs from Azure storage accounts |
-| `Microsoft.Insights/alertRules/*` | Add, update, and remove alert rules |
-| `Microsoft.Insights/diagnosticSettings/*` | Add, update, and remove diagnostics settings on Azure resources |
+| `*/read` | Ability to view all Azure resources and resource configuration.<br><br>Includes viewing:<br>- Virtual machine extension status.<br>- Configuration of Azure diagnostics on resources.<br>- All properties and settings of all resources.<br><br>For workspaces, allows full unrestricted permissions to read the workspace settings and query data. See more granular options in the preceding list. |
+| `Microsoft.Automation/automationAccounts/*` | Ability to create and configure Azure Automation accounts, including adding and editing runbooks. |
+| `Microsoft.ClassicCompute/virtualMachines/extensions/*` <br> `Microsoft.Compute/virtualMachines/extensions/*` | Add, update, and remove virtual machine extensions, including the Microsoft Monitoring Agent extension and the OMS Agent for Linux extension. |
+| `Microsoft.ClassicStorage/storageAccounts/listKeys/action` <br> `Microsoft.Storage/storageAccounts/listKeys/action` | View the storage account key. Required to configure Log Analytics to read logs from Azure Storage accounts. |
+| `Microsoft.Insights/alertRules/*` | Add, update, and remove alert rules. |
+| `Microsoft.Insights/diagnosticSettings/*` | Add, update, and remove diagnostics settings on Azure resources. |
| `Microsoft.OperationalInsights/*` | Add, update, and remove configuration for Log Analytics workspaces. To edit workspace advanced settings, user needs `Microsoft.OperationalInsights/workspaces/write`. |
-| `Microsoft.OperationsManagement/*` | Add and remove management solutions |
-| `Microsoft.Resources/deployments/*` | Create and delete deployments. Required for adding and removing solutions, workspaces, and automation accounts |
-| `Microsoft.Resources/subscriptions/resourcegroups/deployments/*` | Create and delete deployments. Required for adding and removing solutions, workspaces, and automation accounts |
--
+| `Microsoft.OperationsManagement/*` | Add and remove management solutions. |
+| `Microsoft.Resources/deployments/*` | Create and delete deployments. Required for adding and removing solutions, workspaces, and automation accounts. |
+| `Microsoft.Resources/subscriptions/resourcegroups/deployments/*` | Create and delete deployments. Required for adding and removing solutions, workspaces, and automation accounts. |
### Resource permissions
-When users query logs from a workspace using [resource-context access](#access-mode), they'll have the following permissions on the resource:
+When users query logs from a workspace by using [resource-context access](#access-mode), they'll have the following permissions on the resource:
| Permission | Description |
| - | -- |
-| `Microsoft.Insights/logs/<tableName>/read`<br><br>Examples:<br>`Microsoft.Insights/logs/*/read`<br>`Microsoft.Insights/logs/Heartbeat/read` | Ability to view all log data for the resource. |
-| `Microsoft.Insights/diagnosticSettings/write` | Ability to configure diagnostics setting to allow setting up logs for this resource. |
-
-`/read` permission is usually granted from a role that includes _\*/read or_ _\*_ permissions such as the built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) and [Contributor](../../role-based-access-control/built-in-roles.md#contributor) roles. Custom roles that include specific actions or dedicated built-in roles might not include this permission.
+| `Microsoft.Insights/logs/<tableName>/read`<br><br>Examples:<br>`Microsoft.Insights/logs/*/read`<br>`Microsoft.Insights/logs/Heartbeat/read` | Ability to view all log data for the resource |
+| `Microsoft.Insights/diagnosticSettings/write` | Ability to configure diagnostics setting to allow setting up logs for this resource |
+The `/read` permission is usually granted from a role that includes `*/read` or `*` permissions, such as the built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) and [Contributor](../../role-based-access-control/built-in-roles.md#contributor) roles. Custom roles that include specific actions or dedicated built-in roles might not include this permission.
### Custom role examples
-In addition to using the built-in roles for Log Analytics workspace, you can create custom roles to assign more granular permissions. Following are some common examples.
-**Grant a user access to log data from their resources.**
+In addition to using the built-in roles for a Log Analytics workspace, you can create custom roles to assign more granular permissions. Here are some common examples.
-- Configure the workspace access control mode to **use workspace or resource permissions**
-- Grant users `*/read` or `Microsoft.Insights/logs/*/read` permissions to their resources. If they are already assigned the [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#reader) role on the workspace, it is sufficient.
+Grant a user access to log data from their resources:
-**Grant a user access to log data from their resources and configure their resources to send logs to the workspace.**
+- Configure the workspace access control mode to *use workspace or resource permissions*.
+- Grant users `*/read` or `Microsoft.Insights/logs/*/read` permissions to their resources. If they're already assigned the [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#reader) role on the workspace, it's sufficient.
-- Configure the workspace access control mode to **use workspace or resource permissions**
-- Grant users the following permissions on the workspace: `Microsoft.OperationalInsights/workspaces/read` and `Microsoft.OperationalInsights/workspaces/sharedKeys/action`. With these permissions, users cannot perform any workspace-level queries. They can only enumerate the workspace and use it as a destination for diagnostic settings or agent configuration.
-- Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read` and `Microsoft.Insights/diagnosticSettings/write`. If they are already assigned the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, assigned the Reader role, or granted `*/read` permissions on this resource, it is sufficient.
+Grant a user access to log data from their resources and configure their resources to send logs to the workspace:
-**Grant a user access to log data from their resources without being able to read security events and send data.**
+- Configure the workspace access control mode to *use workspace or resource permissions*.
+- Grant users the following permissions on the workspace: `Microsoft.OperationalInsights/workspaces/read` and `Microsoft.OperationalInsights/workspaces/sharedKeys/action`. With these permissions, users can't perform any workspace-level queries. They can only enumerate the workspace and use it as a destination for diagnostic settings or agent configuration.
+- Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read` and `Microsoft.Insights/diagnosticSettings/write`. If they're already assigned the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, assigned the Reader role, or granted `*/read` permissions on this resource, it's sufficient.
-- Configure the workspace access control mode to **use workspace or resource permissions**
+Grant a user access to log data from their resources without being able to read security events and send data:
+
+- Configure the workspace access control mode to *use workspace or resource permissions*.
- Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read`.
-- Add the following NonAction to block users from reading the SecurityEvent type: `Microsoft.Insights/logs/SecurityEvent/read`. The NonAction shall be in the same custom role as the action that provides the read permission (`Microsoft.Insights/logs/*/read`). If the user inherent the read action from another role that is assigned to this resource or to the subscription or resource group, they would be able to read all log types. This is also true if they inherit `*/read`, that exist for example, with the Reader or Contributor role.
+- Add the following NonAction to block users from reading the SecurityEvent type: `Microsoft.Insights/logs/SecurityEvent/read`. The NonAction shall be in the same custom role as the action that provides the read permission (`Microsoft.Insights/logs/*/read`). If the user inherits the read action from another role that's assigned to this resource or to the subscription or resource group, they could read all log types. This scenario is also true if they inherit `*/read` that exists, for example, with the Reader or Contributor role.
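A sketch of that example as a custom role created with PowerShell; the role name and subscription scope are placeholders, and the built-in Reader role is used only as a convenient template object:

```powershell
# Build a custom role that can read all resource logs except SecurityEvent.
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.IsCustom = $true
$role.Name = "Log Reader - No Security Events"   # placeholder role name
$role.Description = "Read resource logs except the SecurityEvent table."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Insights/logs/*/read")
$role.NotActions.Clear()
$role.NotActions.Add("Microsoft.Insights/logs/SecurityEvent/read")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")   # placeholder scope
New-AzRoleDefinition -Role $role
```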
+
+Grant a user access to log data from their resources and read all Azure AD sign-in and read Update Management solution log data from the workspace:
-**Grant a user access to log data from their resources and read all Azure AD sign-in and read Update Management solution log data from the workspace.**
+- Configure the workspace access control mode to *use workspace or resource permissions*.
+- Grant users the following permissions on the workspace:
+ - `Microsoft.OperationalInsights/workspaces/read`: Required so the user can enumerate the workspace and open the workspace pane in the Azure portal
+ - `Microsoft.OperationalInsights/workspaces/query/read`: Required for every user that can execute queries
+ - `Microsoft.OperationalInsights/workspaces/query/SigninLogs/read`: To be able to read Azure AD sign-in logs
+ - `Microsoft.OperationalInsights/workspaces/query/Update/read`: To be able to read Update Management solution logs
+ - `Microsoft.OperationalInsights/workspaces/query/UpdateRunProgress/read`: To be able to read Update Management solution logs
+ - `Microsoft.OperationalInsights/workspaces/query/UpdateSummary/read`: To be able to read Update Management logs
+ - `Microsoft.OperationalInsights/workspaces/query/Heartbeat/read`: Required to be able to use Update Management solutions
+ - `Microsoft.OperationalInsights/workspaces/query/ComputerGroup/read`: Required to be able to use Update Management solutions
+- Grant users the following permissions to their resources: `*/read`, assigned to the Reader role, or `Microsoft.Insights/logs/*/read`
-- Configure the workspace access control mode to **use workspace or resource permissions**
-- Grant users the following permissions on the workspace:
- - `Microsoft.OperationalInsights/workspaces/read` – required so the user can enumerate the workspace and open the workspace blade in the Azure portal
- - `Microsoft.OperationalInsights/workspaces/query/read` – required for every user that can execute queries
- - `Microsoft.OperationalInsights/workspaces/query/SigninLogs/read` – to be able to read Azure AD sign-in logs
- - `Microsoft.OperationalInsights/workspaces/query/Update/read` – to be able to read Update Management solution logs
- - `Microsoft.OperationalInsights/workspaces/query/UpdateRunProgress/read` – to be able to read Update Management solution logs
- - `Microsoft.OperationalInsights/workspaces/query/UpdateSummary/read` – to be able to read Update management logs
- - `Microsoft.OperationalInsights/workspaces/query/Heartbeat/read` – required to be able to use Update Management solution
- - `Microsoft.OperationalInsights/workspaces/query/ComputerGroup/read` – required to be able to use Update Management solution
-- Grant users the following permissions to their resources: `*/read`, assigned to the Reader role, or `Microsoft.Insights/logs/*/read`.
+## Table-level Azure RBAC
-## Table level Azure RBAC
-Table level Azure RBAC allows you to define more granular control to data in a Log Analytics workspace by defining specific data types that are accessible only to a specific set of users.
+By using table-level Azure RBAC, you can define more granular control over data in a Log Analytics workspace by defining specific data types that are accessible only to a specific set of users.
-Implement table access control with [Azure custom roles](../../role-based-access-control/custom-roles.md) to either grant access to specific [tables](../logs/data-platform-logs.md) in the workspace. These roles are applied to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
+Implement table access control with [Azure custom roles](../../role-based-access-control/custom-roles.md) to grant access to specific [tables](../logs/data-platform-logs.md) in the workspace. These roles are applied to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
-Create a [custom role](../../role-based-access-control/custom-roles.md) with the following actions to define access to a particular table.
+Create a [custom role](../../role-based-access-control/custom-roles.md) with the following actions to define access to a particular table:
* To grant access to a table, include it in the **Actions** section of the role definition. To subtract access from the allowed **Actions**, include it in the **NotActions** section.
* Use `Microsoft.OperationalInsights/workspaces/query/*` to specify all tables.
-
### Examples
-Following are examples of custom role actions to grant and deny access to specific tables.
-**Grant access to the _Heartbeat_ and _AzureActivity_ tables.**
+Here are examples of custom role actions to grant and deny access to specific tables.
+
+Grant access to the _Heartbeat_ and _AzureActivity_ tables:
```
"Actions": [
Following are examples of custom role actions to grant and deny access to specif
],
```
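
For a rough sketch of how these actions fit into a complete role definition, the following Azure CLI command creates such a custom role. The role name, description, and subscription ID are illustrative placeholders, not values from this article.

```azurecli
# A sketch only: role name, description, and subscription ID are placeholders.
az role definition create --role-definition '{
  "Name": "Read Heartbeat and AzureActivity",
  "Description": "Can query only the Heartbeat and AzureActivity tables.",
  "Actions": [
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationalInsights/workspaces/query/read",
    "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read",
    "Microsoft.OperationalInsights/workspaces/query/AzureActivity/read"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}'
```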
-**Grant access to only the _SecurityBaseline_ table.**
+Grant access to only the _SecurityBaseline_ table:
```
"Actions": [
Following are examples of custom role actions to grant and deny access to specif
```
-**Grant access to all tables except the _SecurityAlert_ table.**
+Grant access to all tables except the _SecurityAlert_ table:
```
"Actions": [
Following are examples of custom role actions to grant and deny access to specif
### Custom logs
- Custom logs are tables created from data sources such as [text logs](../agents/data-sources-custom-logs.md) and [HTTP Data Collector API](data-collector-api.md). The easiest way to identify the type of log is by checking the tables listed under [Custom Logs in the log schema](./log-analytics-tutorial.md#view-table-information).
+ Custom logs are tables created from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). The easiest way to identify the type of log is by checking the tables listed under [Custom Logs in the log schema](./log-analytics-tutorial.md#view-table-information).
> [!NOTE]
-> Tables created by the [custom logs API](../essentials/../logs/custom-logs-overview.md) does not yet support table level RBAC.
+> Tables created by the [Custom Logs API](../essentials/../logs/custom-logs-overview.md) don't yet support table-level RBAC.
- You can't grant access to individual custom logs tables, but you can grant access to all custom logs. To create a role with access to all custom log tables, create a custom role using the following actions:
+ You can't grant access to individual custom log tables, but you can grant access to all custom logs. To create a role with access to all custom log tables, create a custom role by using the following actions:
```
"Actions": [
Following are examples of custom role actions to grant and deny access to specif
],
```
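
As an illustration, a custom role covering all custom log tables might look like the following sketch. The `Tables.Custom` action is the one this section describes; the role name and subscription ID are placeholders.

```azurecli
# A sketch only: role name and subscription ID are placeholders.
az role definition create --role-definition '{
  "Name": "Read all custom log tables",
  "Description": "Can query every custom log table in the workspace.",
  "Actions": [
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationalInsights/workspaces/query/read",
    "Microsoft.OperationalInsights/workspaces/query/Tables.Custom/read"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}'
```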
-An alternative approach to manage access to custom logs is to assign them to an Azure resource and manage access using resource-context access control.Include the resource ID by specifying it in the [x-ms-AzureResourceId](../logs/data-collector-api.md#request-headers) header when data is ingested to Log Analytics via the [HTTP Data Collector API](../logs/data-collector-api.md). The resource ID must be valid and have access rules applied to it. After the logs are ingested, they are accessible to users with read access to the resource.
+An alternative approach to managing access to custom logs is to assign them to an Azure resource and manage access by using resource-context access control. Include the resource ID by specifying it in the [x-ms-AzureResourceId](../logs/data-collector-api.md#request-headers) header when data is ingested to Log Analytics via the [HTTP Data Collector API](../logs/data-collector-api.md). The resource ID must be valid and have access rules applied to it. After the logs are ingested, they're accessible to users with read access to the resource.
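
As a hedged sketch of this ingestion path, the following call sends a record to the HTTP Data Collector API with the `x-ms-AzureResourceId` header set. The workspace ID, resource ID, log type, and body are placeholders, and the `SharedKey` signature must be computed from the workspace key as described in the Data Collector API documentation.

```azurecli
# A sketch only: WORKSPACE_ID, RESOURCE_ID, the log type, and <SIGNATURE> are placeholders.
# <SIGNATURE> is the HMAC-SHA256 authorization value built from the workspace key,
# as described in the HTTP Data Collector API documentation.
WORKSPACE_ID="<workspace-id>"
RESOURCE_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
curl -X POST "https://$WORKSPACE_ID.ods.opinsights.azure.com/api/logs?api-version=2016-04-01" \
  -H "Content-Type: application/json" \
  -H "Log-Type: MyCustomLog" \
  -H "x-ms-date: $(date -u '+%a, %d %b %Y %H:%M:%S GMT')" \
  -H "x-ms-AzureResourceId: $RESOURCE_ID" \
  -H "Authorization: SharedKey $WORKSPACE_ID:<SIGNATURE>" \
  -d '[{"Source":"appserver01","Action":"deny"}]'
```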
+
+Some custom logs come from sources that aren't directly associated to a specific resource. In this case, create a resource group to manage access to these logs. The resource group doesn't incur any cost, but it gives you a valid resource ID to control access to the custom logs.
-Some custom logs come from sources that are not directly associated to a specific resource. In this case, create a resource group to manage access to these logs. The resource group does not incur any cost, but gives you a valid resource ID to control access to the custom logs. For example, if a specific firewall is sending custom logs, create a resource group called *MyFireWallLogs* and make sure that the API requests contain the resource ID of *MyFireWallLogs*. The firewall log records are then accessible only to users that were granted access to either MyFireWallLogs or those with full workspace access
+For example, if a specific firewall is sending custom logs, create a resource group called *MyFireWallLogs*. Make sure that the API requests contain the resource ID of *MyFireWallLogs*. The firewall log records are then accessible only to users who were granted access to *MyFireWallLogs* or those users with full workspace access.
### Considerations

- If a user is granted global read permission with the standard Reader or Contributor roles that include the _\*/read_ action, it will override the per-table access control and give them access to all log data.
-- If a user is granted per-table access but no other permissions, they would be able to access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role.
+- If a user is granted per-table access but no other permissions, they can access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role.
- Administrators and owners of the subscription will have access to all data types regardless of any other permission settings.
- Workspace owners are treated like any other user for per-table access control.
-- Assign roles to security groups instead of individual users to reduce the number of assignments. This will also help you use existing group management tools to configure and verify access.
+- Assign roles to security groups instead of individual users to reduce the number of assignments. This practice will also help you use existing group management tools to configure and verify access.
## Next steps

* See [Log Analytics agent overview](../agents/log-analytics-agent.md) to gather data from computers in your datacenter or other cloud environment.
-
* See [Collect data about Azure virtual machines](../vm/monitor-virtual-machine.md) to configure data collection from Azure VMs.
azure-monitor Quick Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/quick-create-workspace.md
Last updated 03/28/2022
-# Customer intent: As a DevOps engineer or IT expert I want to set up a workspace to collect logs from multiple data sources from Azure, on-premises and third-party cloud deployments.
+# Customer intent: As a DevOps engineer or IT expert, I want to set up a workspace to collect logs from multiple data sources from Azure, on-premises, and third-party cloud deployments.
# Create a Log Analytics workspace
-This article shows you how to create a Log Analytics workspace. When you collect logs and data, the information is stored in a workspace. A workspace has unique workspace ID and resource ID. The workspace name must be unique for a given resource group. Once you have created a workspace, configure data sources and solutions to store their data there.
+This article shows you how to create a Log Analytics workspace. When you collect logs and data, the information is stored in a workspace. A workspace has a unique workspace ID and resource ID. The workspace name must be unique for a given resource group. After you've created a workspace, configure data sources and solutions to store their data there.
-You need a Log Analytics workspace if you are collecting data from the following sources:
-* Azure resources in your subscription
-* On-premises computers monitored by System Center Operations Manager
-* Device collections from Configuration Manager
-* Diagnostics or log data from Azure storage
+You need a Log Analytics workspace if you collect data from:
+
+* Azure resources in your subscription.
+* On-premises computers monitored by System Center Operations Manager.
+* Device collections from Configuration Manager.
+* Diagnostics or log data from Azure Storage.
+
+## Create a workspace
-## Create a Workspace
-
## [Portal](#tab/azure-portal)
-Use the **Log Analytics workspaces** menu to create a workspace.
+Use the **Log Analytics workspaces** menu to create a workspace.
-1. In the [Azure portal](https://portal.azure.com), type **Log Analytics** in the search box. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**.
+1. In the [Azure portal](https://portal.azure.com), enter **Log Analytics** in the search box. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**.
+
+ :::image type="content" source="media/quick-create-workspace/azure-portal-01.png" alt-text="Screenshot that shows the search bar at the top of the Azure home screen. As you begin typing, the list of search results filters based on your input.":::
- :::image type="content" source="media/quick-create-workspace/azure-portal-01.png" alt-text="Screenshot showing the search bar at the top of the Azure home screen. As you begin typing, the list of search results filters based on your input.":::
-
1. Select **Add**.
-1. Select a **Subscription** from the dropdown list.
-1. Use an existing **Resource Group** or create a new one.
+1. Select a **Subscription** from the dropdown.
+1. Use an existing **Resource Group** or create a new one.
1. Provide a name for the new **Log Analytics workspace**, such as *DefaultLAWorkspace*. This name must be unique per resource group.
-1. Select an available **Region**. For more information, see which [regions Log Analytics is available in](https://azure.microsoft.com/regions/services/) and search for Azure Monitor from the **Search for a product** field.
-
+1. Select an available **Region**. For more information, see which [regions Log Analytics is available in](https://azure.microsoft.com/regions/services/). Search for Azure Monitor in the **Search for a product** box.
- :::image type="content" source="media/quick-create-workspace/create-workspace.png" alt-text="Screenshot showing the fields that need to be populated on the Basic tab of the Create Log Analytics workspace screen.":::
+ :::image type="content" source="media/quick-create-workspace/create-workspace.png" alt-text="Screenshot that shows the boxes that need to be populated on the Basics tab of the Create Log Analytics workspace screen.":::
+1. Select **Review + Create** to review the settings. Then select **Create** to create the workspace. A default pricing tier of pay-as-you-go is applied. No charges will be incurred until you start collecting enough data. For more information about other pricing tiers, see [Log Analytics pricing details](https://azure.microsoft.com/pricing/details/log-analytics/).
-1. Select **Review + create** to review the settings and then **Create** to create the workspace. A default pricing tier of Pay-as-you-go is applied. No charges will be incurred until you start collecting a sufficient amount of data. For more information about other pricing tiers, see [Log Analytics Pricing Details](https://azure.microsoft.com/pricing/details/log-analytics/).
-
-
## [PowerShell](#tab/azure-powershell)
-The following sample script creates a workspace with no data source configuration.
+
+The following sample script creates a workspace with no data source configuration.
```powershell
$ResourceGroup = <"my-resource-group">
try {
# Create the workspace
New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -ResourceGroupName $ResourceGroup
```
+
> [!NOTE]
> Log Analytics was previously called Operational Insights. The PowerShell cmdlets use Operational Insights in Log Analytics commands.
-Once you've created a workspace, [configure a Log Analytics workspace in Azure Monitor using PowerShell](/azure/azure-monitor/logs/powershell-workspace-configuration).
+After you've created a workspace, [configure a Log Analytics workspace in Azure Monitor by using PowerShell](/azure/azure-monitor/logs/powershell-workspace-configuration).
## [Azure CLI](#tab/azure-cli)
-Manage Azure Log Analytics workspaces using [Azure CLI](/cli/azure/monitor/log-analytics/workspace) commands.
-
+
Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group or use an existing resource group. To create a workspace, use the [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create) command.

```Azure CLI
Run the [az group create](/cli/azure/group#az-group-create) command to create a
For more information about Azure Monitor Logs in Azure CLI, see [Managing Azure Monitor Logs in Azure CLI](/azure/azure-monitor/logs/azure-cli-log-analytics-workspace-sample).
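
For example, a minimal end-to-end run of the commands mentioned above might look like the following sketch; the resource group name, workspace name, and region are placeholders.

```azurecli
# Placeholders: adjust the names and region for your environment.
az group create --name my-resource-group --location eastus

az monitor log-analytics workspace create \
  --resource-group my-resource-group \
  --workspace-name DefaultLAWorkspace \
  --location eastus
```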
-## [Resource Manager Template](#tab/azure-resource-manager)
-The following sample uses the [Microsoft.OperationalInsights workspaces](/azure/templates/microsoft.operationalinsights/workspaces?tabs=bicep) template to create a Log Analytics workspace in Azure Monitor.
+## [Resource Manager template](#tab/azure-resource-manager)
+
+The following sample uses the [Microsoft.OperationalInsights workspaces](/azure/templates/microsoft.operationalinsights/workspaces?tabs=bicep) template to create a Log Analytics workspace in Azure Monitor.
For more information about Azure Resource Manager templates, see [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md).

[!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)]
-
### Template file
+
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
For more information about Azure Resource Manager templates, see [Azure Resource
  }
}
```
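
To deploy the template file, you can use a resource group deployment. The following sketch assumes the JSON above is saved locally as *azuredeploy.json* and that the resource group already exists.

```azurecli
# Assumes the template shown above is saved as azuredeploy.json.
az deployment group create \
  --resource-group my-resource-group \
  --template-file azuredeploy.json
```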
-
-Once you've created a workspace, see [Resource Manager template samples for Log Analytics workspaces in Azure Monitor](/azure/azure-monitor/logs/resource-manager-workspace) to configure data sources.
-
-
## Troubleshooting
-When you create a workspace that was deleted in the last 14 days and in [soft-delete state](../logs/delete-workspace.md#soft-delete-behavior), the operation could have different outcome depending on your workspace configuration:
-1. If you provide the same workspace name, resource group, subscription and region as in the deleted workspace, your workspace will be recovered including its data, configuration and connected agents.
-2. Workspace names must be unique for a resource group. If you use a workspace name that already exists, or is soft-deleted, an error is returned. Follow the steps below to permanently delete your soft-deleted and create a new workspace with the same name:
- - [Recover](../logs/delete-workspace.md#recover-workspace) your workspace
- - [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace
- - Create a new workspace using the same workspace name
-
+When you create a workspace that was deleted in the last 14 days and in [soft-delete state](../logs/delete-workspace.md#soft-delete-behavior), the operation could have a different outcome depending on your workspace configuration:
+
+1. If you provide the same workspace name, resource group, subscription, and region as in the deleted workspace, your workspace will be recovered including its data, configuration, and connected agents.
+1. Workspace names must be unique for a resource group. If you use a workspace name that already exists, or is soft deleted, an error is returned. To permanently delete your soft-deleted name and create a new workspace with the same name, follow these steps:
+
+ 1. [Recover](../logs/delete-workspace.md#recover-workspace) your workspace.
+ 1. [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace.
+ 1. Create a new workspace by using the same workspace name.
+
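
If you prefer to script the recover-then-delete sequence, a sketch with Azure CLI follows. It assumes the `recover` subcommand available in recent CLI versions and uses placeholder names; `--force true` deletes the workspace permanently instead of soft deleting it.

```azurecli
# A sketch only: assumes a recent Azure CLI; names are placeholders.
az monitor log-analytics workspace recover \
  --resource-group my-resource-group \
  --workspace-name DefaultLAWorkspace

# --force true permanently deletes the workspace instead of soft deleting it.
az monitor log-analytics workspace delete \
  --resource-group my-resource-group \
  --workspace-name DefaultLAWorkspace \
  --force true
```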
## Next steps
-Now that you have a workspace available, you can configure collection of monitoring telemetry, run log searches to analyze that data, and add a management solution to provide additional data and analytic insights.
-* See [Monitor health of Log Analytics workspace in Azure Monitor](../logs/monitor-workspace.md) create alert rules to monitor the health of your workspace.
-* See [Collect Azure service logs and metrics for use in Log Analytics](../essentials/resource-logs.md#send-to-log-analytics-workspace) to enable data collection from Azure resources with Azure Diagnostics or Azure storage.
+
+Now that you have a workspace available, you can configure collection of monitoring telemetry, run log searches to analyze that data, and add a management solution to provide more data and analytic insights. To learn more:
+
+* See [Monitor health of Log Analytics workspace in Azure Monitor](../logs/monitor-workspace.md) to create alert rules to monitor the health of your workspace.
+* See [Collect Azure service logs and metrics for use in Log Analytics](../essentials/resource-logs.md#send-to-log-analytics-workspace) to enable data collection from Azure resources with Azure Diagnostics or Azure Storage.
azure-netapp-files Azure Netapp Files Develop With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-develop-with-rest-api.md
na Previously updated : 06/29/2021 Last updated : 06/15/2022

# Develop for Azure NetApp Files with REST API
The REST API for the Azure NetApp Files service defines HTTP operations against
## Azure NetApp Files REST API specification
-The REST API specification for Azure NetApp Files is published through [GitHub](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/netapp/resource-manager):
+The REST API specification for Azure NetApp Files is published through [GitHub](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/netapp/resource-manager):
-`https://github.com/Azure/azure-rest-api-specs/tree/master/specification/netapp/resource-manager`
+`https://github.com/Azure/azure-rest-api-specs/tree/main/specification/netapp/resource-manager`
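
As an illustration of calling the REST API, the following `az rest` sketch lists the NetApp accounts in a resource group. The subscription ID, resource group, and `api-version` value are assumptions to adapt to your environment.

```azurecli
# A sketch only: subscription ID, resource group, and api-version are assumptions.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts?api-version=2021-10-01"
```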
## Considerations
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 06/14/2022 Last updated : 06/15/2022

# Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [SAP HANA scale-out with HSR and Pacemaker on RHEL - Azure Virtual Machines](../virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-rhel.md)
* [Implementing Azure NetApp Files with Kerberos for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/implementing-azure-netapp-files-with-kerberos/ba-p/3142010)
* [Azure Application Consistent Snapshot tool (AzAcSnap)](azacsnap-introduction.md)
+* [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161)
* [SAP HANA Disaster Recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_Disaster_Recovery_with_Azure_NetApp_Files.pdf)
* [SAP HANA backup and recovery on Azure NetApp Files with SnapCenter Service](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_backup_and_recovery_on_Azure_NetApp_Files_with_SnapCenter_Service.pdf)
azure-netapp-files Azure Policy Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-policy-definitions.md
To learn how to assign a policy to resources and view compliance report, see [As
## Next steps
-* [Azure Policy documentation](/azure/governance/policy/)
+* [Azure Policy documentation](../governance/policy/index.yml)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 06/14/2022 Last updated : 06/15/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) (Preview)
- [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity and performance beyond the limits of native vSAN built on top of the AVS nodes and lower your overall total cost of ownership.
+ [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for AVS provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West US. Regional coverage will expand as the preview progresses.
azure-resource-manager Publish Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-managed-identity.md
There are two common ways to create a Managed Application with **identity**: [Cr
### Using CreateUIDefinition
-A Managed Application can be configured with Managed Identity through the [CreateUIDefinition.json](./create-uidefinition-overview.md). In the [outputs section](./create-uidefinition-overview.md#outputs), the key `managedIdentity` can be used to override the identity property of the Managed Application template. The sample bellow will enable **system-assigned** identity on the Managed Application. More complex identity objects can be formed by using CreateUIDefinition elements to ask the consumer for inputs. These inputs can be used to construct Managed Applications with **user-assigned identity**.
+A Managed Application can be configured with Managed Identity through the [CreateUIDefinition.json](./create-uidefinition-overview.md). In the [outputs section](./create-uidefinition-overview.md#outputs), the key `managedIdentity` can be used to override the identity property of the Managed Application template. The sample below will enable **system-assigned** identity on the Managed Application. More complex identity objects can be formed by using CreateUIDefinition elements to ask the consumer for inputs. These inputs can be used to construct Managed Applications with **user-assigned identity**.
```json
"outputs": {
token_type | The type of the token.
## Next steps

> [!div class="nextstepaction"]
-> [How to configure a Managed Application with a Custom Provider](../custom-providers/overview.md)
+> [How to configure a Managed Application with a Custom Provider](../custom-providers/overview.md)
azure-resource-manager Template Tutorial Create First Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md
Title: Tutorial - Create and deploy template description: Create your first Azure Resource Manager template (ARM template). In the tutorial, you learn about the template file syntax and how to deploy a storage account.
- Previously updated : 06/07/2022
+ Last updated : 06/15/2022
This tutorial introduces you to Azure Resource Manager templates (ARM templates). It shows you how to create a starter template and deploy it to Azure. It teaches you about the template structure and the tools you need to work with templates. It takes about **12 minutes** to complete this tutorial, but the actual time varies based on how many tools you need to install.
-This tutorial is the first of a series. As you progress through the series, you modify the starting template, step by step, until you've explored all of the core parts of an ARM template. These elements are the building blocks for more complex templates. We hope by the end of the series you're confident in creating your own templates and ready to automate your deployments with templates.
+This tutorial is the first of a series. As you progress through the series, you modify the starting template, step by step, until you explore all of the core parts of an ARM template. These elements are the building blocks for more complex templates. We hope by the end of the series you're confident in creating your own templates and ready to automate your deployments with templates.
-If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of modules on [Microsoft Learn](/learn), see [Deploy and manage resources in Azure by using JSON ARM templates](/learn/paths/deploy-manage-resource-manager-templates.md).
+If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of modules on [Microsoft Learn](/learn), see [Deploy and manage resources in Azure by using JSON ARM templates](/learn/paths/deploy-manage-resource-manager-templates).
If you don't have a Microsoft Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
The deployment command returns results. Look for `ProvisioningState` to see whet
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
## Verify deployment
If you're moving on to the next tutorial, you don't need to delete the resource
If you're stopping now, you might want to delete the resource group.

1. From the Azure portal, select **Resource groups** from the left menu.
-2. Type the resource group name in the **Filter for any field...**.
-3. Check the box next to **myResourceGroup** and select **myResourceGroup** or the resource group name you chose.
+2. Type the resource group name in the **Filter for any field...** text field.
+3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name.
4. Select **Delete resource group** from the top menu.

   :::image type="content" source="./media/template-tutorial-create-first-template/resource-deletion.png" alt-text="See deletion.":::
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
Different `TokenCredential` will be used to generate Azure AD tokens with the re
- `type=azure.app`
- `clientId` and `tenantId` are required to use [Azure AD application with service principal](/azure/active-directory/develop/howto-create-service-principal-portal).
+ `clientId` and `tenantId` are required to use [Azure AD application with service principal](../active-directory/develop/howto-create-service-principal-portal.md).
1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) will be used if `clientSecret` is given.

   ```
There are also two ways to configure multiple instances:
```
Azure:SignalR:ConnectionString:<name>:<type>
- ```
+ ```
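
For illustration, one way to supply an Azure AD application connection string to a .NET app is through an environment variable, as in the following sketch. All values are placeholders, and the key names in the connection string (such as `AuthType=azure.app`) should be checked against the `azure.app` type described earlier.

```azurecli
# A sketch only: all values are placeholders; "__" stands in for ":" in
# .NET configuration environment variable names.
export Azure__SignalR__ConnectionString="Endpoint=https://<resource>.service.signalr.net;AuthType=azure.app;ClientId=<client-id>;ClientSecret=<client-secret>;TenantId=<tenant-id>;Version=1.0;"
```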
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
# NSG service tags for Azure Video Indexer
-Azure Video Indexer is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](/azure/virtual-network/service-tags-overview). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
+Azure Video Indexer is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
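
As a sketch of using a service tag in practice, the following rule allows outbound HTTPS traffic to the service's tag. The tag name `AzureVideoAnalyzerForMedia` and all resource names are assumptions to verify against the current service tag list.

```azurecli
# A sketch only: the service tag name and resource names are assumptions.
az network nsg rule create \
  --resource-group my-resource-group \
  --nsg-name my-nsg \
  --name AllowVideoIndexerOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes AzureVideoAnalyzerForMedia
```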
## Get started with service tags
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
# Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)
-[Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-introduction?) is an enterprise-class, high-performance, metered file storage service. The service supports the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. For more information on Azure NetApp Files, see [Azure NetApp Files](https://docs.microsoft.com/azure/azure-netapp-files/) documentation.
+[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an enterprise-class, high-performance, metered file storage service. The service supports the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. For more information on Azure NetApp Files, see [Azure NetApp Files](../azure-netapp-files/index.yml) documentation.
-[Azure VMware Solution](/azure/azure-vmware/introduction) supports attaching Network File System (NFS) datastores as a persistent storage option. You can create NFS datastores with Azure NetApp Files volumes and attach them to clusters of your choice. You can also create virtual machines (VMs) for optimal cost and performance.
+[Azure VMware Solution](./introduction.md) supports attaching Network File System (NFS) datastores as a persistent storage option. You can create NFS datastores with Azure NetApp Files volumes and attach them to clusters of your choice. You can also create virtual machines (VMs) for optimal cost and performance.
> [!IMPORTANT]
> Azure NetApp Files datastores for Azure VMware Solution hosts is currently in public preview. This version is provided without a service-level agreement and is not recommended for production workloads. Some features may not be supported or may have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The following diagram demonstrates a typical architecture of Azure NetApp Files
Before you begin the prerequisites, review the [Performance best practices](#performance-best-practices) section to learn about optimal performance of NFS datastores on Azure NetApp Files volumes.
-1. [Deploy Azure VMware Solution](https://docs.microsoft.com/azure/azure-vmware/deploy-azure-vmware-solution?) private cloud in a configured virtual network. For more information, see [Network planning checklist](/azure/azure-vmware/tutorial-network-checklist) and [Configure networking for your VMware private cloud](https://review.docs.microsoft.com/azure/azure-vmware/tutorial-configure-networking?).
-1. Create an [NFSv3 volume for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-create-volumes) in the same virtual network as the Azure VMware Solution private cloud.
+1. [Deploy Azure VMware Solution](./deploy-azure-vmware-solution.md) private cloud in a configured virtual network. For more information, see [Network planning checklist](./tutorial-network-checklist.md) and [Configure networking for your VMware private cloud](./tutorial-configure-networking.md).
+1. Create an [NFSv3 volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md) in the same virtual network as the Azure VMware Solution private cloud.
1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP.
2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now (a CLI sketch follows this list).
Before you begin the prerequisites, review the [Performance best practices](#per
`az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state`

1. Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For optimal performance, it's recommended to use the Ultra tier. Select option **Azure VMware Solution Datastore** listed under the **Protocol** section.
- 1. Create a volume with **Standard** [network features](/azure/azure-netapp-files/configure-network-features) if available for ExpressRoute FastPath connectivity.
+ 1. Create a volume with **Standard** [network features](../azure-netapp-files/configure-network-features.md) if available for ExpressRoute FastPath connectivity.
1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud.
- 1. If you're using [export policies](/azure/azure-netapp-files/azure-netapp-files-configure-export-policy) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced so if the IP isn't enabled, connectivity to datastore will be impacted.
+ 1. If you're using [export policies](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced so if the IP isn't enabled, connectivity to datastore will be impacted.
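
The following sketch shows the registration commands referenced in the prerequisites. The feature and namespace names come from this article; propagation can take several minutes before `az feature show` reports `Registered`.

```azurecli
# Register the feature, check its state, then re-register the provider.
az feature register --namespace "Microsoft.NetApp" --name "ANFAvsDataStore"
az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state
az provider register --namespace "Microsoft.NetApp"
```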
## Supported regions
Azure VMware Solution currently supports the following regions:
**America**: East US, West US, Central US, South Central US, North Central US, Canada East, Canada Central.
-**Europe** : North Europe, UK West, UK South, France Central, Switzerland West, Germany West Central.
+**Europe**: West Europe, North Europe, UK West, UK South, France Central, Switzerland West, Germany West Central.
**Asia**: Southeast Asia, Japan West.
The list of supported regions will expand as the preview progresses.
There are some important best practices to follow for optimal performance of NFS datastores on Azure NetApp Files volumes.

- Create Azure NetApp Files volumes using **Standard** network features to enable optimized connectivity from Azure VMware Solution private cloud via ExpressRoute FastPath connectivity.
-- For optimized performance, choose **UltraPerformance** gateway and enable [ExpressRoute FastPath](/azure/expressroute/expressroute-howto-linkvnet-arm#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](/azure/expressroute/expressroute-about-virtual-network-gateways).
+- For optimized performance, choose **UltraPerformance** gateway and enable [ExpressRoute FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
- Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For best performance, it's recommended to use the Ultra tier.
-- Create multiple datastores of 4-TB size for better performance. The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
-- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within same [Availability Zone](https://docs.microsoft.com/azure/availability-zones/az-overview#availability-zones).
+- Create multiple datastores of 4-TB size for better performance. The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within same [Availability Zone](../availability-zones/az-overview.md#availability-zones).
## Attach an Azure NetApp Files volume to your private cloud
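
If you prefer the CLI over the portal for this step, a hedged sketch using the `vmware` extension's preview datastore commands follows. The command group and all names here are assumptions to check against your installed extension version.

```azurecli
# A sketch only: assumes the az vmware extension's preview datastore commands;
# all names and the volume resource ID are placeholders.
az extension add --name vmware
az vmware datastore netapp-volume create \
  --name ANFDatastore1 \
  --resource-group my-resource-group \
  --private-cloud MyPrivateCloud \
  --cluster Cluster-1 \
  --volume-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>"
```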
You can use the instructions provided to disconnect an Azure NetApp Files-based
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to your Azure VMware Solution hosts, you can create your VMs. Use the following resources to learn more.

-- [Service levels for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-service-levels)
-- Datastore protection using [Azure NetApp Files snapshots](/azure/azure-netapp-files/snapshots-introduction)
-- [About ExpressRoute virtual network gateways](https://docs.microsoft.com/azure/expressroute/expressroute-about-virtual-network-gateways)
-- [Understand Azure NetApp Files backup](/azure/azure-netapp-files/backup-introduction)
-- [Guidelines for Azure NetApp Files network planning](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-network-topologies)
+- [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md)
+- Datastore protection using [Azure NetApp Files snapshots](../azure-netapp-files/snapshots-introduction.md)
+- [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md)
+- [Understand Azure NetApp Files backup](../azure-netapp-files/backup-introduction.md)
+- [Guidelines for Azure NetApp Files network planning](../azure-netapp-files/azure-netapp-files-network-topologies.md)
## FAQs
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **How many datastores are we supporting with Azure VMware Solution?**
- The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+ The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
- **What latencies and bandwidth can be expected from the datastores backed by Azure NetApp Files?**
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **What are my options for backup and recovery?**
- Azure NetApp Files (ANF) supports [snapshots](/azure/azure-netapp-files/azure-netapp-files-manage-snapshots) of datastores for quick checkpoints for near term recovery or quick clones. ANF backup lets you offload your ANF snapshots to Azure storage. This feature is available in public preview. Only for this technology are copies and stores-changed blocks relative to previously offloaded snapshots in an efficient format. This ability decreases Recovery Point Objective (RPO) and Recovery Time Objective (RTO) while lowering backup data transfer burden on the Azure VMware Solution service.
+ Azure NetApp Files (ANF) supports [snapshots](../azure-netapp-files/azure-netapp-files-manage-snapshots.md) of datastores for quick checkpoints for near term recovery or quick clones. ANF backup lets you offload your ANF snapshots to Azure storage. This feature is available in public preview. Only for this technology are copies and stores-changed blocks relative to previously offloaded snapshots in an efficient format. This ability decreases Recovery Point Objective (RPO) and Recovery Time Objective (RTO) while lowering backup data transfer burden on the Azure VMware Solution service.
- **How do I monitor Storage Usage?**
- Use [Metrics for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-metrics) to monitor storage and performance usage for the Datastore volume and to set alerts.
+ Use [Metrics for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-metrics.md) to monitor storage and performance usage for the Datastore volume and to set alerts.
- **What metrics are available for monitoring?**
- Usage and performance metrics are available for monitoring the Datastore volume. Replication metrics are also available for ANF datastore that can be replicated to another region using Cross Regional Replication. For more information about metrics, see [Metrics for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-metrics).
+ Usage and performance metrics are available for monitoring the Datastore volume. Replication metrics are also available for ANF datastore that can be replicated to another region using Cross Regional Replication. For more information about metrics, see [Metrics for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-metrics.md).
- **What happens if a new node is added to the cluster, or an existing node is removed from the cluster?**
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **How are the datastores charged, is there an additional charge?**
- Azure NetApp Files NFS volumes that are used as datastores will be billed following the [capacity pool based billing model](/azure/azure-netapp-files/azure-netapp-files-cost-model). Billing will depend on the service level. There's no extra charge for using Azure NetApp Files NFS volumes as datastores.
+ Azure NetApp Files NFS volumes that are used as datastores will be billed following the [capacity pool based billing model](../azure-netapp-files/azure-netapp-files-cost-model.md). Billing will depend on the service level. There's no extra charge for using Azure NetApp Files NFS volumes as datastores.
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
vSAN datastores use data-at-rest encryption by default using keys stored in Azur
## Azure storage integration
-You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
+You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads. You can also connect Azure disk pools or Azure NetApp Files to expand the storage capacity. This functionality is in preview.
## Alerts and monitoring
Now that you've covered Azure VMware Solution storage concepts, you may want to
- [Scale clusters in the private cloud][tutorial-scale-private-cloud] - You can scale the clusters and hosts in a private cloud as required for your application workload. Performance and availability limitations for specific services should be addressed on a case by case basis.

-- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp to migrate and run the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes.
+- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. Azure NetApp Files volumes can be attached to virtual machines and can also be connected as data stores directly to Azure VMware Solution. This functionality is in preview.
- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md) - You use vCenter Server to manage VM workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter Server and restricted administrator rights for NSX-T Manager.
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
For full details, refer to the article: [Disaster Recovery with Azure NetApp Fil
- (Optional) Azure NetApp Files volume(s) are created and attached to the Azure VMware Solution private cloud for recovery or failover of protected VMs to Azure NetApp Files backed datastores.
  - [Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
- - [Disaster Recovery with Azure NetApp Files, JetStream DR and AVS (Azure VMware Solution)](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/)
+ - [Disaster Recovery with Azure NetApp Files, JetStream DR and AVS (Azure VMware Solution)](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/)
### Scenario 2: Azure VMware Solution to Azure VMware Solution DR
Azure VMware Solution uses the Run command (Preview) to automate both the instal
- [Failover to Azure VMware Solution](https://vimeo.com/491883564/ca9fc57092)
- [Failback to on-premises](https://vimeo.com/491884402/65ee817b60)
-
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
In this article, you'll learn how to enable Public IP to the NSX Edge for your A
Public IP to the NSX Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment.

>[!IMPORTANT]
->To enable this feature for your subscription, register the ```PIPOnNSXEnabled``` flag and follow these steps to [set up the preview feature in your Azure subscription](https://docs.microsoft.com/azure/azure-resource-manager/management/preview-features?tabs=azure-portal).
+>To enable this feature for your subscription, register the ```PIPOnNSXEnabled``` flag and follow these steps to [set up the preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md?tabs=azure-portal).
The Public IP is configured in Azure VMware Solution through the Azure portal and the NSX-T Data center interface within your Azure VMware Solution private cloud.
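
A sketch of registering the flag with Azure CLI follows. The `Microsoft.AVS` namespace is an assumption, so confirm it in the linked preview-features article; the flag name comes from this article.

```azurecli
# A sketch only: the namespace is an assumption; the flag name comes from this article.
az feature register --namespace "Microsoft.AVS" --name "PIPOnNSXEnabled"
az provider register --namespace "Microsoft.AVS"
```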
With this capability, you have the following features:
- HCX Migration support over the Public Internet.

>[!TIP]
->To enable this feature for your subscription, register the ```PIPOnNSXEnabled``` flag and follow these steps to [set up the preview feature in your Azure subscription](https://docs.microsoft.com/azure/azure-resource-manager/management/preview-features?tabs=azure-portal).
+>To enable this feature for your subscription, register the ```PIPOnNSXEnabled``` flag and follow these steps to [set up the preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md?tabs=azure-portal).
## Reference architecture

The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge.
The Distributed Firewall could be used to filter traffic to VMs. This feature is
[Enable Managed SNAT for Azure VMware Solution Workloads (Preview)](enable-managed-snat-for-workloads.md)
-[Disable Internet access or enable a default route](disable-internet-access.md)
-
+[Disable Internet access or enable a default route](disable-internet-access.md)
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
## Hosts, clusters, and private clouds
-Azure VMware Solution private clouds and clusters are built from a bare-metal, hyper-converged Azure infrastructure host. The high-end (HE) hosts have 576-GB RAM and dual Intel 18 core, 2.3-GHz processors. In addition, the HE hosts have two vSAN disk groups with 15.36 TB (SSD) of raw vSAN capacity tier and a 3.2 TB (NVMe) vSAN cache tier.
-You can deploy new private clouds through the Azure portal or Azure CLI.
+You can deploy new or scale existing private clouds through the Azure portal or Azure CLI.
## Networking
azure-web-pubsub Reference Rest Api Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md
Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/J
**The difference is**, in this scenario, JWT Token is generated by Azure Active Directory.
-[Learn how to generate Azure AD Tokens](/azure/active-directory/develop/reference-v2-libraries)
+[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
You could also use **Role Based Access Control (RBAC)** to authorize the request from your server to Azure Web PubSub Service.
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
If you already have a device registered in your IoT hub, you can skip this secti
- For quickest results, simulate temperature data using the [Raspberry Pi Azure IoT Online Simulator](https://azure-samples.github.io/raspberry-pi-web-simulator/#Getstarted). Paste in the **device connection string**, and select the **Run** button.

-- If you have a physical Raspberry Pi and BME280 sensor, you may measure and report real temperature and humidity values by following the [Connect Raspberry Pi to Azure IoT Hub (Node.js)](/azure/iot-hub/iot-hub-raspberry-pi-kit-node-get-started) tutorial.
+- If you have a physical Raspberry Pi and BME280 sensor, you may measure and report real temperature and humidity values by following the [Connect Raspberry Pi to Azure IoT Hub (Node.js)](../iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md) tutorial.
## Run the visualization website Open the function host index page at `http://localhost:7071/api/index` to view the real-time dashboard. Register multiple devices, and the dashboard updates for each of them in real time. Open multiple browsers, and every page updates in real time.
In this quickstart, you learned how to run a serverless IoT application. Now, y
> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings) > [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
bastion Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/diagnostic-logs.md
To access your diagnostics logs, you can directly use the storage account that y
"message":"Successfully Connected.", "resourceType":"VM", "targetVMIPAddress":"172.16.1.5",
- "userEmail":"<userAzureAccountEmailAddress>"
+ "userEmail":"<userAzureAccountEmailAddress>",
"tunnelId":"<tunnelID>" }, "FluentdIngestTimestamp":"2019-10-03T16:03:34.0000000Z",
To access your diagnostics logs, you can directly use the storage account that y
"message":"Login Failed", "resourceType":"VM", "targetVMIPAddress":"172.16.1.5",
- "userEmail":"<userAzureAccountEmailAddress>"
+ "userEmail":"<userAzureAccountEmailAddress>",
"tunnelId":"<tunnelID>" }, "FluentdIngestTimestamp":"2019-10-03T16:03:34.0000000Z",
cdn Cdn App Dev Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-net.md
You need Visual Studio 2015 to complete this tutorial. [Visual Studio Community
Now that we've created a resource group for our CDN profiles and given our Azure AD application permission to manage CDN profiles and endpoints within that group, we can start creating our application. > [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
From within Visual Studio 2015, click **File**, **New**, **Project...** to open the new project dialog. Expand **Visual C#**, then select **Windows** in the pane on the left. Click **Console Application** in the center pane. Name your project, then click **OK**.
To see the completed project from this walkthrough, [download the sample](https:
To find additional documentation on the Azure CDN Management Library for .NET, view the [reference on MSDN](/dotnet/api/overview/azure/cdn).
-Manage your CDN resources with [PowerShell](cdn-manage-powershell.md).
+Manage your CDN resources with [PowerShell](cdn-manage-powershell.md).
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
Azure Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-sour
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- An AKS cluster with Linux node pools. If you do not have an AKS cluster, see the AKS quickstart [using the Azure CLI][./learn/quick-kubernetes-deploy-cli], [using Azure PowerShell][./learn/quick-kubernetes-deploy-powershell], or [using the Azure portal][./learn/quick-kubernetes-deploy-portal].
+- An AKS cluster with Linux node pools. If you do not have an AKS cluster, see the AKS quickstart [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
> [!WARNING] > AKS Chaos Mesh faults are only supported on Linux node pools.
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Azure Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-sour
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- An AKS cluster with a Linux node pool. If you do not have an AKS cluster, see the AKS quickstart [using the Azure CLI][./learn/quick-kubernetes-deploy-cli], [using Azure PowerShell][./learn/quick-kubernetes-deploy-powershell], or [using the Azure portal][./learn/quick-kubernetes-deploy-portal]..
+- An AKS cluster with a Linux node pool. If you do not have an AKS cluster, see the AKS quickstart [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
> [!WARNING] > AKS Chaos Mesh faults are only supported on Linux node pools.
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-get-started.md
The application is an advertising bulletin board. Users create an ad by entering
The application uses the [queue-centric work pattern](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern) to off-load the CPU-intensive work of creating thumbnails to a back-end process. ## Alternative architecture: App Service and WebJobs
-This tutorial shows how to run both front-end and back-end in an Azure cloud service. An alternative is to run the front-end in [Azure App Service](../app-service/index.yml) and use the [WebJobs](/azure/app-service/webjobs-create) feature for the back-end. For a tutorial that uses WebJobs, see [Get Started with the Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki). For information about how to choose the services that best fit your scenario, see [Azure App Service, Cloud Services, and virtual machines comparison](/azure/architecture/guide/technology-choices/compute-decision-tree).
+This tutorial shows how to run both front-end and back-end in an Azure cloud service. An alternative is to run the front-end in [Azure App Service](../app-service/index.yml) and use the [WebJobs](../app-service/webjobs-create.md) feature for the back-end. For a tutorial that uses WebJobs, see [Get Started with the Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki). For information about how to choose the services that best fit your scenario, see [Azure App Service, Cloud Services, and virtual machines comparison](/azure/architecture/guide/technology-choices/compute-decision-tree).
## What you'll learn * How to enable your machine for Azure development by installing the Azure SDK.
For more information, see the following resources:
* [Azure Cloud Services Part 1: Introduction](https://justazure.com/microsoft-azure-cloud-services-part-1-introduction/) * [How to manage Cloud Services](cloud-services-how-to-manage-portal.md) * [Azure Storage](../storage/index.yml)
-* [How to choose a cloud service provider](https://azure.microsoft.com/overview/choosing-a-cloud-service-provider/)
+* [How to choose a cloud service provider](https://azure.microsoft.com/overview/choosing-a-cloud-service-provider/)
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
Azure emulator:
> [!IMPORTANT] > Be sure to use a unique name, otherwise the publish process will fail. After the deployment has completed, the browser will open and navigate to the deployed service. >
- > If you receive an error stating that the provided subscription name doesn't exist in the imported publish profile, you must download and import the publishing profile for your subscription before deploying to Azure. See the **Deploying the Application to Azure** section of [Build and deploy a Node.js application to an Azure Cloud Service](/azure/cloud-services/cloud-services-nodejs-develop-deploy-app)
+ > If you receive an error stating that the provided subscription name doesn't exist in the imported publish profile, you must download and import the publishing profile for your subscription before deploying to Azure. See the **Deploying the Application to Azure** section of [Build and deploy a Node.js application to an Azure Cloud Service](./cloud-services-nodejs-develop-deploy-app.md)
> > ![A browser window displaying the service hosted on Azure][completed-app] > [!NOTE]
- > If you receive an error stating that the provided subscription name doesn't exist in the imported publish profile, you must download and import the publishing profile for your subscription before deploying to Azure. See the **Deploying the Application to Azure** section of [Build and deploy a Node.js application to an Azure Cloud Service](/azure/cloud-services/cloud-services-nodejs-develop-deploy-app)
+ > If you receive an error stating that the provided subscription name doesn't exist in the imported publish profile, you must download and import the publishing profile for your subscription before deploying to Azure. See the **Deploying the Application to Azure** section of [Build and deploy a Node.js application to an Azure Cloud Service](./cloud-services-nodejs-develop-deploy-app.md)
> >
For more information, see also the [Node.js Developer Center](/azure/developer/j
[chat-contents]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-5.png [The-output-of-the-npm-install-command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-7.png
-[The output of the Publish-AzureService command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-9.png
+[The output of the Publish-AzureService command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-9.png
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
This documentation contains the following types of articles:
* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
+For a more structured approach, follow a Microsoft Learn module for Face.
+* [Detect and analyze faces with the Face service](/learn/modules/detect-analyze-faces/)
+ ## Example use cases **Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or to recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
This documentation contains the following types of articles:
* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
+For a more structured approach, follow a Microsoft Learn module for Image Analysis.
+* [Analyze images with the Computer Vision service](/learn/modules/analyze-images-computer-vision/)
+ ## Image Analysis features You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library.md) to get started.
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
This documentation contains the following types of articles:
<!--* The [conceptual articles](how-to/call-read-api.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. -->
+For a more structured approach, follow a Microsoft Learn module for OCR.
+* [Read Text in Images and Documents with the Computer Vision Service](/learn/modules/read-text-images-documents-with-computer-vision-service/)
+ ## Read API The Computer Vision [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) is Azure's latest OCR technology ([learn what's new](./whats-new.md)) that extracts printed text (in several languages), handwritten text (in several languages), digits, and currency symbols from images and multi-page PDF documents. It's optimized to extract text from text-heavy images and multi-page PDF documents with mixed languages. It supports extracting both printed and handwritten text in the same image or document.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/overview.md
This documentation contains the following article types:
* [**Quickstarts**](client-libraries.md) are getting-started instructions to guide you through making requests to the service. * [**How-to guides**](try-text-api.md) contain instructions for using the service in more specific or customized ways. * [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features.
-* [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions.
+* [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions.
+
+For a more structured approach, follow a Microsoft Learn module for Content Moderator.
+* [Introduction to Content Moderator](/learn/modules/intro-to-content-moderator/)
+* [Classify and moderate text with Azure Content Moderator](/learn/modules/classify-and-moderate-text-with-azure-content-moderator/)
## Where it's used
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
This documentation contains the following types of articles:
* The [tutorials](./iot-visual-alerts-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. <!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.-->
+For a more structured approach, follow a Microsoft Learn module for Custom Vision.
+* [Classify images with the Custom Vision service](/learn/modules/classify-images-custom-vision/)
+* [Classify endangered bird species with Custom Vision](/learn/modules/cv-classify-bird-species/)
+ ## What it does The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that have and don't have the characteristics in question. You label the images yourself with your own custom labels (tags) at the time of submission. Then the algorithm trains to this data and calculates its own accuracy by testing itself on those same images. Once you've trained the algorithm, you can test, retrain, and eventually use it in your image recognition app to [classify images](getting-started-build-a-classifier.md). You can also [export the model](export-your-model.md) itself for offline use.
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Copying a model directly to a project in another region is not supported with th
::: zone pivot="rest-api"
-To copy a model to another Speech resource, use the [CopyModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To copy a model to another Speech resource, use the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.
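For orientation, here's a minimal sketch of that request in Python. The `copyto` route and host are assumed from the v3.0 reference linked above, and the keys, region, and model ID are placeholders; verify the exact route against the linked operation.

```python
# Sketch: copy a custom model to another Speech resource with the
# Speech-to-text REST API v3.0 CopyModelToSubscription operation.
# Keys, region, and model ID below are placeholders.
import requests

SOURCE_REGION = "YourServiceRegion"   # region of the source Speech resource
SOURCE_KEY = "YourSubscriptionKey"    # key of the source Speech resource
MODEL_ID = "00000000-0000-0000-0000-000000000000"

url = (
    f"https://{SOURCE_REGION}.api.cognitive.microsoft.com"
    f"/speechtotext/v3.0/models/{MODEL_ID}/copyto"
)
body = {"targetSubscriptionKey": "DestinationResourceKey"}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": SOURCE_KEY},
    json=body,
)
response.raise_for_status()
# The response describes the copied model; its "self" property is the
# URI of the new model on the destination resource.
print(response.json()["self"])
```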
To connect a new model to a project of the Speech resource where the model was c
- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
-Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModel) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
```azurecli-interactive curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Unresolved errors listed in the next table affect the quality of training, but d
| Mismatch | Low signal-noise ratio | Audio SNR level is lower than 20 dB. At least 35 dB is recommended.| | Mismatch | No score available |Failed to recognize speech content in this audio. Check the audio and the script content to make sure the audio is valid, and matches the script.|
+### Fix data issues online
+
+You can fix the utterances with issues individually on the **Data details** page.
+
+1. On the **Data details** page, select individual utterances you want to edit, then click **Edit**.
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset.png" alt-text="Screenshot of selecting edit button on the Data details page.":::
+
+1. The edit window appears.
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-editscript.png" alt-text="Screenshot of displaying Edit transcript and recording file window.":::
+
+1. Update the transcript or recording file according to the issue description in the edit window.
+
+   You can edit the transcript in the text box, then click **Done**.
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-scriptedit-done.png" alt-text="Screenshot of selecting Done button on the Edit transcript and recording file window.":::
+
+   If you need to update the recording file, select **Update recording file**, then upload the fixed recording file (.wav).
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-upload-recording.png" alt-text="Screenshot that shows how to upload recording file on the Edit transcript and recording file window.":::
+
+1. After the data in a training set is updated, check the data quality by clicking **Analyze data** before using this training set for training.
+
+   You can't select this training set for model training until the analysis is complete.
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-analyze.png" alt-text="Screenshot of selecting Analyze data on Data details page.":::
+
+ You can also delete utterances with issues by selecting them and clicking **Delete**.
+ ## Train your Custom Neural Voice model After you validate your data files, you can use them to build your Custom Neural Voice model. 1. On the **Train model** tab, select **Train a new model** to create a voice model with the data you've uploaded.
-1. Select the neural training method for your model and target language.
+1. Select the training method for your model.
+
+   If you want to create a voice in the same language as your training data, select the **Neural** method. With the **Neural** method, you can select different versions of the training recipe for your model. The versions vary according to the supported features and model training time. New versions are typically enhancements, with bugs fixed and new features supported. The latest version is selected by default.
- By default, your voice model is trained in the same language of your training data. You can also select to create a secondary language for your voice model. For more information, see [language support for Custom Neural Voice](language-support.md#custom-neural-voice). Also see information about [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for neural training.
+   You can also select **Neural - cross lingual** and a **Target language** to create a secondary language for your voice model. Only one target language can be selected for a voice model. You don't need to prepare additional data in the target language for training, but your test script must be in the target language. For the languages supported by the cross-lingual feature, see [supported languages](language-support.md#custom-neural-voice).
+
+   The same unit price applies to both **Neural** and **Neural - cross lingual**. See [the pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for training costs.
1. Choose the data you want to use for training, and specify a speaker file.
On **Add test scripts** window, click **Browse for a file** to select your own s
:::image type="content" source="media/custom-voice/cnv-model-upload-testscripts.png" alt-text="Screenshot of uploading model test scripts.":::
-For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+### Update engine version for your voice model
+
+Azure Text-to-Speech engines are updated from time to time to capture the latest language model that defines the pronunciation of the language. After you've trained your voice, you can apply your voice to the new language model by updating to the latest engine version.
+
+When a new engine is available, you're prompted to update your neural voice model.
++
+Go to the model details page and click **Update** at the top to display the **Update** window.
++
+Then click **Update** to update your model to the latest engine version.
++
+You're not charged for engine updates. The previous versions are still kept. You can check all engine versions for this model from the **Engine version** drop-down list, or remove a version if you no longer need it.
++
+The updated version is automatically set as the default. You can change the default version by selecting a version from the drop-down list and clicking **Set as default**.
++
+If you want to test each engine version of your voice model, you can select a version from the drop-down list, then select **DefaultTests** under **Testing** to listen to the sample audios. If you want to upload your own test scripts to further test your current engine version, first make sure the version is set as default, then follow the [testing steps above](#test-your-voice-model).
+
+After you've updated the engine version for your voice model, you need to redeploy this new version. You can only deploy the default version.
+
+For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
> [!NOTE] > Custom Neural Voice training is only available in some regions. But you can easily copy a neural voice model from these regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#text-to-speech).
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
The style and the characteristics of the trained voice model depend on the style
SSML is the markup language used to communicate with the text-to-speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
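As an illustration, here's a minimal Python Speech SDK sketch that sends SSML adjusting rate, pitch, and style to a deployed custom voice. The voice name, deployment ID, key, and region are placeholders, and the style switch only works if the voice was built with that style.

```python
# Sketch: synthesize SSML with prosody and style adjustments against a
# deployed custom neural voice. Voice name, deployment ID, key, and
# region are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion"
)
speech_config.endpoint_id = "YourDeploymentId"  # custom voice deployment ID

ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="YourCustomVoiceName">
    <mstts:express-as style="cheerful">
      <prosody rate="-10%" pitch="+5%">Welcome back! It's good to see you.</prosody>
    </mstts:express-as>
  </voice>
</speak>
"""

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)  # expect ResultReason.SynthesizingAudioCompleted
```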
+## Cross lingual feature
+
+With the cross-lingual feature (public preview), you can create a voice that speaks a language different from that of your training data. If the language of your training data is supported by the cross-lingual feature, you can create a voice that speaks a different language. For example, with `zh-CN` training data, you can create a voice that speaks `en-US` or any other language the feature supports. For details, see [supported languages](language-support.md#custom-neural-voice). You don't need to prepare additional data in the target language for training, but your test script must be in the target language.
+
+To create a voice that speaks a different language from your training data, select the **Neural - cross lingual** training method. See [how to train your custom neural voice model](how-to-custom-voice-create-voice.md#train-your-custom-neural-voice-model).
+
+After the voice is created, you can use the Audio Content Creation tool to fine-tune your deployed voice, with richer voice-tuning support. Sign in to Audio Content Creation in [Speech Studio](https://aka.ms/speechstudio/) with your Azure account, and select your created voice in the target language to start tuning.
+ ## Migrate to Custom Neural Voice If you're using the old version of Custom Voice (which is scheduled to be retired in February 2024), see [How to migrate to Custom Neural Voice](how-to-migrate-to-custom-neural-voice.md).
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
You can get pronunciation assessment scores for:
> [!NOTE] > For information about availability of pronunciation assessment, see [supported languages](language-support.md#pronunciation-assessment) and [available regions](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation).
+>
+> The syllable groups, IPA phonemes, and spoken phoneme features of pronunciation assessment are currently only available for the en-US locale.
## Configuration parameters
This table lists some of the key configuration parameters for pronunciation asse
|--|-| | `ReferenceText` | The text that the pronunciation will be evaluated against. | | `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. |
-| `Granularity` | The evaluation granularity. Accepted values are `Phoneme`, which shows the score on the full text, word and phoneme level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. |
+| `Granularity` | Determines the lowest level of evaluation granularity; scores for that level and every level above it are returned. Accepted values are `Phoneme` (scores on the full-text, word, syllable, and phoneme levels), `Syllable` (scores on the full-text, word, and syllable levels), `Word` (scores on the full-text and word levels), or `FullText` (a score on the full-text level only). The reference text can be a word, sentence, or paragraph. |
| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. | You must create a `PronunciationAssessmentConfig` object with the reference text, grading system, and granularity. Enabling miscue and other configuration settings are optional.
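For instance, here's a minimal sketch in Python (this article's snippets use Swift-style syntax; the Speech SDK names below are illustrative, with key, region, and the audio file as placeholders):

```python
# Sketch: create a pronunciation assessment config and apply it to a
# speech recognizer. Key, region, and the audio file are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion"
)
audio_config = speechsdk.audio.AudioConfig(filename="goodmorning.wav")

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Good morning.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    enable_miscue=True,
)

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, audio_config=audio_config
)
pron_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print(assessment.accuracy_score, assessment.fluency_score,
      assessment.completeness_score, assessment.pronunciation_score)
```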
do {
## Syllable groups
-For the [supported languages](language-support.md#pronunciation-assessment) in public preview, pronunciation assessment can provide syllable-level assessment results along with phonemes. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
+Pronunciation assessment can provide syllable-level assessment results. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
The following table compares example phonemes with the corresponding syllables.
To request syllable-level results along with phonemes, set the granularity [conf
## Phoneme alphabet format
-The phoneme name is provided together with the score, to help identity which phonemes were pronounced accurately or inaccurately. For the [supported languages](language-support.md#pronunciation-assessment) in public preview, you can get the phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format.
+The phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For the [supported languages](language-support.md#pronunciation-assessment), you can get the phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) format, and for the `en-US` locale, you can also get the phoneme name in [IPA](https://en.wikipedia.org/wiki/IPA) format.
The following table compares example SAPI phonemes with the corresponding IPA phonemes.
pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
## Spoken phoneme
-> [!NOTE]
-> The spoken phoneme feature of pronunciation assessment is only generally available for the `en-US` locale.
- With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes. For example, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". The actual spoken phonemes could be "h ə l oʊ". In the following assessment result, the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Use the following table to determine supported styles and roles for each neural
|en-US-GuyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`||| |en-US-JaneNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`||| |en-US-JasonNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-JennyNeural|`angry`, `assistant`, `chat`, `cheerful`,`customerservice`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, , `unfriendly`, `whispering`|||
+|en-US-JennyNeural|`angry`, `assistant`, `chat`, `cheerful`,`customerservice`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
|en-US-NancyNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`||| |en-US-SaraNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`||| |en-US-TonyNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
Pronunciation assessment provides subjective and objective feedback to language
Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input. - At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many words are pronounced in the speech to the reference text input. An overall score aggregated from Accuracy, Fluency and Completeness is then given to indicate the overall pronunciation quality of the given speech. - At the word-level, pronunciation assessment can automatically detect miscues and provide accuracy score simultaneously, which provides more detailed information on omission, repetition, insertions, and mispronunciation in the given speech.
+- Syllable-level accuracy scores are currently only available via the [JSON file](?tabs=json#scores-within-words) or [Speech SDK](how-to-pronunciation-assessment.md).
- At the phoneme level, pronunciation assessment provides accuracy scores of each phoneme, helping learners to better understand the pronunciation details of their speech. This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
The complete transcription is shown in the `text` attribute. You can see accurac
## Next steps - Use [pronunciation assessment with the Speech SDK](how-to-pronunciation-assessment.md)-- Read the blog about [use cases](https://techcommunity.microsoft.com/t5/azure-ai-blog/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501)
+- Read the blog about [use cases](https://techcommunity.microsoft.com/t5/azure-ai-blog/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501)
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The `ph` element is used for phonetic pronunciation in SSML documents. The `ph`
Phonetic alphabets are composed of phones, which are made up of letters, numbers, or characters, sometimes in combination. Each phone describes a unique sound of speech. This is in contrast to the Latin alphabet, where any letter might represent multiple spoken sounds. Consider the different pronunciations of the letter "c" in the words "candy" and "cease" or the different pronunciations of the letter combination "th" in the words "thing" and "those." > [!NOTE]
-> At this time, the phonemes tag isn't supported for five voices: et-EE-AnuNeural, ga-IE-OrlaNeural, lt-LT-OnaNeural, lv-LV-EveritaNeural, and mt-MT-GarceNeural.
+> The phonemes tag may not work for all locales.
**Syntax**
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Use the table below to find which API versions are supported by each feature:
| Feature | Supported versions | Latest Generally Available version | Latest preview version | |--||||
-| Custom text classification | `2022-05-01` ,`2022-05-15-preview` | `2022-05-01` | `2022-05-15-preview` |
-| Conversational language understanding | `2022-05-01` ,`2022-05-15-preview` | `2022-05-01` | `2022-05-15-preview` |
-| Custom named entity recognition | `2022-05-01` ,`2022-05-15-preview` | `2022-05-01` | `2022-05-15-preview` |
-| Orchestration workflow | `2022-05-01`,`2022-05-15-preview` | `2022-05-01` | `2022-05-15-preview` |
+| Custom text classification | `2022-05-01` | `2022-05-01` | |
+| Conversational language understanding | `2022-05-01` | `2022-05-01` | |
+| Custom named entity recognition | `2022-05-01` | `2022-05-01` | |
+| Orchestration workflow | `2022-05-01` | `2022-05-01` | |
## Next steps
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/data-formats.md
If you're [importing a project](../how-to/create-project.md#import-project) into
|Key |Placeholder |Value | Example | |||-|--|
-| `api-version` | `{API-VERSION}` | The version of the API you're calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) released. | `2022-03-01-preview` |
+| `api-version` | `{API-VERSION}` | The version of the API you're calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) released. | `2022-05-01` |
|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md)|`0.7`| | `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` | | `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset. When your model is deployed, you can query it in any supported language (not necessarily one included in your training documents). See [Language support](../language-support.md#multi-lingual-option) for more information about supported language codes. | `true`|
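As a hedged sketch, an import request with these placeholders filled in might look like the following in Python. The authoring route and headers here are assumptions based on the 2022-05-01 authoring API; check them against the API reference.

```python
# Sketch: import a CLU project. The endpoint, key, file name, and the
# authoring route are placeholders/assumptions.
import json
import requests

ENDPOINT = "https://your-resource.cognitiveservices.azure.com"
PROJECT_NAME = "EmailApp"
API_VERSION = "2022-05-01"

with open("project.json", encoding="utf-8") as f:
    project = json.load(f)  # the project file described by the table above

response = requests.post(
    f"{ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT_NAME}/:import",
    params={"api-version": API_VERSION},
    headers={"Ocp-Apim-Subscription-Key": "YourLanguageResourceKey"},
    json=project,
)
response.raise_for_status()
# Import runs as an async job; poll the operation-location header.
print(response.headers.get("operation-location"))
```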
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/call-api.md
First you will need to get your resource key and endpoint:
# [Client libraries (Azure SDK)](#tab/azure-sdk)
-First you will need to get your resource key and endpoint:
-- ### Use the client libraries (Azure SDK) You can also use the client libraries provided by the Azure SDK to send requests to your model.
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/create-project.md
You can export a Conversational Language Understanding project as a JSON file at
-## Delete resources
+## Delete project
### [Language Studio](#tab/language-studio)
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-model.md
# Deploy a model
-Once you are satisfied with how your model performs, it's ready to be deployed, and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+Once you are satisfied with how your model performs, you can deploy it and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](https://aka.ms/clu-runtime-api).
## Prerequisites * A successfully [created project](create-project.md) * [Labeled utterances](tag-utterances.md) and successfully [trained model](train-model.md)
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+* Reviewed the [model performance](view-model-evaluation.md) to determine how your model is performing.
See [project development lifecycle](../overview.md#project-development-lifecycle) for more information. ## Deploy model
-After you have reviewed the model's performance and decide it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/clu-apis). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum on 10 deployments in your project.
+After you have reviewed the model's performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/clu-runtime-api). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
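As a sketch, assigning a model to a `production` deployment through the authoring REST API might look like this in Python; the route, body shape, endpoint, key, and names are assumptions to verify against the authoring API reference.

```python
# Sketch: assign a trained model to a "production" deployment via the
# CLU authoring REST API. Endpoint, key, and names are placeholders.
import requests

ENDPOINT = "https://your-resource.cognitiveservices.azure.com"
PROJECT_NAME = "EmailApp"

response = requests.put(
    f"{ENDPOINT}/language/authoring/analyze-conversations/projects/"
    f"{PROJECT_NAME}/deployments/production",
    params={"api-version": "2022-05-01"},
    headers={"Ocp-Apim-Subscription-Key": "YourLanguageResourceKey"},
    json={"trainedModelLabel": "MyModel"},  # the trained model to deploy
)
response.raise_for_status()
# Deployment runs as an async job; poll the operation-location header.
print(response.headers.get("operation-location"))
```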
# [Language Studio](#tab/language-studio)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
This documentation contains the following article types:
* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service. * [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/tag-utterances.md) contain instructions for using the service in more specific or customized ways.
+* [How-to guides](how-to/create-project.md) contain instructions for using the service in more specific or customized ways.
## Example usage scenarios
Follow these steps to get the most out of your model:
3. **Train model**: Your model starts learning from your labeled data.
-4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-5. **Deploy model**: Deploying a model makes it available for use via the [Runtime API](https://aka.ms/clu-apis).
+5. **Deploy model**: Deploying a model makes it available for use via the [Runtime API](https://aka.ms/clu-runtime-api).
6. **Predict intents and entities**: Use your custom model to predict intents and entities from user's utterances.
As you use CLU, see the following reference documentation and samples for Azure
|||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-apis) | |
-|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
-|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for CLU]() to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the transparency note for CLU to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
The following limits are observed for the conversational language understanding.
| Item | Limits | |--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum allowed length is 50 characters. |
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. | | Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Intent name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Entity name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and all symbols except `: $ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
+| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and all symbols except `: $ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
## Next steps
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/faq.md
When you're ready to start [using your model to make predictions](#how-do-i-use-
## What is the recommended CI/CD process?
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md)to learn about maximum number of trained models with the same project. When you [train your data](how-to/train-model.md) you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing set where there is no guarantee that the reflected model evaluation is about the same test set, and the results are not comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
+You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its performance](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove labels from your data, then train a **new** model and test it as well. View [service limits](service-limits.md) to learn about the maximum number of trained models within the same project. When you [train a model](how-to/train-model.md), you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing sets, in which case there's no guarantee that the evaluation is performed on the same test set, so the results aren't comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
## Does a low or high model score guarantee bad or good performance in production?
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
# Query deployment to extract entities After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
-You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api) or through the [Client libraries (Azure SDK)](#get-task-results).
+You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api) or through the Client libraries (Azure SDK).
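As an illustrative sketch with the Azure SDK for Python (`azure-ai-textanalytics`, in preview at the time of writing); the endpoint, key, document, and project/deployment names are placeholders:

```python
# Sketch: query a custom NER deployment with the Azure SDK for Python.
# Endpoint, key, and project/deployment names are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("YourLanguageResourceKey"),
)

documents = ["The loan agreement was signed on June 1, 2022 by both parties."]

poller = client.begin_recognize_custom_entities(
    documents,
    project_name="YourProjectName",
    deployment_name="YourDeploymentName",
)
for result in poller.result():
    if result.is_error:
        print(result.error)  # per-document error, if any
        continue
    for entity in result.entities:
        print(entity.text, entity.category, entity.confidence_score)
```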
## Test deployed model
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/improve-model.md
In some cases, the model is expected to extract entities that are inconsistent w
## Prerequisites * A successfully [created project](create-project.md) with a configured Azure blob storage account
- * Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
* [Labeled data](tag-data.md) * A [successfully trained model](train-model.md) * Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
To review inconsistent predictions in the [test set](train-model.md) from within
## Next steps
-Once you're satisfied with how your model performs, you can [deploy your model](call-api.md).
+Once you're satisfied with how your model performs, you can [deploy your model](deploy-model.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/overview.md
As you use custom NER, see the following reference documentation and samples for
|||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | |
-|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_RecognizeCustomEntities.md) |
-| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java) |
-|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
-|Python | [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py) |
+|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_RecognizeCustomEntities.md) |
+| Java (Runtime) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java) |
+|JavaScript (Runtime) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
+|Python (Runtime) | [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py) |
## Responsible AI
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
zone_pivot_groups: usage-custom-language-features
-# Quickstart: Custom named entity recognition (preview)
+# Quickstart: Custom named entity recognition
Use this article to get started with creating a custom NER project where you can train custom models for custom entity recognition. A model is an object that's trained to do a certain task. For this system, the models extract named entities. Models are trained by learning from tagged data.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/service-limits.md
The following limits are observed for the custom named entity recognition.
| Item | Limits | |--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum length allowed is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
-| Entity name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum length allowed is 50 characters. |
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Entity name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and all symbols except `: $ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. |
| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. | + ## Next steps * [Custom NER overview](overview.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/call-api.md
# Query deployment to classify text After the deployment is added successfully, you can query the deployment to classify text based on the model you assigned to the deployment.
-You can query the deployment programmatically [Prediction API](https://aka.ms/ct-runtime-api) or through the [client libraries (Azure SDK)](#get-task-results).
+You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api) or through the client libraries (Azure SDK).
## Test deployed model
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
As you use custom text classification, see the following reference documentation
|||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | |
-|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples - Single label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_SingleCategoryClassify.md) [C# samples - Multi label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample11_MultiCategoryClassify.md) |
-| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples - Single label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentSingleCategory.java) [Java Samples - Multi label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentMultiCategory.java) |
-|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples - Single label classification](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) [JavaScript samples - Multi label classification](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
-|Python | [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples - Single label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_category_classify.py) [Python samples - Multi label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_category_classify.py) |
+|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples - Single label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_SingleCategoryClassify.md) [C# samples - Multi label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample11_MultiCategoryClassify.md) |
+| Java (Runtime) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples - Single label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentSingleCategory.java) [Java Samples - Multi label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentMultiCategory.java) |
+|JavaScript (Runtime) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples - Single label classification](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) [JavaScript samples - Multi label classification](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
+|Python (Runtime)| [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples - Single label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_category_classify.py) [Python samples - Multi label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_category_classify.py) |
## Responsible AI
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/quickstart.md
zone_pivot_groups: usage-custom-language-features
-# Quickstart: Custom text classification (preview)
+# Quickstart: Custom text classification
Use this article to get started with creating a custom text classification project where you can train custom models for text classification. A model is an object that's trained to do a certain task. For this system, the models classify text. Models are trained by learning from tagged data.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/service-limits.md
The following limits are observed for custom text classification.
| Item | Limits | |--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
-| Class name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Class name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and all symbols except `: $ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. |
| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. | ## Next steps
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/create-project.md
Before you start using orchestration workflow, you will need an Azure Language r
> [!NOTE] > * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you're planning to use question answering, you must enable question answering during resource creation.
[!INCLUDE [create a new resource from the Azure portal](../includes/resource-creation-azure-portal.md)]
You can export an orchestration workflow project as a JSON file at any time.
-## Delete resources
+## Delete project
### [Language Studio](#tab/language-studio)
When you don't need your project anymore, you can delete your project using the
## Next Steps
-[Build schema](./train-model.md)
+[Build schema](./build-schema.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/train-model.md
Training could take some time depending on the size of your training data and com
## Next steps
-* [Model evaluation metrics](../concepts/evaluation-metrics.md)
-* [Deploy and query the model](./deploy-model.md)
+* [Model evaluation metrics concepts](../concepts/evaluation-metrics.md)
+* How to [deploy a model](./deploy-model.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md
This documentation contains the following article types:
* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service. * [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/tag-utterances.md) contain instructions for using the service in more specific or customized ways.
+* [How-to guides](how-to/create-project.md) contain instructions for using the service in more specific or customized ways.
## Example usage scenarios
As you use orchestration workflow, see the following reference documentation and
|Development option / language |Reference documentation |Samples | |||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | |
-|REST APIs (Prediction) | [REST API documentation](https://aka.ms/clu-runtime-api) | |
-|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
-|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+|REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-runtime-api) | |
+|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for CLU and orchestration workflow]() to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency note for CLU and orchestration workflow to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/service-limits.md
The following limits are observed for orchestration workflow.
| Attribute | Limits | |--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum allowed length is 50 characters. |
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. | | Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Intent name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and all symbols except `: $ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. |
## Next steps
cognitive-services Responsible Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-use-cases.md
An AI system includes not only the technology, but also the people who will use
Microsoft provides *Transparency Notes* to help you understand how our AI technology works. This includes the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
-Transparency Notes are part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see [Microsoft AI Principles](https://
-www.microsoft.com/ai/responsible-ai).
+Transparency Notes are part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see [Microsoft AI Principles](https://www.microsoft.com/ai/responsible-ai).
## Introduction to Personalizer
communication-services Real Time Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/real-time-inspection.md
Communication Monitoring is compatible with the same browsers as the Calling SDK
## Get started with Communication Monitoring
-The tool can be accessed through an npm package `@azure/communication-monitoring`. The package contains the `CommunicationMonitoring` object that can be attached to a `Call`. Instructions on how to initialize the required `CallClient` and `CallAgent` objects can be found [here](https://docs.microsoft.com/azure/communication-services/how-tos/calling-sdk/manage-calls?pivots=platform-web#initialize-required-objects). `CommunicationMonitoring` also requires an `HTMLDivElement` as part of its constructor on which it will be rendered. The `HTMLDivElement` will dictate the size of the rendered panel.
+The tool can be accessed through an npm package `@azure/communication-monitoring`. The package contains the `CommunicationMonitoring` object that can be attached to a `Call`. Instructions on how to initialize the required `CallClient` and `CallAgent` objects can be found [here](../../how-tos/calling-sdk/manage-calls.md?pivots=platform-web#initialize-required-objects). `CommunicationMonitoring` also requires an `HTMLDivElement` as part of its constructor on which it will be rendered. The `HTMLDivElement` will dictate the size of the rendered panel.
### Installing Communication Monitoring
The tool includes the ability to download the logs captured using the `Download
- [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md) - [Leverage Network Diagnostic Tool](./network-diagnostic.md)-- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
+- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
This article demonstrates how to deploy a container group with custom DNS settin
For more information on deploying container groups to a virtual network, see the [Deploy in a virtual network article](container-instances-vnet.md). > [!IMPORTANT]
-> Previously, the process of deploying container groups on virtual networks used [network profiles](/azure/container-instances/container-instances-virtual-network-concepts#network-profile) for configuration. However, network profiles have been retired as of the `2021-07-01` API version. We recommend you use the latest API version, which relies on [subnet IDs](/azure/virtual-network/subnet-delegation-overview) instead.
+> Previously, the process of deploying container groups on virtual networks used [network profiles](./container-instances-virtual-network-concepts.md#network-profile) for configuration. However, network profiles have been retired as of the `2021-07-01` API version. We recommend you use the latest API version, which relies on [subnet IDs](../virtual-network/subnet-delegation-overview.md) instead.
## Prerequisites
container-instances Container Instances Using Azure Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-using-azure-container-registry.md
## Limitations
-* The [Azure Container Registry](../container-registry/container-registry-vnet.md) must have [Public Access set to 'All Networks'](../container-registry/container-registry-access-selected-networks.md). To use an Azure container registry with Public Access set to 'Select Networks' or 'None', visit [ACI's article for using Managed-Identity based authentication with ACR](/azure/container-registry/container-registry-authentication-managed-identity).
+* The [Azure Container Registry](../container-registry/container-registry-vnet.md) must have [Public Access set to 'All Networks'](../container-registry/container-registry-access-selected-networks.md). To use an Azure container registry with Public Access set to 'Select Networks' or 'None', visit [ACI's article for using Managed-Identity based authentication with ACR](../container-registry/container-registry-authentication-managed-identity.md).
## Configure registry authentication
For more information about Azure Container Registry authentication, see [Authent
[az-acr-show]: /cli/azure/acr#az_acr_show [az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac [az-container-create]: /cli/azure/container#az_container_create
-[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az_keyvault_secret_set
+[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az_keyvault_secret_set
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
How to enable analytical store on a container:
To learn more, see [how to configure analytical TTL on a container](configure-synapse-link.md#create-analytical-ttl).
-## Cost-effective archival of historical data
+## Cost-effective analytics on historical data
Data tiering refers to the separation of data between storage infrastructures optimized for different scenarios, thereby improving the overall performance and cost-effectiveness of the end-to-end data stack. With analytical store, Azure Cosmos DB now supports automatic tiering of data from the transactional store to analytical store with different data layouts. Because analytical store is optimized for storage cost compared to the transactional store, you can retain much longer horizons of operational data for historical analysis.
-After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the transactional store Time-to-Live (TTTL) property to have records automatically deleted from the transactional store after a certain time period. Similarly, the analytical store Time-to-Live (ATTL)' allows you to manage the lifecycle of data retained in the analytical store independent from the transactional store. By enabling analytical store and configuring TTL properties, you can seamlessly tier and define the data retention period for the two stores.
+After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the transactional store Time-to-Live (TTL) property to have records automatically deleted from the transactional store after a certain time period. Similarly, the analytical store Time-to-Live (ATTL) allows you to manage the lifecycle of data retained in the analytical store independently of the transactional store. By enabling analytical store and configuring TTL properties, you can seamlessly tier and define the data retention period for the two stores.
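
As an illustration, here's a minimal sketch of setting both TTL properties through the .NET SDK; the `database` variable, container name, and durations are illustrative assumptions, while `DefaultTimeToLive` and `AnalyticalStoreTimeToLiveInSeconds` are the SDK's `ContainerProperties` TTL members:

```csharp
using Microsoft.Azure.Cosmos;

// Assumes an existing Database instance named "database"; names/values are illustrative.
ContainerProperties properties = new(id: "operationalData", partitionKeyPath: "/partitionKey")
{
    // Transactional TTL: automatically delete records from the transactional store after 30 days.
    DefaultTimeToLive = 60 * 60 * 24 * 30,
    // Analytical TTL: retain records in the analytical store for 2 years.
    AnalyticalStoreTimeToLiveInSeconds = 60 * 60 * 24 * 365 * 2
};

Container container = await database.CreateContainerIfNotExistsAsync(properties);
```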
## Backup
cosmos-db Manage Data Cqlsh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-cqlsh.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
## Prerequisites-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](/azure/cosmos-db/try-free) without an Azure subscription.
## Create a database account
cosmos-db Bulk Executor Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/bulk-executor-graph-dotnet.md
Each state will contain the following values:
## Next steps * Review the [BulkExecutor Java, which is Open Source](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/java/src/main/java/com/azure/graph/bulk/impl) for more details about the classes and methods defined in this namespace.
-* Review the [BulkMode, which is part of .NET V3 SDK](../sql/tutorial-sql-api-dotnet-bulk-import.md)
+* Review the [BulkMode, which is part of .NET V3 SDK](../sql/tutorial-sql-api-dotnet-bulk-import.md)
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-python.md
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
* [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers) * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier) * Without an Azure active subscription:
- * [Try Azure Cosmos DB for free](/azure/cosmos-db/try-free), a tests environment that lasts for 30 days.
+ * [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days.
* [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) - [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`. - [Visual Studio Code](https://code.visualstudio.com/).
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use i
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) > [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
+> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
cosmos-db How To Dotnet Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-container.md
# Create a container in Azure Cosmos DB SQL API using .NET + Containers in Azure Cosmos DB store sets of items. Before you can create, query, or manage items, you must first create a container. ## Name a container
The following example shows the **Database.CreateContainerIfNotExistsAsync** met
:::code language="csharp" source="~/azure-cosmos-dotnet-v3/226-create-container-options/Program.cs" id="create_container_response" highlight="2,6":::
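
For reference, a minimal sketch of the call shape; the `database` variable, container name, partition key path, and throughput value here are illustrative assumptions:

```csharp
using Microsoft.Azure.Cosmos;

// Creates the container only if it doesn't already exist, then unwraps the Container.
ContainerResponse response = await database.CreateContainerIfNotExistsAsync(
    id: "products",
    partitionKeyPath: "/categoryId",
    throughput: 400);

Container container = response.Container;
```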
-## See also
+## Next steps
+
+Now that you've created a container, use the next guide to create items.
-- [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md)-- [Create a database](how-to-dotnet-create-database.md)
+> [!div class="nextstepaction"]
+> [Create an item](how-to-dotnet-create-item.md)
cosmos-db How To Dotnet Create Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-database.md
# Create a database in Azure Cosmos DB SQL API using .NET + Databases in Azure Cosmos DB are units of management for one or more containers. Before you can create or manage containers, you must first create a database. ## Name a database
The following example shows the **CosmosClient.CreateDatabaseIfNotExistsAsync**
:::code language="csharp" source="~/azure-cosmos-dotnet-v3/201-create-database-options/Program.cs" id="create_database_response" highlight="2,6":::
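
For reference, a minimal sketch of the call shape; the `client` variable, database name, and throughput value here are illustrative assumptions:

```csharp
using Microsoft.Azure.Cosmos;

// Creates the database only if it doesn't already exist, then unwraps the Database.
DatabaseResponse response = await client.CreateDatabaseIfNotExistsAsync(
    id: "cosmicworks",
    throughput: 400);

Database database = response.Database;
```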
-## See also
+## Next steps
+
+Now that you've created a database, use the next guide to create containers.
-- [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md)-- [Create a container](how-to-dotnet-create-container.md)
+> [!div class="nextstepaction"]
+> [Create a container](how-to-dotnet-create-container.md)
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-item.md
+
+ Title: Create an item in Azure Cosmos DB SQL API using .NET
+description: Learn how to create, upsert, or replace an item in your Azure Cosmos DB SQL API container using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 06/15/2022+++
+# Create an item in Azure Cosmos DB SQL API using .NET
++
+Items in Azure Cosmos DB represent a specific entity stored within a container. In the SQL API, an item consists of JSON-formatted data with a unique identifier.
+
+## Create a unique identifier for an item
+
+The unique identifier is a distinct string that identifies an item within a container. The ``id`` property is the only required property when creating a new JSON document. For example, this JSON document is a valid item in Azure Cosmos DB:
+
+```json
+{
+ "id": "unique-string-2309509"
+}
+```
+
+Within the scope of a container, two items can't share the same unique identifier.
+
+> [!IMPORTANT]
+> The ``id`` property is case-sensitive. Properties named ``ID``, ``Id``, ``iD``, and ``_id`` will be treated as arbitrary JSON properties.
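
For example, a minimal sketch (assuming the SDK's default Newtonsoft.Json serializer; the type and property names are illustrative) of mapping a Pascal-cased C# property onto the required lowercase ``id``:

```csharp
using Newtonsoft.Json;

public class Product
{
    // Serialized as the required lowercase "id" property.
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    public string Name { get; set; }
}
```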
+
+Once created, the URI for an item is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>/docs/<item-resource-identifier>``
+
+When referencing the item using a URI, use the system-generated *resource identifier* instead of the ``id`` field. For more information about system-generated item properties in Azure Cosmos DB SQL API, see [properties of an item](../account-databases-containers-items.md#properties-of-an-item).
+
+## Create an item
+
+> [!NOTE]
+> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
+>
+> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/250-create-item/Product.cs" id="type" :::
+>
+> The examples also assume that you have already created a new object of type **Product** named **newItem**:
+>
+> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/250-create-item/Program.cs" id="create_object" :::
+>
+
+To create an item, call one of the following methods:
+
+* [``CreateItemAsync<>``](#create-an-item-asynchronously)
+* [``ReplaceItemAsync<>``](#replace-an-item-asynchronously)
+* [``UpsertItemAsync<>``](#create-or-replace-an-item-asynchronously)
+
+## Create an item asynchronously
+
+The following example creates a new item asynchronously:
++
+The [``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) method will throw an exception if there's a conflict with the unique identifier of an existing item. To learn more about potential exceptions, see [``CreateItemAsync<>`` exceptions](/dotnet/api/microsoft.azure.cosmos.container.createitemasync#exceptions).
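
A minimal sketch of that call, assuming an existing ``Container`` named ``container`` plus the ``Product`` type and ``newItem`` object described above; the partition key value is illustrative:

```csharp
using Microsoft.Azure.Cosmos;

// Creates the item; throws CosmosException (409 Conflict) if the id already exists.
ItemResponse<Product> response = await container.CreateItemAsync<Product>(
    item: newItem,
    partitionKey: new PartitionKey("gear-surf-surfboards"));
```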
+
+## Replace an item asynchronously
+
+The following example replaces an existing item asynchronously:
++
+The [``Container.ReplaceItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.replaceitemasync) method requires the provided string for the ``id`` parameter to match the unique identifier of the ``item`` parameter.
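
A minimal sketch of that call, assuming the same ``container`` and ``newItem`` objects and that ``Product`` exposes its unique identifier as ``id``; the partition key value is illustrative:

```csharp
// Replaces an existing item; the id argument must match newItem's unique identifier.
ItemResponse<Product> response = await container.ReplaceItemAsync<Product>(
    item: newItem,
    id: newItem.id,
    partitionKey: new PartitionKey("gear-surf-surfboards"));
```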
+
+## Create or replace an item asynchronously
+
+The following example will create a new item or replace an existing item if an item already exists with the same unique identifier:
++
+The [``Container.UpsertItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync) method will use the unique identifier of the ``item`` parameter to determine if there's a conflict with an existing item and to replace the item appropriately.
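
A minimal sketch of that call, under the same assumptions as the previous examples:

```csharp
// Creates the item if it doesn't exist, or replaces it if the id already exists.
ItemResponse<Product> response = await container.UpsertItemAsync<Product>(
    item: newItem,
    partitionKey: new PartitionKey("gear-surf-surfboards"));
```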
+
+## Next steps
+
+Now that you've created various items, use the next guide to read an item.
+
+> [!div class="nextstepaction"]
+> [Read an item](how-to-dotnet-read-item.md)
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-get-started.md
# Get started with Azure Cosmos DB SQL API and .NET+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] This article shows you how to connect to Azure Cosmos DB SQL API using the .NET SDK. Once connected, you can perform operations on databases, containers, and items.
Another constructor for **CosmosClient** only contains a single parameter:
##### [Azure CLI](#tab/azure-cli)
-1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
+1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
```azurecli-interactive # Retrieve most recently created account name
In your code editor, add using directives for ``Azure.Core`` and ``Azure.Identit
#### Create CosmosClient with default credential implementation
-If you're testing on a local machine, or your application will run on Azure services with direct support for managed identities, obtain an OAuth token by creating a [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) instance.
+If you're testing on a local machine, or your application will run on Azure services with direct support for managed identities, obtain an OAuth token by creating a [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) instance.
For this example, we saved the instance in a variable of type [``TokenCredential``](/dotnet/api/azure.core.tokencredential) as that's a more generic type that's reusable across SDKs.
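
A minimal sketch of that pattern, assuming the ``COSMOS_ENDPOINT`` environment variable described earlier in this article:

```csharp
using Azure.Core;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

// DefaultAzureCredential works locally (Azure CLI, Visual Studio) and with managed identities.
TokenCredential credential = new DefaultAzureCredential();

CosmosClient client = new(
    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    tokenCredential: credential);
```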
For this example, we create a [``ClientSecretCredential``](/dotnet/api/azure.ide
:::code language="csharp" source="~/azure-cosmos-dotnet-v3/104-client-secret-credential/Program.cs" id="credential" highlight="3-5":::
-You can obtain the client ID, tenant ID, and client secret when you register an application in Azure Active Directory (AD). For more information about registering Azure AD applications, see [Register an application with the Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
+You can obtain the client ID, tenant ID, and client secret when you register an application in Azure Active Directory (AD). For more information about registering Azure AD applications, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters. :::code language="csharp" source="~/azure-cosmos-dotnet-v3/104-client-secret-credential/Program.cs" id="secret_credential" highlight="4"::: - ## Build your application As you build your application, your code will primarily interact with four types of resources: -- The SQL API account, which is the unique top-level namespace for your Azure Cosmos DB data.
+* The SQL API account, which is the unique top-level namespace for your Azure Cosmos DB data.
-- Databases, which organize the containers in your account.
+* Databases, which organize the containers in your account.
-- Containers, which contain a set of individual items in your database.
+* Containers, which contain a set of individual items in your database.
-- Items, which represent a JSON document in your container.
+* Items, which represent a JSON document in your container.
The following diagram shows the relationship between these resources.
The following guides show you how to use each of these classes to build your app
## See also -- [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)-- [Samples](samples-dotnet.md)-- [API reference](/dotnet/api/microsoft.azure.cosmos)-- [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)-- [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
+* [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+* [Samples](samples-dotnet.md)
+* [API reference](/dotnet/api/microsoft.azure.cosmos)
+* [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)
+* [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
## Next steps
cosmos-db How To Dotnet Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-query-items.md
+
+ Title: Query items in Azure Cosmos DB SQL API using .NET
+description: Learn how to query items in your Azure Cosmos DB SQL API container using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 06/15/2022+++
+# Query items in Azure Cosmos DB SQL API using .NET
++
+Items in Azure Cosmos DB represent entities stored within a container. In the SQL API, an item consists of JSON-formatted data with a unique identifier. When you issue queries using the SQL API, results are returned as a JSON array of JSON documents.
+
+## Query items using SQL
+
+The Azure Cosmos DB SQL API supports the use of Structured Query Language (SQL) to perform queries on items in containers. A simple SQL query like ``SELECT * FROM products`` will return all items and properties from a container. Queries can be even more complex and include specific field projections, filters, and other common SQL clauses:
+
+```sql
+SELECT
+ p.name,
+ p.description AS copy
+FROM
+ products p
+WHERE
+ p.price > 500
+```
+
+To learn more about the SQL syntax for Azure Cosmos DB SQL API, see [Getting started with SQL queries](sql-query-getting-started.md).
+
+## Query an item
+
+> [!NOTE]
+> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
+>
+> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/300-query-items/Product.cs" id="type" :::
+>
+
+To query items in a container, call one of the following methods:
+
+* [``GetItemQueryIterator<>``](#query-items-using-a-sql-query-asynchronously)
+* [``GetItemLinqQueryable<>``](#query-items-using-linq-asynchronously)
+
+## Query items using a SQL query asynchronously
+
+This example builds a SQL query using a simple string, retrieves a feed iterator, and then uses nested loops to iterate over results. The outer **while** loop will iterate through result pages, while the inner **foreach** loop iterates over results within a page.
++
+The [Container.GetItemQueryIterator<>](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) method returns a [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) that is used to iterate through multi-page results. The ``HasMoreResults`` property indicates if there are more result pages left. The ``ReadNextAsync`` method gets the next page of results as an enumerable that is then used in a loop to iterate over results.
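
A minimal sketch of that loop, assuming an existing ``Container`` named ``container`` and the ``Product`` type above; the query text and property names are illustrative:

```csharp
using Microsoft.Azure.Cosmos;

using FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(
    queryText: "SELECT * FROM products p WHERE p.price > 500");

// Outer loop walks result pages; inner loop walks items within a page.
while (feed.HasMoreResults)
{
    FeedResponse<Product> page = await feed.ReadNextAsync();
    foreach (Product item in page)
    {
        Console.WriteLine($"Found item: {item.name}");
    }
}
```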
+
+Alternatively, use the [QueryDefinition](/dotnet/api/microsoft.azure.cosmos.querydefinition) to build a SQL query with parameterized input:
++
+> [!TIP]
+> Parameterized input values can help prevent many common SQL query injection attacks.
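
For instance, a minimal sketch of the parameterized form, under the same assumptions as above:

```csharp
// Bind user-supplied values as parameters rather than concatenating them into the query text.
QueryDefinition query = new QueryDefinition(
    "SELECT * FROM products p WHERE p.price > @lower")
    .WithParameter("@lower", 500);

using FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(query);
```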
+
+## Query items using LINQ asynchronously
+
+In this example, an [``IQueryable<>``](/dotnet/api/system.linq.iqueryable) object is used to construct a [Language Integrated Query (LINQ)](/dotnet/csharp/programming-guide/concepts/linq/) query. The results are then iterated over using a feed iterator.
++
+The [Container.GetItemLinqQueryable<>](/dotnet/api/microsoft.azure.cosmos.container.getitemlinqqueryable) method constructs an ``IQueryable`` to build the LINQ query. Then the ``ToFeedIterator<>`` method is used to convert the LINQ query expression into a [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1).
+
+> [!TIP]
+> While you can iterate over the ``IQueryable<>``, this operation is synchronous. Use the ``ToFeedIterator<>`` method to gather results asynchronously.
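
A minimal sketch of the LINQ path, again assuming ``container`` and the ``Product`` type; the filter is illustrative:

```csharp
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Linq;

// Build the query with LINQ, then execute it asynchronously through a feed iterator.
IQueryable<Product> queryable = container.GetItemLinqQueryable<Product>()
    .Where(p => p.price > 500);

using FeedIterator<Product> feed = queryable.ToFeedIterator();

while (feed.HasMoreResults)
{
    foreach (Product item in await feed.ReadNextAsync())
    {
        Console.WriteLine($"Found item: {item.name}");
    }
}
```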
+
+## Next steps
+
+Now that you've queried multiple items, try one of our end-to-end tutorials with the SQL API.
+
+> [!div class="nextstepaction"]
+> [Build a .NET console app in Azure Cosmos DB SQL API](sql-api-get-started.md)
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-read-item.md
+
+ Title: Read an item in Azure Cosmos DB SQL API using .NET
+description: Learn how to point read a specific item in your Azure Cosmos DB SQL API container using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 06/15/2022+++
+# Read an item in Azure Cosmos DB SQL API using .NET
++
+Items in Azure Cosmos DB represent a specific entity stored within a container. In the SQL API, an item consists of JSON-formatted data with a unique identifier.
+
+## Reading items with unique identifiers
+
+Every item in Azure Cosmos DB SQL API has a unique identifier specified by the ``id`` property. Within the scope of a container, two items can't share the same unique identifier. However, Azure Cosmos DB requires both the unique identifier and the partition key value of an item to perform a quick *point read* of that item. If only the unique identifier is available, you would have to perform a less efficient [query](how-to-dotnet-query-items.md) to look up the item across multiple logical partitions. To learn more about point reads and queries, see [optimize request cost for reading data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries).
+
+## Read an item
+
+> [!NOTE]
+> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
+>
+> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/275-read-item/Product.cs" id="type" :::
+>
+
+To perform a point read of an item, call one of the following methods:
+
+* [``ReadItemAsync<>``](#read-an-item-asynchronously)
+* [``ReadItemStreamAsync<>``](#read-an-item-as-a-stream-asynchronously)
+* [``ReadManyItemsAsync<>``](#read-multiple-items-asynchronously)
+
+## Read an item asynchronously
+
+The following example point reads a single item asynchronously and returns a deserialized item using the provided generic type:
++
+The [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) method reads an item and returns an object of type [``ItemResponse<>``](/dotnet/api/microsoft.azure.cosmos.itemresponse-1). The **ItemResponse<>** type inherits from the [``Response<>``](/dotnet/api/microsoft.azure.cosmos.response-1) type, which contains an implicit conversion operator to convert the object into the generic type. To learn more about implicit operators, see [user-defined conversion operators](/dotnet/csharp/language-reference/operators/user-defined-conversion-operators).
+
+Alternatively, you can return the **ItemResponse<>** generic type and explicitly get the resource. The more general **ItemResponse<>** type also contains useful metadata about the underlying API operation. In this example, metadata about the request unit charge for this operation is gathered using the **RequestCharge** property.
++
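A minimal sketch covering both shapes, assuming an existing ``Container`` named ``container``, the ``Product`` type above, and illustrative id and partition key values:

```csharp
using Microsoft.Azure.Cosmos;

// Point read deserialized directly into the generic type via the implicit conversion.
Product item = await container.ReadItemAsync<Product>(
    id: "68719518388",
    partitionKey: new PartitionKey("gear-surf-surfboards"));

// Alternatively, keep the response wrapper to inspect metadata such as the RU charge.
ItemResponse<Product> response = await container.ReadItemAsync<Product>(
    id: "68719518388",
    partitionKey: new PartitionKey("gear-surf-surfboards"));
Console.WriteLine($"Request charge: {response.RequestCharge:0.00}");
```
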
+## Read an item as a stream asynchronously
+
+This example reads an item as a data stream directly:
++
+The [``Container.ReadItemStreamAsync``](/dotnet/api/microsoft.azure.cosmos.container.readitemstreamasync) method returns the item as a [``Stream``](/dotnet/api/system.io.stream) without deserializing the contents.
+
+If you aren't planning to deserialize the items directly, using the stream APIs can improve performance by handing off the item as a stream directly to the next component of your application. For more tips on how to optimize the SDK for high performance scenarios, see [SDK performance tips](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage).
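
A minimal sketch of the stream path, under the same assumptions as the previous example:

```csharp
using System.IO;
using Microsoft.Azure.Cosmos;

// Reads the raw JSON payload without deserializing it into a C# type.
using ResponseMessage response = await container.ReadItemStreamAsync(
    id: "68719518388",
    partitionKey: new PartitionKey("gear-surf-surfboards"));

using StreamReader reader = new(response.Content);
string json = await reader.ReadToEndAsync();
```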
+
+## Read multiple items asynchronously
+
+In this example, a list of tuples containing unique identifier and partition key pairs is used to look up and retrieve multiple items:
++
+[``Container.ReadManyItemsAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readmanyitemsasync) returns a list of items based on the unique identifiers and partition keys you provide. This operation is typically more performant than a query since you'll effectively perform a point read operation on all items in the list.
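
A minimal sketch of that lookup, with illustrative id and partition key values:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

// Each tuple pairs an item's unique identifier with its partition key.
List<(string id, PartitionKey partitionKey)> keys = new()
{
    ("68719518388", new PartitionKey("gear-surf-surfboards")),
    ("68719518399", new PartitionKey("gear-surf-surfboards"))
};

FeedResponse<Product> items = await container.ReadManyItemsAsync<Product>(keys);
```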
+
+## Next steps
+
+Now that you've read various items, use the next guide to query items.
+
+> [!div class="nextstepaction"]
+> [Query items](how-to-dotnet-query-items.md)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
This quickstart will create a single Azure Cosmos DB account using the SQL API.
```azurecli-interactive # Variable for resource group name
- resourceGroupName="msdocs-cosmos-dotnet-quickstart-rg"
+ resourceGroupName="msdocs-cosmos-quickstart-rg"
location="westus" # Variable for account name with a randomnly generated suffix let suffix=$RANDOM*$RANDOM
- accountName="msdocs-dotnet-$suffix"
+ accountName="msdocs-$suffix"
``` 1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
This quickstart will create a single Azure Cosmos DB account using the SQL API.
```azurepowershell-interactive # Variable for resource group name
- $RESOURCE_GROUP_NAME = "msdocs-cosmos-dotnet-quickstart-rg"
+ $RESOURCE_GROUP_NAME = "msdocs-cosmos-quickstart-rg"
$LOCATION = "West US" # Variable for account name with a randomnly generated suffix $SUFFIX = Get-Random
- $ACCOUNT_NAME = "msdocs-dotnet-$SUFFIX"
+ $ACCOUNT_NAME = "msdocs-$SUFFIX"
``` 1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
This quickstart will create a single Azure Cosmos DB account using the SQL API.
#### [Portal](#tab/azure-portal) > [!TIP]
-> For this quickstart, we recommend using the resource group name ``msdocs-cosmos-dotnet-quickstart-rg``.
+> For this quickstart, we recommend using the resource group name ``msdocs-cosmos-quickstart-rg``.
1. Sign in to the [Azure portal](https://portal.azure.com).
Create an item in the container by calling [``Container.UpsertItemAsync``](/dotn
:::code language="csharp" source="~/azure-cosmos-dotnet-v3/001-quickstart/Program.cs" id="new_item" highlight="3-4,12":::
+For more information on creating, upserting, or replacing items, see [Create an item in Azure Cosmos DB SQL API using .NET](how-to-dotnet-create-item.md).
+ ### Get an item In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) passing in both values to return a deserialized instance of your C# type. :::code language="csharp" source="~/azure-cosmos-dotnet-v3/001-quickstart/Program.cs" id="read_item" highlight="3-4":::
+For more information about reading items and parsing the response, see [Read an item in Azure Cosmos DB SQL API using .NET](how-to-dotnet-read-item.md).
+ ### Query items After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: ``SELECT * FROM todo t WHERE t.partitionKey = 'gear-surf-surfboards'``. This example uses the **QueryDefinition** type and a parameterized query expression for the partition key filter. Once the query is defined, call [``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) to get a result iterator that will manage the pages of results. Then, use a combination of ``while`` and ``foreach`` loops to retrieve pages of results and then iterate over the individual items.
Remove-AzResourceGroup @parameters
1. Navigate to the resource group you previously created in the Azure portal. > [!TIP]
- > In this quickstart, we recommended the name ``msdocs-cosmos-dotnet-quickstart-rg``.
+ > In this quickstart, we recommended the name ``msdocs-cosmos-quickstart-rg``.
1. Select **Delete resource group**. :::image type="content" source="media/delete-account-portal/delete-resource-group-option.png" lightbox="media/delete-account-portal/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
cosmos-db Sql Api Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-python-samples.md
Sample solutions that do CRUD operations and other common operations on Azure Co
* [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers) * [Azure Cosmos DB Free Tier](../free-tier.md) * Without an Azure active subscription:
- * [Try Azure Cosmos DB for free](/azure/cosmos-db/try-free), a tests environment that lasts for 30 days.
+ * [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days.
* [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) - [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`. - [Visual Studio Code](https://code.visualstudio.com/).
The [index_management.py](https://github.com/Azure/azure-sdk-for-python/blob/mas
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
After they accept, they can [view the Microsoft Customer Agreement billing accou
:::image type="content" source="./media/manage-tenants/billing-microsoft-customer-agreement-in-list.png" alt-text="Screenshot showing the Microsoft Customer Agreement in the list of billing accounts." lightbox="./media/manage-tenants/billing-microsoft-customer-agreement-in-list.png" :::
-Authorization to invite guest users is controlled by your Azure AD settings. The value of the settings is shown under **Settings** on the **Organizational relationships** page. Ensure that the setting is selected, otherwise the invitation fails.For more information, see [Restrict guest user access permissions](../../active-directory/enterprise-users/users-restrict-guest-permissions.md).
+Authorization to invite guest users is controlled by your Azure AD settings. The value of the settings is shown under **Settings** on the **Organizational relationships** page. Ensure that the setting is selected, otherwise the invitation fails. For more information, see [Restrict guest user access permissions](../../active-directory/enterprise-users/users-restrict-guest-permissions.md).
:::image type="content" source="./media/manage-tenants/external-collaboration-settings.png" alt-text="Screenshot showing External collaboration settings." lightbox="./media/manage-tenants/external-collaboration-settings.png" :::
Read the following articles to learn how to administer flexible billing ownershi
- [Restrict guest access permissions (preview) in Azure Active Directory](../../active-directory/enterprise-users/users-restrict-guest-permissions.md) - [Add guest users to your directory in the Azure portal](../../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md#accept-the-invitation) - [What are the default user permissions in Azure Active Directory?](../../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md#accept-the-invitation)-- [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
+- [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
linkedServices | List of linked services passed to endpoint. | Array of linked s
connectVia | The [integration runtime](./concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | The integration runtime reference. | No > [!NOTE]
-> REST endpoints that the web activity invokes must return a response of type JSON. The activity will timeout at 1 minute with an error if it does not receive a response from the endpoint. For endpoints that support [Asynchronous Request-Reply pattern](https://docs.microsoft.com/azure/architecture/patterns/async-request-reply), the web activity will continue to wait without timeing out (upto 7 day) or till the endpoints signals completion of the job.
+> REST endpoints that the web activity invokes must return a response of type JSON. The activity will time out after 1 minute with an error if it does not receive a response from the endpoint. For endpoints that support the [Asynchronous Request-Reply pattern](/azure/architecture/patterns/async-request-reply), the web activity will continue to wait without timing out (up to 7 days) or until the endpoint signals completion of the job.
The following table shows the requirements for JSON content:
databox-online Azure Stack Edge Gpu Manage Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-cluster.md
Previously updated : 02/14/2022 Last updated : 06/15/2022
Perform these steps on the node of the device that you were trying to prepare. Y
1. In the local UI, go to the **Get started** page. Under **Prepare a node for clustering**, select **Undo node preparation**.
- ![Screenshot of local web UI "Get started" page when Preparing a node for clustering with Undo node preparation is selected.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/undo-node-preparation-1.png)
+ ![Screenshot of local web U I Get started page when Preparing a node for clustering with Undo node preparation is selected.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/undo-node-preparation-1.png)
1. When you select **Undo node preparation**, you'll go back to the **Get authentication token** tile and **Prepare node** option will be available. If you decide to prepare this node again, you'll need to select **Prepare node** again.
- ![Screenshot of local web UI "Get started" page when Preparing a node for clustering with Prepare node is selected in Get authentication token tile.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/undo-node-preparation-2.png)
+ ![Screenshot of local web U I Get started page when Preparing a node for clustering with Prepare node is selected in Get authentication token tile.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/undo-node-preparation-2.png)
## View existing nodes 1. In the local UI, go to the **Cluster** page. 1. Under **Existing nodes**, you can view the existing nodes for your cluster.
- ![Screenshot of local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node -1.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/view-cluster-nodes-1.png)
+ ![Screenshot of local web U I Cluster page with the Modify option selected for Cluster witness on first node -1.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/view-cluster-nodes-1.png)
## Replace a node
-You may need to replace a node if one of the nodes on your device is down or not healthy. Perform these steps on the node that you are trying to replace.
+You may need to replace a node if one of the nodes on your device is down or not healthy. Perform these steps on the node that you're trying to replace.
1. In the local UI, go to the **Cluster** page. Under **Existing nodes**, view the status of the nodes. You'll want to replace the node that shows the status as **Down**.
- ![Screenshot of local web UI "Cluster" page with "Existing nodes" option displaying a node status as Down.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/replace-node-1.png)
+ ![Screenshot of local web U I Cluster page with the Existing nodes option displaying a node status as Down.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/replace-node-1.png)
1. Select **Replace node** and enter the following inputs.
- 1. Choose the node to replace. This should be automatically selected as the node, which is down.
- 1. Prepare another node. Configure the networking on this node in the same way as you set up on the first node. Get the node serial number and authentication token from the new incoming node.
- 1. Provide the **Node serial number** for the incoming replacement node.
- 1. Supply the **Node token** for the incoming replacement node.
- 1. Select **Validate & add**. The credentials of the incoming node are now validated.
+ a. Choose the node to replace. The node that is down should be selected automatically.
- ![Screenshot of local web UI "Cluster" page with "Apply" selected on "Validate & add" blade.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/replace-node-2.png)
+ b. Prepare another node. Configure the networking on this node in the same way as you set up on the first node. Get the node serial number and authentication token from the new incoming node.
+
+ c. Provide the **Node serial number** for the incoming replacement node.
+
+ d. Supply the **Node token** for the incoming replacement node.
+
+ e. Select **Validate & add**. The credentials of the incoming node are now validated.
+
+ ![Screenshot of local web U I Cluster page with Apply selected on the Validate & add blade.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/replace-node-2.png)
- 1. Once the validation has successfully completed, select **Add node** to complete the node replacement. It may take several minutes for the replacement node to get added to form the cluster.
+ f. Once the validation completes successfully, select **Add node** to complete the node replacement. It may take several minutes for the replacement node to be added to the cluster.
## Configure cluster witness
Perform these steps on the first node of the device.
1. In the local UI, go to the **Cluster** page. Under **Cluster witness type**, select **Modify**.
- ![Screenshot of local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node - 2.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-cluster-witness-1m.png)
+ ![Screenshot of local web U I Cluster page with Modify option selected for Cluster witness on first node - 2.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-cluster-witness-1m.png)
1. In the **Modify cluster witness** blade, enter the following inputs. 1. Choose the **Witness type** as **Cloud.**
Perform these steps on the first node of the device.
1. If you chose Access key as the authentication mechanism, enter the Access key of the Storage account, Azure Storage container where the witness lives, and the service endpoint. 1. Select **Apply**.
- ![Screenshot of local web UI "Cluster" page with cloud witness type selected in "Modify cluster witness" blade on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-cluster-witness-cloud-1.png)
+ ![Screenshot of local web U I Cluster page with cloud witness type selected in the Modify cluster witness blade on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-cluster-witness-cloud-1.png)
### Configure local witness
Perform these steps on the first node of the device.
1. In the local UI, go to the **Cluster** page. Under **Cluster witness type**, select **Modify**.
- ![Screenshot of local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node - 3.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-cluster-witness-1m.png)
+ ![Screenshot of local web U I Cluster page with Modify option selected for Cluster witness on first node - 3.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-cluster-witness-1m.png)
1. In the **Modify cluster witness** blade, enter the following inputs. 1. Choose the **Witness type** as **Local.** 1. Enter the file share path as *//server/fileshare* format. 1. Select **Apply**.
- ![Screenshot of local web UI "Cluster" page with local witness type selected in "Modify cluster witness" blade on first node](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-cluster-witness-local-1.png)
+ ![Screenshot of local web U I Cluster page with local witness type selected in the Modify cluster witness blade on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-cluster-witness-local-1.png)
## Configure virtual IPs
For Azure Consistent Services, follow these steps to configure virtual IP.
1. In the local UI on the **Cluster** page, under the **Virtual IP settings** section, select **Azure Consistent Services**.
- ![Screenshot of local web UI "Cluster" page with "Azure Consistent Services" selected for "Virtual IP Settings" on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-azure-consistent-services-1m.png)
+ ![Screenshot of local web U I Cluster page with Azure Consistent Services selected for Virtual I P Settings on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-azure-consistent-services-1m.png)
1. In the **Virtual IP settings** blade, input the following.
For Azure Consistent Services, follow these steps to configure virtual IP.
1. If you chose IP settings as static, enter a virtual IP. This should be a free IP from within the Azure Consistent Services network that you specified. If you selected DHCP, a virtual IP is automatically picked from the Azure Consistent Services network that you selected. 1. Select **Apply**.
- ![Screenshot of local web UI "Cluster" page with "Virtual IP Settings" blade configured for Azure consistent services on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-azure-consistent-services-2.png)
+ ![Screenshot of local web U I Cluster page with Virtual I P Settings blade configured for Azure consistent services on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-azure-consistent-services-2.png)
### For Network File System
For clients connecting via NFS protocol to the two-node device, follow these steps to configure virtual IP.
1. In the local UI on the **Cluster** page, under the **Virtual IP settings** section, select **Network File System**.
- ![Screenshot of local web UI "Cluster" page with "Network File System" selected for "Virtual IP Settings" on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-1m.png)
+ ![Screenshot of local web U I Cluster page with Network File System selected for Virtual I P Settings on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-1m.png)
1. In the **Virtual IP settings** blade, input the following.
For clients connecting via NFS protocol to the two-node device, follow these steps to configure virtual IP.
1. If you chose IP settings as static, enter a virtual IP. This should be a free IP from within the NFS network that you specified. If you selected DHCP, a virtual IP is automatically picked from the NFS network that you selected. 1. Select **Apply**.
- ![Screenshot of local web UI "Cluster" page with "Virtual IP Settings" blade configured for NFS on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-2.png)
+ ![Screenshot of local web U I Cluster page with Virtual I P Settings blade configured for N F S on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-2.png)
> [!NOTE] > Virtual IP settings are required. If you do not configure this IP, you will be blocked when configuring the **Device settings** in the next step.
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
The Insights column of the page gives you more details for each recommendation.
| Icon | Name | Description | |--|--|--|
-| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | Preview recommendation | This recommendation won't affect your secure score until it's GA. |
+| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | **Preview recommendation** | This recommendation won't affect your secure score until it's GA. |
| :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: | **Fix** | From within the recommendation details page, you can use 'Fix' to resolve this issue. | | :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: | **Enforce** | From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource. | | :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: | **Deny** | From within the recommendation details page, you can prevent new resources from being created with this issue. |
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
Define conditions under which alerts won't be sent. For example, define and upda
The APIs that you define here appear in the on-premises management console's Alert Exclusions window as a read-only exclusion rule.
-This API is supported for maintenance purposes only and is not meant to be used instead of [alert exclusion rules](/azure/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console#create-alert-exclusion-rules). Use this API for one-time maintenance operations only.
+This API is supported for maintenance purposes only and is not meant to be used instead of [alert exclusion rules](./how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules). Use this API for one-time maintenance operations only.
#### Method - POST
The below APIs can be used with the ServiceNow integration via the ServiceNow's
- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md) -- [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
+- [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
event-grid Delivery And Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-and-retry.md
The following table describes the types of endpoints and errors for which retry
| Endpoint Type | Error codes | | --| --|
-| Azure Resources | 400 Bad Request, 413 Request Entity Too Large, 403 Forbidden |
+| Azure Resources | 400 Bad Request, 413 Request Entity Too Large, 403 Forbidden, 404 Not Found |
| Webhook | 400 Bad Request, 413 Request Entity Too Large, 403 Forbidden, 404 Not Found, 401 Unauthorized | > [!NOTE]
-> If Dead-Letter isn't configured for an endpoint, events will be dropped when the above errors happen. Consider configuring Dead-Letter if you don't want these kinds of events to be dropped.
+> If dead-letter isn't configured for an endpoint, events will be dropped when the above errors happen. Consider configuring dead-letter if you don't want these kinds of events to be dropped. Dead-lettered events will be dropped when the dead-letter destination is not found.
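For reference, dead-lettering is configured when you create or update an event subscription. A minimal sketch with the Azure CLI (all resource IDs below are placeholders for your own resources):

```azurecli
# Sketch only: substitute your own subscription, resource group, topic,
# storage account, and blob container names.
az eventgrid event-subscription create \
  --name myEventSubscription \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventGrid/topics/<topic>" \
  --endpoint "https://contoso.example.com/api/updates" \
  --deadletter-endpoint "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>"
```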
If the error returned by the subscribed endpoint isn't among the above list, Event Grid performs the retry using policies described below:
firewall-manager Configure Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/configure-ddos.md
Previously updated : 09/30/2021 Last updated : 06/15/2022
-# Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)
+# Configure an Azure DDoS Protection Plan using Azure Firewall Manager
Azure Firewall Manager is a platform to manage and protect your network resources at scale. You can associate your virtual networks with a DDoS protection plan within Azure Firewall Manager.
-> [!IMPORTANT]
-> Using Azure Firewall Manager to configure an Azure DDoS Protection Plan is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!TIP] > DDoS Protection Standard currently does not support virtual WANs. However, you can work around this limitation by force tunneling Internet traffic to an Azure Firewall in a virtual network that has a DDoS Protection Plan associated with it.
firewall-manager Manage Web Application Firewall Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/manage-web-application-firewall-policies.md
Title: Manage Azure Web Application Firewall policies (preview)
+ Title: Manage Azure Web Application Firewall policies
description: Learn how to use Azure Firewall Manager to manage Azure Web Application Firewall policies Previously updated : 06/02/2022 Last updated : 06/15/2022
-# Manage Web Application Firewall policies (preview)
+# Manage Web Application Firewall policies
You can centrally create and associate Web Application Firewall (WAF) policies for your application delivery platforms, including Azure Front Door and Azure Application Gateway.
-> [!IMPORTANT]
-> Managing Web Application Firewall policies using Azure Firewall Manager is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Prerequisites - A deployed [Azure Front Door](../frontdoor/quickstart-create-front-door.md) or [Azure Application Gateway](../application-gateway/quick-create-portal.md)
You can centrally create and associate Web Application Firewall (WAF) policies f
## Next steps -- [Configure WAF policies using Azure Firewall Manager (preview)](../web-application-firewall/shared/manage-policies.md)
+- [Configure WAF policies using Azure Firewall Manager](../web-application-firewall/shared/manage-policies.md)
firewall-manager Secure Hybrid Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-hybrid-network.md
Previously updated : 03/03/2021 Last updated : 06/15/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
2. For the policy name, type **Pol-Net01**. 3. For Region, select **East US**. 1. Select **Next : DNS Settings**.
-1. Select **Next : TLS inspection (preview)**
+1. Select **Next : TLS inspection**
1. Select **Next:Rules**. 1. Select **Add a rule collection**. 1. For **Name**, type **RCNet01**.
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md
In this example, we match all requests that have been detected as coming from a
Use the **HTTP version** match condition to identify requests that have been made by using a specific version of the HTTP protocol. > [!NOTE]
-> The **request cookies** match condition is only available on Azure Front Door Standard/Premium.
+> The **HTTP version** match condition is only available on Azure Front Door Standard/Premium.
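As an illustrative sketch only (assuming the Microsoft.Cdn delivery-rule ARM schema used by Front Door Standard/Premium; verify property names against the ARM template reference), an HTTP version match condition could look like:

```json
{
  "name": "HttpVersion",
  "parameters": {
    "typeName": "DeliveryRuleHttpVersionConditionParameters",
    "operator": "Equal",
    "negateCondition": false,
    "matchValues": [ "2.0" ]
  }
}
```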
### Properties
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
When looking to query on management groups outside the Azure portal, the target
management groups looks like **"/providers/Microsoft.Management/managementGroups/{_management-group-id_}"**. > [!NOTE]
-> Using the Azure Resource Manager REST API, you can enable diagnostic settings on a management group to send related Azure Activity log entries to a Log Analytics workspace, Azure Storage, or Azure Event Hub. For more information, see [Management Group Diagnostic Settings - Create Or Update](https://docs.microsoft.com/rest/api/monitor/management-group-diagnostic-settings/create-or-update).
+> Using the Azure Resource Manager REST API, you can enable diagnostic settings on a management group to send related Azure Activity log entries to a Log Analytics workspace, Azure Storage, or Azure Event Hub. For more information, see [Management Group Diagnostic Settings - Create Or Update](/rest/api/monitor/management-group-diagnostic-settings/create-or-update).
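For illustration, a sketch of such a request (the setting name, workspace ID, and log categories below are placeholders; see the linked reference for the authoritative schema):

```http
PUT https://management.azure.com/providers/Microsoft.Management/managementGroups/{management-group-id}/providers/microsoft.insights/diagnosticSettings/{setting-name}?api-version=2020-01-01-preview
Content-Type: application/json

{
  "properties": {
    "workspaceId": "/subscriptions/{sub-id}/resourceGroups/{rg}/providers/Microsoft.OperationalInsights/workspaces/{workspace}",
    "logs": [
      { "category": "Administrative", "enabled": true },
      { "category": "Policy", "enabled": true }
    ]
  }
}
```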
## Next steps
To learn more about management groups, see:
- [Create management groups to organize Azure resources](./create-management-group-portal.md) - [How to change, delete, or manage your management groups](./manage.md)-- See options for [How to protect your resource hierarchy](./how-to/protect-resource-hierarchy.md)
+- See options for [How to protect your resource hierarchy](./how-to/protect-resource-hierarchy.md)
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
The following Resource Provider modes are fully supported:
- `Microsoft.Kubernetes.Data` for managing your Kubernetes clusters on or off Azure. Definitions using this Resource Provider mode use effects _audit_, _deny_, and _disabled_. This mode supports custom definitions as a _public preview_. See
- [Create policy definition from constraint template](https://docs.microsoft.com/azure/governance/policy/how-to/extension-for-vscode#create-policy-definition-from-constraint-template) to create a
+ [Create policy definition from constraint template](../how-to/extension-for-vscode.md#create-policy-definition-from-constraint-template) to create a
custom definition from an existing [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) GateKeeper v3 [constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates). Use
For more information and examples, see
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
guides Azure Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/operations/azure-operations-guide.md
One of the benefits of using Azure is that you can deploy your applications into
### Azure portal
-The Azure portal is a web-based application that can be used to create, manage, and remove Azure resources and services. The Azure portal is located at [portal.azure.com](https://portal.azure.com). It includes a customizable dashboard and tooling for managing Azure resources. It also provides billing and subscription information. For more information, see [Microsoft Azure portal overview](/azure/azure-portal/azure-portal-overview) and [Manage Azure resources through portal](../../azure-resource-manager/management/manage-resources-portal.md).
+The Azure portal is a web-based application that can be used to create, manage, and remove Azure resources and services. The Azure portal is located at [portal.azure.com](https://portal.azure.com). It includes a customizable dashboard and tooling for managing Azure resources. It also provides billing and subscription information. For more information, see [Microsoft Azure portal overview](../../azure-portal/azure-portal-overview.md) and [Manage Azure resources through portal](../../azure-resource-manager/management/manage-resources-portal.md).
### Resources
You can help secure Azure virtual networks by using a network security group. NS
## Next steps - [Create a Windows VM](../../virtual-machines/windows/quick-create-portal.md)-- [Create a Linux VM](../../virtual-machines/linux/quick-create-portal.md)
+- [Create a Linux VM](../../virtual-machines/linux/quick-create-portal.md)
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
This release applies for HDInsight 4.0. HDInsight release is made available to a
**The Hive Warehouse Connector (HWC) on Spark v3.1.2** The Hive Warehouse Connector (HWC) allows you to take advantage of the unique features of Hive and Spark to build powerful big-data applications. HWC is currently supported for Spark v2.4 only. This feature adds business value by allowing ACID transactions on Hive Tables using Spark. This feature is useful for customers who use both Hive and Spark in their data estate.
-For more information, see [Apache Spark & Hive - Hive Warehouse Connector - Azure HDInsight | Microsoft Docs](/azure/hdinsight/interactive-query/apache-hive-warehouse-connector)
+For more information, see [Apache Spark & Hive - Hive Warehouse Connector - Azure HDInsight | Microsoft Docs](./interactive-query/apache-hive-warehouse-connector.md)
## Ambari
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Incompatible change in Hive bucket computation|[HIVE-21376](https://issues.apache.org/jira/browse/HIVE-21376)| | Provide a fallback authorizer when no other authorizer is in use|[HIVE-20420](https://issues.apache.org/jira/browse/HIVE-20420)| | Some alterPartitions invocations throw 'NumberFormatException: null'|[HIVE-18767](https://issues.apache.org/jira/browse/HIVE-18767)|
-| HiveServer2: Preauthenticated subject for http transport isn't retained for entire duration of http communication in some cases|[HIVE-20555](https://issues.apache.org/jira/browse/HIVE-20555)|
+| HiveServer2: Preauthenticated subject for http transport isn't retained for entire duration of http communication in some cases|[HIVE-20555](https://issues.apache.org/jira/browse/HIVE-20555)|
healthcare-apis Api Versioning Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md
Title: API Versioning for DICOM service - Azure Health Data Services
+ Title: API versioning for DICOM service - Azure Health Data Services
description: This guide gives an overview of the API version policies for the DICOM service. Previously updated : 02/24/2022- Last updated : 06/11/2022+ # API versioning for DICOM service
All versions of the DICOM APIs will always conform to the DICOMweb™ Standard s
## Specifying version of REST API in requests
-The version of the REST API should be explicitly specified in the request URL as in the following example:
+The version of the REST API must be explicitly specified in the request URL as in the following example:
`<service_url>/v<version>/studies`
-Currently routes without a version are still supported. For example, `<service_url>/studies` has the same behavior as specifying the version as v1.0-prerelease. However, we strongly recommend that you specify the version in all requests via the URL as routes without a version won't be supported after the General Availability release of the DICOM service.
+> [!NOTE]
+> Routes without a version are no longer supported.
## Supported versions
Currently the supported versions are:
* v1 The OpenApi Doc for the supported versions can be found at the following url:
-
-`<service_url>/{version}/api.yaml`
+
+`<service_url>/v<version>/api.yaml`
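For example, a sketch of a versioned query for studies (the service URL is a placeholder following the DICOM service URL pattern):

```http
GET https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1/studies
Accept: application/dicom+json
Authorization: Bearer <access-token>
```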
## Prerelease versions
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/configure-cross-origin-resource-sharing.md
+
+ Title: Configure cross-origin resource sharing in DICOM service in Azure Health Data Services
+description: This article describes how to configure cross-origin resource sharing in DICOM service in Azure Health Data Services
++ Last updated : 06/14/2022+++++
+# Configure cross-origin resource sharing in DICOM service in Azure Health Data Services
+
+## What is cross-origin resource sharing in DICOM service in Azure Health Data Services?
+
+DICOM service in Azure Health Data Services (hereafter called DICOM service) supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
+
+CORS is often used in a single-page app that must call a RESTful API to a different domain.
+
+## Cross-origin resource sharing configuration settings
+
+To configure a CORS setting in the DICOM service, specify the following settings:
+
+- **Origins (Access-Control-Allow-Origin)**. A list of domains allowed to make cross-origin requests to the DICOM service. Each domain (origin) must be entered on a separate line. You can enter an asterisk (*) to allow calls from any domain, but we don't recommend it because it's a security risk.
+
+- **Headers (Access-Control-Allow-Headers)**. A list of headers that the origin request will contain. To allow all headers, enter an asterisk (*).
+
+- **Methods (Access-Control-Allow-Methods)**. The allowed methods (PUT, GET, POST, and so on) in an API call. Choose **Select all** for all methods.
+
+- **Max age (Access-Control-Max-Age)**. The value in seconds to cache preflight request results for Access-Control-Allow-Headers and Access-Control-Allow-Methods.
+
+- **Allow credentials (Access-Control-Allow-Credentials)**. CORS requests normally don't include cookies to prevent [cross-site request forgery (CSRF)](https://en.wikipedia.org/wiki/Cross-site_request_forgery) attacks. If you select this setting, the request can be made to include credentials, such as cookies. You can't configure this setting if you already set Origins with an asterisk (*).
++
+> [!NOTE]
+> You can't specify different settings for different domain origins. All settings (**Headers**, **Methods**, **Max age**, and **Allow credentials**) apply to all origins specified in the Origins setting.
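To make these settings concrete, here's a sketch of the standard CORS preflight exchange they govern (the origin, path, and header values are illustrative):

```http
OPTIONS /v1/studies HTTP/1.1
Origin: https://app.contoso.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: Authorization

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.contoso.com
Access-Control-Allow-Methods: GET
Access-Control-Allow-Headers: Authorization
Access-Control-Max-Age: 600
```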
+
+## Next steps
+
+For more information about DICOM service, see
+
+> [!div class="nextstepaction"]
+> [Overview of the DICOM service](./dicom-services-overview.md)
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Previously updated : 02/24/2022- Last updated : 06/10/2022+ # DICOM Conformance Statement
Additionally, the following non-standard API(s) are supported:
- [Delete](#delete)
-Our service also makes use of REST API versioning. For information on how to specify the version when making requests visit the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md).
+Our service makes use of REST API versioning.
+
+> [!NOTE]
+> The version of the REST API must be explicitly specified in the request URL as in the following example:
+>
+> `https://<service_url>/v<version>/studies`
+
+For information on how to specify the version when making requests, visit the [API Versioning for DICOM service documentation](api-versioning-dicom-service.md).
## Store (STOW-RS)
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standa
|Enhancements | Related information | | : | :- |
-|DICOM service supports cross-origin resource sharing (CORS) |DICOM service now supports [CORS](./../healthcare-apis/fhir/configure-cross-origin-resource-sharing.md). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request. |
+|DICOM service supports cross-origin resource sharing (CORS) |DICOM service now supports [CORS](./../healthcare-apis/dicom/configure-cross-origin-resource-sharing.md). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request. |
|DICOMcast supports Private Link |DICOMcast has been updated to support Azure Health Data Services workspaces that have been configured to use [Private Link](./../healthcare-apis/healthcare-apis-configure-private-link.md). | |UPS-RS supports Change and Retrieve work item |Modality worklist (UPS-RS) endpoints have been added to support Change and Retrieve operations for work items. | |API version is now required as part of the URI |All REST API requests to the DICOM service must now include the API version in the URI. For more details, see [API versioning for DICOM service](./../healthcare-apis/dicom/api-versioning-dicom-service.md). |
industrial-iot Reference Command Line Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/reference-command-line-arguments.md
Last updated 3/22/2021
# Command-line Arguments
-In the following, there are several command-line arguments described that can be used to set global settings for OPC Publisher.
+The following Command-line Arguments can be used to set global settings for OPC Publisher.
## OPC Publisher Command-line Arguments for Version 2.5 and below
There are a couple of environment variables, which can be used to control the ap
``` > [!NOTE]
-> Command-line arguments overrule environment variable settings.
+> Command-line Arguments overrule environment variable settings.
``` --pf, --publishfile=VALUE
There are a couple of environment variables, which can be used to control the ap
--at, --appcertstoretype=VALUE the own application cert store type (allowed: Directory, X509Store). ```+
+## OPC Publisher Command-line Arguments for Version 2.8.2 and above
+
+The following OPC Publisher configuration can be applied by Command Line Interface (CLI) options or as environment variable settings.
+The `Alternative` field, where present, refers to the CLI argument applicable in **standalone mode only**. When both an environment variable and a CLI argument are provided, the latter overrules the environment variable.
+```
+ PublishedNodesFile=VALUE
+ The file used to store the configuration of the nodes to be published
+ along with the information to connect to the OPC UA server sources
+ When this file is specified, or the default file is accessible by
+ the module, OPC Publisher will start in standalone mode
+ Alternative: --pf, --publishfile
+ Mode: Standalone only
+ Type: string - file name, optionally prefixed with the path
+ Default: publishednodes.json
+
+ site=VALUE
+ The site OPC Publisher is assigned to
+ Alternative: --s, --site
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: <not set>
+
+ LogFileName=VALUE
+ The filename of the logfile to use
+ Alternative: --lf, --logfile
+ Mode: Standalone only
+ Type: string - file name, optionally prefixed with the path
+ Default: <not set>
+
+ LogFileFlushTimeSpan=VALUE
+ The time span in seconds after which the logfile should be flushed to storage
+ Alternative: --lt, --logflushtimespan
+ Mode: Standalone only
+ Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
+ Alternative argument type: integer in seconds
+ Default: {00:00:30}
+
+ loglevel=VALUE
+ The level for logs to be persisted in the logfile
+ Alternative: --ll --loglevel
+ Mode: Standalone only
+ Type: string enum - Fatal, Error, Warning, Information, Debug, Verbose
+ Default: info
+
+ EdgeHubConnectionString=VALUE
+ An IoT Edge Device or IoT Edge module connection string to use,
+ when deployed as module in IoT Edge, the environment variable
+ is already set as part of the container deployment
+ Alternative: --dc, --deviceconnectionstring
+ --ec, --edgehubconnectionstring
+ Mode: Standalone and Orchestrated
+ Type: connection string
+ Default: <not set> <set by iotedge runtime>
+
+ Transport=VALUE
+ Protocol to use for upstream communication to edgeHub or IoTHub
+ Alternative: --ih, --iothubprotocol
+ Mode: Standalone and Orchestrated
+ Type: string enum: Any, Amqp, Mqtt, AmqpOverTcp, AmqpOverWebsocket,
+ MqttOverTcp, MqttOverWebsocket, Tcp, Websocket.
+ Default: MqttOverTcp
+
+ BypassCertVerification=VALUE
+ Enables/disables bypass of certificate verification for upstream communication to edgeHub
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: boolean
+ Default: false
+
+ EnableMetrics=VALUE
+ Enables/disables upstream metrics propagation
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: boolean
+ Default: true
+
+ DefaultPublishingInterval=VALUE
+ Default value for the OPC UA publishing interval of OPC UA subscriptions
+ created to an OPC UA server. This value is used when no explicit setting
+ is configured.
+ Alternative: --op, --opcpublishinginterval
+ Mode: Standalone only
+ Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
+ Alternative argument type: integer in milliseconds
+ Default: {00:00:01} (1000)
+
+ DefaultSamplingInterval=VALUE
+ Default value for the OPC UA sampling interval of nodes to publish.
+ This value is used when no explicit setting is configured.
+ Alternative: --oi, --opcsamplinginterval
+ Mode: Standalone only
+ Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
+ Alternative argument type: integer in milliseconds
+ Default: {00:00:01} (1000)
+
+ DefaultQueueSize=VALUE
+ Default setting value for the monitored item's queue size to be used when
+ not explicitly specified in pn.json file
+ Alternative: --mq, --monitoreditemqueuecapacity
+ Mode: Standalone only
+ Type: integer
+ Default: 1
+
+ DefaultHeartbeatInterval=VALUE
+ Default value for the heartbeat interval setting of published nodes
+ having no explicit setting for heartbeat interval.
+ Alternative: --hb, --heartbeatinterval
+ Mode: Standalone
+ Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
+ Alternative argument type: integer in seconds
+ Default: {00:00:00} meaning heartbeat is disabled
+
+ MessageEncoding=VALUE
+ The messaging encoding for outgoing telemetry.
+ Alternative: --me, --messageencoding
+ Mode: Standalone only
+ Type: string enum - Json, Uadp
+ Default: Json
+
+ MessagingMode=VALUE
+ The messaging mode for outgoing telemetry.
+ Alternative: --mm, --messagingmode
+ Mode: Standalone only
+ Type: string enum - PubSub, Samples
+ Default: Samples
+
+ FetchOpcNodeDisplayName=VALUE
+ Fetches the DisplayName for the nodes to be published from
+ the OPC UA Server when not explicitly set in the configuration.
+ Note: This has high impact on OPC Publisher startup performance.
+ Alternative: --fd, --fetchdisplayname
+ Mode: Standalone only
+ Type: boolean
+ Default: false
+
+ FullFeaturedMessage=VALUE
+ The full featured mode for messages (all fields filled in the telemetry).
+ Default is 'false' for legacy compatibility.
+ Alternative: --fm, --fullfeaturedmessage
+ Mode: Standalone only
+ Type:boolean
+ Default: false
+
+ BatchSize=VALUE
+ The number of incoming OPC UA data change messages to be cached for batching.
+ When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled.
+ Alternative: --bs, --batchsize
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 50
+
+ BatchTriggerInterval=VALUE
+ The batching trigger interval.
+ When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled.
+ Alternative: --si, --iothubsendinterval
+ Mode: Standalone and Orchestrated
+ Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
+ Alternative argument type: integer in seconds
+ Default: {00:00:10}
+
+ IoTHubMaxMessageSize=VALUE
+ The maximum size of the (IoT D2C) telemetry message.
+ Alternative: --ms, --iothubmessagesize
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 0
+
+ DiagnosticsInterval=VALUE
+ Shows publisher diagnostic info at the specified interval in seconds
+ (requires log level info). -1 disables remote diagnostic log and
+ diagnostic output
+ Alternative: --di, --diagnosticsinterval
+ Mode: Standalone only
+ Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
+ Alternative argument type: integer in seconds
+ Default: {00:00:60}
+
+ LegacyCompatibility=VALUE
+ Forces the Publisher to operate in 2.5 legacy mode, using
+ `"application/opcua+uajson"` for `ContentType` on the IoT Hub
+ Telemetry message.
+ Alternative: --lc, --legacycompatibility
+ Mode: Standalone only
+ Type: boolean
+ Default: false
+
+ PublishedNodesSchemaFile=VALUE
+ The validation schema filename for published nodes file.
+ Alternative: --pfs, --publishfileschema
+ Mode: Standalone only
+ Type: string
+ Default: <not set>
+
+ MaxNodesPerDataSet=VALUE
+ Maximum number of nodes within a DataSet/Subscription.
+ When more nodes than this value are configured for a
+ DataSetWriter, they will be added in a separate DataSet/Subscription.
+ Alternative: N/A
+ Mode: Standalone only
+ Type: integer
+ Default: 1000
+
+ ApplicationName=VALUE
+ OPC UA Client Application Config - Application name as per
+ OPC UA definition. This is used for authentication during communication
+ init handshake and as part of own certificate validation.
+ Alternative: --an, --appname
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: "Microsoft.Azure.IIoT"
+
+ ApplicationUri=VALUE
+ OPC UA Client Application Config - Application URI as per
+ OPC UA definition.
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: $"urn:localhost:{ApplicationName}:microsoft:"
+
+ ProductUri=VALUE
+ OPC UA Client Application Config - Product URI as per
+ OPC UA definition.
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: "https://www.github.com/Azure/Industrial-IoT"
+
+ DefaultSessionTimeout=VALUE
+ OPC UA Client Application Config - Session timeout in seconds
+ as per OPC UA definition.
+ Alternative: --ct --createsessiontimeout
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 0, meaning <not set>
+
+ MinSubscriptionLifetime=VALUE
+ OPC UA Client Application Config - Minimum subscription lifetime in seconds
+ as per OPC UA definition.
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 0, <not set>
+
+ KeepAliveInterval=VALUE
+ OPC UA Client Application Config - Keep alive interval in seconds
+ as per OPC UA definition.
+ Alternative: --ki, --keepaliveinterval
+ Mode: Standalone and Orchestrated
+ Type: integer milliseconds
+ Default: 10,000 (10s)
+
+ MaxKeepAliveCount=VALUE
+ OPC UA Client Application Config - Maximum count of keep alive events
+ as per OPC UA definition.
+ Alternative: --kt, --keepalivethreshold
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 50
+
+ PkiRootPath=VALUE
+ OPC UA Client Security Config - PKI certificate store root path
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: "pki"
+
+ ApplicationCertificateStorePath=VALUE
+ OPC UA Client Security Config - application's
+ own certificate store path
+ Alternative: --ap, --appcertstorepath
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: $"{PkiRootPath}/own"
+
+ ApplicationCertificateStoreType=VALUE
+ OPC UA Client Security Config - application's
+ own certificate store type
+ Alternative: --at, --appcertstoretype
+ Mode: Standalone and Orchestrated
+ Type: enum string : Directory, X509Store
+ Default: Directory
+
+ ApplicationCertificateSubjectName=VALUE
+ OPC UA Client Security Config - the subject name
+ in the application's own certificate
+ Alternative: --sn, --appcertsubjectname
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: "CN=Microsoft.Azure.IIoT, C=DE, S=Bav, O=Microsoft, DC=localhost"
+
+ TrustedIssuerCertificatesPath=VALUE
+ OPC UA Client Security Config - trusted certificate issuer
+ store path
+ Alternative: --ip, --issuercertstorepath
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: $"{PkiRootPath}/issuers"
+
+ TrustedIssuerCertificatesType=VALUE
+ OPC UA Client Security Config - trusted issuer certificates
+ store type
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: enum string : Directory, X509Store
+ Default: Directory
+
+ TrustedPeerCertificatesPath=VALUE
+ OPC UA Client Security Config - trusted peer certificates
+ store path
+ Alternative: --tp, --trustedcertstorepath
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: $"{PkiRootPath}/trusted"
+
+ TrustedPeerCertificatesType=VALUE
+ OPC UA Client Security Config - trusted peer certificates
+ store type
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: enum string : Directory, X509Store
+ Default: Directory
+
+ RejectedCertificateStorePath=VALUE
+ OPC UA Client Security Config - rejected certificates
+ store path
+ Alternative: --rp, --rejectedcertstorepath
+ Mode: Standalone and Orchestrated
+ Type: string
+ Default: $"{PkiRootPath}/rejected"
+
+ RejectedCertificateStoreType=VALUE
+ OPC UA Client Security Config - rejected certificates
+ store type
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: enum string : Directory, X509Store
+ Default: Directory
+
+ AutoAcceptUntrustedCertificates=VALUE
+ OPC UA Client Security Config - auto accept untrusted
+ peer certificates
+ Alternative: --aa, --autoaccept
+ Mode: Standalone and Orchestrated
+ Type: boolean
+ Default: false
+
+ RejectSha1SignedCertificates=VALUE
+ OPC UA Client Security Config - reject deprecated Sha1
+ signed certificates
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: boolean
+ Default: false
+
+ MinimumCertificateKeySize=VALUE
+ OPC UA Client Security Config - minimum accepted
+ certificates key size
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 1024
+
+ AddAppCertToTrustedStore=VALUE
+ OPC UA Client Security Config - automatically copy own
+ certificate's public key to the trusted certificate store
+ Alternative: --tm, --trustmyself
+ Mode: Standalone and Orchestrated
+ Type: boolean
+ Default: true
+
+ SecurityTokenLifetime=VALUE
+ OPC UA Stack Transport Secure Channel - Security token lifetime in milliseconds
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: integer (milliseconds)
+ Default: 3,600,000 (1h)
+
+ ChannelLifetime=VALUE
+ OPC UA Stack Transport Secure Channel - Channel lifetime in milliseconds
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: integer (milliseconds)
+ Default: 300,000 (5 min)
+
+ MaxBufferSize=VALUE
+ OPC UA Stack Transport Secure Channel - Max buffer size
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 65,535 (64KB -1)
+
+ MaxMessageSize=VALUE
+ OPC UA Stack Transport Secure Channel - Max message size
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 4,194,304 (4 MB)
+
+ MaxArrayLength=VALUE
+ OPC UA Stack Transport Secure Channel - Max array length
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 65,535 (64KB - 1)
+
+ MaxByteStringLength=VALUE
+ OPC UA Stack Transport Secure Channel - Max byte string length
+ Alternative: N/A
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 1,048,576 (1MB);
+
+ OperationTimeout=VALUE
+ OPC UA Stack Transport Secure Channel - OPC UA Service call
+ operation timeout
+ Alternative: --ot, --operationtimeout
+ Mode: Standalone and Orchestrated
+ Type: integer (milliseconds)
+ Default: 120,000 (2 min)
+
+ MaxStringLength=VALUE
+ OPC UA Stack Transport Secure Channel - Maximum length of a string
+ that can be sent/received over the OPC UA Secure channel
+ Alternative: --ol, --opcmaxstringlen
+ Mode: Standalone and Orchestrated
+ Type: integer
+ Default: 130,816 (128KB - 256)
+
+ RuntimeStateReporting=VALUE
+ Enables reporting of OPC Publisher restarts.
+ Alternative: --rs, --runtimestatereporting
+ Mode: Standalone
+ Type: boolean
+ Default: false
+
+ EnableRoutingInfo=VALUE
+ Adds the routing info to telemetry messages. The name of the property is
+ `$$RoutingInfo` and the value is the `DataSetWriterGroup` for that particular message.
+ When the `DataSetWriterGroup` is not configured, the `$$RoutingInfo` property will
+ not be added to the message even if this argument is set.
+ Alternative: --ri, --enableroutinginfo
+ Mode: Standalone
+ Type: boolean
+ Default: false
+```
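To see how these settings combine in practice, here's a minimal sketch of running OPC Publisher 2.8.2 standalone in a container, mixing an environment variable with the CLI alternatives listed above (the image tag and host path are assumptions for illustration, not taken from this article):

```bash
# Sketch only: image tag and host path are illustrative assumptions.
docker run -it --rm \
  -v /host/appdata:/appdata \
  -e DiagnosticsInterval=00:01:00 \
  mcr.microsoft.com/iotedge/opc-publisher:2.8.2 \
  --pf=/appdata/publishednodes.json \
  --aa \
  --mm=PubSub \
  --me=Json
```

Because `--pf` points at a published nodes file, OPC Publisher starts in standalone mode, as described for `PublishedNodesFile` above.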
+ ## Next steps Further resources can be found in the GitHub repositories:
industrial-iot Tutorial Configure Industrial Iot Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-configure-industrial-iot-components.md
Internet
* Secrets: Manage platform settings * Access policies: Manage which applications and users may access the data in the Key Vault and which operations (for example, read, write, list, delete) they are allowed to perform on the network, firewall, VNET, and private endpoints
-* Azure Active Directory (AAD)→App registrations
+* Microsoft Azure Active Directory (Azure AD)→App registrations
* <APP_NAME>-web → Authentication: Manage reply URIs, which is the list of URIs that
-can be used as landing pages after authentication succeeds. The deployment script may be unable to configure this automatically under certain scenarios, such as lack of AAD admin rights. You may want to add or modify URIs when changing the hostname of the Web app, for example, the port number used by the localhost for debugging
+can be used as landing pages after authentication succeeds. The deployment script may be unable to configure this automatically under certain scenarios, such as lack of Azure AD admin rights. You may want to add or modify URIs when changing the hostname of the Web app, for example, the port number used by the localhost for debugging
* App Service * Configuration: Manage the environment variables that control the services or UI * Virtual machine
output of deployment script or reset the password
* Manage the identities of the IoT Edge devices that may access the hub, configure which modules are installed and which configuration they use, for example, encoding parameters for the OPC Publisher * IoT Hub → IoT Edge → \<DEVICE> → Set Modules → OpcPublisher (for standalone OPC Publisher operation only)
+## Configuration via Command-line Arguments for OPC Publisher 2.8.2 and above
-## OPC Publisher 2.8.2 Configuration options for orchestrated mode
-
-The following OPC Publisher configuration can be applied by Command Line Interface (CLI) options or as environment variable settings. When both the environment variable and the CLI argument are provided, the latest will overrule the env variable.
-
-|Configuration Option | Description | Default |
-|-||--|
-site=VALUE |The site OPC Publisher is assigned to. |Not set
-AutoAcceptUntrustedCertificates=VALUE |OPC UA Client Security Config - auto accept untrusted peer certificates. |false
-BatchSize=VALUE |The number of OPC UA data-change messages to be cached for batching. When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled. |50
-BatchTriggerInterval=VALUE |The trigger batching interval in seconds. When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled. |{00:00:10}
-IoTHubMaxMessageSize=VALUE |The maximum size of the (IoT D2C) telemetry message. |0
-Transport=VALUE |Protocol to use for communication with the hub. Allowed values: AmqpOverTcp, AmqpOverWebsocket, MqttOverTcp, MqttOverWebsocket, Amqp, Mqtt, Tcp, Websocket, Any. |MqttOverTcp
-BypassCertVerification=VALUE |Enables/disables bypass of certificate verification for upstream communication to edgeHub. |false
-EnableMetrics=VALUE |Enables/disables upstream metrics propagation. |true
-OperationTimeout=VALUE |OPC UA Stack Transport Secure Channel - OPC UA Service call operation timeout |120,000 (2 min)
-MaxStringLength=VALUE |OPC UA Stack Transport Secure Channel - Maximum length of a string that can be send/received over the OPC UA Secure channel. |130,816 (128KB - 256)
-DefaultSessionTimeout=VALUE |The interval the OPC Publisher is sending keep alive messages in seconds to the OPC servers on the endpoints it's connected to. |0, meaning not set
-MinSubscriptionLifetime=VALUE | OPC UA Client Application Config - Minimum subscription lifetime as per OPC UA definition. |0, meaning not set
-AddAppCertToTrustedStore=VALUE |OPC UA Client Security Config - automatically copy own certificate's public key to the trusted certificate store |true
-ApplicationName=VALUE |OPC UA Client Application Config - Application name as per OPC UA definition. This is used for authentication during communication init handshake and as part of own certificate validation. |"Microsoft.Azure.IIoT"
-ApplicationUri=VALUE | OPC UA Client Application Config - Application URI as per OPC UA definition. |$"urn:localhost:{ApplicationName}:microsoft:"
-KeepAliveInterval=VALUE |OPC UA Client Application Config - Keep alive interval as per OPC UA definition. |10,000 (10s)
-MaxKeepAliveCount=VALUE |OPC UA Client Application Config - Maximum count of kee alive events as per OPC UA definition. | 50
-PkiRootPath=VALUE | OPC UA Client Security Config - PKI certificate store root path. |"pki
-ApplicationCertificateStorePath=VALUE |OPC UA Client Security Config - application's own certificate store path. |$"{PkiRootPath}/own"
-ApplicationCertificateStoreType=VALUE |The own application cert store type (allowed: Directory, X509Store). |Directory
-ApplicationCertificateSubjectName=VALUE |OPC UA Client Security Config - the subject name in the application's own certificate. |"CN=Microsoft.Azure.IIoT, C=DE, S=Bav, O=Microsoft, DC=localhost"
-TrustedIssuerCertificatesPath=VALUE |OPC UA Client Security Config - trusted certificate issuer store path. |$"{PkiRootPath}/issuers"
-TrustedIssuerCertificatesType=VALUE | OPC UA Client Security Config - trusted issuer certificates store type. |Directory
-TrustedPeerCertificatesPath=VALUE | OPC UA Client Security Config - trusted peer certificates store path. |$"{PkiRootPath}/trusted"
-TrustedPeerCertificatesType=VALUE | OPC UA Client Security Config - trusted peer certificates store type. |Directory
-RejectedCertificateStorePath=VALUE | OPC UA Client Security Config - rejected certificates store path. |$"{PkiRootPath}/rejected"
-RejectedCertificateStoreType=VALUE | OPC UA Client Security Config - rejected certificates store type. |Directory
-RejectSha1SignedCertificates=VALUE | OPC UA Client Security Config - reject deprecated Sha1 signed certificates. |false
-MinimumCertificateKeySize=VALUE | OPC UA Client Security Config - minimum accepted certificates key size. |1024
-SecurityTokenLifetime=VALUE | OPC UA Stack Transport Secure Channel - Security token lifetime in milliseconds. |3,600,000 (1h)
-ChannelLifetime=VALUE | OPC UA Stack Transport Secure Channel - Channel lifetime in milliseconds. |300,000 (5 min)
-MaxBufferSize=VALUE | OPC UA Stack Transport Secure Channel - Max buffer size. |65,535 (64KB -1)
-MaxMessageSize=VALUE | OPC UA Stack Transport Secure Channel - Max message size. |4,194,304 (4 MB)
-MaxArrayLength=VALUE | OPC UA Stack Transport Secure Channel - Max array length. |65,535 (64KB - 1)
-MaxByteStringLength=VALUE | OPC UA Stack Transport Secure Channel - Max byte string length. |1,048,576 (1MB);
-
+There are [several Command-line Arguments](reference-command-line-arguments.md#opc-publisher-command-line-arguments-for-version-282-and-above) that can be used to set global settings for OPC Publisher.
+Refer to the `Mode` field in the Command-line Argument description to check whether an argument applies to orchestrated or standalone mode.
## Next steps Now that you have learned how to change the default values of the configuration, you can
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
IoT Edge provides two different types of automatic deployments that you can use
The steps for creating a deployment and a layered deployment are very similar. Any differences are called out in the following steps. 1. In the [Azure portal](https://portal.azure.com), go to your IoT Hub.
-1. On the menu in the left pane, select **IoT Edge** under **Automatic Device Management**.
-1. On the upper bar, select **Create Deployment** or **Create Layered Deployment**.
+1. On the menu in the left pane, select **IoT Edge** under **Device Management**.
+1. On the upper bar, select **Add Deployment** or **Add Layered Deployment**.
There are five steps to create a deployment. The following sections walk through each one.
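The same deployments can also be created outside the portal. As a sketch using the Azure CLI IoT extension (assuming the extension is installed with `az extension add --name azure-iot`, and that `deployment.json` is your own deployment manifest):

```azurecli
az iot edge deployment create \
  --deployment-id my-deployment \
  --hub-name my-iot-hub \
  --content ./deployment.json \
  --target-condition "tags.environment='test'" \
  --priority 10
```

Adding the `--layered` flag creates a layered deployment instead.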
machine-learning Concept Causal Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-causal-inference.md
Then the method combines these two predictive models in a final stage estimation
[EconML](https://www.microsoft.com/research/project/econml/) (powering the backend of the Responsible AI dashboard) is a Python package that applies the power of machine learning techniques to estimate individualized causal responses from observational or experimental data. The suite of estimation methods provided in EconML represents the latest advances in causal machine learning. By incorporating individual machine learning steps into interpretable causal models, these methods improve the reliability of what-if predictions and make causal analysis quicker and easier for a broad set of users.
-[DoWhy](https://microsoft.github.io/dowhy/) is a Python library that aims to spark causal thinking and analysis. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, front-door, instrumental variable and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
+[DoWhy](https://py-why.github.io/dowhy/) is a Python library that aims to spark causal thinking and analysis. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, front-door, instrumental variable and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
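As a sketch of that four-step interface (the synthetic dataset and column names below are illustrative assumptions, not from this article):

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Illustrative synthetic data: confounder w drives both treatment t and outcome y.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
t = (w + rng.normal(size=1000) > 0).astype(int)
y = 2 * t + w + rng.normal(size=1000)
df = pd.DataFrame({"w": w, "t": t, "y": y})

# Step 1: model the causal assumptions explicitly.
model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["w"])
# Step 2: identify the target estimand under those assumptions.
estimand = model.identify_effect(proceed_when_unidentifiable=True)
# Step 3: estimate the effect using backdoor adjustment.
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
# Step 4: refute - check robustness against an added random common cause.
refutation = model.refute_estimate(estimand, estimate, method_name="random_common_cause")
print(estimate.value, refutation)
```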
## Next steps
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
--++ Last updated 05/24/2022
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
--++ Last updated 05/11/2022
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
description: Securely access Azure resources for your machine learning model dep
-++ - Last updated 04/07/2022
machine-learning How To Authenticate Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md
description: Learn to authenticate clients to an Azure Machine Learning online e
-++ - Last updated 05/10/2022
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Automated ML's NLP capability is triggered through task specific `automl` type j
However, there are key differences: * You can ignore `primary_metric`, as it is only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection.
-* The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks.
-* If the majority of the samples in your dataset contain more than 128 words, it's considered long range. For this scenario, you can enable the long range text option with the `enable_long_range_text=True` parameter in your task function. Doing so, helps improve model performance but requires longer training times.
+* The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks.
+* If the majority of the samples in your dataset contain more than 128 words, it's considered long range. By default, automated ML considers all samples long range text. To disable this feature, include the `enable_long_range_text=False` parameter in your `AutoMLConfig` (see the sketch after this list).
* If you enable long range text, then a GPU with higher memory is required such as, [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series. * The `enable_long_range_text` parameter is only available for multi-class classification tasks.
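A minimal sketch of opting out of long range text (assuming `training_data` and `compute_target` are defined elsewhere; the names are placeholders):

```python
from azureml.train.automl import AutoMLConfig

# Sketch: training_data and compute_target are assumed to be defined elsewhere.
automl_config = AutoMLConfig(
    task="text-classification",
    training_data=training_data,
    label_column_name="label",
    compute_target=compute_target,
    enable_long_range_text=False,  # treat all samples as standard-length text
)
```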
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
description: Learn to scale up online endpoints. Get more CPU, memory, disk spac
--++
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
ws.update(v1_legacy_mode=False)
# [Azure CLI extension v1](#tab/azurecliextensionv1)
-The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable the parameter for a workspace, add the parameter `--v1-legacy-mode true`.
+The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml(v1)/workspace#az-ml(v1)-workspace-update) command. To enable the parameter for a workspace, add the parameter `--v1-legacy-mode true`.
> [!IMPORTANT] > The `v1-legacy-mode` parameter is only available in version 1.41.0 or newer of the Azure CLI extension for machine learning v1 (`azure-cli-ml`). Use the `az version` command to view version information.
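For the Python SDK v1 call shown earlier, a fuller sketch (placeholder names; assumes the `azureml-core` package):

```python
from azureml.core import Workspace

ws = Workspace.get(
    name="myworkspace",                  # placeholder workspace name
    subscription_id="<subscription-id>",
    resource_group="myresourcegroup",
)
# Pass True to enable v1 legacy mode, or False to disable it.
ws.update(v1_legacy_mode=True)
```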
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
description: Learn to deploy your machine learning model as a web service that's
-++ - Last updated 04/26/2022
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md
-- Previously updated : 12/22/2021-++ Last updated : 06/15/2022+
machine-learning How To Safely Rollout Managed Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints.md
description: Roll out newer versions of ML models without disruption.
-++ - Last updated 04/29/2022
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
--++ Last updated 06/06/2022
machine-learning How To Troubleshoot Secure Connection Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-secure-connection-workspace.md
+
+ Title: Troubleshoot private endpoint connection
+
+description: 'Learn how to troubleshoot connectivity problems to a workspace that is configured with a private endpoint.'
+++++++ Last updated : 06/09/2022++
+# Troubleshoot connection to a workspace with a private endpoint
+
+When connecting to a workspace that has been configured with a private endpoint, you may encounter a 403 error or a message saying that access is forbidden. Use the information in this article to check for common configuration problems that can cause this error.
+
+> [!TIP]
+> Before using the steps in this article, try the Azure Machine Learning workspace diagnostic API. It can help identify configuration problems with your workspace. For more information, see [How to use workspace diagnostics](how-to-workspace-diagnostic-api.md).
+
+## DNS configuration
+
+The troubleshooting steps for DNS configuration differ based on whether you're using Azure DNS or a custom DNS. Use the following steps to determine which one you're using:
+
+1. In the [Azure portal](https://portal.azure.com), select the private endpoint for your Azure Machine Learning workspace.
+1. From the __Overview__ page, select the __Network Interface__ link.
+
+ :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/private-endpoint-overview.png" alt-text="Screenshot of the private endpoint overview with network interface link highlighted.":::
+
+1. Under __Settings__, select __IP Configurations__ and then select the __Virtual network__ link.
+
+ :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/network-interface-ip-configurations.png" alt-text="Screenshot of the IP configuration with virtual network link highlighted.":::
+
+1. From the __Settings__ section on the left of the page, select the __DNS servers__ entry.
+
+ :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/dns-servers.png" alt-text="Screenshot of the DNS servers configuration.":::
+
+ * If this value is __Default (Azure-provided)__ or __168.63.129.16__, then the VNet is using Azure DNS. Skip to the [Azure DNS troubleshooting](#azure-dns-troubleshooting) section.
+ * If there's a different IP address listed, then the VNet is using a custom DNS solution. Skip to the [Custom DNS troubleshooting](#custom-dns-troubleshooting) section.
+
+### Custom DNS troubleshooting
+
+Use the following steps to verify if your custom DNS solution is correctly resolving names to IP addresses:
+
+1. From a virtual machine, laptop, desktop, or other compute resource that has a working connection to the private endpoint, open a web browser. In the browser, use the URL for your Azure region:
+
+ | Azure region | URL |
+ | -- | -- |
+ | Azure Government | https://portal.azure.us/?feature.privateendpointmanagedns=false |
+ | Azure China 21Vianet | https://portal.azure.cn/?feature.privateendpointmanagedns=false |
+ | All other regions | https://ms.portal.azure.com/?feature.privateendpointmanagedns=false |
+
+1. In the portal, select the private endpoint for the workspace. Make a list of FQDNs listed for the private endpoint.
+
+ :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/custom-dns-settings.png" alt-text="Screenshot of the private endpoint with custom DNS settings highlighted.":::
+
+1. Open a command prompt, PowerShell, or other command line and run the following command for each FQDN returned from the previous step. Each time you run the command, verify that the IP address returned matches the IP address listed in the portal for the FQDN:
+
+ `nslookup <fqdn>`
+
+ For example, running the command `nslookup 29395bb6-8bdb-4737-bf06-848a6857793f.workspace.eastus.api.azureml.ms` would return a value similar to the following text:
+
+ ```
+ Server: yourdnsserver
+ Address: yourdnsserver-IP-address
+
+ Name: 29395bb6-8bdb-4737-bf06-848a6857793f.workspace.eastus.api.azureml.ms
+ Address: 10.3.0.5
+ ```
+
+1. If the `nslookup` command returns an error, or returns a different IP address than displayed in the portal, then the custom DNS solution isn't configured correctly. For more information, see [How to use your workspace with a custom DNS server](how-to-custom-dns.md). To script this check across several FQDNs, see the sketch below.
+
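If you have several FQDNs to verify, a small Python sketch can automate the same comparison; the FQDN and IP below are the example values from the previous step.

```python
import socket

# FQDN-to-IP pairs copied from the private endpoint's portal page.
expected = {
    "29395bb6-8bdb-4737-bf06-848a6857793f.workspace.eastus.api.azureml.ms": "10.3.0.5",
}

for fqdn, expected_ip in expected.items():
    try:
        resolved = socket.gethostbyname(fqdn)
    except socket.gaierror as err:
        print(f"{fqdn}: resolution failed ({err})")
        continue
    status = "OK" if resolved == expected_ip else "MISMATCH - check your DNS server"
    print(f"{fqdn}: resolved {resolved}, expected {expected_ip} -> {status}")
```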
+### Azure DNS troubleshooting
+
+When using Azure DNS for name resolution, use the following steps to verify that the Private DNS integration is configured correctly:
+
+1. On the Private Endpoint, select __DNS configuration__. For each entry in the __Private DNS zone__ column, there should also be an entry in the __DNS zone group__ column.
+
+ :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/dns-zone-group.png" alt-text="Screenshot of the DNS configuration with Private DNS zone and group highlighted.":::
+
+ * If there's a Private DNS zone entry, but __no DNS zone group entry__, delete and recreate the Private Endpoint. When recreating the private endpoint, __enable Private DNS zone integration__.
+ * If __DNS zone group__ isn't empty, select the link for the __Private DNS zone__ entry.
+
+ From the Private DNS zone, select __Virtual network links__. There should be a link to the VNet. If there isn't one, then delete and recreate the private endpoint. When recreating it, select a Private DNS Zone linked to the VNet or create a new one that is linked to it.
+
+ :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/virtual-network-links.png" alt-text="Screenshot of the virtual network links for the Private DNS zone.":::
+
+1. Repeat the previous steps for the rest of the Private DNS zone entries.
+
+## Browser configuration (DNS over HTTPS)
+
+Check if DNS over HTTPS is enabled in your web browser. DNS over HTTPS can prevent Azure DNS from responding with the IP address of the Private Endpoint.
+
+* Mozilla Firefox: For more information, see [Disable DNS over HTTPS in Firefox](https://support.mozilla.org/en-US/kb/firefox-dns-over-https).
+* Microsoft Edge:
+  1. Search for DNS in Microsoft Edge settings.
+ 2. Disable __Use secure DNS to specify how to look up the network address for websites__.
+
+## Proxy configuration
+
+If you use a proxy, it may prevent communication with a secured workspace. To test, use one of the following options:
+
+* Temporarily disable the proxy setting and see if you can connect, as shown in the sketch after this list.
+* Create a [Proxy auto-config (PAC)](https://wikipedia.org/wiki/Proxy_auto-config) file that allows direct access to the FQDNs listed on the private endpoint. It should also allow direct access to the FQDN for any compute instances.
+* Configure your proxy server to forward DNS requests to Azure DNS.
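As a sketch of the first option, the following Python snippet compares a request that honors the system proxy with one that bypasses it; the workspace FQDN is a placeholder you'd copy from the private endpoint.

```python
import urllib.error
import urllib.request

url = "https://<workspace-fqdn>/"  # placeholder: an FQDN from the private endpoint

openers = {
    "via proxy": urllib.request.build_opener(),  # honors system proxy settings
    "direct": urllib.request.build_opener(urllib.request.ProxyHandler({})),  # bypasses proxy
}

for label, opener in openers.items():
    try:
        opener.open(url, timeout=10)
        print(f"{label}: reachable")
    except urllib.error.HTTPError as err:
        # An HTTP status code (even 403) still proves network connectivity.
        print(f"{label}: reachable (HTTP {err.code})")
    except OSError as err:
        print(f"{label}: failed ({err})")
```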
+++
machine-learning How To Troubleshoot Serialization Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-serialization-error.md
+
+ Title: Troubleshoot SerializationError
+
+description: Troubleshooting steps when you get the "cannot import name 'SerializationError'" message.
++++++ Last updated : 06/15/2022+++
+# Troubleshoot "cannot import name 'SerializationError'"
+
+When using Azure Machine Learning, you may receive the error "Cannot import name 'SerializationError'". This error may occur when using an Azure Machine Learning environment, for example, when submitting a training job.
+
+## Cause
+
+This problem is caused by a bug in the Azure Machine Learning SDK version 1.42.0.
+
+## Resolution
+
+Update your Azure Machine Learning environment to use SDK version 1.42.0.post1 or greater.
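For example, with the SDK v1 `Environment` API you could pin the fixed version in your environment's pip dependencies; the environment name and the `ws` workspace object below are placeholders.

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

env = Environment(name="my-training-env")  # placeholder environment name
env.python.conda_dependencies = CondaDependencies.create(
    pip_packages=["azureml-defaults>=1.42.0.post1"]  # pins past the broken 1.42.0
)
env.register(workspace=ws)  # assumes `ws` is an azureml.core.Workspace
```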
+
+For more information on updating an environment, see the following articles:
+
+* [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment)
+* [Create & use software environments (SDK v1)](how-to-use-environments.md#update-an-existing-environment)
+* [Create & manage environments (CLI v2)](how-to-manage-environments-v2.md#update)
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
--++ Last updated 04/26/2022
machine-learning Reference Yaml Endpoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-online.md
--++ Last updated 04/26/2022
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
Execute the following command if you want to delete the secret that holds the ap
```bash $ oc delete secrets/todo-list-secret
-secret "todo-list-secret" deleted
+# secret "todo-list-secret" deleted
``` ### Delete the OpenShift project
You can also delete all the configuration created for this demo by deleting the
```bash $ oc delete project eap-demo
-project.project.openshift.io "eap-demo" deleted
+# project.project.openshift.io "eap-demo" deleted
``` ### Delete the ARO cluster
postgresql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-change-server-configuration.md
Last updated 01/26/2022
# List and update configurations of an Azure Database for PostgreSQL server using Azure CLI + This sample CLI script lists all available configuration parameters as well as their allowable values for Azure Database for PostgreSQL server, and sets the *log_retention_days* to a value that is other than the default one.+ [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
postgresql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-and-firewall-rule.md
Last updated 01/26/2022
# Create an Azure Database for PostgreSQL server and configure a firewall rule using the Azure CLI + This sample CLI script creates an Azure Database for PostgreSQL server and configures a server-level firewall rule. Once the script has been successfully run, the PostgreSQL server can be accessed from all Azure services and the configured IP address. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
postgresql Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-with-vnet-rule.md
Last updated 01/26/2022
# Create a PostgreSQL server and configure a vNet rule using the Azure CLI + This sample CLI script creates an Azure Database for PostgreSQL server and configures a vNet rule. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
postgresql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-point-in-time-restore.md
Last updated 02/11/2022
# Restore an Azure Database for PostgreSQL server using Azure CLI + This sample CLI script restores a single Azure Database for PostgreSQL server to a previous point in time. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
postgresql Sample Scale Server Up Or Down https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-scale-server-up-or-down.md
Last updated 01/26/2022
# Monitor and scale a single PostgreSQL server using Azure CLI
-This sample CLI script scales compute and storage for a single Azure Database for PostgreSQL server after querying the metrics. Compute can scale up or down. Storage can only scale up. \
+
+This sample CLI script scales compute and storage for a single Azure Database for PostgreSQL server after querying the metrics. Compute can scale up or down. Storage can only scale up.
> [!IMPORTANT] > Storage can only be scaled up, not down.
postgresql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-server-logs.md
Last updated 01/26/2022
# Enable and download server slow query logs of an Azure Database for PostgreSQL server using Azure CLI + This sample CLI script enables and downloads the slow query logs of a single Azure Database for PostgreSQL server. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-backup.md
Previously updated : 11/08/2021 Last updated : 06/14/2022 # Backup and restore in Azure Database for PostgreSQL - Single Server
These backup files cannot be exported. The backups can only be used for restore
#### Servers with up to 4-TB storage
-For servers which support up to 4-TB maximum storage, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes.
+For servers that support up to 4-TB maximum storage, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes.
#### Servers with up to 16-TB storage
-In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only. Differential snapshot backups do not occur on a fixed schedule. In a day, three differential snapshot backups are performed. Transaction log backups occur every five minutes.
+In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only. Differential snapshot backups do not occur on a fixed schedule. In a day, multiple differential snapshot backups are performed, but only 3 backups are retained. Transaction log backups occur every five minutes.
> [!NOTE] > Automatic backups are performed for [replica servers](./concepts-read-replicas.md) that are configured with up to 4TB storage configuration.
The backup retention period governs how far back in time a point-in-time restore
### Backup redundancy options
-Azure Database for PostgreSQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../../availability-zones/cross-region-replication-azure.md). This provides better protection and ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
+Azure Database for PostgreSQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, an additional backup copy is replicated to a [paired region](../../availability-zones/cross-region-replication-azure.md). This provides better protection and ability to restore your server in the event of a regional disaster. The Basic tier only offers locally redundant backup storage.
> [!IMPORTANT] > Configuring locally redundant or geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option. ### Backup storage cost
-Azure Database for PostgreSQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/).
+Azure Database for PostgreSQL provides up to 100% of your provisioned server storage as backup storage at no extra cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no extra cost. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/).
You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available in the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
private-link Configure Asg Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/configure-asg-private-endpoint.md
Previously updated : 06/02/2022 Last updated : 06/14/2022
Azure Private endpoints support application security groups for network security
- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
+- An Azure web app with a **PremiumV2-tier** or higher app service plan, deployed in your Azure subscription.
- For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
Azure Private endpoints support application security groups for network security
- The example virtual network used in this article is named **myVNet**. Replace the example with your virtual network.
+- The latest version of the Azure CLI, installed.
+
+ Check your version of the Azure CLI in a terminal or command window by running `az --version`. For the latest version, see the most recent [release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli).
+
+ If you don't have the latest version of the Azure CLI, update it by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+ ## Create private endpoint with an ASG An ASG can be associated with a private endpoint when it's created. The following procedures demonstrate how to associate an ASG with a private endpoint when it's created.
+# [**Portal**](#tab/portal)
+ 1. Sign-in to the [Azure portal](https://portal.azure.com). 2. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
An ASG can be associated with a private endpoint when it's created. The followin
12. Select **Create**.
+# [**PowerShell**](#tab/powershell)
+
+```azurepowershell-interactive
+## Place the previously created webapp into a variable. ##
+$webapp = Get-AzWebApp -ResourceGroupName myResourceGroup -Name myWebApp1979
+
+## Create the private endpoint connection. ##
+$pec = @{
+ Name = 'myConnection'
+ PrivateLinkServiceId = $webapp.ID
+ GroupID = 'sites'
+}
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection @pec
+
+## Place the virtual network you created previously into a variable. ##
+$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
+
+## Place the application security group you created previously into a variable. ##
+$asg = Get-AzApplicationSecurityGroup -ResourceGroupName 'myResourceGroup' -Name 'myASG'
+
+## Create the private endpoint. ##
+$pe = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myPrivateEndpoint'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ PrivateLinkServiceConnection = $privateEndpointConnection
+ ApplicationSecurityGroup = $asg
+}
+New-AzPrivateEndpoint @pe
+```
+
+# [**CLI**](#tab/cli)
+
+```azurecli-interactive
+id=$(az webapp list \
+ --resource-group myResourceGroup \
+ --query '[].[id]' \
+ --output tsv)
+
+asgid=$(az network asg show \
+ --name myASG \
+ --resource-group myResourceGroup \
+ --query id \
+ --output tsv)
+
+az network private-endpoint create \
+ --connection-name myConnection \
+ --name myPrivateEndpoint \
+ --private-connection-resource-id $id \
+ --resource-group myResourceGroup \
+ --subnet myBackendSubnet \
+ --asg id=$asgid \
+ --group-id sites \
+ --vnet-name myVNet
+```
++ ## Associate an ASG with an existing private endpoint An ASG can be associated with an existing private endpoint. The following procedures demonstrate how to associate an ASG with an existing private endpoint.
An ASG can be associated with an existing private endpoint. The following proced
> [!IMPORTANT] > You must have a previously deployed private endpoint to proceed with the steps in this section. The example endpoint used in this section is named **myPrivateEndpoint**. Replace the example with your private endpoint.
+# [**Portal**](#tab/portal)
+ 1. Sign-in to the [Azure portal](https://portal.azure.com). 2. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
An ASG can be associated with an existing private endpoint. The following proced
6. Select **Save**.
+# [**PowerShell**](#tab/powershell)
+
+Azure PowerShell doesn't currently support associating an ASG with an existing private endpoint.
+
+# [**CLI**](#tab/cli)
+
+```azurecli-interactive
+asgid=$(az network asg show \
+ --name myASG \
+ --resource-group myResourceGroup \
+ --query id \
+ --output tsv)
+
+az network private-endpoint asg add \
+ --resource-group myResourceGroup \
+ --endpoint-name myPrivateEndpoint \
+ --asg-id $asgid
+```
++ ## Next steps For more information about Azure Private Link, see:
private-link Tutorial Private Endpoint Cosmosdb Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-cosmosdb-portal.md
In this section, you'll create a virtual machine that will be used to test the p
| Resource Group | Select **myResourceGroup** | | **Instance details** | | | Virtual machine name | Enter **myVM** |
- | Region | Select **East US** |
+ | Region | Select **(US) East US** |
| Availability Options | Select **No infrastructure redundancy required** | | Security type | Select **Standard** | | Image | Select **Windows Server 2019 Datacenter - Gen2** |
private-link Tutorial Private Endpoint Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-storage-portal.md
# Tutorial: Connect to a storage account using an Azure Private Endpoint
-Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to communicate with Private Link resources privately.
+Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to privately and securely communicate with Private Link resources such as Azure Storage.
In this tutorial, you learn how to:
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
Azure Route Server simplifies dynamic routing between your network virtual appliance (NVA) and your virtual network. It allows you to exchange routing information directly through Border Gateway Protocol (BGP) routing protocol between any NVA that supports the BGP routing protocol and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNET) without the need to manually configure or maintain route tables. Azure Route Server is a fully managed service and is configured with high availability. > [!IMPORTANT]
-> If you have an Azure Route Server created before September 1st and it doesn't have a public IP address asssociated, you'll need to recreate the Route Server so it can obtain an IP address for management purpose.
+> Azure Route Servers created before November 1st, 2021, that don't have a public IP address associated are deployed with the public preview offering. The public preview offering isn't backed by a Generally Available SLA and support. To deploy Azure Route Server with the Generally Available offering, and to achieve a Generally Available SLA and support, delete and recreate your Route Server.
## How does it work?
security Secure Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-deploy.md
The focus of the release phase is readying a project for public release. This in
### Check your applicationΓÇÖs performance before you launch
-Check your application's performance before you launch it or deploy updates to production. Run cloud-based [load tests](https://www.visualstudio.com/docs/test/performance-testing/getting-started/getting-started-with-performance-testing) by using Visual Studio to find performance problems in your application, improve deployment quality, make sure that your application is always up or available, and that your application can handle traffic for your launch.
+Check your application's performance before you launch it or deploy updates to production. Run cloud-based [load tests](/azure/load-testing/) by using Visual Studio to find performance problems in your application, improve deployment quality, make sure that your application is always up or available, and that your application can handle traffic for your launch.
### Install a web application firewall
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
The Azure services that support each encryption model:
| IoT Hub | Yes | Yes | Yes | | IoT Hub Device Provisioning | Yes | Yes | - | | **Management and Governance** | | | |
+| Azure Managed Grafana | Yes | - | N/A |
| Azure Site Recovery | Yes | - | - | | Azure Migrate | Yes | Yes | - | | **Media** | | | |
security Operational Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-best-practices.md
You can use [Azure Resource Manager](../../azure-resource-manager/templates/synt
**Detail**: [Azure Pipelines](/azure/devops/pipelines/index) is a solution for automating multiple-stage deployment and managing the release process. Create managed continuous deployment pipelines to release quickly, easily, and often. With Azure Pipelines, you can automate your release process, and you can have predefined approval workflows. Deploy on-premises and to the cloud, extend, and customize as required. **Best practice**: Check your app's performance before you launch it or deploy updates to production.
-**Detail**: Run cloud-based [load tests](/azure/devops/test/load-test/overview#alternatives) to:
+**Detail**: Run cloud-based [load tests](/azure/load-testing/) to:
- Find performance problems in your app. - Improve deployment quality.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs description: Learn about specific configuration steps for Microsoft Sentinel data connectors.-+ Last updated 01/04/2022-+
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Kusto function alias:** | ArubaClearPass |
-| **Kusto function URL:** | https://aka.ms/Sentinel-arubaclearpass-parser |
+| **Kusto function URL:** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Aruba%20ClearPass/Parsers/ArubaClearPass.txt |
| **Vendor documentation/<br>installation instructions** | Follow Aruba's instructions to [configure ClearPass](https://www.arubanetworks.com/techdocs/ClearPass/6.7/PolicyManager/Content/CPPM_UserGuide/Admin/syslogExportFilters_add_syslog_filter_general.htm). | | **Supported by** | Microsoft |
Microsoft Sentinel can apply machine learning (ML) to Security events data to id
| **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md)<br><br>[Configure Webhooks](#configure-webhooks) <br>[Add Callback URL to Webhook configuration](#add-callback-url-to-webhook-configuration)| | **Log Analytics table(s)** | Workplace_Facebook_CL | | **DCR support** | Not currently supported |
-| **Azure Function App code** | https://aka.ms/Sentinel-WorkplaceFacebook-functionapp |
+| **Azure Function App code** | https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Workplace%20from%20Facebook/Data%20Connectors/WorkplaceFacebook/WorkplaceFacebookWebhooksSentinelConn.zip |
| **API credentials** | <li>WorkplaceAppSecret<li>WorkplaceVerifyToken | | **Vendor documentation/<br>installation instructions** | <li>[Configure Webhooks](https://developers.facebook.com/docs/workplace/reference/webhooks)<li>[Configure permissions](https://developers.facebook.com/docs/workplace/reference/permissions) | | **Connector deployment instructions** | <li>[Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template<li>[Manual deployment](connect-azure-functions-template.md?tabs=MPY) | | **Kusto function alias** | Workplace_Facebook |
-| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-WorkplaceFacebook-parser |
+| **Kusto function URL/<br>Parser config instructions** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Workplace%20from%20Facebook/Parsers/Workplace_Facebook.txt |
| **Application settings** | <li>WorkplaceAppSecret<li>WorkplaceVerifyToken<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
sentinel Migration Ingestion Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-tool.md
To optimize performance, [configure the Logstash tier size](https://www.elastic.
You can ingest data to Azure Blob Storage in several ways. - [Azure Data Factory or Azure Synapse](../data-factory/connector-azure-blob-storage.md) - [AzCopy](../storage/common/storage-use-azcopy-v10.md)-- [Azure Storage Explorer](/architecture/data-science-process/move-data-to-azure-blob-using-azure-storage-explorer)
+- [Azure Storage Explorer](/azure/architecture/data-science-process/move-data-to-azure-blob-using-azure-storage-explorer)
- [Python](../storage/blobs/storage-quickstart-blobs-python.md) - [SSIS](/azure/architecture/data-science-process/move-data-to-azure-blob-using-ssis)
To use the SIEM data migration accelerator:
In this article, you learned how to select a tool to ingest your data into the target platform. > [!div class="nextstepaction"]
-> [Ingest your data](migration-export-ingest.md)
+> [Ingest your data](migration-export-ingest.md)
sentinel Migration Splunk Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-splunk-detection-rules.md
series_decompose_anomalies(Trend)
|`match(X,Y)` |Returns if X matches the regex pattern Y. |`match(field, "^\d{1,3}.\d$")` |[matches regex](/azure/data-explorer/kusto/query/re2) |`… | where field matches regex @"^\d{1,3}.\d$")` |
|`max(X,…)` |Returns the maximum value in a column. |`max(delay, mydelay)` |• [max()](/azure/data-explorer/kusto/query/max-aggfunction)<br>• [arg_max()](/azure/data-explorer/kusto/query/arg-max-aggfunction) |`… | summarize max(field)` |
|`md5(X)` |Returns the MD5 hash of a string value `X`. |`md5(field)` |[hash_md5](/azure/data-explorer/kusto/query/md5hashfunction) |`hash_md5("X")` |
-|`min(X,…)` |Returns the minimum value in a column. |`min(delay, mydelay)` |• [min_of()](/azure/data-explorer/kusto/query/min-offunction)<br>• [min()](/azure/data-explorer/kusto/query/min-aggfunction)<br>• [arg_min](/azure/data-explorer/kusto/query/arg-min-%3Csub%3E**aggfunction) |[KQL example](#minx-kql-example) |
+|`min(X,…)` |Returns the minimum value in a column. |`min(delay, mydelay)` |• [min_of()](/azure/data-explorer/kusto/query/min-offunction)<br>• [min()](/azure/data-explorer/kusto/query/min-aggfunction)<br>• [arg_min](/azure/data-explorer/kusto/query/arg-min-aggfunction) |[KQL example](#minx-kql-example) |
|`mvcount(X)` |Returns the number (total) of `X` values. |`mvcount(multifield)` |[dcount](/azure/data-explorer/kusto/query/dcount-aggfunction) |`…| summarize dcount(X) by Y` |
|`mvfilter(X)` |Filters a multi-valued field based on the boolean `X` expression. |`mvfilter(match(email, "net$"))` |[mv-apply](/azure/data-explorer/kusto/query/mv-applyoperator) |[KQL example](#mvfilterx-kql-example) |
|`mvindex(X,Y,Z)` |Returns a subset of the multi-valued `X` argument from a start position (zero-based) `Y` to `Z` (optional). |`mvindex( multifield, 2)` |[array_slice](/azure/data-explorer/kusto/query/arrayslicefunction) |`array_slice(arr, 1, 2)` |
service-health Service Health Portal Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/service-health-portal-update.md
We're updating the Azure Service Health portal experience. The new experience l
## Highlights of the new experience -- **Tenant level view** - Users who are Tenant Admins can now see Service Issues that happen at a Tenant level. Service Issues blade and Health History blades are updated to show incidents both at Tenant and Subscription levels. Users can filter on the scope (Tenant or Subscription) within the blades. The scope column indicates when an event is at the Tenant or Subscriber level.
+- **Tenant level view** - Users who are Tenant Admins can now see Service Issues that happen at a Tenant level. The Service Issues and Health History blades are updated to show incidents at both the Tenant and Subscription levels. Users can filter on the scope (Tenant or Subscription) within the blades. The scope column indicates whether an event is at the Tenant or Subscription level. Classic view doesn't support tenant-level events; they're available only in the new user interface.
- **Enhanced Map** - The Service Issues blade shows an enhanced version of the map with all the user services across the world. This version helps you find services that might be impacted by an outage easily. - **Issues Details** - The issues details look and feel has been updated, for better readability. - **Removal of personalized dashboard** - Users can no longer pin a personalized map to the dashboard. This feature has been deprecated in the new experience.
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md
If you're running a Hyper-V core server, download the setup file and follow thes
3. Register the server by running this command: ```
- cd "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r /Friendlyname "FriendlyName of the Server" /Credentials "path to where the credential file is saved"
+ cd "C:\Program Files\Microsoft Azure Site Recovery Provider"
+ "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r /Friendlyname "FriendlyName of the Server" /Credentials "path to where the credential file is saved"
``` ## Set up the target environment
Site Recovery checks that you have one or more compatible Azure storage accounts
## Next steps > [!div class="nextstepaction"]
-> [Run a disaster recovery drill](tutorial-dr-drill-azure.md)
+> [Run a disaster recovery drill](tutorial-dr-drill-azure.md)
site-recovery Migrate Tutorial Aws Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/migrate-tutorial-aws-azure.md
This article describes options for migrating Amazon Web Services (AWS) instances to Azure.
+> [!NOTE]
+> On Linux distributions, only the stock kernels that are part of the distribution minor version release/update are supported. [Learn more](https://docs.microsoft.com/azure/site-recovery/vmware-physical-azure-support-matrix#for-linux).
+ ## Migrate with Azure Migrate We recommend that you migrate AWS EC2 instances to Azure using the [Azure Migrate](../migrate/migrate-services-overview.md) service. Azure Migrate is purpose-built for server migration. Azure Migrate provides a centralized hub for discovery, assessment and migration of on-premises machines to Azure.
If you're already using Azure Site Recovery, and you want to continue using it f
> [!NOTE]
-> When you run a failover for disaster recovery, as a last step you commit the failover. When you migrate AWS instances, the **Commit** option isn't relevant. Instead, you select the **Complete Migration** option.
+> When you run a failover for disaster recovery, as a last step you commit the failover. When you migrate AWS instances, the **Commit** option isn't relevant. Instead, you select the **Complete Migration** option.
## Next steps
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
Previously updated : 10/11/2021 Last updated : 06/14/2022
Several filters are available for customizing a blob inventory report:
|--|--|--|--| | blobTypes | Array of predefined enum values | Valid values are `blockBlob` and `appendBlob` for hierarchical namespace enabled accounts, and `blockBlob`, `appendBlob`, and `pageBlob` for other accounts. This field is not applicable for inventory on a container, (objectType: `container`). | Yes | | prefixMatch | Array of up to 10 strings for prefixes to be matched. | If you don't define *prefixMatch* or provide an empty prefix, the rule applies to all blobs within the storage account. A prefix must be a container name prefix or a container name. For example, `container`, `container1/foo`. | No |
+| excludePrefix | Array of up to 10 strings for prefixes to be excluded. | Specifies the blob paths to exclude from the inventory report.<br><br>An *excludePrefix* must be a container name prefix or a container name. An empty *excludePrefix* would mean that all blobs with names matching any *prefixMatch* string will be listed.<br><br>If you want to include a certain prefix, but exclude some specific subset from it, then you could use the excludePrefix filter. For example, if you want to include all blobs under `container-a` except those under the folder `container-a/folder`, then *prefixMatch* should be set to `container-a` and *excludePrefix* should be set to `container-a/folder`. A sketch of this matching logic follows the table. | No |
| includeSnapshots | boolean | Specifies whether the inventory should include snapshots. Default is `false`. This field is not applicable for inventory on a container, (objectType: `container`). | No | | includeBlobVersions | boolean | Specifies whether the inventory should include blob versions. Default is `false`. This field is not applicable for inventory on a container, (objectType: `container`). | No |
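The include/exclude semantics can be illustrated with a short sketch; this mirrors the documented behavior and isn't the service's actual implementation:

```python
def included_in_inventory(blob_path, prefix_match, exclude_prefix):
    """Return True if a blob path would appear in the inventory report."""
    # An empty prefixMatch means every blob is a candidate.
    matched = not prefix_match or any(blob_path.startswith(p) for p in prefix_match)
    excluded = any(blob_path.startswith(p) for p in exclude_prefix)
    return matched and not excluded

# The example from the table: include container-a, except container-a/folder.
assert included_in_inventory("container-a/data.csv", ["container-a"], ["container-a/folder"])
assert not included_in_inventory("container-a/folder/x.csv", ["container-a"], ["container-a/folder"])
```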
View the JSON for inventory rules by selecting the **Code view** tab in the **Bl
"filters": { "blobTypes": ["blockBlob", "appendBlob", "pageBlob"], "prefixMatch": ["inventorytestcontainer1", "inventorytestcontainer2/abcd", "etc"],
+ "excludePrefix": ["inventorytestcontainer10", "etc/logs"],
"includeSnapshots": false, "includeBlobVersions": true, },
View the JSON for inventory rules by selecting the **Code view** tab in the **Bl
### Custom schema fields supported for blob inventory -- Name (Required)-- Creation-Time-- Last-Modified-- Content-Length-- Content-MD5-- BlobType-- AccessTier-- AccessTierChangeTime-- Expiry-Time-- hdi_isfolder-- Owner-- Group-- Permissions-- Acl-- Snapshot (Available and required when you choose to include snapshots in your report)-- VersionId (Available and required when you choose to include blob versions in your report)-- IsCurrentVersion (Available and required when you choose to include blob versions in your report)-- Metadata-- LastAccessTime
+> [!NOTE]
+> The **Data Lake Storage Gen2** column shows support in accounts that have the hierarchical namespace feature enabled.
+
+| Field | Blob Storage (default support) | Data Lake Storage Gen2 |
+||-||
+| Name (Required) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Creation-Time | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Last-Modified | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Last-Access-Time | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| ETag | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Content-Length | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Content-Type | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Content-Encoding | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Content-Language | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Content-CRC64 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Content-MD5 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Cache-Control | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Cache-Disposition | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| BlobType | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| AccessTier | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| AccessTierChangeTime | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| LeaseStatus | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| LeaseState | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| ServerEncrypted | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| CustomerProvidedKeySHA256 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Metadata | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Expiry-Time | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| hdi_isfolder | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Owner | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Group | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Permissions | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Acl | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Snapshot (Available and required when you choose to include snapshots in your report) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Deleted | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) |
+| DeletedId | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| DeletedTime | ![No](../media/icons/no-icon.png)| ![Yes](../media/icons/yes-icon.png) |
+| RemainingRetentionDays | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)|
+| VersionId (Available and required when you choose to include blob versions in your report) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| IsCurrentVersion (Available and required when you choose to include blob versions in your report) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| TagCount | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Tags | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| CopyId | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| CopySource | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| CopyStatus | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| CopyProgress | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| CopyCompletionTime | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| CopyStatusDescription | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| ImmutabilityPolicyUntilDate | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| ImmutabilityPolicyMode | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| LegalHold | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| RehydratePriority | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| ArchiveStatus | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| EncryptionScope | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| IncrementalCopy | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| x-ms-blob-sequence-number | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
### Custom schema fields supported for container inventory -- Name (Required)-- Last-Modified-- LeaseStatus-- LeaseState-- LeaseDuration-- PublicAccess-- HasImmutabilityPolicy-- HasLegalHold-- Metadata
+> [!NOTE]
+> The **Data Lake Storage Gen2** column shows support in accounts that have the hierarchical namespace feature enabled.
+
+| Field | Blob Storage (default support) | Data Lake Storage Gen2 |
+||-||
+| Name (Required) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Last-Modified | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| ETag | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| LeaseStatus | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| LeaseState | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| LeaseDuration | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Metadata | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| PublicAccess | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| DefaultEncryptionScope | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| DenyEncryptionScopeOverride | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| HasImmutabilityPolicy | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| HasLegalHold | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| ImmutableStorageWithVersioningEnabled | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Deleted (Will appear only if include deleted containers is selected) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Version (Will appear only if include deleted containers is selected) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| DeletedTime (Will appear only if include deleted containers is selected) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| RemainingRetentionDays (Will appear only if include deleted containers is selected) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+ ## Inventory run
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
description: Learn about change feed logs in Azure Blob Storage and how to use t
Previously updated : 04/13/2022 Last updated : 06/15/2022
The following example shows a change event record in JSON format that uses event
This section describes known issues and conditions in the current release of the change feed. -- Change event records for any single change might appear more than once in your change feed. - The `url` property of the log file is currently always empty. - The `LastConsumable` property of the segments.json file does not list the very first segment that the change feed finalizes. This issue occurs only after the first segment is finalized. All subsequent segments after the first hour are accurately captured in the `LastConsumable` property. - You currently cannot see the **$blobchangefeed** container when you call ListContainers API and the container does not show up on Azure portal or Storage Explorer. You can view the contents by calling the ListBlobs API on the $blobchangefeed container directly.
storage File Sync Server Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-registration.md
description: Learn how to register and unregister a Windows Server with an Azure
Previously updated : 01/3/2022 Last updated : 06/15/2022
Now that all data has been recalled and the server has been removed from all syn
![Unregister server](media/storage-sync-files-server-registration/unregister-server-1.png)
+#### Unregister the server with PowerShell
+You can also unregister the server via PowerShell by using the `Unregister-AzStorageSyncServer` cmdlet.
+
+> [!WARNING]
+> Unregistering a server will result in cascading deletes of all server endpoints on the server. You should only run this cmdlet when you are certain that no path on the server is to be synced anymore.
+
+```powershell
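+# Look up the registered server to obtain its ServerId.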
+$RegisteredServer = Get-AzStorageSyncServer -ResourceGroupName "<your-resource-group-name>" -StorageSyncServiceName "<your-storage-sync-service-name>"
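+# Unregister the server; -Force skips the confirmation prompt.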
+Unregister-AzStorageSyncServer -Force -ResourceGroupName "<your-resource-group-name>" -StorageSyncServiceName "<your-storage-sync-service-name>" -ServerId $RegisteredServer.ServerId
+```
+ ## Ensuring Azure File Sync is a good neighbor in your datacenter Since Azure File Sync will rarely be the only service running in your datacenter, you may want to limit the network and storage usage of Azure File Sync.
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-create-workspace.md
Fill in the following fields:
1. **Region** - Pick the region where you have placed your client applications/services (for example, Azure VM, Power BI, Azure Analysis Service) and storages that contain data (for example Azure Data Lake storage, Azure Cosmos DB analytical storage). > [!NOTE]
-> A workspace that is not co-located with the client applications or storage can be the root cause of many performance issues. If you data or the clients are placed in multiple regions, you can create separate workspaces in different regions co-located with your data and clients.
+> A workspace that is not co-located with the client applications or storage can be the root cause of many performance issues. If your data or the clients are placed in multiple regions, you can create separate workspaces in different regions co-located with your data and clients.
Under **Select Data Lake Storage Gen 2**:
synapse-analytics Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/cheat-sheet.md
-# Cheat sheet for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytic
+# Cheat sheet for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
This cheat sheet provides helpful tips and best practices for building dedicated SQL pool (formerly SQL DW) solutions.
Learn more about [typical architectures that take advantage of dedicated SQL poo
Deploy in one click your spokes in SQL databases from dedicated SQL pool (formerly SQL DW):
-[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FMicrosoft%2Fsql-data-warehouse-samples%2Fmaster%2Farm-templates%2FsqlDwSpokeDbTemplate%2Fazuredeploy.json)
+[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FMicrosoft%2Fsql-data-warehouse-samples%2Fmaster%2Farm-templates%2FsqlDwSpokeDbTemplate%2Fazuredeploy.json)
synapse-analytics Quickstart Bulk Load Copy Tsql Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql-examples.md
Previously updated : 03/07/2022 Last updated : 06/15/2022
Managed Identity authentication is required when your storage account is attache
Set-AzSqlServer -ResourceGroupName your-database-server-resourceGroup -ServerName your-SQL-servername -AssignIdentity ```
- This step is not required for dedicated SQL pools within a Synapse workspace.
-
-1. If you have a Synapse workspace, register your workspace's system-managed identity:
-
- 1. Go to your Synapse workspace in the Azure portal.
- 2. Go to the **Managed identities** page.
- 3. Make sure the "Allow Pipelines" option is enabled.
-
- ![Register workspace system msi](./media/quickstart-bulk-load-copy-tsql-examples/msi-register-example.png)
+ This step is not required for dedicated SQL pools within a Synapse workspace. The system assigned managed identity (SA-MI) of the workspace is a member of the Synapse Administrator role and thus has elevated privileges on the dedicated SQL pools of the workspace.
1. Create a **general-purpose v2 Storage Account**. For more information, see [Create a storage account](../../storage/common/storage-account-create.md).
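Once the managed identity has been granted access to the storage account (for this scenario, a role such as Storage Blob Data Contributor), a COPY statement can authenticate as that identity. A hedged sketch, assuming the SqlServer module's `Invoke-Sqlcmd` and placeholder server, pool, table, and storage path names:

```azurepowershell
# Acquire an Azure AD token for the SQL endpoint from the signed-in Az session.
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token

# The CREDENTIAL clause tells COPY to authenticate to storage
# as the server's (or workspace's) managed identity.
$query = @"
COPY INTO dbo.MyTable
FROM 'https://<storage-account>.blob.core.windows.net/<container>/<folder>/'
WITH (
    FILE_TYPE = 'CSV',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
)
"@

Invoke-Sqlcmd -ServerInstance "<server-name>.database.windows.net" `
    -Database "<dedicated-sql-pool>" -AccessToken $token -Query $query
```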
virtual-machine-scale-sets Virtual Machine Scale Sets Design Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md
A scale set that is not defined with Azure Managed Disks relies on user-created
## Overprovisioning
-Scale sets currently default to "overprovisioning" VMs. With overprovisioning turned on, the scale set actually spins up more VMs than you asked for, then deletes the extra VMs once the requested number of VMs are successfully provisioned. Overprovisioning improves provisioning success rates and reduces deployment time. You are not billed for the extra VMs, and they do not count toward your quota limits.
+With overprovisioning turned on, the scale set actually spins up more VMs than you asked for, then deletes the extra VMs once the requested number of VMs are successfully provisioned. Overprovisioning improves provisioning success rates and reduces deployment time. You are not billed for the extra VMs, and they do not count toward your quota limits.
While overprovisioning does improve provisioning success rates, it can cause confusing behavior for an application that is not designed to handle extra VMs appearing and then disappearing. To turn overprovisioning off, ensure you have the following string in your template: `"overprovision": "false"`. More details can be found in the [Scale Set REST API documentation](/rest/api/virtualmachinescalesets/create-or-update-a-set).
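The same switch can be set when you build a scale set configuration in Azure PowerShell. A minimal sketch, assuming `New-AzVmssConfig` and placeholder SKU and capacity values (the rest of the profile is omitted):

```azurepowershell
# Build a scale set configuration with overprovisioning turned off;
# OS, storage, and network profiles would be added before deployment.
$vmssConfig = New-AzVmssConfig -Location "eastus" `
    -SkuName "Standard_DS1_v2" -SkuCapacity 3 `
    -UpgradePolicyMode "Manual" -Overprovision $false
```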
virtual-machines Ephemeral Os Disks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks-faq.md
A: Yes, you can attach a managed data disk to a VM that uses an ephemeral OS dis
**Q: Will all VM sizes be supported for ephemeral OS disks?**
-A: No, most Premium Storage VM sizes are supported (DS, ES, FS, GS, M, etc.). To know whether a particular VM size supports ephemeral OS disks, you can:
+A: No, most Premium Storage VM sizes are supported (DS, ES, FS, GS, M, etc.). To find out whether a particular VM size supports ephemeral OS disks for a given OS image size, you can use the script below. It takes the OS image size and location as inputs and lists each VM SKU with the placements it supports. If both OS cache and temp disk placement are marked as not supported, an ephemeral OS disk can't be used for that OS image size.
-Call `Get-AzComputeResourceSku` PowerShell cmdlet
```azurepowershell-interactive
+[CmdletBinding()]
+param([Parameter(Mandatory=$true)]
+ [ValidateNotNullOrEmpty()]
+ [string]$Location,
+ [Parameter(Mandatory=$true)]
+ [long]$OSImageSizeInGB
+ )
-$vmSizes=Get-AzComputeResourceSku | where{$_.ResourceType -eq 'virtualMachines' -and $_.Locations.Contains('CentralUSEUAP')}
-
-foreach($vmSize in $vmSizes)
+Function HasSupportEphemeralOSDisk([object[]] $capability)
+{
+ return $capability | where { $_.Name -eq "EphemeralOSDiskSupported" -and $_.Value -eq "True"}
+}
+
+Function Get-MaxTempDiskAndCacheSize([object[]] $capabilities)
{
- foreach($capability in $vmSize.capabilities)
- {
- if($capability.Name -eq 'EphemeralOSDiskSupported' -and $capability.Value -eq 'true')
- {
- $vmSize
- }
- }
+ $MaxResourceVolumeGB = 0;
+ $CachedDiskGB = 0;
+
+ foreach($capability in $capabilities)
+ {
+ if ($capability.Name -eq "MaxResourceVolumeMB")
+ { $MaxResourceVolumeGB = [int]($capability.Value / 1024) }
+
+ if ($capability.Name -eq "CachedDiskBytes")
+ { $CachedDiskGB = [int]($capability.Value / (1024 * 1024 * 1024)) }
+ }
+
+ return ($MaxResourceVolumeGB, $CachedDiskGB)
}
+
+Function Get-EphemeralSupportedVMSku
+{
+ [CmdletBinding()]
+ Param
+ (
+ [Parameter(Mandatory=$true)]
+ [long]$OSImageSizeInGB,
+ [Parameter(Mandatory=$true)]
+ [string]$Location
+ )
+
+ $VmSkus = Get-AzComputeResourceSku $Location | Where-Object { $_.ResourceType -eq "virtualMachines" -and (HasSupportEphemeralOSDisk $_.Capabilities) -ne $null }
+
+ $Response = @()
+ foreach ($sku in $VmSkus)
+ {
+ ($MaxResourceVolumeGB, $CachedDiskGB) = Get-MaxTempDiskAndCacheSize $sku.Capabilities
+
+ $Response += New-Object PSObject -Property @{
+ ResourceSKU = $sku.Size
+ TempDiskPlacement = @{ $true = "NOT SUPPORTED"; $false = "SUPPORTED"}[$MaxResourceVolumeGB -lt $OSImageSizeInGB]
+ CacheDiskPlacement = @{ $true = "NOT SUPPORTED"; $false = "SUPPORTED"}[$CachedDiskGB -lt $OSImageSizeInGB]
+ };
+ }
+
+ return $Response
+}
+
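+# List each ephemeral-capable VM SKU in the region, showing whether an OS image
+# of the given size fits the temp disk placement and/or the cache disk placement.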
+Get-EphemeralSupportedVMSku -OSImageSizeInGB $OSImageSizeInGB -Location $Location | Format-Table
```

**Q: Can the ephemeral OS disk be applied to existing VMs and scale sets?**
virtual-machines Automation Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-control-plane.md
The table below defines the parameters used for defining the Key Vault informati
> | Variable | Description | Type | Notes |
> | -------- | ----------- | ---- | ----- |
> | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Optional | |
-> | `bastion_deployment` | Boolean flag controlling if Azure bastion host is to be deployed | Optional | |
+> | `bastion_deployment` | Boolean flag controlling if Azure Bastion host is to be deployed | Optional | |
> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
> | `use_private_endpoint` | Boolean flag controlling if private endpoints are used. | Optional | Recommended |
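In the control plane's tfvars parameter file these flags are plain booleans. A sketch of how they might appear (variable names from the table above; values illustrative only):

```terraform
# Control-plane parameter file fragment (illustrative values)
firewall_deployment                = true
bastion_deployment                 = true
enable_purge_control_for_keyvaults = false  # use only for test deployments
use_private_endpoint               = true   # recommended
```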
virtual-wan About Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing.md
Title: 'About virtual hub routing' description: Learn about Virtual WAN virtual hub routing.- Previously updated : 04/27/2021 Last updated : 06/14/2022
Configuring static routes provides a mechanism to steer traffic through a next h
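As an illustration, a custom route table with one static route that steers a prefix at a next-hop resource might be created as follows. This is a sketch, assuming the Az.Network cmdlets `New-AzVHubRoute` and `New-AzVHubRouteTable` and placeholder names throughout:

```azurepowershell
# Define a static route: send 10.30.0.0/16 to a next-hop resource
# (for example, a firewall or NVA) identified by its resource ID.
$route = New-AzVHubRoute -Name "private-traffic" `
    -Destination @("10.30.0.0/16") -DestinationType "CIDR" `
    -NextHop "<next-hop-resource-id>" -NextHopType "ResourceId"

# Create a route table on the hub that carries the static route.
New-AzVHubRouteTable -ResourceGroupName "<resource-group>" `
    -VirtualHubName "<hub-name>" -Name "RT_Private" `
    -Route @($route) -Label @("private")
```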
## <a name="route"></a>Route tables for pre-existing routes
-Route tables now have features for association and propagation. A pre-existing route table is a route table that does not have these features. If you have pre-existing routes in hub routing and would like to use the new capabilities, consider the following:
+Route tables now have features for association and propagation. A pre-existing route table is a route table that doesn't have these features. If you have pre-existing routes in hub routing and would like to use the new capabilities, consider the following:
* **Standard Virtual WAN Customers with pre-existing routes in virtual hub**:
- If you have pre-existing routes in Routing section for the hub in Azure portal, you will need to first delete them and then attempt creating new route tables (available in the Route Tables section for the hub in Azure portal).
+ If you have pre-existing routes in the Routing section for the hub in the Azure portal, you'll need to first delete them and then attempt creating new route tables (available in the Route Tables section for the hub in the Azure portal).
* **Basic Virtual WAN Customers with pre-existing routes in virtual hub**:
- If you have pre-existing routes in Routing section for the hub in Azure portal, you will need to first delete them, then **upgrade** your Basic Virtual WAN to Standard Virtual WAN. See [Upgrade a virtual WAN from Basic to Standard](upgrade-virtual-wan.md).
+ If you have pre-existing routes in the Routing section for the hub in the Azure portal, you'll need to first delete them, then **upgrade** your Basic Virtual WAN to Standard Virtual WAN. See [Upgrade a virtual WAN from Basic to Standard](upgrade-virtual-wan.md).
## <a name="reset"></a>Hub reset
-Virtual hub **Reset** is available only in the Azure portal. Resetting provides you a way to bring any failed resources such as route tables, hub router, or the virtual hub resource itself back to its rightful provisioning state. Consider resetting the hub prior to contacting Microsoft for support. This operation does not reset any of the gateways in a virtual hub.
+Virtual hub **Reset** is available only in the Azure portal. Resetting provides a way to bring any failed resources such as route tables, hub router, or the virtual hub resource itself back to their proper provisioning state. Consider resetting the hub prior to contacting Microsoft for support. This operation doesn't reset any of the gateways in a virtual hub.
## <a name="considerations"></a>Additional considerations
Consider the following when configuring Virtual WAN routing:
* All branch connections (Point-to-site, Site-to-site, and ExpressRoute) need to be associated to the Default route table. That way, all branches will learn the same prefixes.
* All branch connections need to propagate their routes to the same set of route tables. For example, if you decide that branches should propagate to the Default route table, this configuration should be consistent across all branches. As a result, all connections associated to the Default route table will be able to reach all of the branches.
* Branch-to-branch via Azure Firewall is currently not supported.
-* When using Azure Firewall in multiple regions, all spoke virtual networks must be associated to the same route table. For example, having a subset of the VNets going through the Azure Firewall while other VNets bypass the Azure Firewall in the same virtual hub is not possible.
-* You may specify multiple next hop IP addresses on a single Virtual Network connection. However, Virtual Network Connection does not support 'multiple/unique' next hop IP to the 'same' network virtual appliance in a SPOKE Virtual Network 'if' one of the routes with next hop IP is indicated to be public IP address or 0.0.0.0/0 (internet)
-* All information pertaining to 0.0.0.0/0 route is confined to a local hub's route table. This route does not propagate across hubs.
-* You can only use Virtual WAN to program routes in a spoke if the prefix is shorter (less specific) than the virtual network prefix. For example, in the diagram above the spoke VNET1 has the prefix 10.1.0.0/16: in this case, Virtual WAN would not be able to inject a route that matches the virtual network prefix (10.1.0.0/16) or any of the subnets (10.1.0.0/24, 10.1.1.0/24). In other words, Virtual WAN cannot attract traffic between two subnets that are in the same virtual network.
+* When using Azure Firewall in multiple regions, all spoke virtual networks must be associated to the same route table. For example, having a subset of the VNets going through the Azure Firewall while other VNets bypass the Azure Firewall in the same virtual hub isn't possible.
+* You may specify multiple next hop IP addresses on a single Virtual Network connection. However, a Virtual Network connection doesn't support 'multiple/unique' next hop IPs to the 'same' network virtual appliance in a spoke Virtual Network if one of the routes with a next hop IP is a public IP address or 0.0.0.0/0 (internet).
+* All information pertaining to 0.0.0.0/0 route is confined to a local hub's route table. This route doesn't propagate across hubs.
+* You can only use Virtual WAN to program routes in a spoke if the prefix is shorter (less specific) than the virtual network prefix. For example, in the diagram above the spoke VNET1 has the prefix 10.1.0.0/16: in this case, Virtual WAN wouldn't be able to inject a route that matches the virtual network prefix (10.1.0.0/16) or any of the subnets (10.1.0.0/24, 10.1.1.0/24). In other words, Virtual WAN can't attract traffic between two subnets that are in the same virtual network.
## Next steps
virtual-wan Migrate From Hub Spoke Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/migrate-from-hub-spoke-topology.md
Title: 'Architecture: Migrate to Azure Virtual WAN' description: Learn how to migrate from an existing customer-managed hub-and-spoke topology, to a design that leverages Microsoft-managed Virtual WAN hubs.- Previously updated : 04/27/2021 Last updated : 06/14/2022
This article shows how to migrate an existing customer-managed hub-and-spoke env
## Scenario
-Contoso is a global financial organization with offices in both Europe and Asia. They are planning to move their existing applications from an on-premises data center in to Azure and have built out a foundation design based on the customer-managed hub-and-spoke architecture, including regional hub virtual networks for hybrid connectivity. As part of the move to cloud-based technologies, the network team have been tasked with ensuring that their connectivity is optimized for the business moving forward.
+Contoso is a global financial organization with offices in both Europe and Asia. They are planning to move their existing applications from an on-premises data center into Azure and have built out a foundation design based on the customer-managed hub-and-spoke architecture, including regional hub virtual networks for hybrid connectivity. As part of the move to cloud-based technologies, the network team has been tasked with ensuring that their connectivity is optimized for the business moving forward.
The following figure shows a high-level view of the existing global network including connectivity to multiple Azure regions.
The following points can be understood from the existing network topology:
## Requirements
-The networking team have been tasked with delivering a global network model that can support the Contoso migration to the cloud and must optimize in the areas of cost, scale, and performance. In summary, the following requirements are to be met:
+The networking team has been tasked with delivering a global network model that can support the Contoso migration to the cloud and must optimize in the areas of cost, scale, and performance. In summary, the following requirements are to be met:
* Provide both headquarters (HQ) and branch offices with an optimized path to cloud-hosted applications.
* Remove the reliance on existing on-premises data centers (DC) for VPN termination while retaining the following connectivity paths:
Prior to using the managed Virtual WAN hub for production connectivity, we recom
:::image type="content" source="./media/migrate-from-hub-spoke-topology/figure4.png" alt-text="Test hybrid connectivity via Virtual WAN"::: **Figure 4: Customer-managed hub-and-spoke to Virtual WAN migration**
-At this stage, it is important to recognize that both the original customer-managed hub virtual network and the new Virtual WAN Hub are both connected to the same ExpressRoute circuit. Due to this, we have a traffic path that can be used to enable spokes in both environments to communicate. For example, traffic from a spoke that is attached to the customer-managed hub virtual network will traverse the MSEE devices used for the ExpressRoute circuit to reach any spoke connected via a VNet connection to the new Virtual WAN hub. This allows a staged migration of spokes in Step 5.
+At this stage, it's important to recognize that both the original customer-managed hub virtual network and the new Virtual WAN Hub are both connected to the same ExpressRoute circuit. Due to this, we have a traffic path that can be used to enable spokes in both environments to communicate. For example, traffic from a spoke that is attached to the customer-managed hub virtual network will traverse the MSEE devices used for the ExpressRoute circuit to reach any spoke connected via a VNet connection to the new Virtual WAN hub. This allows a staged migration of spokes in Step 5.
### Step 5: Transition connectivity to virtual WAN hub
We have now redesigned our Azure network to make the Virtual WAN hub the central
:::image type="content" source="./media/migrate-from-hub-spoke-topology/figure6.png" alt-text="Old hub becomes Shared Services spoke"::: **Figure 6: Customer-managed hub-and-spoke to Virtual WAN migration**
-Because the Virtual WAN hub is a managed entity and does not allow deployment of custom resources such as virtual machines, the shared services block now exists as a spoke virtual network and hosts functions such as internet ingress via Azure Application Gateway or network virtualized appliance. Traffic between the shared services environment and backend virtual machines now transits the Virtual WAN-managed hub.
+Because the Virtual WAN hub is a managed entity and doesn't allow deployment of custom resources such as virtual machines, the shared services block now exists as a spoke virtual network and hosts functions such as internet ingress via Azure Application Gateway or network virtualized appliance. Traffic between the shared services environment and backend virtual machines now transits the Virtual WAN-managed hub.
### Step 7: Optimize on-premises connectivity to fully utilize Virtual WAN
virtual-wan Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-tenant.md
Title: 'Azure AD tenant for User VPN connections: Azure AD authentication'
+ Title: 'Azure AD tenant for User VPN connections: Azure AD authentication -OpenVPN'
description: You can use Azure Virtual WAN User VPN (point-to-site) to connect to your VNet using Azure AD authentication - - Previously updated : 09/22/2020- Last updated : 06/14/2022+
-# Prepare Azure Active Directory tenant for User VPN OpenVPN protocol connections
-When connecting to your Virtual Hub over the IKEv2 protocol, you can use certificate-based authentication or RADIUS authentication. However, when you use the OpenVPN protocol, you can also use Azure Active Directory authentication. This article helps you set up an Azure AD tenant for Virtual WAN User VPN (point-to-site) using OpenVPN authentication.
+# Configure an Azure AD tenant for P2S User VPN OpenVPN protocol connections
+
+When you connect to your VNet using Virtual WAN User VPN (point-to-site), you have a choice of which protocol to use. The protocol you use determines the authentication options that are available to you. If you're using the OpenVPN protocol, Azure Active Directory authentication is one of the authentication options available for you to use. This article helps you configure an Azure AD tenant for Virtual WAN User VPN (point-to-site) using OpenVPN authentication.
-> [!NOTE]
-> Azure AD authentication is supported only for OpenVPN&reg; protocol connections.
->
## <a name="tenant"></a>1. Create the Azure AD tenant
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
* Organization name
* Initial domain name
-Example:
-
- ![New Azure AD tenant](./media/openvpn-create-azure-ad-tenant/newtenant.png)
- ## <a name="users"></a>2. Create Azure AD tenant users
-Next, create two user accounts in the newly created Azure AD tenant, one Global administrator account and one user account. The user account can be used to test OpenVPN authentication and the Global administrator account will be used to grant consent to the Azure VPN app registration. After you have created an Azure AD user account, you assign a **Directory Role** to the user in order to delegate administrative permissions.
-
-Use the steps in [this article](../active-directory/fundamentals/add-users-azure-active-directory.md) to create the two users for your Azure AD tenant. Be sure to change the **Directory Role** on one of the created accounts to **Global administrator**.
-
-## <a name="enable-authentication"></a>3. Grant consent to the Azure VPN app registration
-
-1. Sign in to the Azure Portal as a user that is assigned the **Global administrator** role.
-
-2. Next, grant admin consent for your organization, this allows the Azure VPN application to sign in and read user profiles. Copy and paste the URL that pertains to your deployment location in the address bar of your browser:
-
- Public
+1. Create two accounts in the newly created Azure AD tenant. For steps, see [Add or delete a new user](../active-directory/fundamentals/add-users-azure-active-directory.md).
- ```
- https://login.microsoftonline.com/common/oauth2/authorize?client_id=41b23e61-6c1e-4545-b367-cd054e0ed4b4&response_type=code&redirect_uri=https://portal.azure.com&nonce=1234&prompt=admin_consent
- ````
+ * Global administrator account
+ * User account
- Azure Government
+ The global administrator account will be used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
+1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
- ```
- https://login-us.microsoftonline.com/common/oauth2/authorize?client_id=51bb15d4-3a4f-4ebf-9dca-40096fe32426&response_type=code&redirect_uri=https://portal.azure.us&nonce=1234&prompt=admin_consent
- ````
-
- Microsoft Cloud Germany
-
- ```
- https://login-us.microsoftonline.de/common/oauth2/authorize?client_id=538ee9e6-310a-468d-afef-ea97365856a9&response_type=code&redirect_uri=https://portal.microsoftazure.de&nonce=1234&prompt=admin_consent
- ````
-
- Azure China 21Vianet
-
- ```
- https://https://login.chinacloudapi.cn/common/oauth2/authorize?client_id=49f817b6-84ae-4cc0-928c-73f27289b3aa&response_type=code&redirect_uri=https://portal.azure.cn&nonce=1234&prompt=admin_consent
- ```
-
-3. Select the **Global administrator** account if prompted.
-
- ![Directory ID](./media/openvpn-create-azure-ad-tenant/pick.png)
-
-4. Select **Accept** when prompted.
-
- ![Screenshot shows dialog box with the message Permissions requested Accept for your organization and additional information.](./media/openvpn-create-azure-ad-tenant/accept.jpg)
-
-5. Under your Azure AD, in **Enterprise applications**, you should now see **Azure VPN** listed.
+## <a name="enable-authentication"></a>3. Grant consent to the Azure VPN app registration
- ![Azure VPN](./media/openvpn-create-azure-ad-tenant/azurevpn.png)
## Next steps
-In order to connect to your virtual networks using Azure AD authentication, you must create a User VPN configuration and associate it to a Virtual Hub. See [Configure Azure AD authentication for Point-to-Site connection to Azure](virtual-wan-point-to-site-azure-ad.md).
+In order to connect to your virtual networks using Azure AD authentication, you must create a User VPN configuration and associate it to a Virtual Hub. See [Configure Azure AD authentication for point-to-site connection to Azure](virtual-wan-point-to-site-azure-ad.md).
virtual-wan Virtual Wan Global Transit Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-global-transit-network-architecture.md
Title: 'Architecture: Global transit network architecture' description: Learn how Azure Virtual WAN allows a global transit network architecture by enabling ubiquitous, any-to-any connectivity between globally distributed sets of cloud workloads in VNets, branch sites, SaaS and PaaS applications, and users.- Previously updated : 05/07/2020 Last updated : 06/14/2022
You can establish a virtual WAN by creating a single virtual WAN hub in the regi
## <a name="hubtohub"></a>Hub-to-hub connectivity
-An Enterprise cloud footprint can span multiple cloud regions and it is optimal (latency-wise) to access the cloud from a region closest to their physical site and users. One of the key principles of global transit network architecture is to enable cross-region connectivity between all cloud and on-premises network endpoints. This means that traffic from a branch that is connected to the cloud in one region can reach another branch or a VNet in a different region using hub-to-hub connectivity enabled by [Azure Global Network](https://azure.microsoft.com/global-infrastructure/global-network/).
+An Enterprise cloud footprint can span multiple cloud regions and it's optimal (latency-wise) to access the cloud from a region closest to their physical site and users. One of the key principles of global transit network architecture is to enable cross-region connectivity between all cloud and on-premises network endpoints. This means that traffic from a branch that is connected to the cloud in one region can reach another branch or a VNet in a different region using hub-to-hub connectivity enabled by [Azure Global Network](https://azure.microsoft.com/global-infrastructure/global-network/).
![cross-region](./media/virtual-wan-global-transit-network-architecture/figure3.png)
Additionally, hubs that are all part of the same virtual WAN, can be associated
## <a name="anytoany"></a>Any-to-any connectivity
-Global transit network architecture enables any-to-any connectivity via virtual WAN hubs. This architecture eliminates or reduces the need for full mesh or partial mesh connectivity between spokes, that are more complex to build and maintain. In addition, routing control in hub-and-spoke vs. mesh networks is easier to configure and maintain.
+Global transit network architecture enables any-to-any connectivity via virtual WAN hubs. This architecture eliminates or reduces the need for full mesh or partial mesh connectivity between spokes that are more complex to build and maintain. In addition, routing control in hub-and-spoke vs. mesh networks is easier to configure and maintain.
Any-to-any connectivity (in the context of a global architecture) allows an enterprise with globally distributed users, branches, datacenters, VNets, and applications to connect to each other through the "transit" hub(s). Azure Virtual WAN acts as the global transit system.
Azure Virtual WAN supports the following global transit connectivity paths. The
### Branch-to-VNet (a) and Branch-to-VNet Cross-region (g)
-Branch-to-VNet is the primary path supported by Azure Virtual WAN. This path allows you to connect branches to Azure IAAS enterprise workloads that are deployed in Azure VNets. Branches can be connected to the virtual WAN via ExpressRoute or site-to-site VPN. The traffic transits to VNets that are connected to the virtual WAN hubs via VNet Connections. Explicit [gateway transit](../virtual-network/virtual-network-peering-overview.md#gateways-and-on-premises-connectivity) is not required for Virtual WAN because Virtual WAN automatically enables gateway transit to branch site. See [Virtual WAN Partners](virtual-wan-configure-automation-providers.md) article on how to connect an SD-WAN CPE to Virtual WAN.
+Branch-to-VNet is the primary path supported by Azure Virtual WAN. This path allows you to connect branches to Azure IaaS enterprise workloads that are deployed in Azure VNets. Branches can be connected to the virtual WAN via ExpressRoute or site-to-site VPN. The traffic transits to VNets that are connected to the virtual WAN hubs via VNet connections. Explicit [gateway transit](../virtual-network/virtual-network-peering-overview.md#gateways-and-on-premises-connectivity) isn't required for Virtual WAN because Virtual WAN automatically enables gateway transit to the branch site. See the [Virtual WAN Partners](virtual-wan-configure-automation-providers.md) article for how to connect an SD-WAN CPE to Virtual WAN.
### ExpressRoute Global Reach and Virtual WAN
The Remote User-to-branch path lets remote users who are using a point-to-site c
### VNet-to-VNet transit (e) and VNet-to-VNet cross-region (h)
-The VNet-to-VNet transit enables VNets to connect to each other in order to interconnect multi-tier applications that are implemented across multiple VNets. Optionally, you can connect VNets to each other through VNet Peering and this may be suitable for some scenarios where transit via the VWAN hub is not necessary.
+The VNet-to-VNet transit enables VNets to connect to each other in order to interconnect multi-tier applications that are implemented across multiple VNets. Optionally, you can connect VNets to each other through VNet peering, which may be suitable for some scenarios where transit via the Virtual WAN hub isn't necessary.
## <a name="DefaultRoute"></a>Force tunneling and default route
Force Tunneling can be enabled by configuring the enable default route on a VPN,
A virtual hub propagates a learned default route to a virtual network/site-to-site VPN/ExpressRoute connection if the enable default flag is 'Enabled' on the connection.
-This flag is visible when the user edits a virtual network connection, a VPN connection, or an ExpressRoute connection. By default, this flag is disabled when a site or an ExpressRoute circuit is connected to a hub. It is enabled by default when a virtual network connection is added to connect a VNet to a virtual hub. The default route does not originate in the Virtual WAN hub; the default route is propagated if it is already learned by the Virtual WAN hub as a result of deploying a firewall in the hub, or if another connected site has forced-tunneling enabled.
+This flag is visible when the user edits a virtual network connection, a VPN connection, or an ExpressRoute connection. By default, this flag is disabled when a site or an ExpressRoute circuit is connected to a hub. It's enabled by default when a virtual network connection is added to connect a VNet to a virtual hub. The default route doesn't originate in the Virtual WAN hub; the default route is propagated if it is already learned by the Virtual WAN hub as a result of deploying a firewall in the hub, or if another connected site has forced-tunneling enabled.
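On a virtual network connection, the same flag can be toggled outside the portal. A sketch, assuming `Update-AzVirtualHubVnetConnection` exposes the `-EnableInternetSecurity` parameter and placeholder names throughout:

```azurepowershell
# Propagate the learned default route (0.0.0.0/0) to this VNet connection.
Update-AzVirtualHubVnetConnection -ResourceGroupName "<resource-group>" `
    -VirtualHubName "<hub-name>" -Name "<connection-name>" `
    -EnableInternetSecurity $true
```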
## <a name="security"></a>Security and policy control
The VNet-to-VNet secured transit enables VNets to connect to each other via the
### VNet-to-Internet or third-party Security Service (i)
-The VNet-to-Internet enables VNets to connect to the internet via the Azure Firewall in the virtual WAN hub. Traffic to internet via supported third-party security services does not flow through the Azure Firewall. You can configure Vnet-to-Internet path via supported third-party security service using Azure Firewall Manager.
+The VNet-to-Internet path enables VNets to connect to the internet via the Azure Firewall in the virtual WAN hub. Traffic to the internet via supported third-party security services doesn't flow through the Azure Firewall. You can configure the VNet-to-Internet path via a supported third-party security service by using Azure Firewall Manager.
### Branch-to-Internet or third-party Security Service (j)
+The Branch-to-Internet path enables branches to connect to the internet via the Azure Firewall in the virtual WAN hub. Traffic to the internet via supported third-party security services doesn't flow through the Azure Firewall. You can configure the Branch-to-Internet path via a supported third-party security service by using Azure Firewall Manager.
+The Branch-to-Internet enables branches to connect to the internet via the Azure Firewall in the virtual WAN hub. Traffic to internet via supported third-party security services doesn't flow through the Azure Firewall. You can configure Branch-to-Internet path via supported third-party security service using Azure Firewall Manager.
### Branch-to-branch secured transit cross-region (f)
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
Title: 'Create an Azure AD tenant for P2S VPN connections: Azure AD authentication'
+ Title: 'Configure Azure AD tenant for P2S VPN connections: Azure AD authentication-OpenVPN'
description: Learn how to set up an Azure AD tenant for P2S Azure AD authentication - OpenVPN protocol.- - Previously updated : 04/21/2022 Last updated : 06/14/2022
-# Create an Azure AD tenant for P2S OpenVPN protocol connections
+# Configure an Azure AD tenant for P2S OpenVPN protocol connections
-When you connect to your VNet using Azure VPN Gateway Point-to-Site, you have a choice of which protocol to use. The protocol you use determines the authentication options that are available to you. If you want to use Azure Active Directory authentication, you can do so when using the OpenVPN protocol. This article helps you set up an Azure AD tenant. For more information about Point-to-Site protocols and authentication, see [About Point-to-Site VPN](point-to-site-about.md).
+When you connect to your VNet using the Azure VPN Gateway point-to-site VPN, you have a choice of which protocol to use. The protocol you use determines the authentication options that are available to you. If you're using the OpenVPN protocol, Azure Active Directory authentication is one of the authentication options available for you to use. This article helps you configure your AD tenant and P2S VPN gateway for Azure AD authentication. For more information about point-to-site protocols and authentication, see [About point-to-site VPN](point-to-site-about.md).
[!INCLUDE [OpenVPN note](../../includes/vpn-gateway-openvpn-auth-include.md)]
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
## <a name="users"></a>2. Create Azure AD tenant users
-Your Azure AD tenant needs the following accounts: a Global Admin account and a user account. The user account is used as your embedding account (service account). When you create an Azure AD tenant user account, you adjust the Directory role for the type of user that you want to create.
+1. Create two accounts in the newly created Azure AD tenant. For steps, see [Add or delete a new user](../active-directory/fundamentals/add-users-azure-active-directory.md).
-Use the steps in [Add or delete users - Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md) to create at least two users for your Azure AD tenant. Be sure to change the **Directory Role** to create the account types:
+ * Global administrator account
+ * User account
-* Global Admin
-* User
+ The global administrator account will be used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
+1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## <a name="enable-authentication"></a>3. Enable Azure AD authentication on the VPN gateway
-1. Locate the Tenant ID of the directory that you want to use for authentication. It's listed in the properties section of the Active Directory page. For help with finding your tenant ID, see [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
-
-1. Copy the Tenant ID.
-
-1. Sign in to the Azure portal as a user that is assigned the **Global administrator** role.
-
-1. Next, give admin consent. Copy and paste the URL that pertains to your deployment location in the address bar of your browser:
-
- Public
-
- ```
- https://login.microsoftonline.com/common/oauth2/authorize?client_id=41b23e61-6c1e-4545-b367-cd054e0ed4b4&response_type=code&redirect_uri=https://portal.azure.com&nonce=1234&prompt=admin_consent
- ````
-
- Azure Government
+### Enable the application
- ```
- https://login.microsoftonline.us/common/oauth2/authorize?client_id=51bb15d4-3a4f-4ebf-9dca-40096fe32426&response_type=code&redirect_uri=https://portal.azure.us&nonce=1234&prompt=admin_consent
- ````
- Microsoft Cloud Germany
+### Configure P2S gateway settings
- ```
- https://login-us.microsoftonline.de/common/oauth2/authorize?client_id=538ee9e6-310a-468d-afef-ea97365856a9&response_type=code&redirect_uri=https://portal.microsoftazure.de&nonce=1234&prompt=admin_consent
- ````
+1. Locate the tenant ID of the directory that you want to use for authentication. It's listed in the properties section of the Active Directory page. For help with finding your tenant ID, see [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
- Azure China 21Vianet
-
- ```
- https://login.chinacloudapi.cn/common/oauth2/authorize?client_id=49f817b6-84ae-4cc0-928c-73f27289b3aa&response_type=code&redirect_uri=https://portal.azure.cn&nonce=1234&prompt=admin_consent
- ```
-
- > [!NOTE]
- > If you using a global admin account that is not native to the Azure AD tenant to provide consent, please replace "common" with the Azure AD tenant ID in the URL. You may also have to replace "common" with your tenant ID in certain other cases as well.
- >
-
-1. Select the **Global Admin** account if prompted.
-
- :::image type="content" source="./media/openvpn-create-azure-ad-tenant/pick.png" alt-text="Screnshot showing Pick an account page." border="false":::
-1. Select **Accept** when prompted.
-
- :::image type="content" source="./media/openvpn-create-azure-ad-tenant/accept.jpg" alt-text="Screenshot shows the message Permissions requested Accept for your organization with details and the option to accept." border="false":::
-1. Under your Azure AD, in **Enterprise applications**, you see **Azure VPN** listed.
-
- :::image type="content" source="./media/openvpn-create-azure-ad-tenant/azurevpn.png" alt-text="Screenshot that shows the All applications page." lightbox="./media/openvpn-create-azure-ad-tenant/azurevpn.png" :::
1. If you don't already have a functioning point-to-site environment, follow the instructions to create one. See [Create a point-to-site VPN](vpn-gateway-howto-point-to-site-resource-manager-portal.md) to create and configure a point-to-site VPN gateway.

   > [!IMPORTANT]
   > The Basic SKU is not supported for OpenVPN.
-1. Enable Azure AD authentication on the VPN gateway by navigating to **Point-to-site configuration** and picking **OpenVPN (SSL)** as the **Tunnel type**. Select **Azure Active Directory** as the **Authentication type**, then fill in the information under the **Azure Active Directory** section.
+1. Enable Azure AD authentication on the VPN gateway by navigating to **Point-to-site configuration** and picking **OpenVPN (SSL)** as the **Tunnel type**. Select **Azure Active Directory** as the **Authentication type**, then fill in the information under the **Azure Active Directory** section. Replace {AzureAD TenantID} with your tenant ID.
* **Tenant:** TenantID for the Azure AD tenant
  * Enter `https://login.microsoftonline.com/{AzureAD TenantID}/` for Azure Public AD
Use the steps in [Add or delete users - Azure Active Directory](../active-direct
* **Issuer**: URL of the Secure Token Service `https://sts.windows.net/{AzureAD TenantID}/`
- :::image type="content" source="./media/openvpn-create-azure-ad-tenant/azure-ad-auth-portal.png" alt-text="Screenshot showing settings for Tunnel type, Authentication type, and Azure Active Directory settings." border="false":::
+ :::image type="content" source="./media/openvpn-create-azure-ad-tenant/configuration.png" alt-text="Screenshot showing settings for Tunnel type, Authentication type, and Azure Active Directory settings.":::
+
+ > [!NOTE]
+ > Make sure you include a trailing slash at the end of the `AadIssuerUri` **Issuer** value. Otherwise, the connection may fail.
+ >
- > [!NOTE]
- > Make sure you include a trailing slash at the end of the `AadIssuerUri` value. Otherwise, the connection may fail.
- >
+1. Save your changes.
1. Create and download the profile by clicking on the **Download VPN client** link.
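The same three values can also be applied with Azure PowerShell instead of the portal. A hedged sketch, assuming the Az.Network `Set-AzVirtualNetworkGateway` Azure AD parameters and using the Azure Public application ID that appears in this article's consent URL; replace `<tenant-id>` with your tenant ID:

```azurepowershell
$gateway = Get-AzVirtualNetworkGateway -Name "<gateway-name>" `
    -ResourceGroupName "<resource-group>"

# Tenant, Audience (Azure Public Azure VPN application ID), and Issuer mirror
# the portal fields above; note the trailing slashes on the URIs.
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gateway `
    -AadTenantUri "https://login.microsoftonline.com/<tenant-id>/" `
    -AadAudienceId "41b23e61-6c1e-4545-b367-cd054e0ed4b4" `
    -AadIssuerUri "https://sts.windows.net/<tenant-id>/"
```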
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Previously updated : 04/07/2022 Last updated : 06/15/2022 # Web Application Firewall DRS rule groups and rules
Front Door.
|932110|Remote Command Execution: Windows Command Injection|
|932115|Remote Command Execution: Windows Command Injection|
|932120|Remote Command Execution: Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
|932140|Remote Command Execution: Windows FOR/IF Command Found|
|932150|Remote Command Execution: Direct Unix Command Execution|
|932160|Remote Command Execution: Unix Shell Code Found|
Front Door.
|932110|Remote Command Execution: Windows Command Injection|
|932115|Remote Command Execution: Windows Command Injection|
|932120|Remote Command Execution: Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
|932140|Remote Command Execution: Windows FOR/IF Command Found|
|932150|Remote Command Execution: Direct Unix Command Execution|
|932160|Remote Command Execution: Shellshock (CVE-2014-6271)|
Front Door.
|932110|Remote Command Execution: Windows Command Injection|
|932115|Remote Command Execution: Windows Command Injection|
|932120|Remote Command Execution: Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
|932140|Remote Command Execution: Windows FOR/IF Command Found|
|932150|Remote Command Execution: Direct Unix Command Execution|
|932160|Remote Command Execution: Unix Shell Code Found|
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
description: This article provides information on Web Application Firewall exclu
Previously updated : 05/18/2022 Last updated : 06/13/2022
az network application-gateway waf-policy managed-rule exclusion rule-set add \
# [Bicep](#tab/bicep) ```bicep
-resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies@2021-05-01' = {
+resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies@2021-08-01' = {
name: wafPolicyName location: location properties: {
resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPo
```json { "type": "Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies",
- "apiVersion": "2021-05-01",
+ "apiVersion": "2021-08-01",
"name": "[parameters('wafPolicyName')]", "location": "[parameters('location')]", "properties": {
resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPo
+You can also exclude the `User-Agent` header from evaluation by rule 942270 only:
+
+# [Azure portal](#tab/portal)
+
+Follow the steps described in the preceding example, and select rule 942270 in step 4.
+
+# [Azure PowerShell](#tab/powershell)
+
+```azurepowershell
+$ruleEntry = New-AzApplicationGatewayFirewallPolicyExclusionManagedRule `
+ -Rule '942270'
+
+$ruleGroupEntry = New-AzApplicationGatewayFirewallPolicyExclusionManagedRuleGroup `
+ -RuleGroupName 'REQUEST-942-APPLICATION-ATTACK-SQLI' `
+ -Rule $ruleEntry
+
+$exclusionManagedRuleSet = New-AzApplicationGatewayFirewallPolicyExclusionManagedRuleSet `
+ -RuleSetType 'OWASP' `
+ -RuleSetVersion '3.2' `
+ -RuleGroup $ruleGroupEntry
+
+$exclusionEntry = New-AzApplicationGatewayFirewallPolicyExclusion `
+ -MatchVariable "RequestHeaderValues" `
+ -SelectorMatchOperator 'Equals' `
+ -Selector 'User-Agent' `
+ -ExclusionManagedRuleSet $exclusionManagedRuleSet
+
+$wafPolicy = Get-AzApplicationGatewayFirewallPolicy `
+ -Name $wafPolicyName `
+ -ResourceGroupName $resourceGroupName
+$wafPolicy.ManagedRules[0].Exclusions.Add($exclusionEntry)
+$wafPolicy | Set-AzApplicationGatewayFirewallPolicy
+```
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az network application-gateway waf-policy managed-rule exclusion rule-set add \
+ --resource-group $resourceGroupName \
+ --policy-name $wafPolicyName \
+ --type OWASP \
+ --version 3.2 \
+ --group-name 'REQUEST-942-APPLICATION-ATTACK-SQLI' \
+ --rule-ids 942270 \
+ --match-variable 'RequestHeaderValues' \
+ --match-operator 'Equals' \
+ --selector 'User-Agent'
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies@2021-08-01' = {
+ name: wafPolicyName
+ location: location
+ properties: {
+ managedRules: {
+ managedRuleSets: [
+ {
+ ruleSetType: 'OWASP'
+ ruleSetVersion: '3.2'
+ }
+ ]
+ exclusions: [
+ {
+ matchVariable: 'RequestHeaderValues'
+ selectorMatchOperator: 'Equals'
+ selector: 'User-Agent'
+ exclusionManagedRuleSets: [
+ {
+ ruleSetType: 'OWASP'
+ ruleSetVersion: '3.2'
+ ruleGroups: [
+ {
+ ruleGroupName: 'REQUEST-942-APPLICATION-ATTACK-SQLI'
+ rules: [
+ {
+ ruleId: '942270'
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+}
+```
+
+# [ARM template](#tab/armtemplate)
+
+```json
+{
+ "type": "Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies",
+ "apiVersion": "2021-08-01",
+ "name": "[parameters('wafPolicyName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "managedRules": {
+ "managedRuleSets": [
+ {
+ "ruleSetType": "OWASP",
+ "ruleSetVersion": "3.2"
+ }
+ ],
+ "exclusions": [
+ {
+ "matchVariable": "RequestHeaderValues",
+ "selectorMatchOperator": "Equals",
+ "selector": "User-Agent",
+ "exclusionManagedRuleSets": [
+ {
+ "ruleSetType": "OWASP",
+ "ruleSetVersion": "3.2",
+ "ruleGroups": [
+ {
+ "ruleGroupName": "REQUEST-942-APPLICATION-ATTACK-SQLI",
+ "rules": [
+ {
+ "ruleId": "942270"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+}
+```
+++

### Global exclusions

You can configure an exclusion to apply across all WAF rules.
The following example shows how you can exclude the `user` query string argument
# [Azure portal](#tab/portal)
-To configure a g;lobal exclusion by using the Azure portal, follow these steps:
+To configure a global exclusion by using the Azure portal, follow these steps:
1. Navigate to the WAF policy, and select **Managed rules**.
az network application-gateway waf-policy managed-rule exclusion add \
# [Bicep](#tab/bicep) ```bicep
-resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies@2021-05-01' = {
+resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies@2021-08-01' = {
name: wafPolicyName location: location properties: {
resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPo
```json { "type": "Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies",
- "apiVersion": "2021-05-01",
+ "apiVersion": "2021-08-01",
"name": "[parameters('wafPolicyName')]", "location": "[parameters('location')]", "properties": {
web-application-firewall Manage Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/manage-policies.md
Title: Use Azure Firewall Manager to manage Web Application Firewall policies (preview)
+ Title: Use Azure Firewall Manager to manage Web Application Firewall policies
description: Learn about managing Azure Web Application Firewall policies using Azure Firewall Manager Previously updated : 06/02/2022 Last updated : 06/15/2022
-# Configure WAF policies using Azure Firewall Manager (preview)
-
-> [!IMPORTANT]
-> Configure Web Application Firewall policies using Azure Firewall Manager is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Configure WAF policies using Azure Firewall Manager
Azure Firewall Manager is a platform to manage and protect your network security resources at scale. You can associate your WAF policies to an Application Gateway or Azure Front Door within Azure Firewall Manager, all in a single place.
To upgrade a WAF configuration to a WAF policy, select **Upgrade from WAF config
## Next steps

-- [Manage Azure Web Application Firewall policies (preview)](../../firewall-manager/manage-web-application-firewall-policies.md)
+- [Manage Azure Web Application Firewall policies](../../firewall-manager/manage-web-application-firewall-policies.md)