Updates from: 08/14/2021 03:16:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-sentinel.md
Now that you've enabled Sentinel you'll want to be notified when something suspi
You can create custom analytics rules to help you discover threats and anomalous behaviors that are present in your environment. These rules search for specific events or sets of events, alert you when certain event thresholds or conditions are reached, and then generate incidents for further investigation. > [!NOTE]
-> For a detailed review on Analytic Rules you can see this [Tutorial](../sentinel/tutorial-detect-threats-custom.md).
+> For a detailed review on Analytic Rules you can see this [Tutorial](/azure/active-directory-b2c/articles/sentinel/detect-threats-custom.md).
In our scenario, we want to receive a notification if someone is trying to force access to our environment but isn't successful. This could indicate a brute-force attack, so we want to be notified of **_2 or more unsuccessful sign-ins within 60 seconds_**.
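The threshold itself is easy to reason about. As a minimal illustration only (not the Sentinel rule definition, which is created in the portal), the following C# sketch applies the same two-failures-within-60-seconds test to a hypothetical list of failed sign-in timestamps:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class BruteForceCheck
{
    // Returns true when `threshold` or more failures fall within any window of `windowSeconds`.
    static bool IsSuspicious(IEnumerable<DateTime> failedSignIns, int threshold = 2, int windowSeconds = 60)
    {
        var ordered = failedSignIns.OrderBy(t => t).ToList();
        for (int i = 0; i < ordered.Count; i++)
        {
            // Count the failures that occur within windowSeconds of the i-th failure.
            int count = ordered.Count(t => t >= ordered[i] && t < ordered[i].AddSeconds(windowSeconds));
            if (count >= threshold)
            {
                return true;
            }
        }
        return false;
    }

    static void Main()
    {
        var first = new DateTime(2021, 8, 14, 3, 10, 5, DateTimeKind.Utc);
        var failures = new List<DateTime> { first, first.AddSeconds(40) };
        Console.WriteLine(IsSuspicious(failures)); // True: two failures 40 seconds apart
    }
}
```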
In our scenario, we want to receive a notification if someone is trying to force
An incident can include multiple alerts. It's an aggregation of all the relevant evidence for a specific investigation. You can set properties such as severity and status at the incident level. > [!NOTE]
- > For detailed review on Incident investigation please see [this Tutorial](../sentinel/tutorial-investigate-cases.md)
+ > For detailed review on Incident investigation please see [this Tutorial](/azure/active-directory-b2c/articles/sentinel/investigate-cases.md)
To begin the investigation, select a specific incident. On the right, you can see detailed information for the incident including its severity, entities involved, the raw events that triggered the incident, and the incident's unique ID.
Once the Playbook is configured, you'll have to just edit the existing rule and
- To help with data analysis and creation of rich visual reports, choose and download from a gallery of expertly created workbooks that surface insights based on your data. [These workbooks](https://github.com/azure-ad-b2c/siem#workbooks) can be easily customized to your needs. -- Learn more about Sentinel in the [Azure Sentinel documentation](../sentinel/index.yml)
+- Learn more about Sentinel in the [Azure Sentinel documentation](../sentinel/index.yml)
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
When using custom domains, consider the following:
## Step 1. Add a custom domain name to your Azure AD B2C tenant
-Follow the guidance for how to [add and validate your custom domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md). After the domain is verified, delete the DNS TXT record you created.
+Every new Azure AD B2C tenant comes with an initial domain name, <domainname>.onmicrosoft.com. You can't change or delete the initial domain name, but you can add a custom domain.
-> [!IMPORTANT]
-> For these steps, be sure to sign in to your **Azure AD B2C** tenant and select the **Azure Active Directory** service.
+Follow these steps to add a custom domain to your Azure AD B2C tenant:
+
+1. [Add your custom domain name to Azure AD](../active-directory/fundamentals/add-custom-domain.md#add-your-custom-domain-name-to-azure-ad).
+ > [!IMPORTANT]
+ > For these steps, be sure to sign in to your **Azure AD B2C** tenant and select the **Azure Active Directory** service.
-Verify each subdomain you plan to use. Verifying just the top-level domain isn't sufficient. For example, to be able to sign-in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*.
+1. [Add your DNS information to the domain registrar](../active-directory/fundamentals/add-custom-domain.md#add-your-dns-information-to-the-domain-registrar). After you add your custom domain name to Azure AD, create a DNS `TXT` or `MX` record for your domain. Creating this DNS record for your domain verifies ownership of your domain name.
+
+ The following examples demonstrate TXT records for *login.contoso.com* and *account.contoso.com*:
+
+ |Name (hostname) |Type |Data |
+ ||||
+ |login | TXT | MS=ms12345678 |
+ |account | TXT | MS=ms87654321 |
+
+   The TXT record must be associated with the subdomain, or hostname, of the domain. For example, the *login* part of the *contoso.com* domain. If the hostname is empty or `@`, Azure AD will not be able to verify the custom domain you added. In the following examples, both records are configured incorrectly.
+
+ |Name (hostname) |Type |Data |
+ ||||
+ | | TXT | MS=ms12345678 |
+ | @ | TXT | MS=ms12345678 |
+
+ > [!TIP]
+ > You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use [Azure DNS zone](../dns/dns-getstarted-portal.md), or [App Service domains](../app-service/manage-custom-dns-buy-domain.md).
+
+1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain, or hostname, you plan to use. Verifying just the top-level domain isn't sufficient. For example, to be able to sign in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*.
+
+ After the domain is verified, **delete** the DNS TXT record you created.
-> [!TIP]
-> You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use App Service domains. To use App Service domains:
->
-> 1. [Buy a custom domain name](../app-service/manage-custom-dns-buy-domain.md).
-> 1. [Add your custom domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md).
-> 1. Validate the domain name by [managing custom DNS records](../app-service/manage-custom-dns-buy-domain.md#manage-custom-dns-records).
## Step 2. Create a new Azure Front Door instance
After you add the custom domain and configure your application, users will still
1. Make sure the [custom domain](../frontdoor/front-door-custom-domain.md) is configured properly. The `CNAME` record for your custom domain must point to your Azure Front Door default frontend host (for example, contoso.azurefd.net). 1. Make sure the [Azure Front Door backend pool configuration](#22-add-backend-and-backend-pool) points to the tenant where you set up the custom domain name, and where your user flow or custom policies are stored. +
+### Azure AD B2C returns the resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
+
+- **Symptom** - After you configure a custom domain, when you try to sign in with the custom domain, you get *the resource you are looking for has been removed, had its name changed, or is temporarily unavailable* error message.
+- **Possible causes** - This issue could be related to the Azure AD custom domain verification.
+- **Resolution**: Make sure the custom domain is [registered and **successfully verified**](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant.
+ ### Identity provider returns an error - **Symptom** - After you configure a custom domain, you're able to sign in with local accounts. But when you sign in with credentials from external [social or enterprise identity providers](add-identity-provider.md), the identity provider presents an error message. - **Possible causes** - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI is not yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message. - **Resolution** - Follow the steps in [Configure your identity provider](#configure-your-identity-provider) to add the new redirect URI. - ## Frequently asked questions ### Can I use Azure Front Door advanced configuration, such as *Web application firewall Rules*?
active-directory-b2c Enable Authentication Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-spa-app-options.md
const msalConfig = {
myMSALObj.loginPopup(loginRequest); ```
+## Secure your logout redirect
+
+After logout, the user is redirected to the URI specified in the `post_logout_redirect_uri` parameter, regardless of the reply URLs that have been specified for the application. However, if a valid `id_token_hint` is passed and the [Require ID Token in logout requests](session-behavior.md#secure-your-logout-redirect) setting is turned on, Azure AD B2C verifies that the value of `post_logout_redirect_uri` matches one of the application's configured redirect URIs before performing the redirect. If no matching reply URL was configured for the application, an error message is displayed and the user is not redirected.
+
+To support a secured logout redirect URI, follow the steps below:
+
+1. Create a globally accessible variable to store the `id_token`.
+ ```javascript
+ let id_token = "";
+ ```
+
+1. In the MSAL `handleResponse` function, parse the `id_token` from the `authenticationResult` object into the `id_token` variable.
+ ```javascript
+ function handleResponse(response) {
+ if (response !== null) {
+ setAccount(response.account);
+ id_token = response.idToken;
+ } else {
+ selectAccount();
+ }
+ }
+ ```
+
+1. In the `signOut` function, add the `id_token_hint` parameter to the **logoutRequest** object.
+ ```javascript
+ function signOut() {
+ const logoutRequest = {
+ postLogoutRedirectUri: msalConfig.auth.redirectUri,
+ //set id_token_hint to the id_token value
+ idTokenHint : id_token,
+ mainWindowRedirectUri: msalConfig.auth.redirectUri
+ };
+ myMSALObj.logoutPopup(logoutRequest);
+ }
+ ```
+
+In the above example, the **post_logout_redirect_uri** passed into the logout request will be in the format: `https://your-app.com/`. This URL must be added to the application registration's reply URLs.
+ ## Enable single logout Single logout in Azure AD B2C uses OpenId Connect front-channel logout to make logout requests to all applications the user has signed into through Azure AD B2C.
active-directory-b2c Enable Authentication Web Application Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-application-options.md
Complete the [Support advanced scenarios](#support-advanced-scenarios) procedure
private async Task OnRedirectToIdentityProviderFunc(RedirectContext context) { // Read the custom parameter
- var campaign_id = (context.Properties.Items.ContainsKey("campaign_id"))
-
+ var campaign_id = context.Properties.Items.FirstOrDefault(x => x.Key == "campaign_id").Value;
+ // Add your custom code here
+ if (campaign_id != null)
+ {
+ // Send parameter to authentication request
+ context.ProtocolMessage.SetParameter("campaign_id", campaign_id);
+ }
await Task.CompletedTask.ConfigureAwait(false); } ```
+## Secure your logout redirect
+
+After logout, the user is redirected to the URI specified in the `post_logout_redirect_uri` parameter, regardless of the reply URLs that have been specified for the application. However, if a valid `id_token_hint` is passed and the [Require ID Token in logout requests](session-behavior.md#secure-your-logout-redirect) setting is turned on, Azure AD B2C verifies that the value of `post_logout_redirect_uri` matches one of the application's configured redirect URIs before performing the redirect. If no matching reply URL was configured for the application, an error message is displayed and the user is not redirected.
+
+To support a secured logout redirect in your application, first follow the steps in the [Account controller](enable-authentication-web-application-options.md#add-the-account-controller) and [Support advanced scenarios](#support-advanced-scenarios) sections. Then follow the steps below:
+
+1. In `MyAccountController.cs` controller, add a **SignOut** action using the following code snippet:
+
+ ```csharp
+ [HttpGet("{scheme?}")]
+ public async Task<IActionResult> SignOutAsync([FromRoute] string scheme)
+ {
+ scheme ??= OpenIdConnectDefaults.AuthenticationScheme;
+
+ //obtain the id_token
+ var idToken = await HttpContext.GetTokenAsync("id_token");
+    //create authentication properties (Microsoft.AspNetCore.Authentication) and pass the id_token value to the authentication middleware
+    var properties = new AuthenticationProperties();
+    properties.Items["id_token_hint"] = idToken;
+
+ return SignOut(properties,CookieAuthenticationDefaults.AuthenticationScheme,scheme);
+ }
+ ```
+
+1. In the **Startup.cs** class, parse the `id_token_hint` value and append the value to the authentication request. The following code snippet demonstrates how to pass the `id_token_hint` value to the authentication request:
+
+ ```csharp
+ private async Task OnRedirectToIdentityProviderFunc(RedirectContext context)
+ {
+ var id_token_hint = context.Properties.Items.FirstOrDefault(x => x.Key == "id_token_hint").Value;
+ if (id_token_hint != null)
+ {
+ // Send parameter to authentication request
+ context.ProtocolMessage.SetParameter("id_token_hint", id_token_hint);
+ }
+
+ await Task.CompletedTask.ConfigureAwait(false);
+ }
+ ```
+
+1. In the `ConfigureServices` function, add the `SaveTokens` option so that **Controllers** have access to the `id_token` value:
+
+ ```csharp
+ services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApp(options =>
+ {
+ Configuration.Bind("AzureAdB2C", options);
+ options.Events ??= new OpenIdConnectEvents();
+ options.Events.OnRedirectToIdentityProvider += OnRedirectToIdentityProviderFunc;
+ options.SaveTokens = true;
+ });
+ ```
+
+1. In the **appsettings.json** configuration file, add your logout redirect URI path to the `SignedOutCallbackPath` key.
+
+ ```json
+ "AzureAdB2C": {
+ "Instance": "https://<your-tenant-name>.b2clogin.com",
+ "ClientId": "<web-app-application-id>",
+ "Domain": "<your-b2c-domain>",
+ "SignedOutCallbackPath": "/signout/<your-sign-up-in-policy>",
+ "SignUpSignInPolicyId": "<your-sign-up-in-policy>"
+ }
+ ```
+
+In the above example, the **post_logout_redirect_uri** passed into the logout request will be in the format: `https://your-app.com/signout/<your-sign-up-in-policy>`. This URL must be added to the application registration's reply URLs.
+ ## Role-based access control With [authorization in ASP.NET Core](/aspnet/core/security/authorization/introduction) you can use [role-based authorization](/aspnet/core/security/authorization/roles), [claims-based authorization](/aspnet/core/security/authorization/claims), or [policy-based authorization](/aspnet/core/security/authorization/policies) to check if the user is authorized to access a protected resource.
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
This article describes how to set up a federation with another Azure AD B2C tena
### Verify the application's publisher domain As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../active-directory/develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../active-directory/develop/publisher-verification-overview.md) about this change.) Note that for Azure AD B2C user flows, the publisher's domain appears only when using a [Microsoft account](../active-directory-b2c/identity-provider-microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, do the following:
-1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options: - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../active-directory/develop/mark-app-as-publisher-verified.md). - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../active-directory/develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer). The UI for setting an app's verified publisher is currently disabled for Azure AD B2C tenants.
If the sign-in process is successful, your browser is redirected to `https://jwt
## Next steps
-Learn how to [pass the other Azure AD B2C token to your application](idp-pass-through-user-flow.md).
+Learn how to [pass the other Azure AD B2C token to your application](idp-pass-through-user-flow.md).
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
This article shows you how to enable sign-in for users from a specific Azure AD
### Verify the application's publisher domain As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../active-directory/develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../active-directory/develop/publisher-verification-overview.md) about this change.) Note that for Azure AD B2C user flows, the publisher's domain appears only when using a [Microsoft account](../active-directory-b2c/identity-provider-microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, do the following:
-1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options: - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../active-directory/develop/mark-app-as-publisher-verified.md). - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../active-directory/develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer). The UI for setting an app's verified publisher is currently disabled for Azure AD B2C tenants.
If the sign-in process is successful, your browser is redirected to `https://jwt
## Next steps
-Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
+Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
zone_pivot_groups: b2c-policy-type
### Verify the application's publisher domain As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../active-directory/develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../active-directory/develop/publisher-verification-overview.md) about this change.) Note that for Azure AD B2C user flows, the publisher's domain appears only when using a Microsoft account or other [Azure AD](../active-directory-b2c/identity-provider-azure-ad-single-tenant.md) tenant as the identity provider. To meet these new requirements, do the following:
-1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options: - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../active-directory/develop/mark-app-as-publisher-verified.md). - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../active-directory/develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer). The UI for setting an app's verified publisher is currently disabled for Azure AD B2C tenants.
You've now configured your policy so that Azure AD B2C knows how to communicate
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-biocatch.md
document.getElementById("clientSessionId").style.display = 'none';
</ClaimType>
- <ClaimsSchema>
+ </ClaimsSchema>
``` 5. Configure self-asserted claims provider for the client session ID field.
document.getElementById("clientSessionId").style.display = 'none';
<OutputClaims>
- <OutputClaim ClaimTypeReferenceId="clientSessionId" Required="false" DefaultValue="100"/>
+ <OutputClaim ClaimTypeReferenceId="clientSessionId" Required="false" DefaultValue="100"/>
</OutputClaims>
document.getElementById("clientSessionId").style.display = 'none';
<DisplayName>Technical profile for BioCatch API to return session information</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
<Metadata>
- <Item Key="ServiceUrl">https://biocatch-url.com/api/v6/score?customerID=<customerid>&amp;action=getScore&amp;uuid=<uuid>&amp;customerSessionID={clientSessionId}&amp;solution=ATO&amp;activtyType=<activity_type>&amp;brand=<brand></Item>
+ <Item Key="ServiceUrl">https://biocatch-url.com/api/v6/score?customerID=<customerid>&amp;action=getScore&amp;uuid=<uuid>&amp;customerSessionID={clientSessionId}&amp;solution=ATO&amp;activtyType=<activity_type>&amp;brand=<brand></Item>
<Item Key="SendClaimsIn">Url</Item>
document.getElementById("clientSessionId").style.display = 'none';
1. If the returned claim *risk* equals *low*, skip the MFA step; otherwise, require the user to complete MFA. ```XML
- <OrchestrationStep Order="8" Type="ClaimsExchange">
-
- <ClaimsExchanges>
+ <OrchestrationStep Order="8" Type="ClaimsExchange">
- <ClaimsExchange Id="clientSessionIdInput" TechnicalProfileReferenceId="login-NonInteractive-clientSessionId" />
+ <ClaimsExchanges>
- </ClaimsExchanges>
+ <ClaimsExchange Id="clientSessionIdInput" TechnicalProfileReferenceId="login-NonInteractive-clientSessionId" />
- </OrchestrationStep>
+ </ClaimsExchanges>
- <OrchestrationStep Order="9" Type="ClaimsExchange">
+ </OrchestrationStep>
- <ClaimsExchanges>
+ <OrchestrationStep Order="9" Type="ClaimsExchange">
- <ClaimsExchange Id="BcGetScore" TechnicalProfileReferenceId=" BioCatch-API-GETSCORE" />
+ <ClaimsExchanges>
- </ClaimsExchanges>
+ <ClaimsExchange Id="BcGetScore" TechnicalProfileReferenceId=" BioCatch-API-GETSCORE" />
- </OrchestrationStep>
+ </ClaimsExchanges>
- <OrchestrationStep Order="10" Type="ClaimsExchange">
+ </OrchestrationStep>
- <Preconditions>
+ <OrchestrationStep Order="10" Type="ClaimsExchange">
- <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
+ <Preconditions>
- <Value>riskLevel</Value>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
- <Value>LOW</Value>
+ <Value>riskLevel</Value>
- <Action>SkipThisOrchestrationStep</Action>
+ <Value>LOW</Value>
- </Precondition>
+ <Action>SkipThisOrchestrationStep</Action>
- </Preconditions>
+ </Precondition>
- <ClaimsExchanges>
+ </Preconditions>
- <ClaimsExchange Id="PhoneFactor-Verify" TechnicalProfileReferenceId="PhoneFactor-InputOrVerify" />
+ <ClaimsExchanges>
- </ClaimsExchanges>
+ <ClaimsExchange Id="PhoneFactor-Verify" TechnicalProfileReferenceId="PhoneFactor-InputOrVerify" />
+ </ClaimsExchanges>
``` 8. Configure the relying party configuration (optional)
document.getElementById("clientSessionId").style.display = 'none';
```XML <RelyingParty>
- <DefaultUserJourney ReferenceId="SignUpOrSignInMfa" />
+ <DefaultUserJourney ReferenceId="SignUpOrSignInMfa" />
- <UserJourneyBehaviors>
+ <UserJourneyBehaviors>
- <SingleSignOn Scope="Tenant" KeepAliveInDays="30" />
+ <SingleSignOn Scope="Tenant" KeepAliveInDays="30" />
- <SessionExpiryType>Absolute</SessionExpiryType>
+ <SessionExpiryType>Absolute</SessionExpiryType>
- <SessionExpiryInSeconds>1200</SessionExpiryInSeconds>
+ <SessionExpiryInSeconds>1200</SessionExpiryInSeconds>
- <ScriptExecution>Allow</ScriptExecution>
+ <ScriptExecution>Allow</ScriptExecution>
- </UserJourneyBehaviors>
+ </UserJourneyBehaviors>
- <TechnicalProfile Id="PolicyProfile">
+ <TechnicalProfile Id="PolicyProfile">
- <DisplayName>PolicyProfile</DisplayName>
+ <DisplayName>PolicyProfile</DisplayName>
- <Protocol Name="OpenIdConnect" />
-
- <OutputClaims>
+ <Protocol Name="OpenIdConnect" />
- <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="displayName" />
- <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
- <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
- <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="email" />
- <OutputClaim ClaimTypeReferenceId="identityProvider" />                
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
- <OutputClaim ClaimTypeReferenceId="riskLevel" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" />
- <OutputClaim ClaimTypeReferenceId="score" />
+ <OutputClaim ClaimTypeReferenceId="riskLevel" />
- <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ <OutputClaim ClaimTypeReferenceId="score" />
- </OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
- <SubjectNamingInfo ClaimType="sub" />
+ </OutputClaims>
- </TechnicalProfile>
+ <SubjectNamingInfo ClaimType="sub" />
- </RelyingParty>
+ </TechnicalProfile>
+ </RelyingParty>
``` ## Integrate with Azure AD B2C
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-bloksec.md
To get started, you'll need:
- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -- An [Azure AD B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
- A BlokSec [trial account](https://bloksec.com/). -- If you haven't already done so, [register](/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
+- If you haven't already done so, [register](./tutorial-register-applications.md) a web application, [and enable ID token implicit grant](./tutorial-register-applications.md#enable-id-token-implicit-grant).
::: zone-end ::: zone pivot="b2c-custom-policy"
To get started, you'll need:
- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -- An [Azure AD B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
- A BlokSec [trial account](https://bloksec.com/). -- If you haven't already done so, [register](/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
+- If you haven't already done so, [register](./tutorial-register-applications.md) a web application, [and enable ID token implicit grant](./tutorial-register-applications.md#enable-id-token-implicit-grant).
-- Complete the steps in the [**Get started with custom policies in Azure Active Directory B2C**](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy).
+- Complete the steps in the [**Get started with custom policies in Azure Active Directory B2C**](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
::: zone-end ### Part 1 - Create an application registration in BlokSec
To get started, you'll need:
|SSO type | OIDC| |Logo URI |[https://bloksec.io/assets/AzureB2C.png](https://bloksec.io/assets/AzureB2C.png) a link to the image of your choice| |Redirect URIs | https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/oauth2/authresp<BR>**For Example**: 'https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp' <BR><BR>If you use a custom domain, enter https://**your-domain-name**/**your-tenant-name**.onmicrosoft.com/oauth2/authresp. <BR> Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant. |
- |Post log out redirect URIs |https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/**{policy}**/oauth2/v2.0/logout <BR> [Send a sign-out request](/azure/active-directory-b2c/openid-connect#send-a-sign-out-request). |
+ |Post log out redirect URIs |https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/**{policy}**/oauth2/v2.0/logout <BR> [Send a sign-out request](./openid-connect.md#send-a-sign-out-request). |
4. Once saved, select the newly created Azure AD B2C application to open the application configuration, select **Generate App Secret**.
You should now see BlokSec as a new OIDC Identity provider listed within your B2
For additional information, review the following articles: -- [Custom policies in Azure AD B2C](/azure/active-directory-b2c/custom-policy-overview)
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
::: zone-end ::: zone pivot="b2c-custom-policy" >[!NOTE]
->In Azure Active Directory B2C, [**custom policies**](/azure/active-directory-b2c/user-flow-overview) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](/azure/active-directory-b2c/user-flow-overview).
+>In Azure Active Directory B2C, [**custom policies**](./user-flow-overview.md) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](./user-flow-overview.md).
### Part 2 - Create a policy key
Select **Upload Custom Policy**, and then upload the two policy files that you c
1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-2. For **Application**, select a web application that you [previously registered](/azure/active-directory-b2c/tutorial-register-applications). The **Reply URL** should show `https://jwt.ms`.
+2. For **Application**, select a web application that you [previously registered](./tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
3. Select the **Run now** button.
If the sign-in process is successful, your browser is redirected to `https://jwt
For additional information, review the following articles: -- [Custom policies in Azure AD B2C](/azure/active-directory-b2c/custom-policy-overview)
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
::: zone-end
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-tenant.md
If you don't have an Azure subscription, create a [free account](https://azure.m
![Subscription tenant, Directory + Subscription filter with subscription tenant selected](media/tutorial-create-tenant/portal-01-pick-directory.png)
-1. Add **Microsoft.AzureActiveDirectory** as a resource provider for the Azure subscription you're using ([learn more](https://docs.microsoft.com/azure/azure-resource-manager/management/resource-providers-and-types?WT.mc_id=Portal-Microsoft_Azure_Support#register-resource-provider-1)):
+1. Add **Microsoft.AzureActiveDirectory** as a resource provider for the Azure subscription you're using ([learn more](../azure-resource-manager/management/resource-providers-and-types.md?WT.mc_id=Portal-Microsoft_Azure_Support#register-resource-provider-1)):
1. On the Azure portal menu or from the **Home** page, select **Subscriptions**. 2. Select your subscription, and then select **Resource providers**.
In this article, you learned how to:
Next, learn how to register a web application in your new tenant. > [!div class="nextstepaction"]
-> [Register your applications >](tutorial-register-applications.md)
+> [Register your applications >](tutorial-register-applications.md)
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/network-considerations.md
Previously updated : 07/06/2021 Last updated : 08/12/2021
A managed domain creates some networking resources during deployment. These reso
## Network security groups and required ports
-A [network security group (NSG)](../virtual-network/network-security-groups-overview.md) contains a list of rules that allow or deny network traffic to traffic in an Azure virtual network. A network security group is created when you deploy a managed domain that contains a set of rules that let the service provide authentication and management functions. This default network security group is associated with the virtual network subnet your managed domain is deployed into.
+A [network security group (NSG)](../virtual-network/network-security-groups-overview.md) contains a list of rules that allow or deny network traffic in an Azure virtual network. When you deploy a managed domain, a network security group is created with a set of rules that let the service provide authentication and management functions. This default network security group is associated with the virtual network subnet your managed domain is deployed into.
-The following network security group rules are required for the managed domain to provide authentication and management services. Don't edit or delete these network security group rules for the virtual network subnet your managed domain is deployed into.
+The following sections cover network security groups and Inbound and Outbound port requirements.
-| Port number | Protocol | Source | Destination | Action | Required | Purpose |
+### Inbound connectivity
+
+The following network security group Inbound rules are required for the managed domain to provide authentication and management services. Don't edit or delete these network security group rules for the virtual network subnet your managed domain is deployed into.
+
+| Inbound port number | Protocol | Source | Destination | Action | Required | Purpose |
|:--:|:--:|:-:|:--:|::|:--:|:--| | 5986 | TCP | AzureActiveDirectoryDomainServices | Any | Allow | Yes | Management of your domain. | | 3389 | TCP | CorpNetSaw | Any | Allow | Optional | Debugging for support. |
An Azure standard load balancer is created that requires these rules to be place
If needed, you can [create the required network security group and rules using Azure PowerShell](powershell-create-instance.md#create-a-network-security-group). > [!WARNING]
-> Don't manually edit these network resources and configurations. When you associate a misconfigured network security group or a user defined route table with the subnet in which the managed domain is deployed, you may disrupt Microsoft's ability to service and manage the domain. Synchronization between your Azure AD tenant and your managed domain is also disrupted.
+> When you associate a misconfigured network security group or a user defined route table with the subnet in which the managed domain is deployed, you may disrupt Microsoft's ability to service and manage the domain. Synchronization between your Azure AD tenant and your managed domain is also disrupted. Follow all listed requirements to avoid an unsupported configuration that could break sync, patching, or management.
> > If you use secure LDAP, you can add the required TCP port 636 rule to allow external traffic if needed. Adding this rule doesn't place your network security group rules in an unsupported state. For more information, see [Lock down secure LDAP access over the internet](tutorial-configure-ldaps.md#lock-down-secure-ldap-access-over-the-internet) >
-> Default rules for *AllowVnetInBound*, *AllowAzureLoadBalancerInBound*, *DenyAllInBound*, *AllowVnetOutBound*, *AllowInternetOutBound*, and *DenyAllOutBound* also exist for the network security group. Don't edit or delete these default rules.
->
-> The Azure SLA doesn't apply to deployments where an improperly configured network security group and/or user defined route tables have been applied that blocks Azure AD DS from updating and managing your domain.
+> The Azure SLA doesn't apply to deployments that are blocked from updates or management by an improperly configured network security group or user defined route table. A broken network configuration can also prevent security patches from being applied.
+
+### Outbound connectivity
+
+For Outbound connectivity, you can either keep **AllowVnetOutbound** and **AllowInternetOutBound** or restrict Outbound traffic by using ServiceTags listed in the following table. The ServiceTag for AzureUpdateDelivery must be added via [PowerShell](powershell-create-instance.md).
+
+Filtered Outbound traffic is not supported on Classic deployments.
++
+| Outbound port number | Protocol | Source | Destination | Action | Required | Purpose |
+|:--:|:--:|::|:-:|::|:--:|:-:|
+| 443 | TCP | Any | AzureActiveDirectoryDomainServices| Allow | Yes | Communication with the Azure AD Domain Services management service. |
+| 443 | TCP | Any | AzureMonitor | Allow | Yes | Monitoring of the virtual machines. |
+| 443 | TCP | Any | Storage | Allow | Yes | Communication with Azure Storage. |
+| 443 | TCP | Any | AzureActiveDirectory | Allow | Yes | Communication with Azure Active Directory. |
+| 443 | TCP | Any | AzureUpdateDelivery | Allow | Yes | Communication with Windows Update. |
+| 80 | TCP | Any | AzureFrontDoor.FirstParty | Allow | Yes | Download of patches from Windows Update. |
+| 443 | TCP | Any | GuestAndHybridManagement | Allow | Yes | Automated management of security patches. |
+ ### Port 5986 - management using PowerShell remoting
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Previously updated : 08/02/2021 Last updated : 08/10/2021
After you configure the ECMA host and provisioning agent, it's time to test conn
1. After you assign an agent, you need to wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes. 1. Ensure that you're using a valid certificate. Go to the **Settings** tab of the ECMA host to generate a new certificate. 1. Restart the provisioning agent by going to the taskbar on your VM by searching for the Microsoft Azure AD Connect provisioning agent. Right-click **Stop**, and then select **Start**.
- 1. When you provide the tenant URL in the Azure portal, ensure that it matches the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host.
+ 1. When you provide the tenant URL in the Azure portal, ensure that it matches the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL does not follow the expected format.
``` https://localhost:8585/ecma2host_connectorName/scim
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
TLS 1.2 Cipher Suites minimum bar:
### IP Ranges The Azure AD provisioning service currently operates under the IP Ranges for AzureActiveDirectory as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the AzureActiveDirectory tag to allow traffic from the Azure AD provisioning service into your application. Note that you will need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list).
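To sanity-check whether a computed address such as 40.126.25.32 is covered by a published range such as 40.126.0.0/18, a small IPv4 CIDR containment test can help. The following C# sketch is an illustrative helper and not part of any Azure SDK:

```csharp
using System;
using System.Net;

class CidrCheck
{
    // Returns true when the IPv4 address falls inside the CIDR range.
    static bool InRange(string address, string cidr)
    {
        var parts = cidr.Split('/');
        uint ip = ToUInt32(IPAddress.Parse(address));
        uint network = ToUInt32(IPAddress.Parse(parts[0]));
        int prefix = int.Parse(parts[1]);
        uint mask = prefix == 0 ? 0u : uint.MaxValue << (32 - prefix);
        return (ip & mask) == (network & mask);
    }

    static uint ToUInt32(IPAddress address)
    {
        var bytes = address.GetAddressBytes(); // big-endian for IPv4
        return ((uint)bytes[0] << 24) | ((uint)bytes[1] << 16) | ((uint)bytes[2] << 8) | bytes[3];
    }

    static void Main()
    {
        Console.WriteLine(InRange("40.126.25.32", "40.126.0.0/18")); // True
    }
}
```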
-Azure AD also supports an agent-based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, etc.). Customers can deploy a lightweight agent, which provides connectivity to Azure AD without opening any inbound ports, on a server in their private network. Learn more [here](/azure/active-directory/app-provisioning/on-premises-scim-provisioning).
+Azure AD also supports an agent-based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, etc.). Customers can deploy a lightweight agent, which provides connectivity to Azure AD without opening any inbound ports, on a server in their private network. Learn more [here](./on-premises-scim-provisioning.md).
## Build a SCIM endpoint
To help drive awareness and demand of our joint integration, we recommend you up
> [Writing expressions for attribute mappings](functions-for-customizing-application-data.md) > [Scoping filters for user provisioning](define-conditional-rules-for-provisioning-user-accounts.md) > [Account provisioning notifications](user-provisioning.md)
-> [List of tutorials on how to integrate SaaS apps](../saas-apps/tutorial-list.md)
+> [List of tutorials on how to integrate SaaS apps](../saas-apps/tutorial-list.md)
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/product | | OneSpan Inc. | ![y] | ![n]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido | | Thales Group | ![n] | ![y]| ![y]| ![n]| ![n] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
-| Thetis | ![y] | |[y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
+| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key | | TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ | | VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 04/13/2021 Last updated : 08/12/2021
To enable and configure fraud alerts, complete the following steps:
### View fraud reports
-Select **Azure Active Directory** > **Sign-ins** > **Authentication Details**. The fraud report is now part of the standard Azure AD Sign-ins report and it will show in the **"Result Detail"** as MFA denied, Fraud Code Entered.
+When a user reports fraud, the event shows up in the Sign-ins report (as a sign-in that was rejected by the user) and in the Audit logs.
+
+- To view fraud reports in the Sign-ins report, click **Azure Active Directory** > **Sign-ins** > **Authentication Details**. The fraud report is part of the standard Azure AD Sign-ins report and appears in the **Result Detail** as **MFA denied, Fraud Code Entered**.
+
+- To view fraud reports in the Audit logs, click **Azure Active Directory** > **Audit Logs**. The fraud report appears under Activity type **Fraud reported - user is blocked for MFA** or **Fraud reported - no action taken** based on the tenant-level settings for fraud report.
## Notifications
active-directory Authorization Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/authorization-basics.md
+
+ Title: Authorization basics | Azure
+
+description: Learn about the basics of authorization in the Microsoft identity platform.
+++
+
++++ Last updated : 07/23/2021++++
+#Customer intent: As an application developer, I want to understand the basic concepts of authorization in the Microsoft identity platform.
++
+# Authorization basics
+
+**Authorization** (sometimes abbreviated as *AuthZ*) is used to set permissions that are used to evaluate access to resources or functionality. In contrast, **authentication** (sometimes abbreviated as *AuthN*) is focused on proving that an entity like a user or service is indeed who they claim to be.
+
+Authorization can include specifying what functionality (or resources) an entity is allowed to access or what data that entity can access and what they can do with that data. This is often referred to as *access control*.
+
+> [!NOTE]
+> Authentication and authorization are concepts that are not limited to only users. Services or daemon applications are often built to make requests for resources as themselves rather than on behalf of a specific user. When discussing these topics, the term "entity" is used to refer to either a user or an application.
++
+## Authorization approaches
+
+There are several common approaches to handle authorization. [Role-based access control](./custom-rbac-for-developers.md) is currently the most common approach when using the Microsoft identity platform.
++
+### Authentication as authorization
+
+Possibly the simplest form of authorization is to grant or deny access based on whether the entity making a request has been authenticated. If the requestor can prove they're who they claim to be, they can access the protected resources or functionality.
+
+### Access control lists
+
+Authorization via access control lists (ACLs) involves maintaining explicit lists of specific entities who do or don't have access to a resource or functionality. ACLs offer finer control over authentication-as-authorization but become difficult to manage as the number of entities increases.
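As an illustration only, with hypothetical resource names and entity identifiers, an ACL can be modeled as an explicit allow list per resource:

```csharp
using System;
using System.Collections.Generic;

class AclDemo
{
    // Explicit allow lists: resource -> entities (users or apps) permitted to access it.
    static readonly Dictionary<string, HashSet<string>> Acl = new Dictionary<string, HashSet<string>>
    {
        ["reports/q3-forecast"] = new HashSet<string> { "user:alice@contoso.com", "app:reporting-service" },
        ["payroll/2021"]        = new HashSet<string> { "user:dana@contoso.com" }
    };

    static bool IsAllowed(string entity, string resource) =>
        Acl.TryGetValue(resource, out var allowed) && allowed.Contains(entity);

    static void Main()
    {
        Console.WriteLine(IsAllowed("user:alice@contoso.com", "reports/q3-forecast")); // True
        Console.WriteLine(IsAllowed("user:alice@contoso.com", "payroll/2021"));        // False
    }
}
```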
+
+### Role-based access control
+
+Role-based access control (RBAC) is possibly the most common approach to enforcing authorization in applications. When using RBAC, roles are defined to describe the kinds of activities an entity may perform. An application developer grants access to roles rather than to individual entities. An administrator can then assign roles to different entities to control which ones have access to what resources and functionality.
+
+In advanced RBAC implementations, roles may be mapped to collections of permissions, where a permission describes a granular action or activity that can be performed. Roles are then configured as combinations of permissions. You compute an entity's overall permission set for an application by taking the union of the permissions granted to the various roles the entity is assigned. A good example of this approach is the RBAC implementation that governs access to resources in Azure subscriptions.
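A minimal sketch of that pattern, using hypothetical role and permission names, maps each role to a set of permissions and takes the union across the roles assigned to an entity:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class RbacDemo
{
    // Hypothetical role definitions: each role maps to a set of granular permissions.
    static readonly Dictionary<string, string[]> RolePermissions = new Dictionary<string, string[]>
    {
        ["Reader"]      = new[] { "documents.read" },
        ["Contributor"] = new[] { "documents.read", "documents.write" },
        ["Approver"]    = new[] { "documents.read", "documents.approve" }
    };

    // The entity's effective permission set is the union of the permissions of its assigned roles.
    static HashSet<string> EffectivePermissions(IEnumerable<string> assignedRoles) =>
        assignedRoles.SelectMany(role => RolePermissions.GetValueOrDefault(role, Array.Empty<string>()))
                     .ToHashSet();

    static void Main()
    {
        var permissions = EffectivePermissions(new[] { "Contributor", "Approver" });
        Console.WriteLine(string.Join(", ", permissions));
        // documents.read, documents.write, documents.approve
    }
}
```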
+
+> [!NOTE]
+> [Application RBAC](./custom-rbac-for-developers.md) differs from [Azure RBAC](/azure/role-based-access-control/overview) and [Azure AD RBAC](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps you manage Azure resources. Azure AD RBAC allows you to manage Azure AD resources.
+
+### Attribute-based access control
+
+Attribute-based access control (ABAC) is a more fine-grained access control mechanism. In this approach, rules are applied to attributes of the entity, the resources being accessed, and the current environment to determine whether access to some resources or functionality is permitted. An example might be only allowing users who are managers to access files identified with a metadata tag of "managers during working hours only" during the hours of 9AM - 5PM on working days. In this case, access is determined by examining the user's attribute (status as manager), the resource's attribute (metadata tag on a file), and also an environment attribute (the current time).
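A sketch of that example rule, with hypothetical attribute names and values, might combine the user, resource, and environment attributes like this:

```csharp
using System;

class AbacDemo
{
    // Evaluates the example rule: managers may access resources tagged
    // "managers during working hours only", but only 9 AM - 5 PM on weekdays.
    static bool IsAccessAllowed(string userJobStatus, string resourceTag, DateTime now)
    {
        bool isManager = userJobStatus == "manager";
        bool restrictedTag = resourceTag == "managers during working hours only";
        bool workingHours = now.DayOfWeek != DayOfWeek.Saturday
                            && now.DayOfWeek != DayOfWeek.Sunday
                            && now.Hour >= 9 && now.Hour < 17;

        // Resources without the restricted tag are outside the scope of this rule; allow them here.
        if (!restrictedTag) return true;
        return isManager && workingHours;
    }

    static void Main()
    {
        var mondayMorning = new DateTime(2021, 8, 9, 10, 0, 0);
        Console.WriteLine(IsAccessAllowed("manager", "managers during working hours only", mondayMorning)); // True
        Console.WriteLine(IsAccessAllowed("author", "managers during working hours only", mondayMorning));  // False
    }
}
```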
+
+One advantage of ABAC is that more granular and dynamic access control can be achieved through rule and condition evaluations without the need to create large numbers of very specific roles and RBAC assignments.
+
+One method for achieving ABAC with Azure Active Directory is using [dynamic groups](../enterprise-users/groups-create-rule.md). Dynamic groups allow administrators to dynamically assign users to groups based on specific user attributes with desired values. For example, an Authors group could be created where all users with the job title Author are dynamically assigned to the Authors group. Dynamic groups can be used in combination with RBAC for authorization where you map roles to groups and dynamically assign users to groups.
+
+## Implementing authorization
+
+Authorization logic is often implemented within the applications or solutions where access control is required. In many cases, application development platforms offer middleware or other API solutions that simplify the implementation of authorization. Examples include use of the [AuthorizeAttribute](/aspnet/core/security/authorization/simple?view=aspnetcore-5.0&preserve-view=true) in ASP.NET or [Route Guards](./scenario-spa-sign-in.md?tabs=angular2#sign-in-with-a-pop-up-window) in Angular.
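For example, ASP.NET Core's `[Authorize]` attribute can require an authenticated entity, or membership in a specific role, before an action runs. The controller and role names in this sketch are hypothetical:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize] // every action requires an authenticated entity
public class OrdersController : Controller
{
    // Any signed-in user can view orders.
    public IActionResult Index() => View();

    // Only entities in the "Orders.Approver" role can approve.
    [Authorize(Roles = "Orders.Approver")]
    public IActionResult Approve(int id)
    {
        // ... approval logic ...
        return RedirectToAction(nameof(Index));
    }
}
```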
+
+For authorization approaches that rely on information about the authenticated entity, an application will evaluate information exchanged during authentication. For example, by using the information that was provided within a [security token](./security-tokens.md). For information not contained in a security token, an application might make extra calls to external resources.
+
+It's not strictly necessary for developers to embed authorization logic entirely within their applications. Instead, dedicated authorization services can be used to centralize authorization implementation and management.
++
+## Next steps
+
+- To learn about custom role-based access control implementation in applications, see [Role-based access control for application developers](./custom-rbac-for-developers.md).
+- To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](./application-model.md).
+- For an example of configuring simple authentication-based authorization, see [Configure your App Service or Azure Functions app to use Azure AD login](/azure/app-service/configure-authentication-provider-aad).
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-primary-refresh-token.md
A PRT can get a multi-factor authentication (MFA) claim in specific scenarios. W
* **MFA during WAM interactive sign in**: During a token request through WAM, if a user is required to do MFA to access the app, the PRT that is renewed during this interaction is imprinted with an MFA claim. * In this case, the MFA claim is not updated continuously, so the MFA duration is based on the lifetime set on the directory. * When a previous existing PRT and RT are used for access to an app, the PRT and RT will be regarded as the first proof of authentication. A new AT will be required with a second proof and an imprinted MFA claim. This will also issue a new PRT and RT.
-* **MFA during device registration**: If an admin has configured their device settings in Azure AD to [require MFA to register devices](device-management-azure-portal.md#configure-device-settings), the user needs to do MFA to complete the registration. During this process, the PRT that is issued to the user has the MFA claim obtained during the registration. This capability only applies to the registered owner of the device, not to other users who sign in to that device.
- * Similar to the WAM interactive sign in, the MFA claim is not updated continuously, so the MFA duration is based on the lifetime set on the directory.
Windows 10 maintains a partitioned list of PRTs for each credential. So, thereΓÇÖs a PRT for each of Windows Hello for Business, password, or smartcard. This partitioning ensures that MFA claims are isolated based on the credential used, and not mixed up during token requests.
The following diagrams illustrate the underlying details in issuing, renewing, a
| Step | Description | | :: | | | A | An application (for example, Outlook, OneNote etc.) initiates a token request to WAM. WAM, in turn, asks the Azure AD WAM plugin to service the token request. |
-| B | If a Refresh token for the application is already available, Azure AD WAM plugin uses it to request an access token. To provide proof of device binding, WAM plugin signs the request with the Session key. Azure AD validates the Session key and issues an access token and a new refresh token for the app, encrypted by the Session key. WAM plugin requests Cloud AP plugin to decrypt the tokens, which, in turn, requests the TPM to decrypt using the Session key, resulting in WAM plugin getting both the tokens. Next, WAM plugin provides only the access token to the application, while it re-encrypts the refresh token with DPAPI and stores it in its own cache |
+| B | If a Refresh token for the application is already available, Azure AD WAM plugin uses it to request an access token. To provide proof of device binding, WAM plugin signs the request with the Session key. Azure AD validates the Session key and issues an access token and a new refresh token for the app, encrypted by the Session key. WAM plugin requests CloudAP plugin to decrypt the tokens, which, in turn, requests the TPM to decrypt using the Session key, resulting in WAM plugin getting both the tokens. Next, WAM plugin provides only the access token to the application, while it re-encrypts the refresh token with DPAPI and stores it in its own cache |
| C | If a Refresh token for the application is not available, Azure AD WAM plugin uses the PRT to request an access token. To provide proof of possession, WAM plugin signs the request containing the PRT with the Session key. Azure AD validates the Session key signature by comparing it against the Session key embedded in the PRT, verifies that the device is valid, and issues an access token and a refresh token for the application. In addition, Azure AD can issue a new PRT (based on refresh cycle), all of them encrypted by the Session key. |
-| D | WAM plugin requests Cloud AP plugin to decrypt the tokens, which, in turn, requests the TPM to decrypt using the Session key, resulting in WAM plugin getting both the tokens. Next, WAM plugin provides only the access token to the application, while it re-encrypts the refresh token with DPAPI and stores it in its own cache. WAM plugin will use the refresh token going forward for this application. WAM plugin also gives back the new PRT to Cloud AP plugin, which validates the PRT with Azure AD before updating it in its own cache. Cloud AP plugin will use the new PRT going forward. |
+| D | WAM plugin requests CloudAP plugin to decrypt the tokens, which, in turn, requests the TPM to decrypt using the Session key, resulting in WAM plugin getting both the tokens. Next, WAM plugin provides only the access token to the application, while it re-encrypts the refresh token with DPAPI and stores it in its own cache. WAM plugin will use the refresh token going forward for this application. WAM plugin also gives back the new PRT to CloudAP plugin, which validates the PRT with Azure AD before updating it in its own cache. CloudAP plugin will use the new PRT going forward. |
| E | The Azure AD WAM plugin provides the newly issued access token to WAM, which, in turn, provides it back to the calling application| ### Browser SSO using PRT
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/azure-ad-account.md
Azure AD account is an identity provider option for your self-service sign-up us
## Verifying the application's publisher domain As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) Note that for Azure AD user flows, the publisher's domain appears only when using a [Microsoft account](microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, do the following:
-1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options: - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../develop/mark-app-as-publisher-verified.md). - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer).
active-directory Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/microsoft-account.md
Microsoft account is an identity provider option for your self-service sign-up u
## Verifying the application's publisher domain As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) Note that for Azure AD user flows, the publisher's domain appears only when using a Microsoft account or other [Azure AD tenant](azure-ad-account.md) as the identity provider. To meet these new requirements, do the following:
-1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options: - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../develop/mark-app-as-publisher-verified.md). - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer).
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
You can move SaaS applications that are currently federated with ADFS to Azure A
For more information, see – -- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](/azure/active-directory/manage-apps/migrate-adfs-apps-to-azure) and
+- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](../manage-apps/migrate-adfs-apps-to-azure.md) and
- [AD FS to Azure AD application migration playbook for developers](/samples/azure-samples/ms-identity-adfs-to-aad/ms-identity-dotnet-adfs-to-aad) ### Remove relying party trust
active-directory 8X8virtualoffice Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/8x8virtualoffice-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * 8x8 supports **SP and IDP** initiated SSO
+* 8x8 supports [**Automated** user provisioning and deprovisioning](8x8-provisioning-tutorial.md) (recommended).
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
active-directory Adobe Identity Management Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/adobe-identity-management-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Adobe Identity Management supports **SP** initiated SSO
+* Adobe Identity Management supports [**automated** user provisioning and deprovisioning](adobe-identity-management-provisioning-tutorial.md) (recommended).
## Adding Adobe Identity Management from the gallery
active-directory Asana Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/asana-tutorial.md
In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Asana supports **SP** initiated SSO
-* Asana supports [**Automated** user provisioning](asana-provisioning-tutorial.md)
+* Asana supports [**automated** user provisioning](asana-provisioning-tutorial.md)
## Add Asana from the gallery
active-directory Bluejeans Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bluejeans-provisioning-tutorial.md
# Tutorial: Configure BlueJeans for automatic user provisioning
-This tutorial describes the steps you need to perform in both BlueJeans and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [BlueJeans](https://www.bluejeans.com/pricing) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both BlueJeans and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [BlueJeans](https://www.bluejeans.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported > [!div class="checklist"]
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A BlueJeans tenant with [My Company](https://www.bluejeans.com/pricing) plan or better enabled.
+* A BlueJeans tenant with [My Company](https://www.bluejeans.com) plan or better enabled.
* A user account in BlueJeans with Admin permissions. * SCIM provisioning enabled in BlueJeans Enterprise.
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Cisco Webex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cisco-webex-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Cisco Webex Meetings supports **SP and IDP** initiated SSO.-
+* Cisco Webex Meetings supports [**Automated** user provisioning and deprovisioning](cisco-webex-provisioning-tutorial.md) (recommended).
* Cisco Webex Meetings supports **Just In Time** user provisioning. ## Adding Cisco Webex Meetings from the gallery
active-directory Clarizen Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/clarizen-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment. * Clarizen One supports **IDP** initiated SSO.
+* Clarizen One supports [**automated** user provisioning and deprovisioning](clarizen-one-provisioning-tutorial.md) (recommended).
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
active-directory Code42 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/code42-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Code42 supports **SP** initiated SSO.
+* Code42 supports [**automated user provisioning and deprovisioning**](code42-provisioning-tutorial.md) (recommended).
+ > [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
active-directory Figma Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/figma-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Figma supports **SP and IDP** initiated SSO.
+* Figma supports [**Automated** user provisioning and deprovisioning](figma-provisioning-tutorial.md) (recommended).
* Figma supports **Just In Time** user provisioning. ## Add Figma from the gallery
active-directory Grammarly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/grammarly-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Grammarly supports **IDP** initiated SSO.
+* Grammarly supports [**automated** user provisioning and deprovisioning](grammarly-provisioning-tutorial.md) (recommended).
* Grammarly supports **Just In Time** user provisioning. > [!NOTE]
active-directory Infor Cloud Suite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/infor-cloud-suite-tutorial.md
To configure Azure AD integration with Infor CloudSuite, you need the following
In this tutorial, you configure and test Azure AD single sign-on in a test environment. * Infor CloudSuite supports **SP and IDP** initiated SSO
+* Infor CloudSuite supports [**Automated** user provisioning and deprovisioning](infor-cloudsuite-provisioning-tutorial.md) (recommended).
* Infor CloudSuite supports **Just In Time** user provisioning ## Add Infor CloudSuite from the gallery
active-directory Keeperpasswordmanager Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/keeperpasswordmanager-tutorial.md
To configure Azure AD integration with Keeper Password Manager & Digital Vault,
In this tutorial, you configure and test Azure AD single sign-on in a test environment. * Keeper Password Manager & Digital Vault supports SP-initiated SSO.-
+* Keeper Password Manager supports [**Automated** user provisioning and deprovisioning](keeper-password-manager-digitalvault-provisioning-tutorial.md) (recommended).
* Keeper Password Manager & Digital Vault supports just-in-time user provisioning. ## Add Keeper Password Manager & Digital Vault from the gallery
active-directory Lucidchart Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/lucidchart-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Lucidchart supports **SP** initiated SSO
+* Lucidchart supports [**Automated** user provisioning and deprovisioning](lucidchart-provisioning-tutorial.md) (recommended).
* Lucidchart supports **Just In Time** user provisioning ## Add Lucidchart from the gallery
active-directory Miro Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/miro-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Miro supports **SP and IDP** initiated SSO and supports **Just In Time** user provisioning.
+* Miro supports [**Automated** user provisioning and deprovisioning](miro-provisioning-tutorial.md) (recommended).
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
active-directory Mondaycom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mondaycom-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * monday.com supports **SP and IDP** initiated SSO
+* monday.com supports [**automated** user provisioning and deprovisioning](mondaycom-provisioning-tutorial.md) (recommended).
* monday.com supports **Just In Time** user provisioning ## Add monday.com from the gallery
active-directory New Relic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/new-relic-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * New Relic by Account supports **SP** initiated SSO
+* New Relic supports [**automated user provisioning and deprovisioning**](new-relic-by-organization-provisioning-tutorial.md) (recommended).
+ ## Add New Relic by Account from the gallery
active-directory Officespace Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/officespace-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * OfficeSpace Software supports **SP** initiated SSO.
+* OfficeSpace Software supports [**automated user provisioning and deprovisioning**](officespace-software-provisioning-tutorial.md) (recommended).
* OfficeSpace Software supports **Just In Time** user provisioning. ## Add OfficeSpace Software from the gallery
active-directory Oracle Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/oracle-cloud-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Oracle Cloud Infrastructure Console supports **SP** initiated SSO.
+* Oracle Cloud Infrastructure Console supports [**Automated** user provisioning and deprovisioning](oracle-cloud-infrastructure-console-provisioning-tutorial.md) (recommended).
## Adding Oracle Cloud Infrastructure Console from the gallery
active-directory Oracle Fusion Erp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/oracle-fusion-erp-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Oracle Fusion ERP supports **SP** initiated SSO.
+* Oracle Fusion ERP supports [**Automated** user provisioning and deprovisioning](oracle-fusion-erp-provisioning-tutorial.md) (recommended).
## Add Oracle Fusion ERP from the gallery
active-directory Peakon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/peakon-tutorial.md
To configure Azure AD integration with Peakon, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment. * Peakon supports **SP** and **IDP** initiated SSO
+* Peakon supports [**automated** user provisioning and deprovisioning](peakon-provisioning-tutorial.md) (recommended).
## Adding Peakon from the gallery
active-directory Snowflake Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/snowflake-tutorial.md
To configure Azure AD integration with Snowflake, you need the following items:
In this tutorial, you will configure and test Azure AD single sign-on in a test environment. - Snowflake supports **SP and IDP** initiated SSO-- Snowflake supports [Automated user provisioning and deprovisioning](snowflake-provisioning-tutorial.md) (recommended)
+- Snowflake supports [automated user provisioning and deprovisioning](snowflake-provisioning-tutorial.md) (recommended)
## Adding Snowflake from the gallery
active-directory Splashtop Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/splashtop-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Splashtop supports **SP** initiated SSO.
+* Splashtop supports [**automated** user provisioning and deprovisioning](splashtop-provisioning-tutorial.md) (recommended).
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
active-directory Tableauonline Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tableauonline-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment. * Tableau Online supports **SP** initiated SSO
+* Tableau Online supports [**automated user provisioning and deprovisioning**](tableau-online-provisioning-tutorial.md) (recommended).
## Adding Tableau Online from the gallery
active-directory Teamviewer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/teamviewer-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * TeamViewer supports **SP** initiated SSO.
+* TeamViewer supports [**Automated** user provisioning and deprovisioning](teamviewer-provisioning-tutorial.md) (recommended).
## Add TeamViewer from the gallery
active-directory Wrike Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/wrike-tutorial.md
To configure Azure AD integration with Wrike, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment. * Wrike supports **SP** and **IDP** initiated SSO.-
+* Wrike supports [**automated** user provisioning and deprovisioning](wrike-provisioning-tutorial.md) (recommended).
* Wrike supports **Just In Time** user provisioning. > [!NOTE]
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/certificate-rotation.md
AKS generates and uses the following certificates, Certificate Authorities, and
* The `kubectl` client has a certificate for communicating with the AKS cluster. > [!NOTE]
-> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA to for signing, expire after two years and are automatically rotated when they expire. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
+> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA for signing, will expire after two years and are automatically rotated during an AKS version upgrade. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
> > Additionally, you can check the expiration date of your cluster's certificate. For example, the following bash command displays the client certificate details for the *myAKSCluster* cluster in resource group *rg* > ```console
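> # A hedged sketch, not the exact command from the source article: it assumes the kubeconfig user
> # name follows the clusterUser_<resourceGroup>_<clusterName> convention for *myAKSCluster* in resource group *rg*.
> kubectl config view --raw \
>   -o jsonpath="{.users[?(@.name == 'clusterUser_rg_myAKSCluster')].user.client-certificate-data}" \
>   | base64 -d | openssl x509 -noout -dates
> ```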
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-azure-cni.md
The following screenshot from the Azure portal shows an example of configuring t
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-> [!NOTE]
-> This preview feature is currently available in the following regions:
->
-> * East US
-> * East US 2
-> * North Central US
-> * West Central US
-> * West US
-> * West US 2
-> * Canada Central
-> * Australia East
-> * UK South
-> * North Europe
-> * West Europe
-> * Southeast Asia
-
-A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allotting pod IPs from a subnet separate from the subnet hosting the AKS cluster. It offers the following benefits:
+A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allotting pod IPs from a subnet separate from the subnet hosting the AKS cluster. It offers the following benefits:
* **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node.
A drawback with the traditional CNI is the exhaustion of pod IP addresses as the
* **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this new solution.
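To make the separate-pod-subnet idea concrete, here is a minimal sketch of creating such a cluster. The resource names and subnet IDs are placeholders, and `--pod-subnet-id` assumes the *aks-preview* extension described in the following section:

```azurecli-interactive
# Assumed placeholders: $NODE_SUBNET_ID and $POD_SUBNET_ID are resource IDs of two subnets
# in the same virtual network; nodes get IPs from the first, pods from the second.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id $NODE_SUBNET_ID \
  --pod-subnet-id $POD_SUBNET_ID
```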
+### Additional prerequisites
+
+The [prerequisites][prerequisites] already listed for Azure CNI still apply, but there are a few additional limitations:
+
+* Only Linux node clusters and node pools are supported.
+* AKS Engine and DIY clusters are not supported.
+ ### Install the `aks-preview` Azure CLI You will need the *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
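A hedged example of the two commands referenced above:

```azurecli-interactive
# Install the aks-preview extension
az extension add --name aks-preview

# Or update it if it is already installed
az extension update --name aks-preview
```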
When ready, refresh the registration of the *Microsoft.ContainerService* resourc
az provider register --namespace Microsoft.ContainerService ```
-### Additional prerequisites
-
-The prerequisites already listed for Azure CNI still apply, but there are a few additional limitations:
-
-* Only linux node clusters and node pools are supported.
-* AKS Engine and DIY clusters are not supported.
- ### Planning IP addressing When using this feature, planning is much simpler. Since the nodes and pods scale independently, their address spaces can also be planned separately. Since pod subnets can be configured to the granularity of a node pool, customers can always add a new subnet when they add a node pool. The system pods in a cluster/node pool also receive IPs from the pod subnet, so this behavior needs to be accounted for.
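As a hedged sketch of that per-node-pool granularity (the node pool name and subnet ID are placeholders; `--pod-subnet-id` again assumes the *aks-preview* extension):

```azurecli-interactive
# Add a node pool whose pods draw addresses from a newly created pod subnet
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name newpool \
  --pod-subnet-id $NEW_POD_SUBNET_ID
```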
Learn more about networking in AKS in the following articles:
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool [network-comparisons]: concepts-network.md#compare-network-models [system-node-pools]: use-system-pools.md
+[prerequisites]: configure-azure-cni.md#prerequisites
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-kubenet.md
The following basic calculations compare the difference in network models:
- This node count could only support up to *240* pods (with a default maximum of 30 pods per node with *Azure CNI*) > [!NOTE]
-> These maximums don't take into account upgrade or scale operations. In practice, you can't run the maximum number of nodes that the subnet IP address range supports. You must leave some IP addresses available for use during scale of upgrade operations.
+> These maximums don't take into account upgrade or scale operations. In practice, you can't run the maximum number of nodes that the subnet IP address range supports. You must leave some IP addresses available for use during scale or upgrade operations.
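A quick back-of-the-envelope check of the Azure CNI figure above, assuming a /24 subnet (251 usable addresses) and the default 30 pods per node, so each node pre-allocates 31 IPs (1 node IP plus 30 pod IPs):

```console
echo $(( 251 / 31 ))           # ~8 nodes fit in the subnet
echo $(( (251 / 31) * 30 ))    # 8 nodes x 30 pods = 240 pods
```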
### Virtual network peering and ExpressRoute connections
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-identity.md
Azure Active Directory Pod Identity supports 2 modes of operation:
1. Standard Mode: In this mode, the following 2 components are deployed to the AKS cluster: * [Managed Identity Controller(MIC)](https://azure.github.io/aad-pod-identity/docs/concepts/mic/): A Kubernetes controller that watches for changes to pods, [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/) and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) through the Kubernetes API Server. When it detects a relevant change, the MIC adds or deletes [AzureAssignedIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureassignedidentity/) as needed. Specifically, when a pod is scheduled, the MIC assigns the managed identity on Azure to the underlying VMSS used by the node pool during the creation phase. When all pods using the identity are deleted, it removes the identity from the VMSS of the node pool, unless the same managed identity is used by other pods. The MIC takes similar actions when AzureIdentity or AzureIdentityBinding are created or deleted.
- * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): is a pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](/azure/virtual-machines/linux/instance-metadata-service?tabs=linux) on each node, redirect them to itself and validates if the pod has access to the identity it's requesting a token for and fetch the token from the Azure Active Directory tenant on behalf of the application.
+ * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): a pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](../virtual-machines/linux/instance-metadata-service.md?tabs=linux) on each node, redirects them to itself, validates whether the pod has access to the identity it's requesting a token for, and fetches the token from the Azure Active Directory tenant on behalf of the application.
2. Managed Mode: In this mode, there is only NMI. The identity needs to be manually assigned and managed by the user. For more information, see [Pod Identity in Managed Mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/). In this mode, when you use the [az aks pod-identity add](/cli/azure/aks/pod-identity?view=azure-cli-latest#az_aks_pod_identity_add) command to add a pod identity to an Azure Kubernetes Service (AKS) cluster, it creates the [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/) and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) in the namespace specified by the `--namespace` parameter, while the AKS resource provider assigns the managed identity specified by the `--identity-resource-id` parameter to virtual machine scale set (VMSS) of each node pool in the AKS cluster. > [!NOTE]
-> If you instead decide to install the Azure Active Directory Pod Identity using the [AKS cluster add-on](/azure/aks/use-azure-ad-pod-identity), the setup will use the `managed` mode.
+> If you instead decide to install the Azure Active Directory Pod Identity using the [AKS cluster add-on](./use-azure-ad-pod-identity.md), the setup will use the `managed` mode.
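To make the `--namespace` and `--identity-resource-id` parameters mentioned above concrete, here is a hedged sketch; all names are placeholders, and the command assumes pod identity is already enabled on the cluster:

```azurecli-interactive
# Create a pod identity in the my-app namespace, backed by an existing user-assigned managed identity
az aks pod-identity add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --namespace my-app \
  --name my-pod-identity \
  --identity-resource-id $IDENTITY_RESOURCE_ID
```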
The `managed` mode provides the following advantages over the `standard`:
For more information about cluster operations in AKS, see the following best pra
[aks-best-practices-scheduler]: operator-best-practices-scheduler.md [aks-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [aks-best-practices-cluster-isolation]: operator-best-practices-cluster-isolation.md
-[azure-ad-rbac]: azure-ad-rbac.md
+[azure-ad-rbac]: azure-ad-rbac.md
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
az aks create -g myResourceGroup -n myAKSCluster --enable-pod-identity --network
> > 1. Standard Mode: In this mode, the following 2 components are deployed to the AKS cluster: > * [Managed Identity Controller(MIC)](https://azure.github.io/aad-pod-identity/docs/concepts/mic/): A Kubernetes controller that watches for changes to pods, [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/) and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) through the Kubernetes API Server. When it detects a relevant change, the MIC adds or deletes [AzureAssignedIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureassignedidentity/) as needed. Specifically, when a pod is scheduled, the MIC assigns the managed identity on Azure to the underlying VMSS used by the node pool during the creation phase. When all pods using the identity are deleted, it removes the identity from the VMSS of the node pool, unless the same managed identity is used by other pods. The MIC takes similar actions when AzureIdentity or AzureIdentityBinding are created or deleted.
-> * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): is a pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](/azure/virtual-machines/linux/instance-metadata-service?tabs=linux) on each node, redirect them to itself and validates if the pod has access to the identity it's requesting a token for and fetch the token from the Azure Active Directory tenant on behalf of the application.
+> * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): a pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](../virtual-machines/linux/instance-metadata-service.md?tabs=linux) on each node, redirects them to itself, validates whether the pod has access to the identity it's requesting a token for, and fetches the token from the Azure Active Directory tenant on behalf of the application.
> 2. Managed Mode: In this mode, there is only NMI. The identity needs to be manually assigned and managed by the user. For more information, see [Pod Identity in Managed Mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/). >
->When you install the Azure Active Directory Pod Identity via Helm chart or YAML manifest as shown in the [Installation Guide](https://azure.github.io/aad-podidentity/docs/getting-started/installation/), you can choose between the `standard` and `managed` mode. If you instead decide to install the Azure Active Directory Pod Identity using the [AKS cluster add-on](/azure/aks/use-azure-ad-pod-identity) as shown in this article, the setup will use the `managed` mode.
+>When you install the Azure Active Directory Pod Identity via Helm chart or YAML manifest as shown in the [Installation Guide](https://azure.github.io/aad-podidentity/docs/getting-started/installation/), you can choose between the `standard` and `managed` mode. If you instead decide to install the Azure Active Directory Pod Identity using the [AKS cluster add-on]() as shown in this article, the setup will use the `managed` mode.
Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS cluster. This command also downloads and configures the `kubectl` client certificate on your development computer.
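For example, using the same placeholder names as elsewhere in this article:

```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```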
For more information on managed identities, see [Managed identities for Azure re
[az-group-create]: /cli/azure/group#az_group_create [az-identity-create]: /cli/azure/identity#az_identity_create [az-managed-identities]: ../active-directory/managed-identities-azure-resources/overview.md
-[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-app-insights.md
na Previously updated : 07/19/2021 Last updated : 08/04/2021
Before you can use Application Insights, you first need to create an instance of
1. Navigate to your **Azure API Management service instance** in the **Azure portal**. 1. Select **Application Insights** from the menu on the left.
-1. Click **+ Add**.
+1. Select **+ Add**.
:::image type="content" source="media/api-management-howto-app-insights/apim-app-insights-logger-1.png" alt-text="Screenshot that shows where to add a new connection"::: 1. Select the previously created **Application Insights** instance and provide a short description.
-1. Click **Create**.
+1. To enable [availability monitoring](../azure-monitor/app/monitor-web-app-availability.md) of your API Management instance in Application Insights, select the **Add availability monitor** checkbox.
+
+ This setting regularly validates whether the API Management service endpoint is responding. Results appear in the **Availability** pane of the Application Insights instance.
+1. Select **Create**.
1. You have just created an Application Insights logger with an instrumentation key. It should now appear in the list. :::image type="content" source="media/api-management-howto-app-insights/apim-app-insights-logger-2.png" alt-text="Screenshot that shows where to view the newly created Application Insights logger with instrumentation key"::: > [!NOTE]
-> Behind the scene, a [Logger](/rest/api/apimanagement/2020-12-01/logger/create-or-update) entity is created in your API Management instance, containing the Instrumentation Key of the Application Insights instance.
+> Behind the scenes, a [Logger](/rest/api/apimanagement/2019-12-01/logger/createorupdate) entity is created in your API Management instance, containing the instrumentation key of the Application Insights instance.
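For illustration only, a hedged sketch of creating such a Logger entity directly against the REST API with `az rest`; the subscription, resource group, service name, and instrumentation key below are placeholders, not values from this article:

```azurecli-interactive
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-service>/loggers/appinsights-logger?api-version=2019-12-01" \
  --body '{"properties": {"loggerType": "applicationInsights", "description": "Application Insights logger", "credentials": {"instrumentationKey": "<instrumentation-key>"}}}'
```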
## Enable Application Insights logging for your API
Before you can use Application Insights, you first need to create an instance of
> Overriding the default value **0** in the **Number of payload bytes to log** setting may significantly decrease the performance of your APIs. > [!NOTE]
-> Behind the scene, a [Diagnostic](/rest/api/apimanagement/2020-12-01/diagnostic/create-or-update) entity named 'applicationinsights' is created at the API level.
+> Behind the scenes, a [Diagnostic](/rest/api/apimanagement/2019-12-01/diagnostic/createorupdate) entity named 'applicationinsights' is created at the API level.
| Setting name | Value type | Description | |-|--|--|
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/export-api-power-platform.md
Title: Export APIs from Azure API Management to the Power Platform | Microsoft Docs
-description: Learn how to export APIs from API Management to the Power Platform.
+ Title: Export APIs from Azure API Management to Microsoft Power Platform | Microsoft Docs
+description: Learn how to export an API from API Management as a custom connector to Power Apps and Power Automate in the Microsoft Power Platform.
- -- Previously updated : 05/01/2020+ Last updated : 07/27/2021 # Export APIs from Azure API Management to the Power Platform
-Citizen developers using the Microsoft [Power Platform](https://powerplatform.microsoft.com) often needs to reach the business capabilities that are developed by professional developers and deployed in Azure. [Azure API Management](https://aka.ms/apimrocks) enables professional developers to publish their backend service as APIs, and easily export these APIs to the Power Platform (Power Apps and Power Automate) as custom connectors for consumption by citizen developers.
+Citizen developers using the Microsoft [Power Platform](https://powerplatform.microsoft.com) often need to reach the business capabilities that are developed by professional developers and deployed in Azure. [Azure API Management](https://aka.ms/apimrocks) enables professional developers to publish their backend service as APIs, and easily export these APIs to the Power Platform ([Power Apps](/powerapps/powerapps-overview) and [Power Automate](/power-automate/getting-started)) as custom connectors for discovery and consumption by citizen developers.
-This article walks through the steps to export APIs from API Management to the Power Platform.
+This article walks through the steps in the Azure portal to create a custom Power Platform connector to an API in API Management. With this capability, citizen developers can use the Power Platform to create and distribute apps that are based on internal and external APIs managed by API Management.
## Prerequisites
This article walks through the steps to export APIs from API Management to the P
+ Make sure there is an API in your API Management instance that you'd like to export to the Power Platform + Make sure you have a Power Apps or Power Automate [environment](/powerapps/powerapps-overview#power-apps-for-admins)
-## Export an API
+## Create a custom connector to an API
-1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu.
-2. Click on the three dots next to the API you want to export.
-3. Select **Export**.
-4. Select **Power Apps and Power Automate**.
-5. Choose an environment to export the API to.
-6. Provide a display name, which will be used as the name of the custom connector.
-7. Optional, if the API is protected by an OAuth 2.0 server, you will also need to provide additional details including `Client ID`, `Client secret`, `Authorization URL`, `Token URL`, and `Refresh URL`.
-8. Select **Export**.
+1. Navigate to your API Management service in the Azure portal.
+1. In the menu, under **APIs**, select **Power Platform**.
+1. Select **Create a connector**.
+1. In the **Create a connector** window, do the following:
+ 1. Select an API to publish to the Power Platform.
+ 1. Select a Power Platform environment to publish the API to.
+ 1. Enter a display name, which will be used as the name of the custom connector.
+ 1. Optionally, if the API is [protected by an OAuth 2.0 server](api-management-howto-protect-backend-with-aad.md), provide details including **Client ID**, **Client secret**, **Authorization URL**, **Token URL**, and **Refresh URL**.
+1. Select **Create**.
-Once the export completes, navigate to your Power App or Power Automate environment. You will see the API as a custom connector.
+ :::image type="content" source="media/export-api-power-platform/create-custom-connector.png" alt-text="Create custom connector to API in API Management":::
+
+Once the connector is created, navigate to your [Power Apps](https://make.powerapps.com) or [Power Automate](https://flow.microsoft.com) environment. You will see the API listed under **Data > Custom Connectors**.
+ ## Next steps * [Learn more about the Power Platform](https://powerplatform.microsoft.com/)
+* [Learn more about creating and using custom connectors](/connectors/custom-connectors/)
* [Learn common tasks in API Management by following the tutorials](./import-and-publish.md)
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
Once the rekey operation is complete, click **Sync**. The sync operation automat
To turn on automatic renewal of your certificate at any time, select the certificate in the [App Service Certificates](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders) page, then click **Auto Renew Settings** in the left navigation. By default, App Service Certificates have a one-year validity period.
-Select **On** and click **Save**. Certificates can start automatically renewing 30 days before expiration if you have automatic renewal turned on.
+Select **On** and click **Save**. Certificates can start automatically renewing 31 days before expiration if you have automatic renewal turned on.
![Renew App Service certificate automatically](./media/configure-ssl-certificate/auto-renew-app-service-cert.png)
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
application-gateway Ingress Controller Expose Service Over Http Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/ingress-controller-expose-service-over-http-https.md
The following ingress will allow you to add additional paths into this ingress a
```yaml apiVersion: extensions/v1beta1
- kind: Ingress
- metadata:
- name: guestbook
- annotations:
- kubernetes.io/ingress.class: azure/application-gateway
- spec:
- rules:
- - http:
- paths:
- - path: </other/*>
- backend:
- serviceName: <other-service>
- servicePort: 80
- - backend:
- serviceName: frontend
- servicePort: 80
+kind: Ingress
+metadata:
+ name: guestbook
+ annotations:
+ kubernetes.io/ingress.class: azure/application-gateway
+spec:
+ rules:
+ - http:
+ paths:
+ - path: </other/*>
+ backend:
+ serviceName: <other-service>
+ servicePort: 80
+ - backend:
+ serviceName: frontend
+ servicePort: 80
```
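Once saved to a file, the manifest can be applied with `kubectl` in the usual way (the file name below is just an example):

```console
kubectl apply -f guestbook-ingress.yaml
```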
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
automation Automation Runbook Execution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-execution.md
Title: Runbook execution in Azure Automation
description: This article provides an overview of the processing of runbooks in Azure Automation. Previously updated : 07/27/2021 Last updated : 08/13/2021
Automation executes your runbooks based on the logic defined inside them. If a r
Starting a runbook in Azure Automation creates a job, which is a single execution instance of the runbook. Each job accesses Azure resources by making a connection to your Azure subscription. The job can only access resources in your datacenter if those resources are accessible from the public cloud.
-Azure Automation assigns a worker to run each job during runbook execution. While workers are shared by many Azure accounts, jobs from different Automation accounts are isolated from one another. You can't control which worker services your job requests.
+Azure Automation assigns a worker to run each job during runbook execution. While workers are shared by many Automation accounts, jobs from different Automation accounts are isolated from one another. You can't control which worker services your job requests.
When you view the list of runbooks in the Azure portal, it shows the status of each job that has been started for each runbook. Azure Automation stores job logs for a maximum of 30 days.
The following diagram shows the lifecycle of a runbook job for [PowerShell runbo
Runbooks in Azure Automation can run on either an Azure sandbox or a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md).
-When runbooks are designed to authenticate and run against resources in Azure, they run in an Azure sandbox, which is a shared environment that multiple jobs can use. Jobs using the same sandbox are bound by the resource limitations of the sandbox. The Azure sandbox environment does not support interactive operations. It prevents access to all out-of-process COM servers, and it does not support making [WMI calls](/windows/win32/wmisdk/wmi-architecture) to the Win32 provider in your runbook.  These scenarios are only supported by running the runbook on a Windows Hybrid Runbook Worker.
+When runbooks are designed to authenticate and run against resources in Azure, they run in an Azure sandbox. Azure Automation assigns a worker to run each job during runbook execution in the sandbox. While workers are shared by many Automation accounts, jobs from different Automation accounts are isolated from one another. Jobs using the same sandbox are bound by the resource limitations of the sandbox. The Azure sandbox environment does not support interactive operations. It prevents access to all out-of-process COM servers, and it does not support making [WMI calls](/windows/win32/wmisdk/wmi-architecture) to the Win32 provider in your runbook. These scenarios are only supported by running the runbook on a Windows Hybrid Runbook Worker.
You can also use a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md) to run runbooks directly on the computer that hosts the role and against local resources in the environment. Azure Automation stores and manages runbooks and then delivers them to one or more assigned computers.
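As a hedged illustration of targeting a Hybrid Runbook Worker group rather than an Azure sandbox; this assumes the `automation` Azure CLI extension, the account, runbook, and worker-group names are placeholders, and the `--run-on` option is an assumption to verify against your CLI version:

```azurecli-interactive
az automation runbook start \
  --resource-group myResourceGroup \
  --automation-account-name myAutomationAccount \
  --name MyRunbook \
  --run-on MyHybridWorkerGroup
```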
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/update-agent-issues-linux.md
The operating system check verifies if the Hybrid Runbook Worker is running one
### Log Analytics agent
-This check ensures that the Log Analytics agent for Linux is installed. For instructions on how to install it, see [Install the agent for Linux](../../azure-monitor/vm/quick-collect-linux-computer.md#install-the-agent-for-linux).
+This check ensures that the Log Analytics agent for Linux is installed. For instructions on how to install it, see [Install the agent for Linux](../../azure-monitor/vm/monitor-virtual-machine.md#agents).
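As a quick hedged check that the agent is present and onboarded (the `omsadmin.sh` path below is the agent's typical install location, stated here as an assumption):

```console
# List the Log Analytics workspace(s) this Linux agent is onboarded to
sudo sh /opt/microsoft/omsagent/bin/omsadmin.sh -l
```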
### Log Analytics agent status
Passed: TCP test for {ods.systemcenteradvisor.com} (port 443) succeeded
## Next steps
-[Troubleshoot Hybrid Runbook Worker issues](hybrid-runbook-worker.md).
+[Troubleshoot Hybrid Runbook Worker issues](hybrid-runbook-worker.md).
automation Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/query-logs.md
On a Windows computer, you can review the following information to verify agent
1. Open the Windows Event Log. Go to **Application and Services Logs\Operations Manager** and search for Event ID 3000 and Event ID 5002 from the source **Service Connector**. These events indicate that the computer has registered with the Log Analytics workspace and is receiving configuration.
-If the agent can't communicate with Azure Monitor logs and the agent is configured to communicate with the internet through a firewall or proxy server, confirm the firewall or proxy server is properly configured. To learn how to verify the firewall or proxy server is properly configured, see [Network configuration for Windows agent](../../azure-monitor/agents/agent-windows.md) or [Network configuration for Linux agent](../../azure-monitor/vm/quick-collect-linux-computer.md).
+If the agent can't communicate with Azure Monitor logs and the agent is configured to communicate with the internet through a firewall or proxy server, confirm the firewall or proxy server is properly configured. To learn how to verify the firewall or proxy server is properly configured, see [Network configuration for Windows agent](../../azure-monitor/agents/agent-windows.md) or [Network configuration for Linux agent](../../azure-monitor/vm/monitor-virtual-machine.md).
> [!NOTE] > If your Linux systems are configured to communicate with a proxy or Log Analytics Gateway and you're enabling Update Management, update the `proxy.conf` permissions to grant the omiuser group read permission on the file by using the following commands:
Update
## Next steps * For details of Azure Monitor logs, see [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md).
-* For help with alerts, see [Configure alerts](configure-alerts.md).
+* For help with alerts, see [Configure alerts](configure-alerts.md).
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021 #
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Certificate:
Timeout: 1s Use HTTPS Ingress: false ```
-Refer to the [Config API reference](https://docs.openservicemesh.io/docs/apidocs/config/v1alpha1) for more information. Notice that **spec.traffic.enablePermissiveTrafficPolicyMode** is set to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
+Refer to the [Config API reference](https://docs.openservicemesh.io/docs/api_reference/config/v1alpha1/) for more information. Notice that **spec.traffic.enablePermissiveTrafficPolicyMode** is set to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
### Making changes to OSM controller configuration
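As a hedged illustration (the `MeshConfig` resource kind comes from OSM upstream and may vary by version), a typical change to `osm-mesh-config` in the `arc-osm-system` namespace is a `kubectl patch`; this sketch turns permissive traffic policy mode off:

```bash
# Minimal sketch, assuming OSM's MeshConfig CRD and the 'arc-osm-system' namespace
# used by the Arc extension. Verify the resource kind and namespace on your cluster.
kubectl patch meshconfig osm-mesh-config \
  -n arc-osm-system \
  --type merge \
  -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}'
```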
Add namespaces to the mesh by running the following command:
osm namespace add <namespace_name> ```
-More information about onboarding services can be found [here](https://docs.openservicemesh.io/docs/tasks/onboard_services/).
+More information about onboarding services can be found [here](https://docs.openservicemesh.io/docs/guides/app_onboarding/#onboard-services).
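For instance, a quick usage sketch (the `bookstore` namespace is purely illustrative):

```bash
# Onboard a sample namespace to the mesh, then inspect its labels to confirm that
# OSM has marked it as monitored.
osm namespace add bookstore
kubectl get namespace bookstore --show-labels
```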
### Configure OSM with Service Mesh Interface (SMI) policies
-You can start with a [demo application](https://docs.openservicemesh.io/docs/getting_started/manual_demo/#deploy-applications) or use your test environment to try out SMI policies.
+You can start with a [demo application](https://docs.openservicemesh.io/docs/getting_started/quickstart/manual_demo/#deploy-applications) or use your test environment to try out SMI policies.
> [!NOTE] > Ensure that the version of the bookstore application you run matches the version of the OSM extension installed on your cluster. Ex: if you are using v0.8.4 of the OSM extension, use the bookstore demo from release-v0.8 branch of OSM upstream repository.
The OSM extension does not install add-ons like [Jaeger](https://www.jaegertraci
> [!NOTE] > Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name 'arc-osm-system' when making changes to `osm-mesh-config`. -- [BYO-Jaeger instance](https://docs.openservicemesh.io/docs/tasks/observability/tracing/#byo-bring-your-own)-- [BYO-Prometheus instance](https://docs.openservicemesh.io/docs/tasks/observability/metrics/#byo-prometheus)-- [BYO-Grafana dashboard](https://docs.openservicemesh.io/docs/tasks/observability/metrics/#importing-dashboards-on-a-byo-grafana-instance)
+- [BYO-Jaeger instance](https://docs.openservicemesh.io/docs/guides/observability/tracing/#byo-bring-your-own)
+- [BYO-Prometheus instance](https://docs.openservicemesh.io/docs/guides/observability/metrics/#byo-prometheus)
+- [BYO-Grafana dashboard](https://docs.openservicemesh.io/docs/guides/observability/metrics/#importing-dashboards-on-a-byo-grafana-instance)
## Monitoring application using Azure Monitor and Applications Insights
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
This tutorial assumes familiarity with Azure DevOps, Azure Repos and Pipelines,
```azurecli az extension add --name connectedk8s
- az extension add --name k8sconfiguration
+ az extension add --name k8s-configuration
``` * To update these extensions to the latest version, run the following commands: ```azurecli az extension update --name connectedk8s
- az extension update --name k8sconfiguration
+ az extension update --name k8s-configuration
``` ## Import application and GitOps repos into Azure Repos
The CI/CD workflow will populate the manifest directory with extra manifests to
1. [Create a new GitOps connection](./tutorial-use-gitops-connected-cluster.md) to your newly imported **arc-cicd-demo-gitops** repo in Azure Repos. ```azurecli
- az k8sconfiguration create \
+ az k8s-configuration create \
--name cluster-config \ --cluster-name arc-cicd-cluster \ --resource-group myResourceGroup \ --operator-instance-name cluster-config \ --operator-namespace cluster-config \
- --repository-url https://dev.azure.com/<Your organization>/arc-cicd-demo-gitops \
+ --repository-url https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \
--https-user <Azure Repos username> \ --https-key <Azure Repos PAT token> \ --scope cluster \
The CI/CD workflow will populate the manifest directory with extra manifests to
`--git-path=arc-cicd-cluster/manifests` > [!NOTE]
- > If you are using an HTTPS connection string and are having connection problems, ensure you omit the username prefix in the URL. For example, `https://alice@dev.azure.com/contoso/arc-cicd-demo-gitops` must have `alice@` removed. The `--https-user` specifies the user instead, for example `--https-user alice`.
+ > If you are using an HTTPS connection string and are having connection problems, ensure you omit the username prefix in the URL. For example, `https://alice@dev.azure.com/contoso/project/_git/arc-cicd-demo-gitops` must have `alice@` removed. The `--https-user` specifies the user instead, for example `--https-user alice`.
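Putting the flags above together, a hedged sketch of what the assembled command might look like (organization, project, and credentials are placeholders; the tutorial's remaining options, such as the `--git-path` operator parameter noted above, still apply):

```azurecli
# Sketch only: create the GitOps configuration against the Arc-connected cluster.
# --cluster-type connectedClusters is assumed here for an Arc-enabled cluster.
az k8s-configuration create \
  --name cluster-config \
  --cluster-name arc-cicd-cluster \
  --resource-group myResourceGroup \
  --cluster-type connectedClusters \
  --operator-instance-name cluster-config \
  --operator-namespace cluster-config \
  --repository-url https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \
  --https-user <Azure Repos username> \
  --https-key <Azure Repos PAT token> \
  --scope cluster
```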
1. Check the state of the deployment in Azure portal. * If successful, you'll see both `dev` and `stage` namespaces created in your cluster.
To avoid having to set an imagePullSecret for every Pod, consider adding the ima
| AZURE_VOTE_IMAGE_REPO | The full path to the Azure Vote App repo, for example azurearctest.azurecr.io/azvote | | ENVIRONMENT_NAME | Dev | | MANIFESTS_BRANCH | `master` |
-| MANIFESTS_FOLDER | `azure-vote-manifests` |
-| MANIFESTS_REPO | `azure-cicd-demo-gitops` |
+| MANIFESTS_FOLDER | `azure-vote` |
+| MANIFESTS_REPO | `acr-cicd-demo-gitops` |
| ORGANIZATION_NAME | Name of Azure DevOps organization | | PROJECT_NAME | Name of GitOps project in Azure DevOps | | REPO_URL | Full URL for GitOps repo |
If you're not going to continue to use this application, delete any resources wi
1. Delete the Azure Arc GitOps configuration connection: ```azurecli
- az k8sconfiguration delete \
+ az k8s-configuration delete \
--name cluster-config \ --cluster-name arc-cicd-cluster \ --resource-group myResourceGroup \
In this tutorial, you have set up a full CI/CD workflow that implements DevOps f
Advance to our conceptual article to learn more about GitOps and configurations with Azure Arc enabled Kubernetes. > [!div class="nextstepaction"]
-> [CI/CD Workflow using GitOps - Azure Arc enabled Kubernetes](./conceptual-gitops-ci-cd.md)
+> [CI/CD Workflow using GitOps - Azure Arc enabled Kubernetes](./conceptual-gitops-ci-cd.md)
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-arc Scenario Onboard Azure Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/scenario-onboard-azure-sentinel.md
Azure Sentinel comes with a number of connectors for Microsoft solutions, availa
We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy.
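As a rough, hedged sketch of what such an assignment could look like from the CLI (the policy definition reference below is a placeholder; use the actual built-in definition that deploys the Log Analytics agent to Arc-enabled servers and pass its required workspace parameters with `--params`):

```azurecli
# Hypothetical example only: assign a built-in deploy-Log-Analytics-agent policy to a
# resource group of Arc-enabled servers. DeployIfNotExists policies need a managed
# identity, hence --assign-identity and --location.
az policy assignment create \
  --name deploy-log-analytics-agent \
  --policy "<built-in-policy-definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
  --assign-identity \
  --location eastus
```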
-After your Arc-enabled servers are connected, your data starts streaming into Azure Sentinel and is ready for you to start working with. You can view the logs in the [built-in workbooks](../../sentinel/quickstart-get-visibility.md) and start building queries in Log Analytics to [investigate the data](../../sentinel/tutorial-investigate-cases.md).
+After your Arc-enabled servers are connected, your data starts streaming into Azure Sentinel and is ready for you to start working with. You can view the logs in the [built-in workbooks](/azure/azure-arc/servers/articles/sentinel/get-visibility.md) and start building queries in Log Analytics to [investigate the data](/azure/azure-arc/servers/articles/sentinel/investigate-cases.md).
## Next steps
-Get started [detecting threats with Azure Sentinel](../../sentinel/tutorial-detect-threats-built-in.md).
+Get started [detecting threats with Azure Sentinel](/azure/azure-arc/servers/articles/sentinel/detect-threats-built-in.md).
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-australia Gateway Log Audit Visibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-australia/gateway-log-audit-visibility.md
Virtual Machines are end points that send and receive network communications, pr
|Resources|Link| ||| |Virtual Machines|[https://docs.microsoft.com/azure/virtual-machines](../virtual-machines/index.yml)|
-|Collect Data from Virtual Machines|[https://docs.microsoft.com/azure/log-analytics/log-analytics-quick-collect-azurevm](../azure-monitor/vm/quick-collect-azurevm.md)|
+|Collect Data from Virtual Machines|[https://docs.microsoft.com/azure/log-analytics/log-analytics-quick-collect-azurevm](../azure-monitor/vm/monitor-virtual-machine.md)|
|Stream Virtual Machine Logs to Event Hubs|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/azure-diagnostics-streaming-event-hubs](../azure-monitor/agents/diagnostics-extension-stream-event-hubs.md)| |
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-functions Durable Functions Code Constraints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-code-constraints.md
The following table shows examples of APIs that you should avoid because they ar
| API category | Reason | Workaround | | | | - |
-| Dates and times | APIs that return the current date or time are nondeterministic because the returned value is different for each replay. | Use the [CurrentUtcDateTime](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationcontext.currentutcdatetime) property in .NET, the `currentUtcDateTime` API in JavaScript, or the `current_utc_datetime` API in Python, which are safe for replay. |
+| Dates and times | APIs that return the current date or time are nondeterministic because the returned value is different for each replay. | Use the [CurrentUtcDateTime](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationcontext.currentutcdatetime) property in .NET, the `currentUtcDateTime` API in JavaScript, or the `current_utc_datetime` API in Python, which are safe for replay. Similarly, avoid "stopwatch" type objects (like the [Stopwatch class in .NET](/dotnet/api/system.diagnostics.stopwatch)). If you need to measure elapsed time, store the value of `CurrentUtcDateTime` at the beginning of execution, and subtract that value from `CurrentUtcDateTime` when execution concludes. |
| GUIDs and UUIDs | APIs that return a random GUID or UUID are nondeterministic because the generated value is different for each replay. | Use [NewGuid](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationcontext.newguid) in .NET, `newGuid` in JavaScript, and `new_guid` in Python to safely generate random GUIDs. | | Random numbers | APIs that return random numbers are nondeterministic because the generated value is different for each replay. | Use an activity function to return random numbers to an orchestration. The return values of activity functions are always safe for replay. | | Bindings | Input and output bindings typically do I/O and are nondeterministic. An orchestrator function must not directly use even the [orchestration client](durable-functions-bindings.md#orchestration-client) and [entity client](durable-functions-bindings.md#entity-client) bindings. | Use input and output bindings inside client or activity functions. |
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-trigger.md
The [Event Grid trigger](functions-bindings-event-grid.md) also has built-in sup
- **High-scale**: High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.
+- **Existing Blobs**: The blob trigger will process all existing blobs in the container when you set up the trigger. If you have a container with many existing blobs and only want to trigger for new blobs, use the Event Grid trigger.
+ - **Minimizing latency**: If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if the function app has gone idle. To avoid this latency, you can switch to an App Service plan with Always On enabled. You can also use an [Event Grid trigger](functions-bindings-event-grid.md) with your Blob storage account. For an example, see the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json) tutorial.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
description: This article tracks FedRAMP, DoD, and ICD 503 compliance scope for
Previously updated : 08/11/2021 Last updated : 08/12/2021 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| Service | DoD IL2 | FedRAMP High | Planned 2021 | | - |:--:|::|::|
-| [AI Builder](/ai-builder/overview) | &#x2705; | &#x2705; | |
| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | | | [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | &#x2705; | | | [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Arc-enabled Servers](../../azure-arc/servers/overview.md) | &#x2705; | &#x2705; | | | [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | | | [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | | | [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | | | [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Data Share](https://azure.microsoft.com/services/data-share/) | &#x2705; | &#x2705; | | | [Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) | &#x2705; | &#x2705; | | | [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; | | | [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; | | | [Azure Databricks](https://azure.microsoft.com/services/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | | | [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | | | [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) | &#x2705; | &#x2705; | | | [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | &#x2705; | &#x2705; | | | [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Health Bot](/healthbot/) | &#x2705; | &#x2705; | | | [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; | | | [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | | | [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; | | | [Azure Internet Analyzer](https://azure.microsoft.com/services/internet-analyzer/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) | &#x2705; | &#x2705; | | | [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) | &#x2705; | &#x2705; | | | [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | &#x2705; | &#x2705; | | | [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | | | [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; | | | [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | |
-| [Azure Monitor Application Change Analysis](../../azure-monitor/app/change-analysis.md) | &#x2705; | &#x2705; | |
| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | | | [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/) | &#x2705; | &#x2705; | | | [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/) | &#x2705; | &#x2705; | | | [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | | | [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Scheduler](../../scheduler/scheduler-intro.md) | &#x2705; | &#x2705; | | | [Azure Security Center](https://azure.microsoft.com/services/security-center/) | &#x2705; | &#x2705; | | | [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | | | [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | | | [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | | | [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | | | [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/) | &#x2705; | &#x2705; | | | [Azure Video Analyzer](https://azure.microsoft.com/products/video-analyzer/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | | | [Azure VMware Solution](https://azure.microsoft.com/services/azure-vmware/) | | | &#x2705; | | [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | | | [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | | | [Cognitive | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | [Cognitive | [Cognitive
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Cognitive | [Container Instances](https://azure.microsoft.com/services/container-instances/) | &#x2705; | &#x2705; | | | [Container Registry](https://azure.microsoft.com/services/container-registry/) | &#x2705; | &#x2705; | | | [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | | | [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | |
-| [Data Integrator](/power-platform/admin/data-integrator) | &#x2705; | &#x2705; | |
| [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (formerly Common Data Service) | &#x2705; | &#x2705; | | | [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | | | [Dynamics 365 Commerce](https://dynamics.microsoft.com/commerce/overview/)| &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dynamics 365 Finance](https://dynamics.microsoft.com/finance/overview/)| &#x2705; | &#x2705; | | | [Dynamics 365 Guides](https://dynamics.microsoft.com/mixed-reality/guides/)| &#x2705; | &#x2705; | | | [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | | | &#x2705; |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Dynamics 365 Sales Professional](https://dynamics.microsoft.com/sales/professional/) | | | &#x2705; | | [Dynamics 365 Supply Chain Management](https://dynamics.microsoft.com/supply-chain-management/overview/)| &#x2705; | &#x2705; | | | [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | | | [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | | | [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [GitHub Codespaces](https://visualstudio.microsoft.com/services/github-codespaces/) (formerly Visual Studio Codespaces) | &#x2705; | &#x2705; | | | [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | | | [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| &#x2705; | &#x2705; | | | [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security) | &#x2705; | &#x2705; | | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Microsoft Defender for Identity](/defender-for-identity/what-is) (formerly Azure Advanced Threat Protection) | &#x2705; | &#x2705; | | | [Microsoft Graph](/graph/overview) | &#x2705; | &#x2705; | | | [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | |
-| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | |
-| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | |
+| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | |
| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | |
+| [Power AI Builder](/ai-builder/overview) | &#x2705; | &#x2705; | |
| [Power Apps](/powerapps/powerapps-overview) | &#x2705; | &#x2705; | | | [Power Apps Portal](https://powerapps.microsoft.com/portals/) | &#x2705; | &#x2705; | | | [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) | &#x2705; | &#x2705; | | | [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | &#x2705; | &#x2705; | |
+| [Power Data Integrator](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | |
| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | &#x2705; | | | [Private Link](https://azure.microsoft.com/services/private-link/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Service Bus](https://azure.microsoft.com/services/service-bus/) | &#x2705; | &#x2705; | | | [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | &#x2705; | &#x2705; | | | [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | | | [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Storage: Data Movement](../../storage/common/storage-use-data-movement-library.md) | &#x2705; | &#x2705; | |
-| [Storage: Disks (incl. Managed Disks)](https://azure.microsoft.com/services/storage/disks/) | &#x2705; | &#x2705; | |
+| [Storage: Disks](https://azure.microsoft.com/services/storage/disks/) (incl. [managed disks](../../virtual-machines/managed-disks-overview.md)) | &#x2705; | &#x2705; | |
| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | &#x2705; | &#x2705; | | | [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | &#x2705; | &#x2705; | | | [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | &#x2705; | &#x2705; | | | [Windows 10 IoT Core Services](https://azure.microsoft.com/services/windows-10-iot-core/) | &#x2705; | &#x2705; | |
-**&ast;** FedRAMP High authorization for edge devices (such as Azure Data Box and Azure Stack Edge) applies only to Azure services that support on-premises, customer-managed devices. For example, FedRAMP High authorization for Azure Data Box covers datacenter infrastructure services and Data Box pod and disk service, which are the online software components supporting Data Box hardware appliance. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
+**&ast;** FedRAMP High authorization for edge devices (such as Azure Data Box and Azure Stack Edge) applies only to Azure services that support on-premises, customer-managed devices. For example, FedRAMP High authorization for Azure Data Box covers datacenter infrastructure services and Data Box pod and disk service, which are the online software components supporting your Data Box hardware appliance. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative.
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
### Terminology used
+- Azure Government = Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
- FR High = FedRAMP High Provisional Authorization to Operate (P-ATO) in Azure Government-- DoD IL2 = DoD SRG Impact Level 2 Provisional Authorization (PA) in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia-- DoD IL4 = DoD SRG Impact Level 4 Provisional Authorization (PA) in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia-- DoD IL5 = DoD SRG Impact Level 5 Provisional Authorization (PA) in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
+- DoD IL2 = DoD SRG Impact Level 2 Provisional Authorization (PA) in Azure Government
+- DoD IL4 = DoD SRG Impact Level 4 Provisional Authorization (PA) in Azure Government
+- DoD IL5 = DoD SRG Impact Level 5 Provisional Authorization (PA) in Azure Government
- DoD IL6 = DoD SRG Impact Level 6 Provisional Authorization (PA) in Azure Government Secret - ICD 503 Secret = Intelligence Community Directive 503 Authorization to Operate (ATO) in Azure Government Secret - &#x2705; = service is included in audit scope and has been authorized
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | &#x2705; | | | | [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | | | | [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Data Integrator](/power-platform/admin/data-integrator) | &#x2705; | &#x2705; | &#x2705; | | |
| [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (formerly Common Data Service) | &#x2705; | &#x2705; | &#x2705; | | | | [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | &#x2705; | | | | [Dynamics 365 Customer Insights](/dynamics365/customer-insights/audience-insights/overview) | &#x2705; | &#x2705; | &#x2705; | | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | &#x2705; | &#x2705; | &#x2705; | | | | [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | | | | [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | | | | |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | &#x2705; | | | | [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | &#x2705; | | | | [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | | | | [Planned Maintenance for VMs](../../virtual-machines/maintenance-control-portal.md) | &#x2705; | | | | | | [Power Apps](/powerapps/powerapps-overview) | &#x2705; | &#x2705; | &#x2705; | | | | [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | | | | [Power BI](https://powerbi.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | | | [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Power Data Integrator](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | | |
| [Power Query Online](/powerquery.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | | | [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | | | | | | [Private Link](https://azure.microsoft.com/services/private-link/) | &#x2705; | &#x2705; | &#x2705; | | | | [Service Bus](https://azure.microsoft.com/services/service-bus/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | &#x2705; | | | | [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Storage: Disks (incl. Managed Disks)](https://azure.microsoft.com/services/storage/disks/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage: Disks](https://azure.microsoft.com/services/storage/disks/) (incl. [managed disks](../../virtual-machines/managed-disks-overview.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** | | [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-government Documentation Government Manage Oms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-manage-oms.md
The first step in integrating your cloud assets with Azure Monitor logs is insta
You can connect Azure VMs to Azure Monitor logs directly through the Azure portal. For instructions, see [New ways to enable Azure Monitor logs on your Azure VMs](https://blogs.technet.microsoft.com/momteam/2016/02/10/new-ways-to-enable-log-analytics-oms-on-your-azure-vms/).
-You can also connect them programmatically or configure the Azure Monitor virtual machine extension right into your Azure Resource Manager templates. See the instructions for Windows-based machines at [Connect Windows computers to Azure Monitor logs](../azure-monitor/agents/agent-windows.md) and for Linux-based machines at [Connect Linux computers to Azure Monitor logs](../azure-monitor/vm/quick-collect-linux-computer.md).
+You can also connect them programmatically or configure the Azure Monitor virtual machine extension right into your Azure Resource Manager templates. See the instructions for Windows-based machines at [Connect Windows computers to Azure Monitor logs](../azure-monitor/agents/agent-windows.md) and for Linux-based machines at [Connect Linux computers to Azure Monitor logs](../azure-monitor/vm/monitor-virtual-machine.md).
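For example, a minimal sketch of enabling the agent on a single existing Linux VM from the CLI (resource names, workspace ID, and key are placeholders):

```azurecli
# Install the Log Analytics (OMS) agent VM extension on an existing Linux VM.
# For Windows VMs, the extension name is MicrosoftMonitoringAgent with the same publisher.
az vm extension set \
  --resource-group <resource-group> \
  --vm-name <vm-name> \
  --name OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId":"<workspace-id>"}' \
  --protected-settings '{"workspaceKey":"<workspace-key>"}'
```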
## Onboarding storage accounts and Operations Manager to Azure Monitor logs Azure Monitor logs can also connect to your storage account and/or existing System Center Operations Manager deployments to offer you operations management in hybrid scenarios (across cloud providers or in cloud/on-premises infrastructures).
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-linux-troubleshoot.md
We've seen that a clean re-install of the Agent will fix most issues. In fact th
| NOT_DEFINED | Because the necessary dependencies are not installed, the auoms auditd plugin will not be installed. Installation of auoms failed, install package auditd. | | 2 | Invalid option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage | | 3 | No option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage. |
-| 4 | Invalid package type OR invalid proxy settings; omsagent-*rpm*.sh packages can only be installed on RPM-based systems, and omsagent-*deb*.sh packages can only be installed on Debian-based systems. It is recommend you use the universal installer from the [latest release](../vm/quick-collect-linux-computer.md#install-the-agent-for-linux). Also review to verify your proxy settings. |
+| 4 | Invalid package type OR invalid proxy settings; omsagent-*rpm*.sh packages can only be installed on RPM-based systems, and omsagent-*deb*.sh packages can only be installed on Debian-based systems. It is recommended that you use the universal installer from the [latest release](../vm/monitor-virtual-machine.md#agents). Also review your proxy settings to verify they're configured correctly. |
| 5 | The shell bundle must be executed as root OR there was a 403 error returned during onboarding. Run your command using `sudo`. | | 6 | Invalid package architecture OR there was a 200 error returned during onboarding; omsagent-\*x64.sh packages can only be installed on 64-bit systems, and omsagent-\*x86.sh packages can only be installed on 32-bit systems. Download the correct package for your architecture from the [latest release](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/latest). | | 17 | Installation of the OMS package failed. Look through the command output for the root failure. |
You can continue reonboard after using the `--purge` option
Perform the following steps to correct the issue. 1. Remove extension from Azure portal.
-2. Install the agent following the [instructions](../vm/quick-collect-linux-computer.md).
+2. Install the agent following the [instructions](../vm/monitor-virtual-machine.md).
3. Restart the agent by running the following command: `sudo /opt/microsoft/omsagent/bin/service_control restart`. * Wait several minutes and the provisioning state changes to **Provisioning succeeded**.
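A hedged example of what the install step above typically involves on a Linux machine (workspace ID and key are placeholders; the wrapper script is the one published in the OMS-Agent-for-Linux GitHub repository):

```bash
# Download the onboarding wrapper script and run it with your workspace ID and key.
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
sudo sh onboard_agent.sh -w <workspace-id> -s <shared-key>
```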
Perform the following steps to correct the issue.
wget https://github.com/Microsoft/OMS-Agent-for-Linux/releases/download/OMSAgent_GA_v1.4.2-124/omsagent-1.4.2-124.universal.x64.sh ```
-3. Upgrade packages by executing `sudo sh ./omsagent-*.universal.x64.sh --upgrade`.
+3. Upgrade packages by executing `sudo sh ./omsagent-*.universal.x64.sh --upgrade`.
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-sources-syslog.md
You can add a new facility by clicking **Add facility**. For each facility, only
By default, all configuration changes are automatically pushed to all agents. If you want to configure Syslog manually on each Linux agent, then uncheck the box *Apply below configuration to my machines*. ### Configure Syslog on Linux agent
-When the [Log Analytics agent is installed on a Linux client](../vm/quick-collect-linux-computer.md), it installs a default syslog configuration file that defines the facility and severity of the messages that are collected. You can modify this file to change the configuration. The configuration file is different depending on the Syslog daemon that the client has installed.
+When the [Log Analytics agent is installed on a Linux client](../vm/monitor-virtual-machine.md), it installs a default syslog configuration file that defines the facility and severity of the messages that are collected. You can modify this file to change the configuration. The configuration file is different depending on the Syslog daemon that the client has installed.
> [!NOTE] > If you edit the syslog configuration, you must restart the syslog daemon for the changes to take effect.
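For instance, a minimal sketch of adding one facility manually on an rsyslog system (this assumes the default OMS agent configuration file path and local listener port; verify both against your installation):

```bash
# Forward kernel messages of severity warning and above to the local Log Analytics
# agent listener, then restart rsyslog so the change takes effect.
echo 'kern.warning @127.0.0.1:25224' | sudo tee -a /etc/rsyslog.d/95-omsagent.conf
sudo systemctl restart rsyslog
```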
The following table provides different examples of log queries that retrieve Sys
## Next steps * Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions. * Use [Custom Fields](../logs/custom-fields.md) to parse data from syslog records into individual fields.
-* [Configure Linux agents](../vm/quick-collect-linux-computer.md) to collect other types of data.
+* [Configure Linux agents](../vm/monitor-virtual-machine.md) to collect other types of data.
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/log-analytics-agent.md
There are multiple methods to install the Log Analytics agent and connect your m
- [VM insights](../vm/vminsights-enable-overview.md) provides multiple methods enabling agents at scale. This includes installation of the Log Analytics agent and Dependency agent. - [Azure Security Center can provision the Log Analytics agent](../../security-center/security-center-enable-data-collection.md) on all supported Azure VMs and any new ones that are created if you enable it to monitor for security vulnerabilities and threats. - Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) can be installed with the Azure portal, Azure CLI, Azure PowerShell, or a Azure Resource Manager template.-- Install for individual Azure virtual machines [manually from the Azure portal](../vm/quick-collect-azurevm.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+- Install for individual Azure virtual machines [manually from the Azure portal](../vm/monitor-virtual-machine.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
### Windows virtual machine on-premises or in another cloud
There are multiple methods to install the Log Analytics agent and connect your m
### Linux virtual machine on-premises or in another cloud - Use [Azure Arc enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension.-- [Manually install](../vm/quick-collect-linux-computer.md) the agent calling a wrapper-script hosted on GitHub.
+- [Manually install](../vm/monitor-virtual-machine.md) the agent calling a wrapper-script hosted on GitHub.
- Integrate [System Center Operations Manager](./om-agents.md) with Azure Monitor to forward collected data from Windows computers reporting to a management group. ## Workspace ID and key
For example:
* Review [data sources](../agents/agent-data-sources.md) to understand the data sources available to collect data from your Windows or Linux system. * Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
-* Learn about [monitoring solutions](../insights/solutions.md) that add functionality to Azure Monitor and also collect data into the Log Analytics workspace.
+* Learn about [monitoring solutions](../insights/solutions.md) that add functionality to Azure Monitor and also collect data into the Log Analytics workspace.
azure-monitor Custom Data Correlation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/custom-data-correlation.md
Since Application Insights is backed by the powerful Azure Monitor log platform,
In this section, we will review how to get your data into Azure Monitor logs.
-If you don't already have one, provision a new Log Analytics workspace by following [these instructions](../vm/quick-collect-azurevm.md) through and including the "create a workspace" step.
+If you don't already have one, provision a new Log Analytics workspace by following [these instructions](../vm/monitor-virtual-machine.md) through and including the "create a workspace" step.
To start sending log data into Azure Monitor, several options exist:
app('myAI').requests
## Next Steps - Check out the [Data Collector API](../logs/data-collector-api.md) reference.-- For more information on [cross-resource joins](../logs/cross-workspace-query.md).
+- For more information on [cross-resource joins](../logs/cross-workspace-query.md).
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ilogger.md
Title: Application Insights logging with .NET
description: Learn how to use Application Insights with the ILogger interface in .NET. Last updated 05/20/2021- # Application Insights logging with .NET
Host.CreateDefaultBuilder(args)
}); ```
-This preceding code is functionally equivalent to the previous section in *appsettings.json*. For more information, see [Configuration in .NET](/dotnet/core/extensions/configuration).
## Logging scopes
azure-monitor Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/containers.md
Use the following information to install and configure the solution.
2. Install and use Docker with a Log Analytics agent. Based on your operating system and Docker orchestrator, you can use the following methods to configure your agent. - For standalone hosts:
- - On supported Linux operating systems, install and run Docker and then install and configure the [Log Analytics agent for Linux](../vm/quick-collect-linux-computer.md).
+ - On supported Linux operating systems, install and run Docker and then install and configure the [Log Analytics agent for Linux](../vm/monitor-virtual-machine.md).
- On CoreOS, you cannot run the Log Analytics agent for Linux. Instead, you run a containerized version of the Log Analytics agent for Linux. Review Linux container hosts including CoreOS or Azure Government Linux container hosts including CoreOS if you are working with containers in Azure Government Cloud. - On Windows Server 2016 and Windows 10, install the Docker Engine and client then connect an agent to gather information and send it to Azure Monitor. Review [Install and configure Windows container hosts](#install-and-configure-windows-container-hosts) if you have a Windows environment. - For Docker multi-host orchestration:
Use the following information to install and configure the solution.
Review the [Docker Engine on Windows](/virtualization/windowscontainers/manage-docker/configure-docker-daemon) article for additional information about how to install and configure your Docker Engines on computers running Windows. > [!IMPORTANT]
-> Docker must be running **before** you install the [Log Analytics agent for Linux](../vm/quick-collect-linux-computer.md) on your container hosts. If you've already installed the agent before installing Docker, you need to reinstall the Log Analytics agent for Linux. For more information about Docker, see the [Docker website](https://www.docker.com).
+> Docker must be running **before** you install the [Log Analytics agent for Linux](../vm/monitor-virtual-machine.md) on your container hosts. If you've already installed the agent before installing Docker, you need to reinstall the Log Analytics agent for Linux. For more information about Docker, see the [Docker website](https://www.docker.com).
### Install and configure Linux container hosts
sudo docker run --privileged -d -v /var/run/docker.sock:/var/run/docker.sock -v
**Switching from using an installed Linux agent to one in a container**
-If you previously used the directly-installed agent and want to instead use an agent running in a container, you must first remove the Log Analytics agent for Linux. See [Uninstalling the Log Analytics agent for Linux](../vm/quick-collect-linux-computer.md) to understand how to successfully uninstall the agent.
+If you previously used the directly-installed agent and want to instead use an agent running in a container, you must first remove the Log Analytics agent for Linux. See [Uninstalling the Log Analytics agent for Linux](../vm/monitor-virtual-machine.md) to understand how to successfully uninstall the agent.
#### Configure a Log Analytics agent for Docker Swarm
For Docker Swarm, once the secret for Workspace ID and Primary Key is created, u
There are three ways to add the Log Analytics agent to Red Hat OpenShift to start collecting container monitoring data.
-* [Install the Log Analytics agent for Linux](../vm/quick-collect-linux-computer.md) directly on each OpenShift node
-* [Enable Log Analytics VM Extension](../vm/quick-collect-azurevm.md) on each OpenShift node residing in Azure
+* [Install the Log Analytics agent for Linux](../vm/monitor-virtual-machine.md) directly on each OpenShift node
+* [Enable Log Analytics VM Extension](../vm/monitor-virtual-machine.md) on each OpenShift node residing in Azure
* Install the Log Analytics agent as an OpenShift daemon-set In this section we cover the steps required to install the Log Analytics agent as an OpenShift daemon-set.
For more information about the Docker daemon configuration used with Windows Con
#### Install Windows agents
-To enable Windows and Hyper-V container monitoring, install the Microsoft Monitoring Agent (MMA) on Windows computers that are container hosts. For computers running Windows in your on-premises environment, see [Connect Windows computers to Azure Monitor](../agents/agent-windows.md). For virtual machines running in Azure, connect them to Azure Monitor using the [virtual machine extension](../vm/quick-collect-azurevm.md).
+To enable Windows and Hyper-V container monitoring, install the Microsoft Monitoring Agent (MMA) on Windows computers that are container hosts. For computers running Windows in your on-premises environment, see [Connect Windows computers to Azure Monitor](../agents/agent-windows.md). For virtual machines running in Azure, connect them to Azure Monitor using the [virtual machine extension](../vm/monitor-virtual-machine.md).
-You can monitor Windows containers running on Service Fabric. However, only [virtual machines running in Azure](../vm/quick-collect-azurevm.md) and [computers running Windows in your on-premises environment](../agents/agent-windows.md) are currently supported for Service Fabric.
+You can monitor Windows containers running on Service Fabric. However, only [virtual machines running in Azure](../vm/monitor-virtual-machine.md) and [computers running Windows in your on-premises environment](../agents/agent-windows.md) are currently supported for Service Fabric.
You can verify that the Container Monitoring solution is set correctly for Windows. To check whether the management pack was download properly, look for *ContainerManagement.xxx*. The files should be in the C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State\Management Packs folder.
The Container Monitoring solution collects various performance metrics and log d
Data is collected every three minutes by the following agent types. -- [Log Analytics agent for Linux](../vm/quick-collect-linux-computer.md)
+- [Log Analytics agent for Linux](../vm/monitor-virtual-machine.md)
- [Windows agent](../agents/agent-windows.md)-- [Log Analytics VM extension](../vm/quick-collect-azurevm.md)
+- [Log Analytics VM extension](../vm/monitor-virtual-machine.md)
### Container records
After you create a query that you find useful, save it by clicking **Favorites**
## Next steps
-[Query logs](../logs/log-query-overview.md) to view detailed container data records.
-
+[Query logs](../logs/log-query-overview.md) to view detailed container data records.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 07/19/2021 Last updated : 08/04/2021
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
> [!IMPORTANT] > This latest update adds a new column and reorders the metrics alphabetically. The additional information means that the tables below may have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you believe you are missing information, use the scroll bar to see the entirety of the table. + ## microsoft.aadiam/azureADMetrics |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|jvm.memory.used|Yes|jvm.memory.used|Bytes|Average|App Memory Used in bytes|Deployment, AppName, Pod| |loh-size|Yes|loh-size|Bytes|Average|LOH Heap Size|Deployment, AppName, Pod| |monitor-lock-contention-count|Yes|monitor-lock-contention-count|Count|Average|Number of times there were contention when trying to take the monitor lock|Deployment, AppName, Pod|
+|PodCpuUsage|Yes|App CPU Usage|Percent|Average|The recent CPU usage for the app|Deployment, AppName, Pod|
+|PodMemoryUsage|Yes|App Memory Usage|Percent|Average|The recent Memory usage for the app|Deployment, AppName, Pod|
|process.cpu.usage|Yes|process.cpu.usage|Percent|Average|The recent CPU usage for the JVM process|Deployment, AppName, Pod| |requests-per-second|Yes|requests-rate|Count|Average|Request rate|Deployment, AppName, Pod| |system.cpu.usage|Yes|system.cpu.usage|Percent|Average|The recent CPU usage for the whole system|Deployment, AppName, Pod|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|total-requests|Yes|total-requests|Count|Average|Total number of requests in the lifetime of the process|Deployment, AppName, Pod| |working-set|Yes|working-set|Count|Average|Amount of working set used by the process (MB)|Deployment, AppName, Pod| - ## Microsoft.Automation/automationAccounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|allcachehits|Yes|Cache Hits (Instance Based)|Count|Total||ShardId, Port, Primary|
-|allcachemisses|Yes|Cache Misses (Instance Based)|Count|Total||ShardId, Port, Primary|
-|allcacheRead|Yes|Cache Read (Instance Based)|BytesPerSecond|Maximum||ShardId, Port, Primary|
-|allcacheWrite|Yes|Cache Write (Instance Based)|BytesPerSecond|Maximum||ShardId, Port, Primary|
-|allconnectedclients|Yes|Connected Clients (Instance Based)|Count|Maximum||ShardId, Port, Primary|
-|allevictedkeys|Yes|Evicted Keys (Instance Based)|Count|Total||ShardId, Port, Primary|
-|allexpiredkeys|Yes|Expired Keys (Instance Based)|Count|Total||ShardId, Port, Primary|
-|allgetcommands|Yes|Gets (Instance Based)|Count|Total||ShardId, Port, Primary|
-|alloperationsPerSecond|Yes|Operations Per Second (Instance Based)|Count|Maximum||ShardId, Port, Primary|
-|allpercentprocessortime|Yes|CPU (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
-|allserverLoad|Yes|Server Load (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
-|allsetcommands|Yes|Sets (Instance Based)|Count|Total||ShardId, Port, Primary|
-|alltotalcommandsprocessed|Yes|Total Operations (Instance Based)|Count|Total||ShardId, Port, Primary|
-|alltotalkeys|Yes|Total Keys (Instance Based)|Count|Maximum||ShardId, Port, Primary|
-|allusedmemory|Yes|Used Memory (Instance Based)|Bytes|Maximum||ShardId, Port, Primary|
-|allusedmemorypercentage|Yes|Used Memory Percentage (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
-|allusedmemoryRss|Yes|Used Memory RSS (Instance Based)|Bytes|Maximum||ShardId, Port, Primary|
-|cachehits|Yes|Cache Hits|Count|Total||ShardId|
-|cachehits0|Yes|Cache Hits (Shard 0)|Count|Total||No Dimensions|
-|cachehits1|Yes|Cache Hits (Shard 1)|Count|Total||No Dimensions|
-|cachehits2|Yes|Cache Hits (Shard 2)|Count|Total||No Dimensions|
-|cachehits3|Yes|Cache Hits (Shard 3)|Count|Total||No Dimensions|
-|cachehits4|Yes|Cache Hits (Shard 4)|Count|Total||No Dimensions|
-|cachehits5|Yes|Cache Hits (Shard 5)|Count|Total||No Dimensions|
-|cachehits6|Yes|Cache Hits (Shard 6)|Count|Total||No Dimensions|
-|cachehits7|Yes|Cache Hits (Shard 7)|Count|Total||No Dimensions|
-|cachehits8|Yes|Cache Hits (Shard 8)|Count|Total||No Dimensions|
-|cachehits9|Yes|Cache Hits (Shard 9)|Count|Total||No Dimensions|
-|cacheLatency|Yes|Cache Latency Microseconds (Preview)|Count|Average||ShardId|
-|cachemisses|Yes|Cache Misses|Count|Total||ShardId|
-|cachemisses0|Yes|Cache Misses (Shard 0)|Count|Total||No Dimensions|
-|cachemisses1|Yes|Cache Misses (Shard 1)|Count|Total||No Dimensions|
-|cachemisses2|Yes|Cache Misses (Shard 2)|Count|Total||No Dimensions|
-|cachemisses3|Yes|Cache Misses (Shard 3)|Count|Total||No Dimensions|
-|cachemisses4|Yes|Cache Misses (Shard 4)|Count|Total||No Dimensions|
-|cachemisses5|Yes|Cache Misses (Shard 5)|Count|Total||No Dimensions|
-|cachemisses6|Yes|Cache Misses (Shard 6)|Count|Total||No Dimensions|
-|cachemisses7|Yes|Cache Misses (Shard 7)|Count|Total||No Dimensions|
-|cachemisses8|Yes|Cache Misses (Shard 8)|Count|Total||No Dimensions|
-|cachemisses9|Yes|Cache Misses (Shard 9)|Count|Total||No Dimensions|
-|cachemissrate|Yes|Cache Miss Rate|Percent|cachemissrate||ShardId|
-|cacheRead|Yes|Cache Read|BytesPerSecond|Maximum||ShardId|
-|cacheRead0|Yes|Cache Read (Shard 0)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead1|Yes|Cache Read (Shard 1)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead2|Yes|Cache Read (Shard 2)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead3|Yes|Cache Read (Shard 3)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead4|Yes|Cache Read (Shard 4)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead5|Yes|Cache Read (Shard 5)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead6|Yes|Cache Read (Shard 6)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead7|Yes|Cache Read (Shard 7)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead8|Yes|Cache Read (Shard 8)|BytesPerSecond|Maximum||No Dimensions|
-|cacheRead9|Yes|Cache Read (Shard 9)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum||ShardId|
-|cacheWrite0|Yes|Cache Write (Shard 0)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite1|Yes|Cache Write (Shard 1)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite2|Yes|Cache Write (Shard 2)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite3|Yes|Cache Write (Shard 3)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite4|Yes|Cache Write (Shard 4)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite5|Yes|Cache Write (Shard 5)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite6|Yes|Cache Write (Shard 6)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite7|Yes|Cache Write (Shard 7)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite8|Yes|Cache Write (Shard 8)|BytesPerSecond|Maximum||No Dimensions|
-|cacheWrite9|Yes|Cache Write (Shard 9)|BytesPerSecond|Maximum||No Dimensions|
-|connectedclients|Yes|Connected Clients|Count|Maximum||ShardId|
-|connectedclients0|Yes|Connected Clients (Shard 0)|Count|Maximum||No Dimensions|
-|connectedclients1|Yes|Connected Clients (Shard 1)|Count|Maximum||No Dimensions|
-|connectedclients2|Yes|Connected Clients (Shard 2)|Count|Maximum||No Dimensions|
-|connectedclients3|Yes|Connected Clients (Shard 3)|Count|Maximum||No Dimensions|
-|connectedclients4|Yes|Connected Clients (Shard 4)|Count|Maximum||No Dimensions|
-|connectedclients5|Yes|Connected Clients (Shard 5)|Count|Maximum||No Dimensions|
-|connectedclients6|Yes|Connected Clients (Shard 6)|Count|Maximum||No Dimensions|
-|connectedclients7|Yes|Connected Clients (Shard 7)|Count|Maximum||No Dimensions|
-|connectedclients8|Yes|Connected Clients (Shard 8)|Count|Maximum||No Dimensions|
-|connectedclients9|Yes|Connected Clients (Shard 9)|Count|Maximum||No Dimensions|
-|errors|Yes|Errors|Count|Maximum||ShardId, ErrorType|
-|evictedkeys|Yes|Evicted Keys|Count|Total||ShardId|
-|evictedkeys0|Yes|Evicted Keys (Shard 0)|Count|Total||No Dimensions|
-|evictedkeys1|Yes|Evicted Keys (Shard 1)|Count|Total||No Dimensions|
-|evictedkeys2|Yes|Evicted Keys (Shard 2)|Count|Total||No Dimensions|
-|evictedkeys3|Yes|Evicted Keys (Shard 3)|Count|Total||No Dimensions|
-|evictedkeys4|Yes|Evicted Keys (Shard 4)|Count|Total||No Dimensions|
-|evictedkeys5|Yes|Evicted Keys (Shard 5)|Count|Total||No Dimensions|
-|evictedkeys6|Yes|Evicted Keys (Shard 6)|Count|Total||No Dimensions|
-|evictedkeys7|Yes|Evicted Keys (Shard 7)|Count|Total||No Dimensions|
-|evictedkeys8|Yes|Evicted Keys (Shard 8)|Count|Total||No Dimensions|
-|evictedkeys9|Yes|Evicted Keys (Shard 9)|Count|Total||No Dimensions|
-|expiredkeys|Yes|Expired Keys|Count|Total||ShardId|
-|expiredkeys0|Yes|Expired Keys (Shard 0)|Count|Total||No Dimensions|
-|expiredkeys1|Yes|Expired Keys (Shard 1)|Count|Total||No Dimensions|
-|expiredkeys2|Yes|Expired Keys (Shard 2)|Count|Total||No Dimensions|
-|expiredkeys3|Yes|Expired Keys (Shard 3)|Count|Total||No Dimensions|
-|expiredkeys4|Yes|Expired Keys (Shard 4)|Count|Total||No Dimensions|
-|expiredkeys5|Yes|Expired Keys (Shard 5)|Count|Total||No Dimensions|
-|expiredkeys6|Yes|Expired Keys (Shard 6)|Count|Total||No Dimensions|
-|expiredkeys7|Yes|Expired Keys (Shard 7)|Count|Total||No Dimensions|
-|expiredkeys8|Yes|Expired Keys (Shard 8)|Count|Total||No Dimensions|
-|expiredkeys9|Yes|Expired Keys (Shard 9)|Count|Total||No Dimensions|
-|getcommands|Yes|Gets|Count|Total||ShardId|
-|getcommands0|Yes|Gets (Shard 0)|Count|Total||No Dimensions|
-|getcommands1|Yes|Gets (Shard 1)|Count|Total||No Dimensions|
-|getcommands2|Yes|Gets (Shard 2)|Count|Total||No Dimensions|
-|getcommands3|Yes|Gets (Shard 3)|Count|Total||No Dimensions|
-|getcommands4|Yes|Gets (Shard 4)|Count|Total||No Dimensions|
-|getcommands5|Yes|Gets (Shard 5)|Count|Total||No Dimensions|
-|getcommands6|Yes|Gets (Shard 6)|Count|Total||No Dimensions|
-|getcommands7|Yes|Gets (Shard 7)|Count|Total||No Dimensions|
-|getcommands8|Yes|Gets (Shard 8)|Count|Total||No Dimensions|
-|getcommands9|Yes|Gets (Shard 9)|Count|Total||No Dimensions|
-|operationsPerSecond|Yes|Operations Per Second|Count|Maximum||ShardId|
-|operationsPerSecond0|Yes|Operations Per Second (Shard 0)|Count|Maximum||No Dimensions|
-|operationsPerSecond1|Yes|Operations Per Second (Shard 1)|Count|Maximum||No Dimensions|
-|operationsPerSecond2|Yes|Operations Per Second (Shard 2)|Count|Maximum||No Dimensions|
-|operationsPerSecond3|Yes|Operations Per Second (Shard 3)|Count|Maximum||No Dimensions|
-|operationsPerSecond4|Yes|Operations Per Second (Shard 4)|Count|Maximum||No Dimensions|
-|operationsPerSecond5|Yes|Operations Per Second (Shard 5)|Count|Maximum||No Dimensions|
-|operationsPerSecond6|Yes|Operations Per Second (Shard 6)|Count|Maximum||No Dimensions|
-|operationsPerSecond7|Yes|Operations Per Second (Shard 7)|Count|Maximum||No Dimensions|
-|operationsPerSecond8|Yes|Operations Per Second (Shard 8)|Count|Maximum||No Dimensions|
-|operationsPerSecond9|Yes|Operations Per Second (Shard 9)|Count|Maximum||No Dimensions|
-|percentProcessorTime|Yes|CPU|Percent|Maximum||ShardId|
-|percentProcessorTime0|Yes|CPU (Shard 0)|Percent|Maximum||No Dimensions|
-|percentProcessorTime1|Yes|CPU (Shard 1)|Percent|Maximum||No Dimensions|
-|percentProcessorTime2|Yes|CPU (Shard 2)|Percent|Maximum||No Dimensions|
-|percentProcessorTime3|Yes|CPU (Shard 3)|Percent|Maximum||No Dimensions|
-|percentProcessorTime4|Yes|CPU (Shard 4)|Percent|Maximum||No Dimensions|
-|percentProcessorTime5|Yes|CPU (Shard 5)|Percent|Maximum||No Dimensions|
-|percentProcessorTime6|Yes|CPU (Shard 6)|Percent|Maximum||No Dimensions|
-|percentProcessorTime7|Yes|CPU (Shard 7)|Percent|Maximum||No Dimensions|
-|percentProcessorTime8|Yes|CPU (Shard 8)|Percent|Maximum||No Dimensions|
-|percentProcessorTime9|Yes|CPU (Shard 9)|Percent|Maximum||No Dimensions|
-|serverLoad|Yes|Server Load|Percent|Maximum||ShardId|
-|serverLoad0|Yes|Server Load (Shard 0)|Percent|Maximum||No Dimensions|
-|serverLoad1|Yes|Server Load (Shard 1)|Percent|Maximum||No Dimensions|
-|serverLoad2|Yes|Server Load (Shard 2)|Percent|Maximum||No Dimensions|
-|serverLoad3|Yes|Server Load (Shard 3)|Percent|Maximum||No Dimensions|
-|serverLoad4|Yes|Server Load (Shard 4)|Percent|Maximum||No Dimensions|
-|serverLoad5|Yes|Server Load (Shard 5)|Percent|Maximum||No Dimensions|
-|serverLoad6|Yes|Server Load (Shard 6)|Percent|Maximum||No Dimensions|
-|serverLoad7|Yes|Server Load (Shard 7)|Percent|Maximum||No Dimensions|
-|serverLoad8|Yes|Server Load (Shard 8)|Percent|Maximum||No Dimensions|
-|serverLoad9|Yes|Server Load (Shard 9)|Percent|Maximum||No Dimensions|
-|setcommands|Yes|Sets|Count|Total||ShardId|
-|setcommands0|Yes|Sets (Shard 0)|Count|Total||No Dimensions|
-|setcommands1|Yes|Sets (Shard 1)|Count|Total||No Dimensions|
-|setcommands2|Yes|Sets (Shard 2)|Count|Total||No Dimensions|
-|setcommands3|Yes|Sets (Shard 3)|Count|Total||No Dimensions|
-|setcommands4|Yes|Sets (Shard 4)|Count|Total||No Dimensions|
-|setcommands5|Yes|Sets (Shard 5)|Count|Total||No Dimensions|
-|setcommands6|Yes|Sets (Shard 6)|Count|Total||No Dimensions|
-|setcommands7|Yes|Sets (Shard 7)|Count|Total||No Dimensions|
-|setcommands8|Yes|Sets (Shard 8)|Count|Total||No Dimensions|
-|setcommands9|Yes|Sets (Shard 9)|Count|Total||No Dimensions|
-|totalcommandsprocessed|Yes|Total Operations|Count|Total||ShardId|
-|totalcommandsprocessed0|Yes|Total Operations (Shard 0)|Count|Total||No Dimensions|
-|totalcommandsprocessed1|Yes|Total Operations (Shard 1)|Count|Total||No Dimensions|
-|totalcommandsprocessed2|Yes|Total Operations (Shard 2)|Count|Total||No Dimensions|
-|totalcommandsprocessed3|Yes|Total Operations (Shard 3)|Count|Total||No Dimensions|
-|totalcommandsprocessed4|Yes|Total Operations (Shard 4)|Count|Total||No Dimensions|
-|totalcommandsprocessed5|Yes|Total Operations (Shard 5)|Count|Total||No Dimensions|
-|totalcommandsprocessed6|Yes|Total Operations (Shard 6)|Count|Total||No Dimensions|
-|totalcommandsprocessed7|Yes|Total Operations (Shard 7)|Count|Total||No Dimensions|
-|totalcommandsprocessed8|Yes|Total Operations (Shard 8)|Count|Total||No Dimensions|
-|totalcommandsprocessed9|Yes|Total Operations (Shard 9)|Count|Total||No Dimensions|
-|totalkeys|Yes|Total Keys|Count|Maximum||ShardId|
-|totalkeys0|Yes|Total Keys (Shard 0)|Count|Maximum||No Dimensions|
-|totalkeys1|Yes|Total Keys (Shard 1)|Count|Maximum||No Dimensions|
-|totalkeys2|Yes|Total Keys (Shard 2)|Count|Maximum||No Dimensions|
-|totalkeys3|Yes|Total Keys (Shard 3)|Count|Maximum||No Dimensions|
-|totalkeys4|Yes|Total Keys (Shard 4)|Count|Maximum||No Dimensions|
-|totalkeys5|Yes|Total Keys (Shard 5)|Count|Maximum||No Dimensions|
-|totalkeys6|Yes|Total Keys (Shard 6)|Count|Maximum||No Dimensions|
-|totalkeys7|Yes|Total Keys (Shard 7)|Count|Maximum||No Dimensions|
-|totalkeys8|Yes|Total Keys (Shard 8)|Count|Maximum||No Dimensions|
-|totalkeys9|Yes|Total Keys (Shard 9)|Count|Maximum||No Dimensions|
-|usedmemory|Yes|Used Memory|Bytes|Maximum||ShardId|
-|usedmemory0|Yes|Used Memory (Shard 0)|Bytes|Maximum||No Dimensions|
-|usedmemory1|Yes|Used Memory (Shard 1)|Bytes|Maximum||No Dimensions|
-|usedmemory2|Yes|Used Memory (Shard 2)|Bytes|Maximum||No Dimensions|
-|usedmemory3|Yes|Used Memory (Shard 3)|Bytes|Maximum||No Dimensions|
-|usedmemory4|Yes|Used Memory (Shard 4)|Bytes|Maximum||No Dimensions|
-|usedmemory5|Yes|Used Memory (Shard 5)|Bytes|Maximum||No Dimensions|
-|usedmemory6|Yes|Used Memory (Shard 6)|Bytes|Maximum||No Dimensions|
-|usedmemory7|Yes|Used Memory (Shard 7)|Bytes|Maximum||No Dimensions|
-|usedmemory8|Yes|Used Memory (Shard 8)|Bytes|Maximum||No Dimensions|
-|usedmemory9|Yes|Used Memory (Shard 9)|Bytes|Maximum||No Dimensions|
-|usedmemorypercentage|Yes|Used Memory Percentage|Percent|Maximum||ShardId|
-|usedmemoryRss|Yes|Used Memory RSS|Bytes|Maximum||ShardId|
-|usedmemoryRss0|Yes|Used Memory RSS (Shard 0)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss1|Yes|Used Memory RSS (Shard 1)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss2|Yes|Used Memory RSS (Shard 2)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss3|Yes|Used Memory RSS (Shard 3)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss4|Yes|Used Memory RSS (Shard 4)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss5|Yes|Used Memory RSS (Shard 5)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss6|Yes|Used Memory RSS (Shard 6)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss7|Yes|Used Memory RSS (Shard 7)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum||No Dimensions|
-|usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum||No Dimensions|
+|allcachehits|Yes|Cache Hits (Instance Based)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allcachemisses|Yes|Cache Misses (Instance Based)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allcacheRead|Yes|Cache Read (Instance Based)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allcacheWrite|Yes|Cache Write (Instance Based)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allconnectedclients|Yes|Connected Clients (Instance Based)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allevictedkeys|Yes|Evicted Keys (Instance Based)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allexpiredkeys|Yes|Expired Keys (Instance Based)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allgetcommands|Yes|Gets (Instance Based)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|alloperationsPerSecond|Yes|Operations Per Second (Instance Based)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allpercentprocessortime|Yes|CPU (Instance Based)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allserverLoad|Yes|Server Load (Instance Based)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allsetcommands|Yes|Sets (Instance Based)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|alltotalcommandsprocessed|Yes|Total Operations (Instance Based)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|alltotalkeys|Yes|Total Keys (Instance Based)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allusedmemory|Yes|Used Memory (Instance Based)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allusedmemorypercentage|Yes|Used Memory Percentage (Instance Based)|Percent|Maximum|The percentage of cache memory used for key/value pairs. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allusedmemoryRss|Yes|Used Memory RSS (Instance Based)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|cachehits|Yes|Cache Hits|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|cachehits0|Yes|Cache Hits (Shard 0)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits1|Yes|Cache Hits (Shard 1)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits2|Yes|Cache Hits (Shard 2)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits3|Yes|Cache Hits (Shard 3)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits4|Yes|Cache Hits (Shard 4)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits5|Yes|Cache Hits (Shard 5)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits6|Yes|Cache Hits (Shard 6)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits7|Yes|Cache Hits (Shard 7)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits8|Yes|Cache Hits (Shard 8)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachehits9|Yes|Cache Hits (Shard 9)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheLatency|Yes|Cache Latency Microseconds (Preview)|Count|Average|The latency to the cache in microseconds. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|cachemisses|Yes|Cache Misses|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|cachemisses0|Yes|Cache Misses (Shard 0)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses1|Yes|Cache Misses (Shard 1)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses2|Yes|Cache Misses (Shard 2)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses3|Yes|Cache Misses (Shard 3)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses4|Yes|Cache Misses (Shard 4)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses5|Yes|Cache Misses (Shard 5)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses6|Yes|Cache Misses (Shard 6)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses7|Yes|Cache Misses (Shard 7)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses8|Yes|Cache Misses (Shard 8)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemisses9|Yes|Cache Misses (Shard 9)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cachemissrate|Yes|Cache Miss Rate|Percent|cachemissrate|The % of get requests that miss. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|cacheRead|Yes|Cache Read|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId|
+|cacheRead0|Yes|Cache Read (Shard 0)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead1|Yes|Cache Read (Shard 1)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead2|Yes|Cache Read (Shard 2)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead3|Yes|Cache Read (Shard 3)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead4|Yes|Cache Read (Shard 4)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead5|Yes|Cache Read (Shard 5)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead6|Yes|Cache Read (Shard 6)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead7|Yes|Cache Read (Shard 7)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead8|Yes|Cache Read (Shard 8)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheRead9|Yes|Cache Read (Shard 9)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId|
+|cacheWrite0|Yes|Cache Write (Shard 0)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite1|Yes|Cache Write (Shard 1)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite2|Yes|Cache Write (Shard 2)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite3|Yes|Cache Write (Shard 3)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite4|Yes|Cache Write (Shard 4)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite5|Yes|Cache Write (Shard 5)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite6|Yes|Cache Write (Shard 6)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite7|Yes|Cache Write (Shard 7)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite8|Yes|Cache Write (Shard 8)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|cacheWrite9|Yes|Cache Write (Shard 9)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients|Yes|Connected Clients|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|connectedclients0|Yes|Connected Clients (Shard 0)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients1|Yes|Connected Clients (Shard 1)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients2|Yes|Connected Clients (Shard 2)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients3|Yes|Connected Clients (Shard 3)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients4|Yes|Connected Clients (Shard 4)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients5|Yes|Connected Clients (Shard 5)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients6|Yes|Connected Clients (Shard 6)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients7|Yes|Connected Clients (Shard 7)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients8|Yes|Connected Clients (Shard 8)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|connectedclients9|Yes|Connected Clients (Shard 9)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|errors|Yes|Errors|Count|Maximum|The number of errors that occurred on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, ErrorType|
+|evictedkeys|Yes|Evicted Keys|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|evictedkeys0|Yes|Evicted Keys (Shard 0)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys1|Yes|Evicted Keys (Shard 1)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys2|Yes|Evicted Keys (Shard 2)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys3|Yes|Evicted Keys (Shard 3)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys4|Yes|Evicted Keys (Shard 4)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys5|Yes|Evicted Keys (Shard 5)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys6|Yes|Evicted Keys (Shard 6)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys7|Yes|Evicted Keys (Shard 7)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys8|Yes|Evicted Keys (Shard 8)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|evictedkeys9|Yes|Evicted Keys (Shard 9)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys|Yes|Expired Keys|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|expiredkeys0|Yes|Expired Keys (Shard 0)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys1|Yes|Expired Keys (Shard 1)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys2|Yes|Expired Keys (Shard 2)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys3|Yes|Expired Keys (Shard 3)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys4|Yes|Expired Keys (Shard 4)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys5|Yes|Expired Keys (Shard 5)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys6|Yes|Expired Keys (Shard 6)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys7|Yes|Expired Keys (Shard 7)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys8|Yes|Expired Keys (Shard 8)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|expiredkeys9|Yes|Expired Keys (Shard 9)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands|Yes|Gets|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|getcommands0|Yes|Gets (Shard 0)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands1|Yes|Gets (Shard 1)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands2|Yes|Gets (Shard 2)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands3|Yes|Gets (Shard 3)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands4|Yes|Gets (Shard 4)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands5|Yes|Gets (Shard 5)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands6|Yes|Gets (Shard 6)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands7|Yes|Gets (Shard 7)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands8|Yes|Gets (Shard 8)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|getcommands9|Yes|Gets (Shard 9)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond|Yes|Operations Per Second|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|operationsPerSecond0|Yes|Operations Per Second (Shard 0)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond1|Yes|Operations Per Second (Shard 1)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond2|Yes|Operations Per Second (Shard 2)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond3|Yes|Operations Per Second (Shard 3)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond4|Yes|Operations Per Second (Shard 4)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond5|Yes|Operations Per Second (Shard 5)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond6|Yes|Operations Per Second (Shard 6)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond7|Yes|Operations Per Second (Shard 7)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond8|Yes|Operations Per Second (Shard 8)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|operationsPerSecond9|Yes|Operations Per Second (Shard 9)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime|Yes|CPU|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|percentProcessorTime0|Yes|CPU (Shard 0)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime1|Yes|CPU (Shard 1)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime2|Yes|CPU (Shard 2)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime3|Yes|CPU (Shard 3)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime4|Yes|CPU (Shard 4)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime5|Yes|CPU (Shard 5)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime6|Yes|CPU (Shard 6)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime7|Yes|CPU (Shard 7)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime8|Yes|CPU (Shard 8)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|percentProcessorTime9|Yes|CPU (Shard 9)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad|Yes|Server Load|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|serverLoad0|Yes|Server Load (Shard 0)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad1|Yes|Server Load (Shard 1)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad2|Yes|Server Load (Shard 2)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad3|Yes|Server Load (Shard 3)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad4|Yes|Server Load (Shard 4)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad5|Yes|Server Load (Shard 5)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad6|Yes|Server Load (Shard 6)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad7|Yes|Server Load (Shard 7)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad8|Yes|Server Load (Shard 8)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|serverLoad9|Yes|Server Load (Shard 9)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands|Yes|Sets|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|setcommands0|Yes|Sets (Shard 0)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands1|Yes|Sets (Shard 1)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands2|Yes|Sets (Shard 2)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands3|Yes|Sets (Shard 3)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands4|Yes|Sets (Shard 4)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands5|Yes|Sets (Shard 5)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands6|Yes|Sets (Shard 6)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands7|Yes|Sets (Shard 7)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands8|Yes|Sets (Shard 8)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|setcommands9|Yes|Sets (Shard 9)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed|Yes|Total Operations|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|totalcommandsprocessed0|Yes|Total Operations (Shard 0)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed1|Yes|Total Operations (Shard 1)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed2|Yes|Total Operations (Shard 2)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed3|Yes|Total Operations (Shard 3)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed4|Yes|Total Operations (Shard 4)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed5|Yes|Total Operations (Shard 5)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed6|Yes|Total Operations (Shard 6)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed7|Yes|Total Operations (Shard 7)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed8|Yes|Total Operations (Shard 8)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalcommandsprocessed9|Yes|Total Operations (Shard 9)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys|Yes|Total Keys|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|totalkeys0|Yes|Total Keys (Shard 0)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys1|Yes|Total Keys (Shard 1)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys2|Yes|Total Keys (Shard 2)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys3|Yes|Total Keys (Shard 3)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys4|Yes|Total Keys (Shard 4)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys5|Yes|Total Keys (Shard 5)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys6|Yes|Total Keys (Shard 6)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys7|Yes|Total Keys (Shard 7)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys8|Yes|Total Keys (Shard 8)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|totalkeys9|Yes|Total Keys (Shard 9)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory|Yes|Used Memory|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|usedmemory0|Yes|Used Memory (Shard 0)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory1|Yes|Used Memory (Shard 1)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory2|Yes|Used Memory (Shard 2)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory3|Yes|Used Memory (Shard 3)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory4|Yes|Used Memory (Shard 4)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory5|Yes|Used Memory (Shard 5)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory6|Yes|Used Memory (Shard 6)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory7|Yes|Used Memory (Shard 7)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory8|Yes|Used Memory (Shard 8)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemory9|Yes|Used Memory (Shard 9)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemorypercentage|Yes|Used Memory Percentage|Percent|Maximum|The percentage of cache memory used for key/value pairs. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|usedmemoryRss|Yes|Used Memory RSS|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|ShardId|
+|usedmemoryRss0|Yes|Used Memory RSS (Shard 0)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss1|Yes|Used Memory RSS (Shard 1)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss2|Yes|Used Memory RSS (Shard 2)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss3|Yes|Used Memory RSS (Shard 3)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss4|Yes|Used Memory RSS (Shard 4)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss5|Yes|Used Memory RSS (Shard 5)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss6|Yes|Used Memory RSS (Shard 6)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss7|Yes|Used Memory RSS (Shard 7)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
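Since the table above lists only metric names, units, and aggregations, a short retrieval sketch may help put them in context. This is a minimal example assuming the azure-monitor-query and azure-identity Python packages and a placeholder Azure Cache for Redis resource ID; it is not taken from the article itself.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Hypothetical resource ID for an Azure Cache for Redis instance; replace the placeholders.
RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Cache/Redis/<cache-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# cachehits and cachemisses use the Total aggregation, per the table above.
response = client.query_resource(
    RESOURCE_ID,
    metric_names=["cachehits", "cachemisses"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total)
```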
## Microsoft.Cache/redisEnterprise
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication|
-|BlobCapacity|No|Blob Capacity|Bytes|Average|The amount of storage used by the storage account’s Blob service in bytes.|BlobType, Tier|
-|BlobCount|No|Blob Count|Count|Average|The number of Blob in the storage account’s Blob service.|BlobType, Tier|
-|ContainerCount|Yes|Blob Container Count|Count|Average|The number of containers in the storage account’s Blob service.|No Dimensions|
+|BlobCapacity|No|Blob Capacity|Bytes|Average|The amount of storage used by the storage account's Blob service in bytes.|BlobType, Tier|
+|BlobCount|No|Blob Count|Count|Average|The number of blobs in the storage account's Blob service.|BlobType, Tier|
+|ContainerCount|Yes|Blob Container Count|Count|Average|The number of containers in the storage account's Blob service.|No Dimensions|
|Egress|Yes|Egress|Bytes|Total|The amount of egress data, in bytes. This number includes egress from an external client into Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication| |IndexCapacity|No|Index Capacity|Bytes|Average|The amount of storage used by ADLS Gen2 (Hierarchical) Index in bytes.|No Dimensions| |Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|||||||| |Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication, FileShare| |Egress|Yes|Egress|Bytes|Total|The amount of egress data, in bytes. This number includes egress from an external client into Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication, FileShare|
-|FileCapacity|No|File Capacity|Bytes|Average|The amount of storage used by the storage account’s File service in bytes.|FileShare|
-|FileCount|No|File Count|Count|Average|The number of file in the storage account’s File service.|FileShare|
-|FileShareCount|No|File Share Count|Count|Average|The number of file shares in the storage account’s File service.|No Dimensions|
+|FileCapacity|No|File Capacity|Bytes|Average|The amount of storage used by the storage account's File service in bytes.|FileShare|
+|FileCount|No|File Count|Count|Average|The number of files in the storage account's File service.|FileShare|
+|FileShareCount|No|File Share Count|Count|Average|The number of file shares in the storage account's File service.|No Dimensions|
|FileShareQuota|No|File share quota size|Bytes|Average|The upper limit on the amount of storage that can be used by Azure Files Service in bytes.|FileShare|
-|FileShareSnapshotCount|No|File Share Snapshot Count|Count|Average|The number of snapshots present on the share in storage account’s Files Service.|FileShare|
-|FileShareSnapshotSize|No|File Share Snapshot Size|Bytes|Average|The amount of storage used by the snapshots in storage account’s File service in bytes.|FileShare|
+|FileShareSnapshotCount|No|File Share Snapshot Count|Count|Average|The number of snapshots present on the share in storage account's Files Service.|FileShare|
+|FileShareSnapshotSize|No|File Share Snapshot Size|Bytes|Average|The amount of storage used by the snapshots in storage account's File service in bytes.|FileShare|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication, FileShare| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication, FileShare| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication, FileShare|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication| |Egress|Yes|Egress|Bytes|Total|The amount of egress data, in bytes. This number includes egress from an external client into Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication| |Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
-|QueueCapacity|Yes|Queue Capacity|Bytes|Average|The amount of storage used by the storage account’s Queue service in bytes.|No Dimensions|
-|QueueCount|Yes|Queue Count|Count|Average|The number of queue in the storage account’s Queue service.|No Dimensions|
-|QueueMessageCount|Yes|Queue Message Count|Count|Average|The approximate number of queue messages in the storage account’s Queue service.|No Dimensions|
+|QueueCapacity|Yes|Queue Capacity|Bytes|Average|The amount of storage used by the storage account's Queue service in bytes.|No Dimensions|
+|QueueCount|Yes|Queue Count|Count|Average|The number of queue in the storage account's Queue service.|No Dimensions|
+|QueueMessageCount|Yes|Queue Message Count|Count|Average|The approximate number of queue messages in the storage account's Queue service.|No Dimensions|
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication| |Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|TableCapacity|Yes|Table Capacity|Bytes|Average|The amount of storage used by the storage account’s Table service in bytes.|No Dimensions|
-|TableCount|Yes|Table Count|Count|Average|The number of table in the storage account’s Table service.|No Dimensions|
-|TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account’s Table service.|No Dimensions|
+|TableCapacity|Yes|Table Capacity|Bytes|Average|The amount of storage used by the storage account's Table service in bytes.|No Dimensions|
+|TableCount|Yes|Table Count|Count|Average|The number of table in the storage account's Table service.|No Dimensions|
+|TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account's Table service.|No Dimensions|
|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication|
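The Transactions metric above can be split by its ResponseType dimension to see how many requests succeeded, were throttled, and so on. A hedged sketch of reading it with the `azure-monitor-query` Python package; the resource ID, subscription, and resource group are placeholders, and the package and credential setup are assumptions of this example rather than anything stated in the table:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID for a storage account; substitute real values.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<account-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Request Transactions totals over the last hour, split by the ResponseType dimension.
result = client.query_resource(
    resource_id,
    metric_names=["Transactions"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.TOTAL],
    filter="ResponseType eq '*'",
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            # metadata_values holds the dimension values (here, the ResponseType).
            print(series.metadata_values, point.timestamp, point.total)
```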
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Provisioned|Yes|Provisioned|Count|Count|Resources that are provisioned|PoolId, SKU, Images, ProviderName|
-|Ready|Yes|Ready|Percent|Average|Resources that are ready to be used|PoolId, SKU, Images, ProviderName|
-|TotalDurationMs|Yes|TotalDurationMs|Milliseconds|Average|Average time to complete requests (ms)|PoolId, Type, ResourceRequestType, Image|
+|Allocated|Yes|Allocated|Count|Average|Resources that are allocated|PoolId, SKU, Images, ProviderName|
+|AllocationDurationMs|Yes|AllocationDurationMs|Milliseconds|Average|Average time to allocate requests (ms)|PoolId, Type, ResourceRequestType, Image|
+|Count|Yes|Count|Count|Count|Number of requests in last dump|RequestType, Status, PoolId, Type, ErrorCode, FailureStage|
+|NotReady|Yes|NotReady|Count|Average|Resources that are not ready to be used|PoolId, SKU, Images, ProviderName|
+|PendingReimage|Yes|PendingReimage|Count|Average|Resources that are pending reimage|PoolId, SKU, Images, ProviderName|
+|PendingReturn|Yes|PendingReturn|Count|Average|Resources that are pending return|PoolId, SKU, Images, ProviderName|
+|Provisioned|Yes|Provisioned|Count|Average|Resources that are provisioned|PoolId, SKU, Images, ProviderName|
+|Ready|Yes|Ready|Count|Average|Resources that are ready to be used|PoolId, SKU, Images, ProviderName|
+|Starting|Yes|Starting|Count|Average|Resources that are starting|PoolId, SKU, Images, ProviderName|
+|Total|Yes|Total|Count|Average|Total Number of Resources|PoolId, SKU, Images, ProviderName|
## Microsoft.Cloudtest/pools |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Provisioned|Yes|Provisioned|Count|Count|Resources that are provisioned|PoolId, SKU, Images, ProviderName|
-|Ready|Yes|Ready|Percent|Average|Resources that are ready to be used|PoolId, SKU, Images, ProviderName|
-|TotalDurationMs|Yes|TotalDurationMs|Milliseconds|Average|Average time to complete requests (ms)|PoolId, Type, ResourceRequestType, Image|
+|Allocated|Yes|Allocated|Count|Average|Resources that are allocated|PoolId, SKU, Images, ProviderName|
+|AllocationDurationMs|Yes|AllocationDurationMs|Milliseconds|Average|Average time to allocate requests (ms)|PoolId, Type, ResourceRequestType, Image|
+|Count|Yes|Count|Count|Count|Number of requests in last dump|RequestType, Status, PoolId, Type, ErrorCode, FailureStage|
+|NotReady|Yes|NotReady|Count|Average|Resources that are not ready to be used|PoolId, SKU, Images, ProviderName|
+|PendingReimage|Yes|PendingReimage|Count|Average|Resources that are pending reimage|PoolId, SKU, Images, ProviderName|
+|PendingReturn|Yes|PendingReturn|Count|Average|Resources that are pending return|PoolId, SKU, Images, ProviderName|
+|Provisioned|Yes|Provisioned|Count|Average|Resources that are provisioned|PoolId, SKU, Images, ProviderName|
+|Ready|Yes|Ready|Count|Average|Resources that are ready to be used|PoolId, SKU, Images, ProviderName|
+|Starting|Yes|Starting|Count|Average|Resources that are starting|PoolId, SKU, Images, ProviderName|
+|Total|Yes|Total|Count|Average|Total Number of Resources|PoolId, SKU, Images, ProviderName|
## Microsoft.ClusterStor/nodes
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions| |d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions| |d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
-|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped |Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
+|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped|Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
|d2c.telemetry.egress.fallback|Yes|Routing: messages delivered to fallback|Count|Total|The number of times IoT Hub routing delivered messages to the endpoint associated with the fallback route.|No Dimensions| |d2c.telemetry.egress.invalid|Yes|Routing: telemetry messages incompatible|Count|Total|The number of times IoT Hub routing failed to deliver messages due to an incompatibility with the endpoint. This value does not include retries.|No Dimensions|
-|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned |Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule). |No Dimensions|
+|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned|Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule).|No Dimensions|
|d2c.telemetry.egress.success|Yes|Routing: telemetry messages delivered|Count|Total|The number of times messages were successfully delivered to all endpoints using IoT Hub routing. If a message is routed to multiple endpoints, this value increases by one for each successful delivery. If a message is delivered to the same endpoint multiple times, this value increases by one for each successful delivery.|No Dimensions| |d2c.telemetry.ingress.allProtocol|Yes|Telemetry message send attempts|Count|Total|Number of device-to-cloud telemetry messages attempted to be sent to your IoT hub|No Dimensions| |d2c.telemetry.ingress.sendThrottle|Yes|Number of throttling errors|Count|Total|Number of throttling errors due to device throughput throttles|No Dimensions|
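The routing counters above (delivered, dropped, orphaned, incompatible, and fallback) together describe where device-to-cloud messages end up, so a rough routing health ratio can be derived from their totals. A small illustrative helper; the field names are this sketch's own, and because the delivered counter increments once per endpoint, the ratio should be treated as approximate rather than exact:

```python
from dataclasses import dataclass


@dataclass
class RoutingTotals:
    """Totals over one evaluation window, taken from the routing metrics above."""
    delivered: int      # d2c.telemetry.egress.success
    dropped: int        # d2c.telemetry.egress.dropped
    orphaned: int       # d2c.telemetry.egress.orphaned
    incompatible: int   # d2c.telemetry.egress.invalid
    fallback: int       # d2c.telemetry.egress.fallback


def routing_failure_ratio(t: RoutingTotals) -> float:
    """Share of routing outcomes that were not successful deliveries."""
    outcomes = t.delivered + t.dropped + t.orphaned + t.incompatible
    if outcomes == 0:
        return 0.0
    return (t.dropped + t.orphaned + t.incompatible) / outcomes


# Illustrative window: 9,800 deliveries against 200 failed routing outcomes.
totals = RoutingTotals(delivered=9_800, dropped=120, orphaned=50, incompatible=30, fallback=200)
print(f"routing failures: {routing_failure_ratio(totals):.2%}")  # 2.00%
```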
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage"will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage"will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc [https://docs.microsoft.com/azure/cosmos-db/concepts-limits](/azure/cosmos-db/concepts-limits). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|APIType, Region, ClosureReason| |CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions| |CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|DedicatedGatewayMaximumCPUUsage|No|DedicatedGatewayMaximumCPUUsage|Percent|Average|Average Maximum CPU usage across dedicated gateway instances|Region, MetricType| |DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region| |DeleteAccount|Yes|Account Deleted|Count|Count|Account Deleted|No Dimensions|
-|DocumentCount|No|Document Count|Count|Total|Total document count reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
+|DocumentCount|No|Document Count|Count|Total|Total document count reported at 5 minutes, 1 hour and 1 day granularity|CollectionName, DatabaseName, Region|
|DocumentQuota|No|Document Quota|Bytes|Total|Total storage quota reported at 5 minutes granularity|CollectionName, DatabaseName, Region| |GremlinDatabaseCreate|No|Gremlin Database Created|Count|Count|Gremlin Database Created|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType| |GremlinDatabaseDelete|No|Gremlin Database Deleted|Count|Count|Gremlin Database Deleted|ResourceName, ApiKind, ApiKindResourceType, OperationType|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|CacheUtilization|Yes|Cache utilization|Percent|Average|Utilization level in the cluster scope|No Dimensions| |CacheUtilizationFactor|Yes|Cache utilization factor|Percent|Average|Percentage difference between the current number of instances and the optimal number of instances (per cache utilization)|No Dimensions| |ContinuousExportMaxLatenessMinutes|Yes|Continuous Export Max Lateness|Count|Maximum|The lateness (in minutes) reported by the continuous export jobs in the cluster|No Dimensions|
-|ContinuousExportNumOfRecordsExported|Yes|Continuous export – num of exported records|Count|Total|Number of records exported, fired for every storage artifact written during the export operation|ContinuousExportName, Database|
+|ContinuousExportNumOfRecordsExported|Yes|Continuous export - num of exported records|Count|Total|Number of records exported, fired for every storage artifact written during the export operation|ContinuousExportName, Database|
|ContinuousExportPendingCount|Yes|Continuous Export Pending Count|Count|Maximum|The number of pending continuous export jobs ready for execution|No Dimensions| |ContinuousExportResult|Yes|Continuous Export Result|Count|Count|Indicates whether Continuous Export succeeded or failed|ContinuousExportName, Result, Database| |CPU|Yes|CPU|Percent|Average|CPU utilization level|No Dimensions|
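CacheUtilizationFactor above is described as the percentage difference between the current and the optimal number of instances; read literally, that is (current - optimal) / optimal * 100. A tiny sketch of that reading with made-up instance counts; the interpretation is hedged from the one-line description, not from any formal definition:

```python
def cache_utilization_factor(current_instances: int, optimal_instances: int) -> float:
    """Percentage difference between current and optimal instance counts,
    as one plausible reading of the CacheUtilizationFactor description above."""
    if optimal_instances == 0:
        return 0.0
    return 100.0 * (current_instances - optimal_instances) / optimal_instances


# Illustrative: running 12 instances when cache utilization suggests 10 are enough.
print(cache_utilization_factor(current_instances=12, optimal_instances=10))  # 20.0
```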
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|MaterializedViewHealth|Yes|Materialized View Health|Count|Average|The health of the materialized view (1 for healthy, 0 for non-healthy)|Database, MaterializedViewName| |MaterializedViewRecordsInDelta|Yes|Materialized View Records In Delta|Count|Average|The number of records in the non-materialized part of the view|Database, MaterializedViewName| |MaterializedViewResult|Yes|Materialized View Result|Count|Average|The result of the materialization process|Database, MaterializedViewName, Result|
-|QueryDuration|Yes|Query duration|Milliseconds|Average|Queries’ duration in seconds|QueryStatus|
+|QueryDuration|Yes|Query duration|Milliseconds|Average|Queries' duration in seconds|QueryStatus|
|QueryResult|No|Query Result|Count|Count|Total number of queries.|QueryStatus| |QueueLength|Yes|Queue Length|Count|Average|Number of pending messages in a component's queue.|ComponentType| |QueueOldestMessage|Yes|Queue Oldest Message|Count|Average|Time in seconds from when the oldest message in queue was inserted.|ComponentType|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Cancelled Runs|Yes|Cancelled Runs|Count|Total|Number of runs cancelled for this workspace. Count is updated when a run is successfully cancelled.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName| |Completed Runs|Yes|Completed Runs|Count|Total|Number of runs completed successfully for this workspace. Count is updated when a run has completed and output has been collected.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName| |CpuCapacityMillicores|Yes|CpuCapacityMillicores|Count|Average|Maximum capacity of a CPU node in millicores. Capacity is aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|CpuMemoryCapacityMegabytes|Yes|CpuMemoryCapacityMegabytes|Count|Average|Maximum memory utilization of a CPU node in megabytes. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|CpuMemoryUtilizationMegabytes|Yes|CpuMemoryUtilizationMegabytes|Count|Average|Memory utilization of a CPU node in megabytes. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|CpuMemoryUtilizationPercentage|Yes|CpuMemoryUtilizationPercentage|Count|Average|Memory utilization percentage of a CPU node. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
|CpuUtilization|Yes|CpuUtilization|Count|Average|Percentage of utilization on a CPU node. Utilization is reported at one minute intervals.|Scenario, runId, NodeId, ClusterName| |CpuUtilizationMillicores|Yes|CpuUtilizationMillicores|Count|Average|Utilization of a CPU node in millicores. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName| |CpuUtilizationPercentage|Yes|CpuUtilizationPercentage|Count|Average|Utilization percentage of a CPU node. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
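The new CpuMemoryCapacityMegabytes, CpuMemoryUtilizationMegabytes, and CpuMemoryUtilizationPercentage metrics above report capacity, absolute usage, and a percentage for the same node; the natural reading is that the percentage is usage over capacity. A quick sanity-check sketch with illustrative numbers (the relationship is inferred from the descriptions, not documented here):

```python
def memory_utilization_percentage(utilization_mb: float, capacity_mb: float) -> float:
    """CpuMemoryUtilizationPercentage expressed from the two absolute metrics above,
    assuming percentage = utilization / capacity * 100."""
    if capacity_mb == 0:
        return 0.0
    return 100.0 * utilization_mb / capacity_mb


# Illustrative one-minute sample for a single CPU node.
print(memory_utilization_percentage(utilization_mb=11_264, capacity_mb=16_384))  # 68.75
```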
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|GpuUtilization|Yes|GpuUtilization|Count|Average|Percentage of utilization on a GPU node. Utilization is reported at one minute intervals.|Scenario, runId, NodeId, DeviceId, ClusterName| |GpuUtilizationMilliGPUs|Yes|GpuUtilizationMilliGPUs|Count|Average|Utilization of a GPU device in milli-GPUs. Utilization is aggregated in one minute intervals.|RunId, InstanceId, DeviceId, ComputeName| |GpuUtilizationPercentage|Yes|GpuUtilizationPercentage|Count|Average|Utilization percentage of a GPU device. Utilization is aggregated in one minute intervals.|RunId, InstanceId, DeviceId, ComputeName|
+|IBReceiveMegabytes|Yes|IBReceiveMegabytes|Count|Average|Network data received over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|IBTransmitMegabytes|Yes|IBTransmitMegabytes|Count|Average|Network data sent over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
|Idle Cores|Yes|Idle Cores|Count|Average|Number of idle cores|Scenario, ClusterName| |Idle Nodes|Yes|Idle Nodes|Count|Average|Number of idle nodes. Idle nodes are the nodes which are not running any jobs but can accept new job if available.|Scenario, ClusterName| |Leaving Cores|Yes|Leaving Cores|Count|Average|Number of leaving cores|Scenario, ClusterName|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Model Deploy Succeeded|Yes|Model Deploy Succeeded|Count|Total|Number of model deployments that succeeded in this workspace|Scenario| |Model Register Failed|Yes|Model Register Failed|Count|Total|Number of model registrations that failed in this workspace|Scenario, StatusCode| |Model Register Succeeded|Yes|Model Register Succeeded|Count|Total|Number of model registrations that succeeded in this workspace|Scenario|
+|NetworkInputMegabytes|Yes|NetworkInputMegabytes|Count|Average|Network data received in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|NetworkOutputMegabytes|Yes|NetworkOutputMegabytes|Count|Average|Network data sent in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
|Not Responding Runs|Yes|Not Responding Runs|Count|Total|Number of runs not responding for this workspace. Count is updated when a run enters Not Responding state.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName| |Not Started Runs|Yes|Not Started Runs|Count|Total|Number of runs in Not Started state for this workspace. Count is updated when a request is received to create a run but run information has not yet been populated. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName| |Preempted Cores|Yes|Preempted Cores|Count|Average|Number of preempted cores|Scenario, ClusterName|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|QueryVolume|Yes|Query Volume|Count|Total|Number of queries served for a DNS zone|No Dimensions|
+|QueryVolume|No|Query Volume|Count|Total|Number of queries served for a DNS zone|No Dimensions|
|RecordSetCapacityUtilization|No|Record Set Capacity Utilization|Percent|Maximum|Percent of Record Set capacity utilized by a DNS zone|No Dimensions|
-|RecordSetCount|Yes|Record Set Count|Count|Maximum|Number of Record Sets in a DNS zone|No Dimensions|
+|RecordSetCount|No|Record Set Count|Count|Maximum|Number of Record Sets in a DNS zone|No Dimensions|
## Microsoft.Network/expressRouteCircuits
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|storage_space_used_mb|Yes|Storage space used|Count|Average|Storage space used|No Dimensions| |virtual_core_count|Yes|Virtual core count|Count|Average|Virtual core count|No Dimensions| + ## Microsoft.Sql/servers |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|dtu_used|Yes|DTU used|Count|Average|DTU used|DatabaseResourceId| |storage_used|Yes|Data space used|Bytes|Average|Data space used|ElasticPoolResourceId| + ## Microsoft.Sql/servers/databases |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|workers_percent|Yes|Workers percentage|Percent|Average|Workers percentage. Not applicable to data warehouses.|No Dimensions| |xtp_storage_percent|Yes|In-Memory OLTP storage percent|Percent|Average|In-Memory OLTP storage percent. Not applicable to data warehouses.|No Dimensions| + ## Microsoft.Sql/servers/elasticPools |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|workers_percent|Yes|Workers percentage|Percent|Average|Workers percentage|No Dimensions| |xtp_storage_percent|Yes|In-Memory OLTP storage percent|Percent|Average|In-Memory OLTP storage percent|No Dimensions| + ## Microsoft.Storage/storageAccounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication|
-|BlobCapacity|No|Blob Capacity|Bytes|Average|The amount of storage used by the storage account’s Blob service in bytes.|BlobType, Tier|
+|BlobCapacity|No|Blob Capacity|Bytes|Average|The amount of storage used by the storage account's Blob service in bytes.|BlobType, Tier|
|BlobCount|No|Blob Count|Count|Average|The number of blob objects stored in the storage account.|BlobType, Tier|
-|BlobProvisionedSize|No|Blob Provisioned Size|Bytes|Average|The amount of storage provisioned in the storage account’s Blob service in bytes.|BlobType, Tier|
+|BlobProvisionedSize|No|Blob Provisioned Size|Bytes|Average|The amount of storage provisioned in the storage account's Blob service in bytes.|BlobType, Tier|
|ContainerCount|Yes|Blob Container Count|Count|Average|The number of containers in the storage account.|No Dimensions| |Egress|Yes|Egress|Bytes|Total|The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication| |IndexCapacity|No|Index Capacity|Bytes|Average|The amount of storage used by Azure Data Lake Storage Gen2 hierarchical index.|No Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|FileShareCapacityQuota|No|File Share Capacity Quota|Bytes|Average|The upper limit on the amount of storage that can be used by Azure Files Service in bytes.|FileShare| |FileShareCount|No|File Share Count|Count|Average|The number of file shares in the storage account.|No Dimensions| |FileShareProvisionedIOPS|No|File Share Provisioned IOPS|CountPerSecond|Average|The baseline number of provisioned IOPS for the premium file share in the premium files storage account. This number is calculated based on the provisioned size (quota) of the share capacity.|FileShare|
-|FileShareSnapshotCount|No|File Share Snapshot Count|Count|Average|The number of snapshots present on the share in storage account’s Files Service.|FileShare|
-|FileShareSnapshotSize|No|File Share Snapshot Size|Bytes|Average|The amount of storage used by the snapshots in storage account’s File service in bytes.|FileShare|
+|FileShareSnapshotCount|No|File Share Snapshot Count|Count|Average|The number of snapshots present on the share in storage account's Files Service.|FileShare|
+|FileShareSnapshotSize|No|File Share Snapshot Size|Bytes|Average|The amount of storage used by the snapshots in storage account's File service in bytes.|FileShare|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication, FileShare| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication, FileShare| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication, FileShare|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|BigDataPoolApplicationsEnded|No|Ended Apache Spark applications|Count|Total|Count of Apache Spark pool applications ended|JobType, JobResult|
-## Microsoft.Synapse/workspaces/kustoPools
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|BatchBlobCount|Yes|Batch Blob Count|Count|Average|Number of data sources in an aggregated batch for ingestion.|Database|
-|BatchDuration|Yes|Batch Duration|Seconds|Average|The duration of the aggregation phase in the ingestion flow.|Database|
-|BatchesProcessed|Yes|Batches Processed|Count|Total|Number of batches aggregated for ingestion. Batching Type: whether the batch reached batching time, data size or number of files limit set by batching policy|Database, SealReason|
-|BatchSize|Yes|Batch Size|Bytes|Average|Uncompressed expected data size in an aggregated batch for ingestion.|Database|
-|BlobsDropped|Yes|Blobs Dropped|Count|Total|Number of blobs permanently rejected by a component.|Database, ComponentType, ComponentName|
-|BlobsProcessed|Yes|Blobs Processed|Count|Total|Number of blobs processed by a component.|Database, ComponentType, ComponentName|
-|BlobsReceived|Yes|Blobs Received|Count|Total|Number of blobs received from input stream by a component.|Database, ComponentType, ComponentName|
-|CacheUtilization|Yes|Cache utilization|Percent|Average|Utilization level in the cluster scope|No Dimensions|
-|ContinuousExportMaxLatenessMinutes|Yes|Continuous Export Max Lateness|Count|Maximum|The lateness (in minutes) reported by the continuous export jobs in the cluster|No Dimensions|
-|ContinuousExportNumOfRecordsExported|Yes|Continuous export – num of exported records|Count|Total|Number of records exported, fired for every storage artifact written during the export operation|ContinuousExportName, Database|
-|ContinuousExportPendingCount|Yes|Continuous Export Pending Count|Count|Maximum|The number of pending continuous export jobs ready for execution|No Dimensions|
-|ContinuousExportResult|Yes|Continuous Export Result|Count|Count|Indicates whether Continuous Export succeeded or failed|ContinuousExportName, Result, Database|
-|CPU|Yes|CPU|Percent|Average|CPU utilization level|No Dimensions|
-|DiscoveryLatency|Yes|Discovery Latency|Seconds|Average|Reported by data connections (if exist). Time in seconds from when a message is enqueued or event is created until it is discovered by data connection. This time is not included in the Azure Data Explorer total ingestion duration.|ComponentType, ComponentName|
-|EventsDropped|Yes|Events Dropped|Count|Total|Number of events dropped permanently by data connection. An Ingestion result metric with a failure reason will be sent.|ComponentType, ComponentName|
-|EventsProcessed|Yes|Events Processed|Count|Total|Number of events processed by the cluster|ComponentType, ComponentName|
-|EventsProcessedForEventHubs|Yes|Events Processed (for Event/IoT Hubs)|Count|Total|Number of events processed by the cluster when ingesting from Event/IoT Hub|EventStatus|
-|EventsReceived|Yes|Events Received|Count|Total|Number of events received by data connection.|ComponentType, ComponentName|
-|ExportUtilization|Yes|Export Utilization|Percent|Maximum|Export utilization|No Dimensions|
-|IngestionLatencyInSeconds|Yes|Ingestion Latency|Seconds|Average|Latency of data ingested, from the time the data was received in the cluster until it's ready for query. The ingestion latency period depends on the ingestion scenario.|No Dimensions|
-|IngestionResult|Yes|Ingestion result|Count|Total|Total number of sources that either failed or succeeded to be ingested. Splitting the metric by status, you can get detailed information about the status of the ingestion operations.|IngestionResultDetails, FailureKind|
-|IngestionUtilization|Yes|Ingestion utilization|Percent|Average|Ratio of used ingestion slots in the cluster|No Dimensions|
-|IngestionVolumeInMB|Yes|Ingestion Volume|Bytes|Total|Overall volume of ingested data to the cluster|Database|
-|InstanceCount|Yes|Instance Count|Count|Average|Total instance count|No Dimensions|
-|KeepAlive|Yes|Keep alive|Count|Average|Sanity check indicates the cluster responds to queries|No Dimensions|
-|MaterializedViewAgeMinutes|Yes|Materialized View Age|Count|Average|The materialized view age in minutes|Database, MaterializedViewName|
-|MaterializedViewDataLoss|Yes|Materialized View Data Loss|Count|Maximum|Indicates potential data loss in materialized view|Database, MaterializedViewName, Kind|
-|MaterializedViewExtentsRebuild|Yes|Materialized View Extents Rebuild|Count|Average|Number of extents rebuild|Database, MaterializedViewName|
-|MaterializedViewHealth|Yes|Materialized View Health|Count|Average|The health of the materialized view (1 for healthy, 0 for non-healthy)|Database, MaterializedViewName|
-|MaterializedViewRecordsInDelta|Yes|Materialized View Records In Delta|Count|Average|The number of records in the non-materialized part of the view|Database, MaterializedViewName|
-|MaterializedViewResult|Yes|Materialized View Result|Count|Average|The result of the materialization process|Database, MaterializedViewName, Result|
-|QueryDuration|Yes|Query duration|Milliseconds|Average|Queries’ duration in seconds|QueryStatus|
-|QueryResult|No|Query Result|Count|Count|Total number of queries.|QueryStatus|
-|QueueLength|Yes|Queue Length|Count|Average|Number of pending messages in a component's queue.|ComponentType|
-|QueueOldestMessage|Yes|Queue Oldest Message|Count|Average|Time in seconds from when the oldest message in queue was inserted.|ComponentType|
-|ReceivedDataSizeBytes|Yes|Received Data Size Bytes|Bytes|Average|Size of data received by data connection. This is the size of the data stream, or of raw data size if provided.|ComponentType, ComponentName|
-|StageLatency|Yes|Stage Latency|Seconds|Average|Cumulative time from when a message is discovered until it is received by the reporting component for processing (discovery time is set when message is enqueued for ingestion queue, or when discovered by data connection).|Database, ComponentType|
-|SteamingIngestRequestRate|Yes|Streaming Ingest Request Rate|Count|RateRequestsPerSecond|Streaming ingest request rate (requests per second)|No Dimensions|
-|StreamingIngestDataRate|Yes|Streaming Ingest Data Rate|Count|Average|Streaming ingest data rate (MB per second)|No Dimensions|
-|StreamingIngestDuration|Yes|Streaming Ingest Duration|Milliseconds|Average|Streaming ingest duration in milliseconds|No Dimensions|
-|StreamingIngestResults|Yes|Streaming Ingest Result|Count|Average|Streaming ingest result|Result|
-|TotalNumberOfConcurrentQueries|Yes|Total number of concurrent queries|Count|Maximum|Total number of concurrent queries|No Dimensions|
-|TotalNumberOfExtents|Yes|Total number of extents|Count|Total|Total number of data extents|No Dimensions|
-|TotalNumberOfThrottledCommands|Yes|Total number of throttled commands|Count|Total|Total number of throttled commands|CommandType|
-|TotalNumberOfThrottledQueries|Yes|Total number of throttled queries|Count|Maximum|Total number of throttled queries|No Dimensions|
-- ## Microsoft.Synapse/workspaces/sqlPools |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|ApiConnectionRequests|Yes|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
+|Requests|No|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
## Microsoft.Web/hostingEnvironments
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket().|Instance|
-|AverageMemoryWorkingSet|Yes|Average memory working set|Bytes|Average|The average amount of memory used by the app, in megabytes (MiB).|Instance|
-|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance|
-|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance|
-|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
-|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance|
-|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
-|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
-|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units|Instance|
-|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance|
-|Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance|
-|Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process.|Instance|
-|Handles|Yes|Handle Count|Count|Average|The total number of handles currently open by the app process.|Instance|
-|HealthCheckStatus|Yes|Health check status|Count|Average|Health check status|Instance|
-|Http101|Yes|Http 101|Count|Total|The count of requests resulting in an HTTP status code 101.|Instance|
-|Http2xx|Yes|Http 2xx|Count|Total|The count of requests resulting in an HTTP status code = 200 but < 300.|Instance|
-|Http3xx|Yes|Http 3xx|Count|Total|The count of requests resulting in an HTTP status code = 300 but < 400.|Instance|
-|Http401|Yes|Http 401|Count|Total|The count of requests resulting in HTTP 401 status code.|Instance|
-|Http403|Yes|Http 403|Count|Total|The count of requests resulting in HTTP 403 status code.|Instance|
-|Http404|Yes|Http 404|Count|Total|The count of requests resulting in HTTP 404 status code.|Instance|
-|Http406|Yes|Http 406|Count|Total|The count of requests resulting in HTTP 406 status code.|Instance|
-|Http4xx|Yes|Http 4xx|Count|Total|The count of requests resulting in an HTTP status code = 400 but < 500.|Instance|
-|Http5xx|Yes|Http Server Errors|Count|Total|The count of requests resulting in an HTTP status code = 500 but < 600.|Instance|
-|HttpResponseTime|Yes|Response Time|Seconds|Average|The time taken for the app to serve requests, in seconds.|Instance|
-|IoOtherBytesPerSecond|Yes|IO Other Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations.|Instance|
-|IoOtherOperationsPerSecond|Yes|IO Other Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing I/O operations that aren't read or write operations.|Instance|
-|IoReadBytesPerSecond|Yes|IO Read Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is reading bytes from I/O operations.|Instance|
-|IoReadOperationsPerSecond|Yes|IO Read Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing read I/O operations.|Instance|
-|IoWriteBytesPerSecond|Yes|IO Write Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is writing bytes to I/O operations.|Instance|
-|IoWriteOperationsPerSecond|Yes|IO Write Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing write I/O operations.|Instance|
-|MemoryWorkingSet|Yes|Memory working set|Bytes|Average|The current amount of memory used by the app, in MiB.|Instance|
-|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes.|Instance|
-|Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code.|Instance|
-|RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue.|Instance|
-|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process.|Instance|
-|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application.|Instance|
-|TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application.|Instance|
+|AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket(). For WebApps and FunctionApps.|Instance|
+|AverageMemoryWorkingSet|Yes|Average memory working set|Bytes|Average|The average amount of memory used by the app, in megabytes (MiB). For WebApps and FunctionApps.|Instance|
+|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds. For WebApps and FunctionApps.|Instance|
+|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage). For WebApps only.|Instance|
+|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application. For WebApps and FunctionApps.|Instance|
+|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app. For WebApps and FunctionApps.|No Dimensions|
+|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count. For FunctionApps only.|Instance|
+|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units. For FunctionApps only.|Instance|
+|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs. For WebApps and FunctionApps.|Instance|
+|Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs. For WebApps and FunctionApps.|Instance|
+|Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process. For WebApps and FunctionApps.|Instance|
+|Handles|Yes|Handle Count|Count|Average|The total number of handles currently open by the app process. For WebApps and FunctionApps.|Instance|
+|HealthCheckStatus|Yes|Health check status|Count|Average|Health check status For WebApps and FunctionApps.|Instance|
+|Http101|Yes|Http 101|Count|Total|The count of requests resulting in an HTTP status code 101. For WebApps and FunctionApps.|Instance|
+|Http2xx|Yes|Http 2xx|Count|Total|The count of requests resulting in an HTTP status code = 200 but < 300. For WebApps and FunctionApps.|Instance|
+|Http3xx|Yes|Http 3xx|Count|Total|The count of requests resulting in an HTTP status code = 300 but < 400. For WebApps and FunctionApps.|Instance|
+|Http401|Yes|Http 401|Count|Total|The count of requests resulting in HTTP 401 status code. For WebApps and FunctionApps.|Instance|
+|Http403|Yes|Http 403|Count|Total|The count of requests resulting in HTTP 403 status code. For WebApps and FunctionApps.|Instance|
+|Http404|Yes|Http 404|Count|Total|The count of requests resulting in HTTP 404 status code. For WebApps and FunctionApps.|Instance|
+|Http406|Yes|Http 406|Count|Total|The count of requests resulting in HTTP 406 status code. For WebApps and FunctionApps.|Instance|
+|Http4xx|Yes|Http 4xx|Count|Total|The count of requests resulting in an HTTP status code = 400 but < 500. For WebApps and FunctionApps.|Instance|
+|Http5xx|Yes|Http Server Errors|Count|Total|The count of requests resulting in an HTTP status code = 500 but < 600. For WebApps and FunctionApps.|Instance|
+|HttpResponseTime|Yes|Response Time|Seconds|Average|The time taken for the app to serve requests, in seconds. For WebApps and FunctionApps.|Instance|
+|IoOtherBytesPerSecond|Yes|IO Other Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations. For WebApps and FunctionApps.|Instance|
+|IoOtherOperationsPerSecond|Yes|IO Other Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing I/O operations that aren't read or write operations. For WebApps and FunctionApps.|Instance|
+|IoReadBytesPerSecond|Yes|IO Read Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is reading bytes from I/O operations. For WebApps and FunctionApps.|Instance|
+|IoReadOperationsPerSecond|Yes|IO Read Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing read I/O operations. For WebApps and FunctionApps.|Instance|
+|IoWriteBytesPerSecond|Yes|IO Write Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is writing bytes to I/O operations. For WebApps and FunctionApps.|Instance|
+|IoWriteOperationsPerSecond|Yes|IO Write Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing write I/O operations. For WebApps and FunctionApps.|Instance|
+|MemoryWorkingSet|Yes|Memory working set|Bytes|Average|The current amount of memory used by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes. For WebApps and FunctionApps.|Instance|
+|Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code. For WebApps and FunctionApps.|Instance|
+|RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue. For WebApps and FunctionApps.|Instance|
+|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process. For WebApps and FunctionApps.|Instance|
+|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application. For WebApps and FunctionApps.|Instance|
+|TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application. For WebApps and FunctionApps.|Instance|
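Because Http2xx through Http5xx above are per-status-code buckets of the same Requests total, a server-error rate can be computed directly from two of those counters. An illustrative sketch; the 5% threshold is an arbitrary example, not an App Service default:

```python
def server_error_rate(http5xx: int, requests: int) -> float:
    """Share of requests that returned an HTTP 5xx status, from the counters above."""
    if requests == 0:
        return 0.0
    return http5xx / requests


def should_alert(http5xx: int, requests: int, threshold: float = 0.05) -> bool:
    """Example alert condition: more than 5% of requests ending in server errors."""
    return server_error_rate(http5xx, requests) > threshold


# Illustrative window: 42 server errors out of 1,500 requests (2.8%), below the threshold.
print(should_alert(http5xx=42, requests=1_500))  # False
```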
## Microsoft.Web/sites/slots
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance| |BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance| |BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance| |FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions| |FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Azure Monitor Resource Logs supported services and categories description: Reference of Azure Monitor Understand the supported services and event schema for Azure resource logs. Previously updated : 07/19/2021 Last updated : 08/04/2021 # Supported categories for Azure Resource Logs
Some categories may only be supported for specific types of resources. See the r
If you think there is something is missing, you can open a GitHub comment at the bottom of this article. + ## Microsoft.AAD/DomainServices |Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|Request|Request|No|
-## Microsoft.HealthcareApis/services
+## Microsoft.HealthcareApis/workspaces/dicomservices
|Category|Category Display Name|Costs To Export| ||||
-|AuditLogs|Audit logs|No|
-|DiagnosticLogs|Diagnostic logs|Yes|
+|AuditLogs|Audit logs|Yes|
++
+## Microsoft.HealthcareApis/workspaces/fhirservices
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs|FHIR Audit logs|Yes|
## microsoft.insights/autoscalesettings
If you think there is something is missing, you can open a GitHub comment at the
|Category|Category Display Name|Costs To Export| ||||
-|Audit|Audit|No|
+|Audit|Audit|Yes|
## Microsoft.PowerBI/tenants
If you think there is something is missing, you can open a GitHub comment at the
|||| |Engine|Engine|No| - ## Microsoft.PowerBI/tenants/workspaces |Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|ResourceUsageStats|Resource Usage Statistics|No| |SQLSecurityAuditEvents|SQL Security Audit Event|No| + ## Microsoft.Sql/managedInstances/databases |Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|QueryStoreWaitStatistics|Query Store Wait Statistics|No| |SQLInsights|SQL Insights|No| + ## Microsoft.Sql/servers/databases |Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|Timeouts|Timeouts|No| |Waits|Waits|No| + ## Microsoft.Storage/storageAccounts/blobServices |Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|||| |BigDataPoolAppsEnded|Big Data Pool Applications Ended|No| + ## Microsoft.Synapse/workspaces/sqlPools |Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|AppServicePlatformLogs|App Service Platform logs|No| |FunctionAppLogs|Function Application Logs|No| + ## Next Steps * [Learn more about resource logs](../essentials/platform-logs-overview.md)
azure-monitor Alert Management Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/alert-management-solution.md
Last updated 01/19/2018
![Alert Management icon](media/alert-management-solution/icon.png)
-The Alert Management solution helps you analyze all of the alerts in your Log Analytics repository. These alerts may have come from a variety of sources including those sources [created by Log Analytics](../alerts/alerts-overview.md) or [imported from Nagios or Zabbix](../vm/quick-collect-linux-computer.md). The solution also imports alerts from any [connected System Center Operations Manager management groups](../agents/om-agents.md).
+The Alert Management solution helps you analyze all of the alerts in your Log Analytics repository. These alerts may have come from a variety of sources including those sources [created by Log Analytics](../alerts/alerts-overview.md) or [imported from Nagios or Zabbix](../vm/monitor-virtual-machine.md). The solution also imports alerts from any [connected System Center Operations Manager management groups](../agents/om-agents.md).
## Prerequisites The solution works with any records in the Log Analytics repository with a type of **Alert**, so you must perform whatever configuration is required to collect these records. - For Log Analytics alerts, [create alert rules](../alerts/alerts-overview.md) to create alert records directly in the repository.-- For Nagios and Zabbix alerts, [configure those servers](../vm/quick-collect-linux-computer.md) to send alerts to Log Analytics.
+- For Nagios and Zabbix alerts, [configure those servers](../vm/monitor-virtual-machine.md) to send alerts to Log Analytics.
- For System Center Operations Manager alerts, [connect your Operations Manager management group to your Log Analytics workspace](../agents/om-agents.md). Any alerts created in System Center Operations Manager are imported into Log Analytics. ## Configuration
The following table describes the connected sources that are supported by this s
| Connected Source | Support | Description | |: |: |: | | [Windows agents](../agents/agent-windows.md) | No |Direct Windows agents do not generate alerts. Log Analytics alerts can be created from events and performance data collected from Windows agents. |
-| [Linux agents](../vm/quick-collect-linux-computer.md) | No |Direct Linux agents do not generate alerts. Log Analytics alerts can be created from events and performance data collected from Linux agents. Nagios and Zabbix alerts are collected from those servers that require the Linux agent. |
+| [Linux agents](../vm/monitor-virtual-machine.md) | No |Direct Linux agents do not generate alerts. Log Analytics alerts can be created from events and performance data collected from Linux agents. Nagios and Zabbix alerts are collected from those servers that require the Linux agent. |
| [System Center Operations Manager management group](../agents/om-agents.md) |Yes |Alerts that are generated on Operations Manager agents are delivered to the management group and then forwarded to Log Analytics.<br><br>A direct connection from Operations Manager agents to Log Analytics is not required. Alert data is forwarded from the management group to the Log Analytics repository. |
azure-monitor Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/azure-sql.md
Azure SQL Analytics is a cloud only monitoring solution supporting streaming of
| [Diagnostics settings](../essentials/diagnostic-settings.md) | **Yes** | Azure metric and log data are sent to Azure Monitor Logs directly by Azure. | | [Azure storage account](../essentials/resource-logs.md#send-to-log-analytics-workspace) | No | Azure Monitor doesn't read the data from a storage account. | | [Windows agents](../agents/agent-windows.md) | No | Direct Windows agents aren't used by Azure SQL Analytics. |
-| [Linux agents](../vm/quick-collect-linux-computer.md) | No | Direct Linux agents aren't used by Azure SQL Analytics. |
+| [Linux agents](../vm/monitor-virtual-machine.md) | No | Direct Linux agents aren't used by Azure SQL Analytics. |
| [System Center Operations Manager management group](../agents/om-agents.md) | No | A direct connection from the Operations Manager agent to Azure Monitor is not used by Azure SQL Analytics. | ## Azure SQL Analytics options
While Azure SQL Analytics is free to use, consumption of diagnostics telemetry a
- Use [log queries](../logs/log-query-overview.md) in Azure Monitor to view detailed Azure SQL data. - [Create your own dashboards](../visualize/tutorial-logs-dashboards.md) showing Azure SQL data.-- [Create alerts](../alerts/alerts-overview.md) when specific Azure SQL events occur.-
+- [Create alerts](../alerts/alerts-overview.md) when specific Azure SQL events occur.
azure-monitor Capacity Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/capacity-performance.md
The following table describes the connected sources that are supported by this s
| Connected Source | Support | Description | |||| | [Windows agents](../agents/agent-windows.md) | Yes | The solution collects capacity and performance data information from Windows agents. |
-| [Linux agents](../vm/quick-collect-linux-computer.md) | No | The solution does not collect capacity and performance data information from direct Linux agents.|
+| [Linux agents](../vm/monitor-virtual-machine.md) | No | The solution does not collect capacity and performance data information from direct Linux agents.|
| [SCOM management group](../agents/om-agents.md) | Yes |The solution collects capacity and performance data from agents in a connected SCOM management group. A direct connection from the SCOM agent to Log Analytics is not required.| | [Azure storage account](../essentials/resource-logs.md#send-to-log-analytics-workspace) | No | Azure storage does not include capacity and performance data.|
The following table provides sample log searches for capacity and performance da
## Next steps
-* Use [Log searches in Log Analytics](../logs/log-query-overview.md) to view detailed Capacity and Performance data.
-
+* Use [Log searches in Log Analytics](../logs/log-query-overview.md) to view detailed Capacity and Performance data.
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/dns-analytics.md
The following table describes the connected sources that are supported by this s
| **Connected source** | **Support** | **Description** | | | | | | [Windows agents](../agents/agent-windows.md) | Yes | The solution collects DNS information from Windows agents. |
-| [Linux agents](../vm/quick-collect-linux-computer.md) | No | The solution does not collect DNS information from direct Linux agents. |
+| [Linux agents](../vm/monitor-virtual-machine.md) | No | The solution does not collect DNS information from direct Linux agents. |
| [System Center Operations Manager management group](../agents/om-agents.md) | Yes | The solution collects DNS information from agents in a connected Operations Manager management group. A direct connection from the Operations Manager agent to Azure Monitor is not required. Data is forwarded from the management group to the Log Analytics workspace. | | [Azure storage account](../essentials/resource-logs.md#send-to-log-analytics-workspace) | No | Azure storage isn't used by the solution. |
To provide feedback, visit the [Log Analytics UserVoice page](https://aka.ms/dns
## Next steps
-[Query logs](../logs/log-query-overview.md) to view detailed DNS log records.
+[Query logs](../logs/log-query-overview.md) to view detailed DNS log records.
azure-monitor Solution Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/solution-office-365.md
Last updated 03/30/2020
> ### Q: How I can use the Azure Sentinel out-of-the-box security-oriented content? > Azure Sentinel provides out-of-the-box security-oriented dashboards, custom alert queries, hunting queries, investigation, and automated response capabilities based on the Office 365 and Azure AD logs. Explore the Azure Sentinel GitHub and tutorials to learn more: >
-> - [Detect threats out-of-the-box](../../sentinel/tutorial-detect-threats-built-in.md)
-> - [Create custom analytic rules to detect suspicious threats](../../sentinel/tutorial-detect-threats-custom.md)
-> - [Monitor your data](../../sentinel/tutorial-monitor-your-data.md)
-> - [Investigate incidents with Azure Sentinel](../../sentinel/tutorial-investigate-cases.md)
+> - [Detect threats out-of-the-box](/azure/azure-monitor/insights/articles/sentinel/detect-threats-built-in.md)
+> - [Create custom analytic rules to detect suspicious threats](/azure/azure-monitor/insights/articles/sentinel/detect-threats-custom.md)
+> - [Monitor your data](/azure/azure-monitor/insights/articles/sentinel/monitor-your-data.md)
+> - [Investigate incidents with Azure Sentinel](/azure/azure-monitor/insights/articles/sentinel/investigate-cases.md)
> - [Set up automated threat responses in Azure Sentinel](../../sentinel/tutorial-respond-threats-playbook.md) > - [Azure Sentinel GitHub community](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks) >
The following table provides sample log queries for update records collected by
* Use [log queries in Azure Monitor](../logs/log-query-overview.md) to view detailed update data. * [Create your own dashboards](../visualize/tutorial-logs-dashboards.md) to display your favorite Office 365 search queries.
-* [Create alerts](../alerts/alerts-overview.md) to be proactively notified of important Office 365 activities.
+* [Create alerts](../alerts/alerts-overview.md) to be proactively notified of important Office 365 activities.
azure-monitor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/vmware.md
Use the following information to install and configure the solution.
vSphere ESXi Host 5.5, 6.0, and 6.5 #### Prepare a Linux server
-Create a Linux operating system VM to receive all syslog data from the ESXi hosts. The [Log Analytics Linux agent](../vm/quick-collect-linux-computer.md) is the collection point for all ESXi host syslog data. You can use multiple ESXi hosts to forward logs to a single Linux server, as in the following example.
+Create a Linux operating system VM to receive all syslog data from the ESXi hosts. The [Log Analytics Linux agent](../vm/monitor-virtual-machine.md) is the collection point for all ESXi host syslog data. You can use multiple ESXi hosts to forward logs to a single Linux server, as in the following example.
[!INCLUDE [log-analytics-agent-note](../../../includes/log-analytics-agent-note.md)]
There can be multiple reasons:
## Next steps * Use [log queries](../logs/log-query-overview.md) in Log Analytics to view detailed VMware host data. * [Create your own dashboards](../visualize/tutorial-logs-dashboards.md) showing VMware host data.
-* [Create alerts](../alerts/alerts-overview.md) when specific VMware host events occur.
-
+* [Create alerts](../alerts/alerts-overview.md) when specific VMware host events occur.
azure-monitor App Insights Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/app-insights-connector.md
Unlike most other Log Analytics solutions, data isn't collected for the Applicat
| Connected Source | Supported | Description | | | | | | [Windows agents](./../agents/agent-windows.md) | No | The solution does not collect information from Windows agents. |
-| [Linux agents](../vm/quick-collect-linux-computer.md) | No | The solution does not collect information from Linux agents. |
+| [Linux agents](../vm/monitor-virtual-machine.md) | No | The solution does not collect information from Linux agents. |
| [SCOM management group](../agents/om-agents.md) | No | The solution does not collect information from agents in a connected SCOM management group. | | [Azure storage account](../essentials/resource-logs.md#send-to-log-analytics-workspace) | No | The solution does not collection information from Azure storage. |
ApplicationInsights | summarize by ApplicationName
## Next steps -- Use [Log Search](./log-query-overview.md) to view detailed information for your Application Insights apps.
+- Use [Log Search](./log-query-overview.md) to view detailed information for your Application Insights apps.
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-security.md
To ensure the security of data in transit to Azure Monitor, we strongly encourag
The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents cannot communicate over at least TLS 1.2 you would not be able to send data to Azure Monitor Logs.
-We do not recommend explicitly setting your agent to only use TLS 1.2 unless absolutely necessary, as it can break platform level security features that allow you to automatically detect and take advantage of newer more secure protocols as they become available, such as TLS 1.3.
+We recommend you do NOT explicitly set your agent to only use TLS 1.2 unless absolutely necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise, you may miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards.
### Platform-specific guidance
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-access.md
Sometimes custom logs come from sources that are not directly associated to a sp
* See [Log Analytics agent overview](../agents/log-analytics-agent.md) to gather data from computers in your datacenter or other cloud environment.
-* See [Collect data about Azure virtual machines](../vm/quick-collect-azurevm.md) to configure data collection from Azure VMs.
+* See [Collect data about Azure virtual machines](../vm/monitor-virtual-machine.md) to configure data collection from Azure VMs.
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-monitor Tutorial Logs Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/tutorial-logs-dashboards.md
Log Analytics dashboards can visualize all of your saved log queries, giving you
> * Add a log query to a shared dashboard > * Customize a tile in a shared dashboard
-To complete the example in this tutorial, you must have an existing virtual machine [connected to the Log Analytics workspace](../vm/quick-collect-azurevm.md).
+To complete the example in this tutorial, you must have an existing virtual machine [connected to the Log Analytics workspace](../vm/monitor-virtual-machine.md).
## Sign in to Azure portal Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/vmext-troubleshoot.md
If the *Microsoft Monitoring Agent* VM extension is not installing or reporting,
1. Check if the Azure VM agent is installed and working correctly by using the steps in [KB 2965986](https://support.microsoft.com/kb/2965986#mt1). * You can also review the VM agent log file `C:\WindowsAzure\logs\WaAppAgent.log` * If the log does not exist, the VM agent is not installed.
- * [Install the Azure VM Agent](../vm/quick-collect-azurevm.md#enable-the-log-analytics-vm-extension)
+ * [Install the Azure VM Agent](../vm/monitor-virtual-machine.md#agents)
2. Review the Microsoft Monitoring Agent VM extension log files in `C:\Packages\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent` 3. Ensure the virtual machine can run PowerShell scripts 4. Ensure permissions on C:\Windows\temp havenΓÇÖt been changed
azure-monitor Monitor Vm Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-vm-azure.md
Last updated 05/05/2020
This article describes how to use Azure Monitor to collect and analyze monitoring data from Azure virtual machines to maintain their health. Virtual machines can be monitored for availability and performance with Azure Monitor like any [other Azure resource](../essentials/monitor-azure-resource.md), but they're unique from other resources since you also need to monitor the guest operating and system and the workloads that run in it. > [!NOTE]
-> This article provides a complete overview of the concepts and options for monitoring virtual machines in Azure Monitor. To start monitoring your virtual machines quickly without focusing on the underlying concepts, see [Quickstart: Monitor an Azure virtual machine with Azure Monitor](./quick-monitor-azure-vm.md).
+> This article provides a complete overview of the concepts and options for monitoring virtual machines in Azure Monitor. To start monitoring your virtual machines quickly without focusing on the underlying concepts, see [Quickstart: Monitor an Azure virtual machine with Azure Monitor](./monitor-virtual-machine.md).
## Differences from other Azure resources
See [Connect Operations Manager to Azure Monitor](../agents/om-agents.md) for de
## Next steps * [Learn how to analyze data in Azure Monitor logs using log queries.](../logs/get-started-queries.md)
-* [Learn about alerts using metrics and logs in Azure Monitor.](../alerts/alerts-overview.md)
+* [Learn about alerts using metrics and logs in Azure Monitor.](../alerts/alerts-overview.md)
azure-percept Audio Button Led Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/audio-button-led-behavior.md
Previously updated : 03/25/2021 Last updated : 08/03/2021 # Azure Percept Audio button and LED states
azure-percept Connect Over Cellular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/connect-over-cellular.md
Previously updated : 05/20/2021 Last updated : 07/28/2021
azure-percept How To Select Update Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-select-update-package.md
Previously updated : 05/04/2021 Last updated : 07/23/2021
azure-percept Troubleshoot Audio Accessory Speech Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md
Previously updated : 03/25/2021 Last updated : 08/03/2021
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
Previously updated : 03/25/2021 Last updated : 08/10/2021
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-sql File Space Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/file-space-manage.md
ms.devlang:
- Previously updated : 05/28/2021+ Last updated : 08/09/2021 # Manage file space for databases in Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
Monitoring file space usage and shrinking data files may be necessary in the fol
- Allow decreasing the max size of a single database or elastic pool. - Allow changing a single database or elastic pool to a different service tier or performance tier with a lower max size.
+> [!NOTE]
+> Shrink operations should not be considered regular maintenance operations. Data and log files that grow due to regular, recurring business operations do not require shrink operations.
+ ### Monitoring file space usage Most storage space metrics displayed in the following APIs only measure the size of used data pages: - Azure Resource Manager based metrics APIs including PowerShell [get-metrics](/powershell/module/az.monitor/get-azmetric)-- T-SQL: [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) However, the following APIs also measure the size of space allocated for databases and elastic pools: - T-SQL: [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) - T-SQL: [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database)
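For illustration, a minimal T-SQL sketch of the used-versus-allocated comparison described above, assuming a hypothetical database name and that the query runs in the `master` database (where `sys.resource_stats` is exposed):

```sql
-- Compare used vs. allocated data space for a database (run in the master database).
-- 'database_name' is a placeholder for your database name.
SELECT TOP (1)
       start_time,
       storage_in_megabytes           AS used_data_space_mb,
       allocated_storage_in_megabytes AS allocated_data_space_mb
FROM sys.resource_stats
WHERE database_name = 'database_name'
ORDER BY start_time DESC;
```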
-### Shrinking data files
-
-Azure SQL Database does not automatically shrink data files to reclaim unused allocated space due to the potential impact to database performance. However, customers may shrink data files via self-service at a time of their choosing by following the steps described in [reclaim unused allocated space](#reclaim-unused-allocated-space).
-
-### Shrinking transaction log file
-
-Unlike data files, Azure SQL Database automatically shrinks transaction log file to avoid excessive space usage that can lead to out-of-space errors. It is usually not necessary for customers to shrink the transaction log file.
-
-In Premium and Business Critical service tiers, if the transaction log becomes large, it may significantly contribute to local storage consumption toward the [maximum local storage](resource-limits-logical-server.md#storage-space-governance) limit. If local storage consumption is close to the limit, customers may choose to shrink transaction log using the [DBCC SHRINKFILE](/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql) command as shown in the following example. This releases local storage as soon as the command completes, without waiting for the periodic automatic shrink operation.
-
-```tsql
-DBCC SHRINKFILE (2);
-```
- ## Understanding types of storage space for a database Understanding the following storage space quantities are important for managing the file space of a database.
ORDER BY end_time DESC;
## Reclaim unused allocated space
-> [!NOTE]
+> [!IMPORTANT]
> Shrink commands impact database performance while running, and if possible should be run during periods of low usage.
-### DBCC shrink
+### Shrinking data files
+
+Because of a potential impact to database performance, Azure SQL Database does not automatically shrink data files. However, customers may shrink data files via self-service at a time of their choosing. This should not be a regularly scheduled operation, but rather a one-time event in response to a significant reduction in the space used by data files.
+
+In Azure SQL Database, to shrink files you can use the `DBCC SHRINKDATABASE` or `DBCC SHRINKFILE` commands:
+
+- `DBCC SHRINKDATABASE` will shrink all database data and log files, which is typically unnecessary. The command shrinks one file at a time. It will also [shrink the log file](#shrinking-transaction-log-file). Azure SQL Database automatically shrinks log files, if necessary.
+- `DBCC SHRINKFILE` command supports more advanced scenarios:
+ - It can target individual files as needed, rather than shrinking all files in the database.
+ - Each `DBCC SHRINKFILE` command can run in parallel with other `DBCC SHRINKFILE` commands to shrink the database faster, at the expense of higher resource usage and a higher chance of blocking user queries, if they are executing during shrink.
+ - If the tail of the file does not contain data, it can reduce allocated file size much faster by specifying the TRUNCATEONLY argument. This does not require data movement within the file.
+- For more information about these shrink commands, see [DBCC SHRINKDATABASE](/sql/t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql) or [DBCC SHRINKFILE](/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql).
-Once databases have been identified for reclaiming unused allocated space, modify the name of the database in the following command to shrink the data files for each database.
+The following examples must be executed while connected to the target user database, not the `master` database.
+
+To use `DBCC SHRINKDATABASE` to shrink all data and log files in a given database:
```sql -- Shrink database data space allocated.
-DBCC SHRINKDATABASE (N'db1');
+DBCC SHRINKDATABASE (N'database_name');
+```
+
+In Azure SQL Database, a database may have one or more data files. Additional data files can only be created automatically. To determine the file layout of your database, query the `sys.database_files` catalog view using the following sample script:
+
+```sql
+-- Review file properties, including file_id values to reference in shrink commands
+SELECT file_id,
+ name,
+ CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
+ CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
+ CAST(max_size AS bigint) * 8 / 1024. AS max_size_mb
+FROM sys.database_files
+WHERE type_desc IN ('ROWS','LOG');
+GO
+```
+
+Execute a shrink against one file only via the `DBCC SHRINKFILE` command, for example:
+
+```sql
+-- Shrink the database data file named 'data_0' by removing all unused space at the end of the file, if any.
+DBCC SHRINKFILE ('data_0', TRUNCATEONLY);
+GO
```
-Shrink commands impact database performance while running, and if possible should be run during periods of low usage.
+You should also be aware of the potential negative performance impact of shrinking database files; see the [Rebuild indexes](#rebuild-indexes) section below.
-You should also be aware of the potential negative performance impact of shrinking database files, see [**Rebuild indexes**](#rebuild-indexes) section below.
+### Shrinking transaction log file
+
+Unlike data files, Azure SQL Database automatically shrinks transaction log file to avoid excessive space usage that can lead to out-of-space errors. It is usually not necessary for customers to shrink the transaction log file.
+
+In Premium and Business Critical service tiers, if the transaction log becomes large, it may significantly contribute to local storage consumption toward the [maximum local storage](resource-limits-logical-server.md#storage-space-governance) limit. If local storage consumption is close to the limit, customers may choose to shrink transaction log using the [DBCC SHRINKFILE](/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql) command as shown in the following example. This releases local storage as soon as the command completes, without waiting for the periodic automatic shrink operation.
-For more information about this command, see [SHRINKDATABASE](/sql/t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql).
+The following example should be executed while connected to the target user database, not the `master` database.
+
+```tsql
+-- Shrink the database log file (always file_id = 2), by removing all unused space at the end of the file, if any.
+DBCC SHRINKFILE (2, TRUNCATEONLY);
+```
### Auto-shrink
-Alternatively, auto-shrink can be enabled for a database. Auto-shrink reduces file management complexity and is less impactful to database performance than `SHRINKDATABASE` or `SHRINKFILE`. Auto-shrink can be particularly helpful in managing elastic pools with many databases that experience significant growth and reduction in space used. However, auto shrink can be less effective in reclaiming file space than `SHRINKDATABASE` and `SHRINKFILE`.
+Alternatively, auto-shrink can be enabled for a database. However, auto-shrink can be less effective in reclaiming file space than `DBCC SHRINKDATABASE` and `DBCC SHRINKFILE`.
+
+Auto-shrink can be helpful in the specific scenario where an elastic pool contains many databases that experience significant growth and reduction in data file space used. This is not a common scenario.
By default, auto-shrink is disabled, which is recommended for most databases. If it becomes necessary to enable auto-shrink, it is recommended to disable it once space management goals have been achieved, instead of keeping it enabled permanently. For more information, see [Considerations for AUTO_SHRINK](/troubleshoot/sql/admin/considerations-autogrow-autoshrink#considerations-for-auto_shrink).
-To enable auto-shrink, execute the following command in your database (not in the master database).
+To enable auto-shrink, execute the following command while connected to your database (not in the master database).
```sql -- Enable auto-shrink for the current database.
ALTER DATABASE CURRENT SET AUTO_SHRINK ON;
For more information about this command, see [DATABASE SET](/sql/t-sql/statements/alter-database-transact-sql-set-options) options.
-### Rebuild indexes
+### <a name="rebuild-indexes"></a> Index maintenance before or after shrink
+
+After a shrink operation is completed against data files, indexes may become fragmented and lose their performance optimization effectiveness for certain workloads, such as queries using large scans. If performance degradation occurs after the shrink operation is complete, consider index maintenance to rebuild indexes.
-After data files are shrunk, indexes may become fragmented and lose their performance optimization effectiveness. If performance degradation occurs, consider rebuilding database indexes. For more information on fragmentation and index maintenance, see [Optimize index maintenance to improve query performance and reduce resource consumption](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes).
+If page density in the database is low, a shrink will take longer because it will have to move more pages in each data file. Microsoft recommends determining average page density before executing shrink commands. If page density is low, rebuild or reorganize indexes to increase page density before running shrink. For more information, including a sample script to determine page density, see [Optimize index maintenance to improve query performance and reduce resource consumption](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes).
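As a hedged illustration of the page-density check recommended above (the index and table names in the commented rebuild are hypothetical), a short T-SQL sketch using `sys.dm_db_index_physical_stats` in `SAMPLED` mode:

```sql
-- Sample average page density and fragmentation for indexes in the current database.
-- Low avg_page_space_used_in_percent suggests rebuilding or reorganizing before a shrink.
SELECT OBJECT_NAME(ips.object_id)      AS table_name,
       i.name                          AS index_name,
       ips.avg_page_space_used_in_percent,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
INNER JOIN sys.indexes AS i
    ON ips.object_id = i.object_id
   AND ips.index_id = i.index_id
WHERE ips.page_count > 0
ORDER BY ips.avg_page_space_used_in_percent ASC;

-- Hypothetical example: rebuild a low-density index before shrinking.
-- ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD;
```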
## Next steps
After data files are shrunk, indexes may become fragmented and lose their perfor
- [Resource limits for single databases using the DTU-based purchasing model](resource-limits-dtu-single-databases.md) - [Azure SQL Database vCore-based purchasing model limits for elastic pools](resource-limits-vcore-elastic-pools.md) - [Resources limits for elastic pools using the DTU-based purchasing model](resource-limits-dtu-elastic-pools.md)-- For more information about the `SHRINKDATABASE` command, see [SHRINKDATABASE](/sql/t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql).-- For more information on fragmentation and rebuilding indexes, see [Reorganize and Rebuild Indexes](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes).
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
azure-sql Service Tiers Dtu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-dtu.md
Previously updated : 5/4/2021 Last updated : 8/12/2021 # Service tiers in the DTU-based purchase model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The key metrics in the benchmark are throughput and response time.
| Standard |Transactions per minute |90th percentile at 1.0 seconds | | Basic |Transactions per hour |80th percentile at 2.0 seconds |
+> [!NOTE]
+> Response time metrics are specific to the [DTU Benchmark](#dtu-benchmark). Response times for other workloads are workload-dependent and will differ.
+ ## Next steps - For details on specific compute sizes and storage size choices available for single databases, see [SQL Database DTU-based resource limits for single databases](resource-limits-dtu-single-databases.md#single-database-storage-sizes-and-compute-sizes).
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
The following is a quick checklist of best practices for Azure-specific guidance
- Leverage [Azure Security Center](../../../security-center/index.yml) to improve the overall security posture of your virtual machine deployment. - Leverage [Azure Defender](../../../security-center/azure-defender.md), integrated with [Azure Security Center](https://azure.microsoft.com/services/security-center/), for specific [SQL Server VM coverage](../../../security-center/defender-for-sql-introduction.md) including vulnerability assessments, and just-in-time access, which reduces the attack service while allowing legitimate users to access virtual machines when necessary. To learn more, see [vulnerability assessments](../../../security-center/defender-for-sql-on-machines-vulnerability-assessment.md), [enable vulnerability assessments for SQL Server VMs](../../../security-center/defender-for-sql-on-machines-vulnerability-assessment.md) and [just-in-time access](../../../security-center/just-in-time-explained.md). - Leverage [Azure Advisor](../../../advisor/advisor-overview.md) to address [performance](../../../advisor/advisor-performance-recommendations.md), [cost](../../../advisor/advisor-cost-recommendations.md), [reliability](../../../advisor/advisor-high-availability-recommendations.md), [operational excellence](../../../advisor/advisor-operational-excellence-recommendations.md), and [security recommendations](../../../advisor/advisor-security-recommendations.md).-- Leverage [Azure Monitor](../../../azure-monitor/vm/quick-monitor-azure-vm.md) to collect, analyze, and act on telemetry data from your SQL Server environment. This includes identifying infrastructure issues with [VM insights](../../../azure-monitor/vm/vminsights-overview.md) and monitoring data with [Log Analytics](../../../azure-monitor/logs/log-query-overview.md) for deeper diagnostics.
+- Leverage [Azure Monitor](../../../azure-monitor/vm/monitor-virtual-machine.md) to collect, analyze, and act on telemetry data from your SQL Server environment. This includes identifying infrastructure issues with [VM insights](../../../azure-monitor/vm/vminsights-overview.md) and monitoring data with [Log Analytics](../../../azure-monitor/logs/log-query-overview.md) for deeper diagnostics.
- Enable [Autoshutdown](../../../automation/automation-solution-vm-management.md) for development and test environments. - Implement a high availability and disaster recovery (HADR) solution that meets your business continuity SLAs, see the [HADR options](business-continuity-high-availability-disaster-recovery-hadr-overview.md#deployment-architectures) options available for SQL Server on Azure VMs. - Use the Azure portal (support + troubleshooting) to evaluate [resource health](../../../service-health/resource-health-overview.md) and history; submit new support requests when needed.
To learn more, see the other articles in this series:
For security best practices, see [Security considerations for SQL Server on Azure Virtual Machines](security-considerations-best-practices.md).
-Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.yml).
+Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.yml).
azure-video-analyzer Embed Player In Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/embed-player-in-power-bi.md
Dashboards are an insightful way to monitor your business and view all your most
## Suggested pre-reading - Azure Video Analyzer [player widget](player-widget.md)-- Introduction to [Power BI dashboards](https://docs.microsoft.com/power-bi/create-reports/service-dashboards)
+- Introduction to [Power BI dashboards](/power-bi/create-reports/service-dashboards)
## Prerequisites
Here is a sample of multiple videos pinned to a single Power BI dashboard.
## Next steps -- Learn more about the [widget API](https://github.com/Azure/video-analyzer/tree/main/widgets)
+- Learn more about the [widget API](https://github.com/Azure/video-analyzer/tree/main/widgets)
azure-vmware Concepts Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-hub-and-spoke.md
The architecture has the following main components:
- **Spoke virtual network**
- - **IaaS Spoke:** An IaaS spoke hosts Azure IaaS based workloads, including VM availability sets and virtual machine scale sets, and the corresponding network components.
+ - **IaaS Spoke:** Hosts Azure IaaS based workloads, including VM availability sets and virtual machine scale sets, and the corresponding network components.
- - **PaaS Spoke:** A PaaS Spoke hosts Azure PaaS services using private addressing thanks to [Private Endpoint](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md).
+ - **PaaS Spoke:** Hosts Azure PaaS services using private addressing thanks to [Private Endpoint](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md).
- **Azure Firewall:** Acts as the central piece to segment traffic between the Spokes and Azure VMware Solution.
For more information on Azure VMware Solution networking and connectivity concep
### Traffic segmentation
-[Azure Firewall](../firewall/index.yml) is the Hub and Spoke topology's central piece, deployed on the Hub virtual network. Use Azure Firewall or another Azure supported network virtual appliance (NVA) to establish traffic rules and segment the communication between the different spokes and Azure VMware Solution workloads.
+[Azure Firewall](../firewall/index.yml) is the Hub and Spoke topology's central piece, deployed on the Hub virtual network. Use Azure Firewall, or another Azure-supported network virtual appliance (NVA), to establish traffic rules and segment the communication between the different spokes and Azure VMware Solution workloads.
-Create route tables to direct the traffic to Azure Firewall. For the Spoke virtual networks, create a route that sets the default route to the internal interface of Azure Firewall. This way, when a workload in the Virtual Network needs to reach the Azure VMware Solution address space, the firewall can evaluate it and apply the corresponding traffic rule to either allow or deny it.
+Create route tables to direct the traffic to Azure Firewall. For the Spoke virtual networks, create a route that sets the default route to the internal interface of the Azure Firewall. This way, when a workload in the Virtual Network needs to reach the Azure VMware Solution address space, the firewall can evaluate it and apply the corresponding traffic rule to either allow or deny it.
:::image type="content" source="media/hub-spoke/create-route-table-to-direct-traffic.png" alt-text="Screenshot showing the route tables to direct traffic to Azure Firewall." lightbox="media/hub-spoke/create-route-table-to-direct-traffic.png":::
Access Azure VMware Solution environment with a jump box, which is a Windows 10
>[!IMPORTANT] >Azure Bastion is the service recommended to connect to the jump box to prevent exposing Azure VMware Solution to the internet. You cannot use Azure Bastion to connect to Azure VMware Solution VMs since they are not Azure IaaS objects.
-As a security best practice, deploy [Microsoft Azure Bastion](../bastion/index.yml) service within the Hub virtual network. Azure Bastion provides seamless RDP and SSH access to VMs deployed on Azure without the need to provision public IP addresses to those resources. Once you provision the Azure Bastion service, you can access the selected VM from the Azure portal. After establishing the connection, a new tab opens, showing the jump box desktop, and from that desktop, you can access the Azure VMware Solution private cloud management plane.
+As a security best practice, deploy the [Microsoft Azure Bastion](../bastion/index.yml) service within the Hub virtual network. Azure Bastion provides seamless RDP and SSH access to VMs deployed on Azure without providing public IP addresses to those resources. Once you provision the Azure Bastion service, you can access the selected VM from the Azure portal. After establishing the connection, a new tab opens, showing the jump box desktop, and from that desktop, you can access the Azure VMware Solution private cloud management plane.
> [!IMPORTANT] > Do not give a public IP address to the jump box VM or expose 3389/TCP port to the public internet.
For Azure DNS resolution, there are two options available:
The best approach is to combine both to provide reliable name resolution for Azure VMware Solution, on-premises, and Azure.
-As a general design recommendation, use the existing Azure DNS infrastructure (in this case, Active Directory-integrated DNS) deployed onto at least two Azure VMs deployed in the Hub virtual network and configured in the Spoke virtual networks to use those Azure DNS servers in the DNS settings.
+As a general design recommendation, use the existing Active Directory-integrated DNS deployed onto at least two Azure VMs in the Hub virtual network, and configure the Spoke virtual networks to use those DNS servers in their DNS settings.
You can use Azure Private DNS, where the Azure Private DNS zone links to the virtual network. The DNS servers are used as hybrid resolvers with conditional forwarding to on-premises or Azure VMware Solution running DNS using customer Azure Private DNS infrastructure.
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
Last updated 07/29/2021
# Azure VMware Solution identity concepts
-Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
+Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You'll use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. The CloudAdmin role is used for vCenter, and restricted administrator rights are used for NSX-T Manager.
## vCenter access and identity
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
1. Right-click the object and select **Add Permission**.
-1. In the **Add Permission** window, select the Identity Source in the **User** drop-down where the group or user can be found.
+1. Select the Identity Source in the **User** drop-down where the group or user can be found.
1. Search for the user or group after selecting the Identity Source under the **User** section.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
>[!NOTE] >NSX-T [!INCLUDE [nsxt-version](includes/nsxt-version.md)] is currently supported for all new private clouds.
-Use the *admin* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) Gateways, segments (logical switches), and all services. In addition, the privileges give you access to the NSX-T Tier-0 (T0) Gateway. A change to the T0 Gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 Gateway.
+Use the *admin* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) gateways, segments (logical switches), and all services. In addition, the privileges give you access to the NSX-T Tier-0 (T0) gateway. A change to the T0 gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 gateway.
## Next steps
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
Now that you've covered Azure VMware Solution network and interconnectivity conc
- [Azure VMware Solution storage concepts](concepts-storage.md) - [Azure VMware Solution identity concepts](concepts-identity.md)-- [How to enable Azure VMware Solution resource](deploy-azure-vmware-solution.md#register-the-microsoftavs-resource-provider)
+- [Enabling the Azure VMware Solution resource provider](deploy-azure-vmware-solution.md#register-the-microsoftavs-resource-provider)
<!-- LINKS - external --> [enable Global Reach]: ../expressroute/expressroute-howto-set-global-reach.md
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/install-vmware-hcx.md
You can uninstall HCX Advanced through the portal, which removes the existing pa
1. In your Azure VMware Solution private cloud, select **Manage** > **Add-ons**. 1. Select **Get started** for **HCX Workload Mobility**, then select **Uninstall**.- 1. Enter **yes** to confirm the uninstall.
-At this point, HCX Advanced will no longer have the vCenter plugin, and if needed, you can reinstall it at any time.
+At this point, HCX Advanced no longer has the vCenter plugin, and if needed, you can reinstall it at any time.
## Next steps
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-access-private-cloud.md
Title: Tutorial - Access your private cloud description: Learn how to access an Azure VMware Solution private cloud Previously updated : 03/13/2021 Last updated : 08/13/2021 # Tutorial: Access an Azure VMware Solution private cloud
In this tutorial, you learn how to:
## Create a new Windows virtual machine
-1. In the resource group, select **Add**, search for and select **Microsoft Windows 10**. Then select **Create**.
+1. In the resource group, select **Add**, search for **Microsoft Windows 10**, and select it. Then select **Create**.
:::image type="content" source="media/tutorial-access-private-cloud/ss8-azure-w10vm-create.png" alt-text="Screenshot of how to add a new Windows 10 VM for a jump box.":::
In this tutorial, you learn how to:
| **Username** | Enter the user name for logging on to the VM. | | **Password** | Enter the password for logging on to the VM. | | **Confirm password** | Enter the password for logging on to the VM. |
- | **Public inbound ports** | Select **None**. If you select None, you can use [JIT access](../security-center/security-center-just-in-time.md#jit-configure) to control access to the VM only when you want to access it. Alternatively, you can use an [Azure Bastion](../bastion/tutorial-create-host-portal.md) if you want to access the jump box server securely from the internet without exposing any network port. |
+ | **Public inbound ports** | Select **None**. <ul><li>To control access to the VM only when you want to access it, use [JIT access](../security-center/security-center-just-in-time.md#jit-configure).</li><li>To securely access the jump box server from the internet without exposing any network port, use an [Azure Bastion](../bastion/tutorial-create-host-portal.md).</li></ul> |
1. Once validation passes, select **Create** to start the virtual machine creation process.
In this tutorial, you learn how to:
1. In the Windows VM, open a browser and navigate to the vCenter and NSX-T Manager URLs in two tabs.
-1. In the vCenter tab, enter the `cloudadmin@vmcp.local` user credentials from the previous step.
+1. In the vCenter tab, enter the `cloudadmin@vsphere.local` user credentials from the previous step.
:::image type="content" source="media/tutorial-access-private-cloud/ss5-vcenter-login.png" alt-text="Screenshot showing the VMware vSphere sign in page." border="true":::
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-configure-networking.md
When you select an existing vNet, the Azure Resource Manager (ARM) template that
### Create a new vNet
-When you create a new vNet, the required components needed to connect to Azure VMware Solution get created automatically.
+When you create a new vNet, the required components to connect to Azure VMware Solution are automatically created.
1. In your Azure VMware Solution private cloud, under **Manage**, select **Connectivity**.
When you create a new vNet, the required components needed to connect to Azure V
3. Provide or update the information for the new vNet and then select **OK**.
- At this point, the vNet validates if overlapping IP address spaces between Azure VMware Solution and vNet are detected. If detected, change the network address of either the private cloud or the vNet so they don't overlap.
+ At this point, validation checks whether the IP address spaces of Azure VMware Solution and the vNet overlap. If they do, change the network address of either the private cloud or the vNet so they don't overlap.
:::image type="content" source="media/networking/create-new-virtual-network.png" alt-text="Screenshot showing the Create virtual network window.":::
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-monitoring-built-in-monitor.md
You can change the state of an alert to **Acknowledged** or **Closed** by clicki
> [!NOTE] > - In Backup center, only alerts for Azure-based workloads are displayed currently. To view alerts for on-premises resources, navigate to the Recovery Services vault and click the **Alerts** menu item.
-> - Only Azure Monitor alerts are displayed in Backup center. Alerts raised by the older alerting solution (accessed via the [Backup Alerts](/azure/backup/backup-azure-monitoring-built-in-monitor#backup-alerts-in-recovery-services-vault) tab in Recovery Services vault) are not displayed in Backup center.
+> - Only Azure Monitor alerts are displayed in Backup center. Alerts raised by the older alerting solution (accessed via the [Backup Alerts](#backup-alerts-in-recovery-services-vault) tab in Recovery Services vault) are not displayed in Backup center.
For more information about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md). ### Configuring notifications for alerts
To configure notifications for Azure Monitor alerts, you must create an action r
## Next steps
-[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
+[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
backup Backup Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-cli.md
For information on the Azure Blobs regions availability, supported scenarios, an
## Before you start
-See the [prerequisites](/azure/backup/blob-backup-configure-manage#before-you-start) and [support matrix](/azure/backup/blob-backup-support-matrix) before you get started.
+See the [prerequisites](./blob-backup-configure-manage.md#before-you-start) and [support matrix](./blob-backup-support-matrix.md) before you get started.
## Create a Backup vault
backup Backup Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-ps.md
For information on the Azure blob region availability, supported scenarios and l
## Before you start
-See the [prerequisites](/azure/backup/blob-backup-configure-manage#before-you-start) and [support matrix](/azure/backup/blob-backup-support-matrix) before you get started.
+See the [prerequisites](./blob-backup-configure-manage.md#before-you-start) and [support matrix](./blob-backup-support-matrix.md) before you get started.
## Create a Backup vault
blobrg-PSTestSA-3df6ac08-9496-4839-8fb5-8b78e594f166 Microsoft.DataProtection/ba
## Next steps
-[Restore Azure blobs using Azure PowerShell](restore-blobs-storage-account-ps.md)
+[Restore Azure blobs using Azure PowerShell](restore-blobs-storage-account-ps.md)
backup Backup Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-support-matrix.md
Backup Center provides a single pane of glass for enterprises to [govern, monito
| Insights | View Backup Reports | <li> Azure Virtual Machine <br><br> <li> SQL in Azure Virtual Machine <br><br> <li> SAP HANA in Azure Virtual Machine <br><br> <li> Azure Files <br><br> <li> System Center Data Protection Manager <br><br> <li> Azure Backup Agent (MARS) <br><br> <li> Azure Backup Server (MABS) | Refer to [supported scenarios for Backup Reports](./configure-reports.md#supported-scenarios) | | Governance | View and assign built-in and custom Azure Policies under category 'Backup' | N/A | N/A | | Governance | View datasources not configured for backup | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server | N/A |
-| Monitoring | View Azure Monitor alerts at scale | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer [Alerts](/azure/backup/backup-azure-monitoring-built-in-monitor#azure-monitor-alerts-for-azure-backup-preview) documentation |
-| Actions | Execute cross-region restore job from Backup center | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM | Refer [cross-region restore](/azure/backup/backup-create-rs-vault#set-cross-region-restore) documentation |
+| Monitoring | View Azure Monitor alerts at scale | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer [Alerts](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview) documentation |
+| Actions | Execute cross-region restore job from Backup center | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM | Refer [cross-region restore](./backup-create-rs-vault.md#set-cross-region-restore) documentation |
## Unsupported scenarios
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
In addition to the capability to move the recovery points:
- You have the capability to move all their recovery points for a particular backup item at one go using sample scripts. - You can view Archive storage usage on the Vault dashboard.
-For more information, see [Archive Tier support](/azure/backup/archive-tier-support).
+For more information, see [Archive Tier support](./archive-tier-support.md).
## Backup for Azure Blobs is now generally available
bastion Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/troubleshoot.md
The key's randomart image is:
**A:** For domain-joined VMs, Azure Bastion supports username/password-based domain sign-in only. When specifying the domain credentials in the Azure portal, use the UPN (username@domain) format instead of the *domain\username* format to sign in. This is supported for domain-joined and hybrid-joined (both domain-joined and Azure AD-joined) virtual machines. It is not supported for Azure AD-joined-only virtual machines.
+## <a name="connectivity"></a> Unable to connect to virtual machine
+
+**Q:** I am unable to connect to my virtual machine (and I'm not experiencing the problems above).
+
+**A:** You can troubleshoot your connectivity issues by navigating to the **Connection Troubleshoot** tab (in the **Monitoring** section) of your Azure Bastion resource in the Azure portal. Network Watcher Connection Troubleshoot provides the capability to check a direct TCP connection from a virtual machine (VM) to a VM, fully qualified domain name (FQDN), URI, or IPv4 address. To start, choose the source to initiate the connection from and the destination you wish to reach, and then select **Check**. [Learn more](https://docs.microsoft.com/azure/network-watcher/network-watcher-connectivity-overview).
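For a quick manual check while you troubleshoot, the sketch below (not part of the Bastion documentation; the host and ports are placeholders) attempts the same kind of direct TCP connection to the target VM from a machine that can reach it, which is conceptually what Connection Troubleshoot verifies:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a direct TCP connection to host:port and report whether it succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as error:
        print(f"Connection to {host}:{port} failed: {error}")
        return False

# Check RDP (3389) and SSH (22) reachability of a target VM (placeholder private IP).
for port in (3389, 22):
    print(port, "reachable" if can_reach("10.0.1.4", port) else "unreachable")
```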
++ ## <a name="filetransfer"></a>File transfer issues **Q:** Is file transfer supported with Azure Bastion?
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-docker-container-workloads.md
Title: Container workloads description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Previously updated : 10/06/2020 Last updated : 08/13/2021 # Run container applications on Azure Batch
You should be familiar with container concepts and how to create a Batch pool an
- Batch Java SDK version 3.0 - Batch Node.js SDK version 3.0 -- **Accounts**: In your Azure subscription, you need to create a Batch account and optionally an Azure Storage account.
+- **Accounts**: In your Azure subscription, you need to create a [Batch account](accounts.md) and optionally an Azure Storage account.
-- **A supported VM image**: Containers are only supported in pools created with the Virtual Machine Configuration, from images detailed in the following section, "Supported virtual machine images." If you provide a custom image, see the considerations in the following section and the requirements in [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).
+- **A supported VM image**: Containers are only supported in pools created with the Virtual Machine Configuration, from a supported image (listed in the next section). If you provide a custom image, see the considerations in the following section and the requirements in [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).
Keep in mind the following limitations: - Batch provides RDMA support only for containers running on Linux pools.- - For Windows container workloads, we recommend choosing a multicore VM size for your pool. ## Supported virtual machine images
For Linux container workloads, Batch currently supports the following Linux imag
These images are only supported for use in Azure Batch pools and are geared for Docker container execution. They feature: - A pre-installed Docker-compatible [Moby](https://github.com/moby/moby) container runtime- - Pre-installed NVIDIA GPU drivers and NVIDIA container runtime, to streamline deployment on Azure N-series VMs--- Pre-installed/pre-configured image with support for Infiniband RDMA VM sizes for images with the suffix of `-rdma`. Currently these images do not support SR-IOV IB/RDMA VM sizes.
+- VM images with the suffix of '-rdma' are pre-configured with support for InfiniBand RDMA VM sizes. These VM images should not be used with VM sizes that do not have InfiniBand support.
You can also create custom images from VMs running Docker on one of the Linux distributions that is compatible with Batch. If you choose to provide your own custom Linux image, see the instructions in [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).
For Docker support on a custom image, install [Docker Community Edition (CE)](ht
Additional considerations for using a custom Linux image: - To take advantage of the GPU performance of Azure N-series sizes when using a custom image, pre-install NVIDIA drivers. Also, you need to install the Docker Engine Utility for NVIDIA GPUs, [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker).- - To access the Azure RDMA network, use an RDMA-capable VM size. Necessary RDMA drivers are installed in the CentOS HPC and Ubuntu images supported by Batch. Additional configuration may be needed to run MPI workloads. See [Use RDMA-capable or GPU-enabled instances in Batch pool](batch-pool-compute-intensive-sizes.md). ## Container configuration for Batch pool
To run a container task on a container-enabled pool, specify container-specific
- If you run tasks on container images, the [cloud task](/dotnet/api/microsoft.azure.batch.cloudtask) and [job manager task](/dotnet/api/microsoft.azure.batch.cloudjob.jobmanagertask) require container settings. However, the [start task](/dotnet/api/microsoft.azure.batch.starttask), [job preparation task](/dotnet/api/microsoft.azure.batch.cloudjob.jobpreparationtask), and [job release task](/dotnet/api/microsoft.azure.batch.cloudjob.jobreleasetask) do not require container settings (that is, they can run within a container context or directly on the node). -- For Windows, tasks must be run with [ElevationLevel](/rest/api/batchservice/task/add#elevationlevel) set to `admin`.
+- For Windows, tasks must be run with [ElevationLevel](/rest/api/batchservice/task/add#elevationlevel) set to `admin`.
- For Linux, Batch will map the user/group permission to the container. If access to any folder within the container requires Administrator permission, you may need to run the task as pool scope with admin elevation level. This will ensure Batch runs the task as root in the container context. Otherwise, a non-admin user may not have access to those folders.
containerTask.ContainerSettings = cmdContainerSettings;
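The fragment above comes from the C# sample. As a rough cross-reference, here is a minimal sketch of the same pool and task configuration using the Batch Python SDK; the registry, image name, pool ID, and marketplace image are illustrative placeholders, and the exact `ContainerConfiguration` arguments can vary slightly between SDK versions:

```python
import azure.batch.models as batchmodels

# Pool: container support is enabled through the virtual machine configuration.
container_conf = batchmodels.ContainerConfiguration(
    container_image_names=["myregistry.azurecr.io/batch/sample:latest"])

vm_config = batchmodels.VirtualMachineConfiguration(
    image_reference=batchmodels.ImageReference(
        publisher="microsoft-azure-batch",
        offer="ubuntu-server-container",
        sku="20-04-lts",
        version="latest"),
    node_agent_sku_id="batch.node.ubuntu 20.04",
    container_configuration=container_conf)

pool = batchmodels.PoolAddParameter(
    id="container-pool",
    vm_size="Standard_D2s_v3",
    virtual_machine_configuration=vm_config,
    target_dedicated_nodes=1)

# Task: cloud tasks (and job manager tasks) need container settings; the command line runs inside the container.
task = batchmodels.TaskAddParameter(
    id="container-task",
    command_line="python /app/main.py",
    container_settings=batchmodels.TaskContainerSettings(
        image_name="myregistry.azurecr.io/batch/sample:latest"))

# With an authenticated BatchServiceClient, you would then call:
#   batch_client.pool.add(pool)
#   batch_client.task.add(job_id="container-job", task=task)
```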
## Next steps -- For easy deployment of container workloads on Azure Batch through [Shipyard recipes](https://github.com/Azure/batch-shipyard/tree/master/recipes), see the [Batch Shipyard](https://github.com/Azure/batch-shipyard) toolkit .
+- For easy deployment of container workloads on Azure Batch through [Shipyard recipes](https://github.com/Azure/batch-shipyard/tree/master/recipes), see the [Batch Shipyard](https://github.com/Azure/batch-shipyard) toolkit.
- For information on installing and using Docker CE on Linux, see the [Docker](https://docs.docker.com/engine/installation/) documentation. - Learn how to [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md). - Learn more about the [Moby project](https://mobyproject.org/), a framework for creating container-based systems.
batch Large Number Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/large-number-tasks.md
pip install azure-batch
pip install azure-batch-extensions ```
-Set up a `BatchExtensionsClient` that uses the SDK extension:
+After importing the package using `import azext.batch as batch`, set up a `BatchExtensionsClient` that uses the SDK extension:
```python
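# A minimal sketch, assuming shared-key authentication; the account name, key, and URL are
# placeholders, and the URL keyword argument may be batch_url or base_url depending on the
# azure-batch / azure-batch-extensions versions installed.
import azure.batch.batch_auth as batch_auth
import azext.batch as batch

creds = batch_auth.SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")
client = batch.BatchExtensionsClient(
    credentials=creds,
    batch_url="https://<batch-account>.<region>.batch.azure.com")
```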
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 7/15/2021 Last updated : 8/13/2021
The following tables show the Microsoft Security Response Center (MSRC) updates
## July 2021 Guest OS
->[!NOTE]
-
->The July Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the July Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 21-07 | [5004244] | Latest Cumulative Update(LCU) | 6.33 | July 13 , 2021 |
-| Rel 21-07 | [5004233] | IE Cumulative Updates | 2.112, 3.99, 4.92 | July 13 , 2021 |
-| Rel 21-07 | [5004238] | Latest Cumulative Update(LCU) | 5.57 | July 13 , 2021 |
-| Rel 21-07 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.112 | Oct 13, 2020 |
-| Rel 21-07 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.112 | Oct 13, 2020 |
-| Rel 21-07 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.92 | Oct 13, 2020 |
-| Rel 21-07 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.92 | Oct 13, 2020 |
-| Rel 21-07 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.99 | Oct 13, 2020 |
-| Rel 21-07 | [4578954] | . NET Framework 4.5.2 Security and Quality Rollup  | 3.99 | Oct 13, 2020 |
-| Rel 21-07 | [4601060] | . NET Framework 3.5 and 4.7.2 Cumulative Update  | 6.33 | Feb 9, 2021 |
-| Rel 21-07 | [5004289] | Monthly Rollup  | 2.112 | July 13, 2021 |
-| Rel 21-07 | [5004294] | Monthly Rollup  | 3.99 | July 13, 2021 |
-| Rel 21-07 | [5004298] | Monthly Rollup  | 4.92 | July 13, 2021 |
-| Rel 21-07 | [5001401] | Servicing Stack update  | 3.99 | Apr 13, 2021 |
-| Rel 21-07 | [5001403] | Servicing Stack update  | 4.92 | Apr 13, 2021 |
-| Rel 21-07 OOB | [4578013] | Standalone Security Update  | 4.92 | Aug 19, 2020 |
-| Rel 21-07 | [5001402] | Servicing Stack update  | 5.57 | Apr 13, 2021 |
-| Rel 21-07 | [5004378] | Servicing Stack update  | 2.112 | July 13, 2021 |
-| Rel 21-07 | [5003711] | Servicing Stack update  | 6.33 | June 8, 2021 |
-| Rel 21-07 | [4494175] | Microcode  | 5.57 | Sep 1, 2020 |
-| Rel 21-07 | [4494174] | Microcode  | 6.33 | Sep 1, 2020 |
+| Rel 21-07 | [5004244] | Latest Cumulative Update (LCU) | [6.33] | July 13, 2021 |
+| Rel 21-07 | [5004233] | IE Cumulative Updates | [2.112], [3.99], [4.92] | July 13, 2021 |
+| Rel 21-07 | [5004238] | Latest Cumulative Update (LCU) | [5.57] | July 13, 2021 |
+| Rel 21-07 | [4578952] | .NET Framework 3.5 Security and Quality Rollup | [2.112] | Oct 13, 2020 |
+| Rel 21-07 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | [2.112] | Oct 13, 2020 |
+| Rel 21-07 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | [4.92] | Oct 13, 2020 |
+| Rel 21-07 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | [4.92] | Oct 13, 2020 |
+| Rel 21-07 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | [3.99] | Oct 13, 2020 |
+| Rel 21-07 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.99] | Oct 13, 2020 |
+| Rel 21-07 | [4601060] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.33] | Feb 9, 2021 |
+| Rel 21-07 | [5004289] | Monthly Rollup | [2.112] | July 13, 2021 |
+| Rel 21-07 | [5004294] | Monthly Rollup | [3.99] | July 13, 2021 |
+| Rel 21-07 | [5004298] | Monthly Rollup | [4.92] | July 13, 2021 |
+| Rel 21-07 | [5001401] | Servicing Stack update | [3.99] | Apr 13, 2021 |
+| Rel 21-07 | [5001403] | Servicing Stack update | [4.92] | Apr 13, 2021 |
+| Rel 21-07 OOB | [4578013] | Standalone Security Update | [4.92] | Aug 19, 2020 |
+| Rel 21-07 | [5001402] | Servicing Stack update | [5.57] | Apr 13, 2021 |
+| Rel 21-07 | [5004378] | Servicing Stack update | [2.112] | July 13, 2021 |
+| Rel 21-07 | [5003711] | Servicing Stack update | [6.33] | June 8, 2021 |
+| Rel 21-07 | [4494175] | Microcode | [5.57] | Sep 1, 2020 |
+| Rel 21-07 | [4494174] | Microcode | [6.33] | Sep 1, 2020 |
[5004244]: https://support.microsoft.com/kb/5004244 [5004233]: https://support.microsoft.com/kb/5004233
The following tables show the Microsoft Security Response Center (MSRC) updates
[5003711]: https://support.microsoft.com/kb/5003711 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174
+[2.112]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.99]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.92]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.57]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.33]: ./cloud-services-guestos-update-matrix.md#family-6-releases
## June 2021 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 7/1/2021 Last updated : 8/13/2021 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **August 13, 2021**
+The July Guest OS has released.
+ ###### **July 1, 2021** The June Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.33_202107-01 | August 13, 2021 | Post 6.35 |
| WA-GUEST-OS-6.32_202106-01 | July 1, 2021 | Post 6.34 |
-| WA-GUEST-OS-6.31_202105-01 | May 26, 2021 | Post 6.33 |
+|~~WA-GUEST-OS-6.31_202105-01~~| May 26, 2021 | August 13, 2021 |
|~~WA-GUEST-OS-6.30_202104-01~~| April 30, 2021 | July 1, 2021 | |~~WA-GUEST-OS-6.29_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-6.28_202102-01~~| February 19, 2021 | April 30, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.57_202107-01 | August 13, 2021 | Post 5.59 |
| WA-GUEST-OS-5.56_202106-01 | July 1, 2021 | Post 5.58 |
-| WA-GUEST-OS-5.55_202105-01 | May 26, 2021 | Post 5.57 |
+|~~WA-GUEST-OS-5.55_202105-01~~| May 26, 2021 | August 13, 2021 |
|~~WA-GUEST-OS-5.54_202104-01~~| April 30, 2021 | July 1, 2021 | |~~WA-GUEST-OS-5.53_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-5.52_202102-01~~| February 19, 2021 | April 30, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.92_202107-01 | August 13, 2021 | Post 4.94 |
| WA-GUEST-OS-4.91_202106-01 | July 1, 2021 | Post 4.93 |
-| WA-GUEST-OS-4.90_202105-01 | May 26, 2021 | Post 4.92 |
+|~~WA-GUEST-OS-4.90_202105-01~~| May 26, 2021 | August 13, 2021 |
|~~WA-GUEST-OS-4.89_202104-01~~| April 30, 2021 | July 1, 2021 | |~~WA-GUEST-OS-4.88_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-4.87_202102-01~~| February 19, 2021 | April 30, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.99_202107-01 | August 13, 2021 | Post 3.101 |
| WA-GUEST-OS-3.98_202106-01 | July 1, 2021 | Post 3.100 |
-| WA-GUEST-OS-3.97_202105-01 | May 26, 2021 | Post 3.99 |
+|~~WA-GUEST-OS-3.97_202105-01~~| May 26, 2021 | August 13, 2021 |
|~~WA-GUEST-OS-3.96_202104-01~~| April 30, 2021 | July 1, 2021 | |~~WA-GUEST-OS-3.95_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-3.94_202102-01~~| February 19, 2021 | April 30, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.112_202107-01 | August 13, 2021 | Post 2.114 |
| WA-GUEST-OS-2.111_202106-01 | July 1, 2021 | Post 2.113 |
-| WA-GUEST-OS-2.110_202105-01 | May 26, 2021 | Post 2.112 |
+|~~WA-GUEST-OS-2.110_202105-01~~| May 26, 2021 | August 13, 2021 |
|~~WA-GUEST-OS-2.109_202104-01~~| April 30, 2021 | July 1, 2021 | |~~WA-GUEST-OS-2.108_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-2.107_202102-01~~| February 19, 2021 | April 30, 2021 |
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/batch-transcription.md
You can review and test the detailed API, which is available as a [Swagger docum
Batch transcription jobs are scheduled on a best effort basis. You cannot estimate when a job will change into the running state,
-but it should happen within minutes under normal system load.
+but it should happen within minutes under normal system load.
Once in the running state, the transcription occurs faster than the audio runtime playback speed. ## Prerequisites
As with all features of the Speech service, you create a subscription key from t
If you plan to customize models, follow the steps in [Acoustic customization](./how-to-custom-speech-train-model.md) and [Language customization](./how-to-custom-speech-train-model.md). To use the created models in batch transcription, you need their model location. You can retrieve the model location when you inspect the details of the model (`self` property). A deployed custom endpoint is *not needed* for the batch transcription service. >[!NOTE]
-> As a part of the REST API, Batch Transcription has a set of [quotas and limits](speech-services-quotas-and-limits.md#batch-transcription), which we encourage to review. To take the full advantage of Batch Transcription ability to efficiently transcribe a large number of audio files we recommend always sending multiple files per request or pointing to a Blob Storage container with the audio files to transcribe. The service will transcribe the files concurrently reducing the turnaround time. Using multiple files in a single request is very simple and straightforward - see [Configuration](#configuration) section.
+> As a part of the REST API, Batch Transcription has a set of [quotas and limits](speech-services-quotas-and-limits.md#batch-transcription), which we encourage you to review. To take full advantage of Batch Transcription's ability to efficiently transcribe a large number of audio files, we recommend always sending multiple files per request or pointing to a Blob Storage container with the audio files to transcribe. The service will transcribe the files concurrently, reducing the turnaround time. Using multiple files in a single request is very simple and straightforward - see the [Configuration](#configuration) section.
## Batch transcription API
To create an ordered final transcript, use the timestamps generated per utteranc
### Configuration
-Configuration parameters are provided as JSON.
+Configuration parameters are provided as JSON.
**Transcribing one or more individual files.** If you have more than one file to transcribe, we recommend sending multiple files in one request. The example below is using three files:
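As an illustration (not the article's original JSON sample), here is a minimal Python sketch of such a request against the Speech-to-text v3.0 REST API; the region, subscription key, and SAS URLs are placeholders, and you should verify the field names against the Swagger document referenced above:

```python
import requests

region = "westus"                               # placeholder service region
subscription_key = "<speech-subscription-key>"  # placeholder key

transcription = {
    "displayName": "Batch transcription of three files",
    "locale": "en-US",
    "contentUrls": [
        "https://<storage>.blob.core.windows.net/audio/file1.wav?<sas>",
        "https://<storage>.blob.core.windows.net/audio/file2.wav?<sas>",
        "https://<storage>.blob.core.windows.net/audio/file3.wav?<sas>",
    ],
    "properties": {
        "wordLevelTimestampsEnabled": True,
    },
}

# Create the transcription job; the response's "self" URL identifies the new transcription.
response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
    json=transcription,
)
response.raise_for_status()
print("Created transcription:", response.json()["self"])
```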
and can read audio or write transcriptions using a SAS URI with [Azure Blob stor
## Batch transcription result For each audio input, one transcription result file is created.
-The [Get transcriptions files](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation
-returns a list of result files for this transcription.
-To find the transcription file for a specific input file,
+The [Get transcriptions files](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation
+returns a list of result files for this transcription.
+To find the transcription file for a specific input file,
filter all returned files with `kind` == `Transcription` and `name` == `{originalInputName.suffix}.json`. Each transcription result file has this format:
Each transcription result file has this format:
], "recognizedPhrases": [ // results for each phrase and each channel individually {
- "recognitionStatus": "Success", // recognition state, e.g. "Success", "Failure"
+ "recognitionStatus": "Success", // recognition state, e.g. "Success", "Failure"
"speaker": 1, // if `diarizationEnabled` is `true`, this is the identified speaker (1 or 2), otherwise this property is not present "channel": 0, // channel number of the result
- "offset": "PT0.07S", // offset in audio of this phrase, ISO 8601 encoded duration
+ "offset": "PT0.07S", // offset in audio of this phrase, ISO 8601 encoded duration
"duration": "PT1.59S", // audio duration of this phrase, ISO 8601 encoded duration "offsetInTicks": 700000.0, // offset in audio of this phrase in ticks (1 tick is 100 nanoseconds) "durationInTicks": 15900000.0, // audio duration of this phrase in ticks (1 tick is 100 nanoseconds)
Word-level timestamps must be enabled as the parameters in the above request ind
## Best practices
-The batch transcription service can handle large number of submitted transcriptions. You can query the status of your transcriptions
+The batch transcription service can handle a large number of submitted transcriptions. You can query the status of your transcriptions
with [Get transcriptions](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions).
-Call [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)
+Call [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)
regularly from the service once you have retrieved the results. Alternatively, set the `timeToLive` property to ensure eventual deletion of the results.
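As a sketch of that housekeeping (same placeholder region and key as in the earlier example), you could list your transcriptions and delete the ones that have finished once their results are downloaded:

```python
import requests

region = "westus"                                                      # placeholder
headers = {"Ocp-Apim-Subscription-Key": "<speech-subscription-key>"}  # placeholder
base = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"

# List transcriptions and delete those that completed (assumes results were already retrieved).
for item in requests.get(base, headers=headers).json().get("values", []):
    if item["status"] == "Succeeded":
        requests.delete(item["self"], headers=headers).raise_for_status()
        print("Deleted", item["self"])
```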
Complete samples are available in the [GitHub sample repository](https://aka.ms/
Update the sample code with your subscription information, service region, URI pointing to the audio file to transcribe, and model location if you're using a custom model.
-[!code-csharp[Configuration variables for batch transcription](~/samples-cognitive-services-speech-sdk/samples/batch/csharp/program.cs#transcriptiondefinition)]
+[!code-csharp[Configuration variables for batch transcription](~/samples-cognitive-services-speech-sdk/samples/batch/csharp/batchclient/program.cs#transcriptiondefinition)]
The sample code sets up the client and submits the transcription request. It then polls for the status information and prints details about the transcription progress.
cognitive-services Call Center Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/call-center-transcription.md
Title: Call Center Transcription - Speech service
description: A common scenario for speech-to-text is transcribing large volumes of telephony data that come from various systems, such as Interactive Voice Response (IVR). Using Speech service and the Unified speech model, a business can get high-quality transcriptions with audio capture systems. -+ Last updated 07/05/2019-+ # Speech service for telephony data
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/conversation-transcription.md
Title: Conversation Transcription (Preview) - Speech service
description: Conversation Transcription is a solution for meetings, that combines recognition, speaker ID, and diarization to provide transcription of any conversation. -+ Last updated 03/26/2021-+ # What is Conversation Transcription (Preview)?
cognitive-services Custom Keyword Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-keyword-basics.md
Title: Create Keyword quickstart - Speech service
description: Your device is always listening for a keyword (or phrase). When the user says the keyword, the device sends all subsequent audio to the cloud, until the user stops speaking. Customizing your keyword is an effective way to differentiate your device and strengthen your branding. -+ Last updated 11/03/2020-+ zone_pivot_groups: keyword-quickstart
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Title: Custom neural voice overview - Speech service
description: Custom Neural Voice is a text-to-Speech feature that allows you to create a one-of-a-kind customized synthetic voice for your applications by providing your own audio data as a sample. -+ Last updated 05/18/2021-+ # What is Custom Neural Voice?
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Title: "Custom Speech overview - Speech service"
description: Custom Speech is a set of online tools that allow you to evaluate and improve the Microsoft speech-to-text accuracy for your applications, tools, and products. -+ Last updated 02/12/2021-+
cognitive-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/direct-line-speech.md
Title: Direct Line Speech - Speech service
description: An overview of the features, capabilities, and restrictions for Voice assistants using Direct Line Speech with the Speech Software Development Kit (SDK). -+ Last updated 03/11/2020-+ # What is Direct Line Speech?
cognitive-services Get Speech Devices Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-speech-devices-sdk.md
Title: Get the Speech Devices SDK
description: The Speech service works with a wide variety of devices and audio sources. Now, you can take your speech applications to the next level with matched hardware and software. In this article, you'll learn how to get access to the Speech Devices SDK and start developing. -+ Last updated 04/14/2019-+ # Get the Cognitive Services Speech Devices SDK
cognitive-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md
Title: "Intent recognition quickstart - Speech service"
description: In this quickstart, you use intent recognition to interactively recognize intents from audio data captured from a microphone. -+ Last updated 05/04/2021-+ zone_pivot_groups: programming-languages-speech-services-one-nomore-no-go keywords: intent recognition
cognitive-services Get Started Speaker Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-speaker-recognition.md
Title: "Speaker Recognition quickstart - Speech service"
description: Learn how to use Speaker Recognition from the Speech SDK to answer the question, "who is speaking". In this quickstart, you learn about common design patterns for working with both speaker verification and identification, which both use voice biometry to identify unique voices. -+ Last updated 09/02/2020-+ zone_pivot_groups: programming-languages-set-twenty-five keywords: speaker recognition, voice biometry
cognitive-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md
Title: "Speech-to-text quickstart - Speech service"
description: Learn how to use the Speech SDK to convert speech-to-text. In this quickstart, you learn about object construction, supported audio input formats, and configuration options for speech recognition. -+ Last updated 09/15/2020-+ zone_pivot_groups: programming-languages-set-twenty-three keywords: speech to text, speech to text software
cognitive-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-speech-translation.md
Title: Speech translation quickstart - Speech service
description: Learn how to use the Speech SDK to translate speech. In this quickstart, you learn about object construction, supported audio input formats, and configuration options for speech translation. -+ Last updated 09/01/2020-+ zone_pivot_groups: programming-languages-set-two-with-js-spx keywords: speech translation
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
Title: "Text-to-speech quickstart - Speech service"
description: Learn how to use the Speech SDK to convert text-to-speech. In this quickstart, you learn about object construction and design patterns, supported audio output formats, the Speech CLI, and custom configuration options for speech synthesis. -+ Last updated 05/17/2021-+ zone_pivot_groups: programming-languages-set-twenty-four keywords: text to speech
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
Title: Audio Content Creation - Speech service
description: Audio Content Creation is an online tool that allows you to customize and fine-tune Microsoft's text-to-speech output for your apps and products. -+ Last updated 01/31/2020-+ # Improve synthesis with the Audio Content Creation tool
cognitive-services How To Automatic Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-automatic-language-detection.md
Title: How to use language identification
description: Language identification is used to determine the language being spoken in audio passed to the Speech SDK when compared against a list of provided languages. -+ Last updated 05/21/2021-+ zone_pivot_groups: programming-languages-speech-services-nomore-variant
cognitive-services How To Custom Commands Update Command From Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-client.md
Title: 'Update a command from a client app'
description: Learn how to update a command from a client application. -+ Last updated 10/20/2020-+ # Update a command from a client app
cognitive-services How To Custom Commands Update Command From Web Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-web-endpoint.md
Title: 'Update a command from a web endpoint'
description: Learn how to update the state of a command by using a call to a web endpoint. -+ Last updated 10/20/2020-+ # Update a command from a web endpoint
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
Title: "Evaluate and improve Custom Speech accuracy - Speech service"
description: "In this document you learn how to quantitatively measure and improve the quality of our speech-to-text model or your custom model. Audio + human-labeled transcription data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided." -+ Last updated 02/12/2021-+ # Evaluate and improve Custom Speech accuracy
cognitive-services How To Custom Speech Human Labeled Transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions.md
Title: Human-labeled transcriptions guidelines - Speech service
description: To improve speech recognition accuracy, such as when words are deleted or incorrectly substituted, you can use human-labeled transcriptions along with your audio data. Human-labeled transcriptions are word-by-word, verbatim transcriptions of an audio file. -+ Last updated 02/12/2021-+ # How to create human-labeled transcriptions
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
Title: Inspect data quality for Custom Speech - Speech service
description: Custom Speech provides tools that allow you to visually inspect the recognition quality of a model by comparing audio data with the corresponding recognition result. You can play back uploaded audio and determine if the provided recognition result is correct. -+ Last updated 02/12/2021-+ # Inspect Custom Speech data
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Title: "Prepare data for Custom Speech - Speech service"
description: "When testing the accuracy of Microsoft speech recognition or training your custom models, you'll need audio and text data. On this page, we cover the types of data, how to use, and manage them." -+ Last updated 02/12/2021-+ # Prepare data for Custom Speech
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Title: Train and deploy a Custom Speech model - Speech service
description: Learn how to train and deploy Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft baseline model or a for custom model. -+ Last updated 02/12/2021-+ # Train and deploy a Custom Speech model
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Title: "Create a Custom Voice - Speech service"
description: "When you're ready to upload your data, go to the Custom Voice portal. Create or select a Custom Voice project. The project must share the right language/locale and the gender properties as the data you intend to use for your voice training." -+ Last updated 11/04/2019-+ # Create and use your voice model
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
Title: "How to prepare data for Custom Voice - Speech service"
description: "Create a custom voice for your brand with the Speech service. You provide studio recordings and the associated scripts, the service generates a unique voice model tuned to the recorded voice. Use this voice to synthesize speech in your products, tools, and applications." -+ Last updated 11/04/2019-+ # Prepare training data
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
Title: "Get started with Custom Neural Voice - Speech service"
description: "Custom Neural Voice is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions." -+ Last updated 05/18/2021-+ # Get started with Custom Neural Voice
cognitive-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
description: Learn how to develop and customize Custom Commands applications. These voice-command apps are best suited for task completion or command-and-control scenarios. -+ Last updated 12/15/2020-+ # Develop Custom Commands applications
cognitive-services How To Migrate From Bing Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-migrate-from-bing-speech.md
Last updated 04/03/2020-+ # Customer intent: As a developer currently using the deprecated Bing Speech, I want to learn the differences between Bing Speech and the Speech service, so that I can migrate my application to the Speech service.
cognitive-services How To Recognize Intents From Speech Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-recognize-intents-from-speech-csharp.md
Title: How to recognize intents from speech using the Speech SDK C#
description: In this guide, you learn how to recognize intents from speech using the Speech SDK for C#. -+ Last updated 02/10/2020-+
cognitive-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-track-speech-sdk-memory-usage.md
Title: How to track Speech SDK memory usage - Speech service
description: The Speech Service SDK supports numerous programming languages for speech-to-text and text-to-speech conversion, along with speech translation. This article discusses memory management tooling built into the SDK. -+
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
Title: Real-time Conversation Transcription quickstart - Speech service
description: Learn how to use real-time Conversation Transcription with the Speech SDK. Conversation Transcription allows you to transcribe meetings and other conversations with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service. -+ Last updated 10/20/2020-+ zone_pivot_groups: acs-js-csharp
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Title: Language support - Speech service
description: The Speech service supports numerous languages for speech-to-text and text-to-speech conversion, along with speech translation. This article provides a comprehensive list of language support by service feature. -+ Last updated 01/07/2021-+
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/long-audio-api.md
Title: Long Audio API - Speech service
description: Learn how the Long Audio API is designed for asynchronous synthesis of long-form text to speech. -+ Last updated 08/11/2020-+ # Long Audio API
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/multi-device-conversation.md
Title: Multi-device Conversation (Preview) - Speech Service
description: Multi-device conversation makes it easy to create a speech or text conversation between multiple clients and coordinate the messages that are sent between them. -+ Last updated 03/11/2020-+ # What is Multi-device Conversation (Preview)?
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/overview.md
Title: What is the Speech service?
description: The Speech service is the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. Add speech to your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs. -+ Last updated 11/23/2020-+ # What is the Speech service?
cognitive-services Setup Platform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md
Title: 'Quickstart: Set up development environment'
description: In this quickstart, you'll learn how to install the Speech SDK for your preferred platform and programming language combination. -+ Last updated 10/15/2020-+ zone_pivot_groups: programming-languages-speech-services-one-nomore
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
Title: "Record custom voice samples - Speech service"
description: Make a production-quality custom voice by preparing a robust script, hiring good voice talent, and recording professionally. -+ Last updated 04/13/2020-+ # Record voice samples to create a custom voice
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
Title: Speech-to-text API reference (REST) - Speech service
description: Learn how to use the speech-to-text REST API. In this article, you'll learn about authorization options, query options, how to structure a request and receive a response. -+ Last updated 07/01/2021-+
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
Title: Text-to-speech API reference (REST) - Speech service
description: Learn how to use the text-to-speech REST API. In this article, you'll learn about authorization options, query options, how to structure a request and receive a response. -+ Last updated 07/01/2021-+
cognitive-services Speaker Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speaker-recognition-overview.md
Title: Speaker Recognition overview - Speech service
description: Speaker Recognition provides algorithms that verify and identify speakers by their unique voice characteristics using voice biometry. Speaker Recognition is used to answer the question "who is speaking?". This article is an overview of the benefits and capabilities of the Speaker Recognition service. -+ Last updated 09/02/2020-+ keywords: speaker recognition, voice biometry
cognitive-services Speech Devices Sdk Microphone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-devices-sdk-microphone.md
Title: Speech Devices SDK microphone array recommendations
description: Speech Devices SDK microphone array recommendations. These array geometries are recommended for use with the Microsoft Audio Stack. -+ Last updated 07/16/2019-+ # Speech Devices SDK Microphone array recommendations
cognitive-services Speech Devices Sdk Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-devices-sdk-quickstart.md
Title: 'Quickstart: Run the Speech Devices SDK on Windows, Linux or Android - Sp
description: This article contains the prerequisites and instructions for getting started with a Windows, Linux or Android Speech Devices SDK. -+ Last updated 06/25/2020-+ zone_pivot_groups: platforms-set-of-three
cognitive-services Speech Devices Sdk Roobo V1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-devices-sdk-roobo-v1.md
Title: Speech Devices SDK Roobo Smart Audio Dev Kit v1 - Speech service
description: Prerequisites and instructions for getting started with the Speech Devices SDK, Roobo Smart Audio Dev Kit v1. -+ Last updated 07/05/2019-+ # Device: Roobo Smart Audio Dev Kit
cognitive-services Speech Devices Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-devices-sdk.md
Title: Speech Devices SDK - Speech service
description: Get started with the Speech Devices SDK. The Speech service works with a wide variety of devices and audio sources. The Speech Devices SDK is a pre-tuned library that's paired with purpose-built, microphone array development kits. -+ Last updated 03/11/2020-+ # What is the Speech Devices SDK?
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk.md
Title: About the Speech SDK - Speech service
description: The Speech software development kit (SDK) exposes many of the Speech service capabilities, making it easier to develop speech-enabled applications. -+ Last updated 04/03/2020-+ # About the Speech SDK
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-studio-overview.md
Title: "Speech Studio overview - Speech service"
description: Speech Studio is a set of UI-based tools for building and integrating features from Azure Speech service in your applications. -+ Last updated 05/07/2021-+ # What is Speech Studio?
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Title: Speech Synthesis Markup Language (SSML) - Speech service
description: Using the Speech Synthesis Markup Language to control pronunciation and prosody in text-to-speech. -+ Last updated 03/23/2020-+
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-to-text.md
Title: Speech-to-text overview - Speech service
description: Speech-to-text software enables real-time transcription of audio streams into text. Your applications, tools, or devices can consume, display, and take action on this text input. This article is an overview of the benefits and capabilities of the speech-to-text service. -+ Last updated 09/01/2020-+ keywords: speech to text, speech to text software
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-translation.md
Title: Speech translation overview - Speech service
description: Speech translation allows you to add end-to-end, real-time, multi-language translation of speech to your applications, tools, and devices. The same API can be used for both speech-to-speech and speech-to-text translation. This article is an overview of the benefits and capabilities of the speech translation service. -+ Last updated 09/01/2020-+ keywords: speech translation
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-basics.md
Title: "Speech CLI quickstart - Speech service"
description: Get started with the Azure Speech CLI. You can interact with Speech services like speech to text, text to speech, and speech translation without writing code. -+ Last updated 04/28/2021-+ # Get started with the Azure Speech CLI
cognitive-services Spx Batch Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-batch-operations.md
Title: "Speech CLI batch operations - Speech service"
description: learn how to do batch speech to text (speech recognition), batch text to speech (speech synthesis) with the Speech CLI. -+ Last updated 01/13/2021-+ # Speech CLI batch operations
cognitive-services Spx Data Store Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-data-store-configuration.md
Title: "Speech CLI configuration options - Speech service"
description: Learn how to create and manage configuration files for use with the Azure Speech CLI. -+ Last updated 01/13/2021-+ # Speech CLI configuration options
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-overview.md
Title: The Azure Speech CLI
description: The Speech CLI is a command-line tool for using the Speech service without writing any code. The Speech CLI requires minimal setup, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met. -+ Last updated 01/13/2021-+
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
Title: Text-to-speech overview - Speech service
description: The text-to-speech feature in the Speech service enables your applications, tools, or devices to convert text into natural human-like synthesized speech. This article is an overview of the benefits and capabilities of the text-to-speech service. -+ Last updated 09/01/2020-+ keywords: text to speech
cognitive-services Tutorial Tenant Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/tutorial-tenant-model.md
Title: Create a tenant model (preview) - Speech Service
description: Automatically generate a secure, compliant tenant model (Custom Speech with Microsoft 365 data) that uses your Microsoft 365 data to deliver optimal speech recognition for organization-specific terms. -+ Last updated 06/25/2020-+
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
Title: "Tutorial: Voices enable your bot using Speech SDK - Speech service"
description: In this tutorial, you'll create an Echo Bot using Microsoft Bot Framework, deploy it to Azure, and register it with the Bot Framework Direct Line Speech channel. Then you'll configure a sample client app for Windows that lets you speak to your bot and hear it respond back to you. -+ Last updated 02/25/2020-+
cognitive-services Cognitive Services And Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-and-machine-learning.md
The following data categorizes each service by which kind of data it allows or r
|[Custom Vision](./custom-vision-service/overview.md)||x|| |[Face](./Face/Overview.md)|x|x|| |[Form Recognizer](./form-recognizer/overview.md)||x||
-|[Immersive Reader](./immersive-reader/overview.md)|x|||
+|[Immersive Reader](../applied-ai-services/immersive-reader/overview.md)|x|||
|[Ink Recognizer](/previous-versions/azure/cognitive-services/Ink-Recognizer/overview)|x|x|| |[Language Understanding (LUIS)](./LUIS/what-is-luis.md)||x|| |[Personalizer](./personalizer/what-is-personalizer.md)|x*|x*|x|
Cognitive Services that provide exported models for other machine learning tools
* Learn how to [authenticate](authentication.md) to a Cognitive Service. * Use [diagnostic logging](diagnostic-logging.md) for issue identification and debugging. * Deploy a Cognitive Service in a Docker [container](cognitive-services-container-support.md).
-* Keep up to date with [service updates](https://azure.microsoft.com/updates/?product=cognitive-services).
+* Keep up to date with [service updates](https://azure.microsoft.com/updates/?product=cognitive-services).
cognitive-services Cognitive Services Support Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-support-options.md
If you do submit a new question to Stack Overflow, please use one or more of the
* [Metrics Advisor (preview)](https://stackoverflow.com/search?q=azure+metrics+advisor) * [Personalizer](https://stackoverflow.com/search?q=azure+personalizer)
-## Submit feedback on User Voice
+## Submit feedback
-<div class='icon is-large'>
- <img alt='UserVoice' src='https://docs.microsoft.com/media/logos/logo-uservoice.svg'>
-</div>
-
-To request new features, post them on UserVoice. Share your ideas for making Cognitive Services and its APIs work better for the applications you develop.
+To request new features, post them on https://feedback.azure.com. Share your ideas for making Cognitive Services and its APIs work better for the applications you develop.
* [Cognitive Services](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395737)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-support.md
These Cognitive Services are language agnostic and don't have limitations based
## Language
-* [Immersive Reader](./immersive-reader/language-support.md)
+* [Immersive Reader](../applied-ai-services/immersive-reader/language-support.md)
* [Language Understanding (LUIS)](./luis/luis-language-support.md) * [QnA Maker](./qnamaker/overview/language-support.md) * [Text Analytics](./text-analytics/language-support.md)
cognitive-services Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/frequently-asked-questions.md
- Title: Frequently asked questions - Personalizer
-description: This article contains answers to frequently asked troubleshooting questions about Personalizer.
--- Previously updated : 02/26/2020--
-ms.
-
-# Personalizer frequently asked questions
-
-This article contains answers to frequently asked troubleshooting questions about the Personalizer service.
-
-## Configuration issues
-
-### I changed a configuration setting and now my loop isn't performing at the same learning level. What happened?
-
-Some configuration settings [reset your model](how-to-settings.md#settings-that-include-resetting-the-model). Configuration changes should be carefully planned.
-
-### When configuring Personalizer with the API, I received an error. What happened?
-
-If you use a single API request to configure your service and change your learning behavior, you will get an error. You need to make two separate API calls: first, to configure your service, then to switch learning behavior.
-
-## Transaction errors
-
-### I get an HTTP 429 (Too many requests) response from the service. What can I do?
-
-If you picked a free price tier when you created the Personalizer instance, there is a quota limit on the number of Rank requests that are allowed. Review your API call rate for the Rank API (in the Metrics pane in the Azure portal for your Personalizer resource) and adjust the pricing tier (in the Pricing Tier pane) if your call volume is expected to increase beyond the threshold for chosen pricing tier.
-
-### I'm getting a 5xx error on Rank or Reward APIs. What should I do?
-
-These issues should be transparent. If they continue, contact support by selecting **New support request** in the **Support + troubleshooting** section, in the Azure portal for your Personalizer resource.
-
-## Learning loop
-
-### The learning loop doesn't attain a 100% match to the system without Personalizer. How do I fix this?
-
-The reasons you don't attain your goal with the learning loop:
-* Not enough features sent with Rank API call
-* Bugs in the features sent - such as sending non-aggregated feature data such as timestamps to Rank API
-* Bugs with loop processing - such as not sending reward data to Reward API for events
-
-To fix, you need to change the processing by either changing the features sent to the loop, or make sure the reward is a correct evaluation of the quality of the Rank's response.
-
-### The learning loop doesn't seem to learn. How do I fix this?
-
-The learning loop needs a few thousand Reward calls before Rank calls prioritize effectively.
-
-If you are unsure about how your learning loop is currently behaving, run an [offline evaluation](concepts-offline-evaluation.md), and apply the corrected learning policy.
-
-### I keep getting rank results with all the same probabilities for all items. How do I know Personalizer is learning?
-
-Personalizer returns the same probabilities in a Rank API result when it has just started and has an _empty_ model, or when you reset the Personalizer Loop, and your model is still within your **Model update frequency** period.
-
-When the new update period begins, the updated model is used, and you'll see the probabilities change.
-
-### The learning loop was learning but seems to not learn anymore, and the quality of the Rank results isn't that good. What should I do?
-
-* Make sure you've completed and applied one evaluation in the Azure portal for that Personalizer resource (learning loop).
-* Make sure all rewards are sent, via the Reward API, and processed.
-
-### How do I know that the learning loop is getting updated regularly and is used to score my data?
-
-You can find the time when the model was last updated in the **Model and Learning Settings** page of the Azure portal. If you see an old timestamp, it is likely because you are not sending the Rank and Reward calls. If the service has no incoming data, it does not update the learning. If you see the learning loop is not updating frequently enough, you can edit the loop's **Model Update frequency**.
-
-## Offline evaluations
-
-### An offline evaluation's feature importance returns a long list with hundreds or thousands of items. What happened?
-
-This is typically due to timestamps, user IDs, or other fine-grained features being sent in.
-
-### I created an offline evaluation and it succeeded almost instantly. Why is that, and why don't I see any results?
-
-The offline evaluation uses the trained model data from the events in that time period. If you did not send any data in the time period between start and end time of the evaluation, it will complete without any results. Submit a new offline evaluation by selecting a time range with events you know were sent to Personalizer.
-
-## Learning policy
-
-### How do I import a learning policy?
-
-Learn more about [learning policy concepts](concept-active-learning.md#understand-learning-policy-settings) and [how to apply](how-to-manage-model.md) a new learning policy. If you do not want to select a learning policy, you can use the [offline evaluation](how-to-offline-evaluation.md) to suggest a learning policy, based on your current events.
--
-## Security
-
-### The API key for my loop has been compromised. What can I do?
-
-You can regenerate one key after switching your clients to use the other key. Having two keys lets you roll out the new key gradually without any downtime. We recommend doing this on a regular cycle as a security measure.
--
-## Next steps
-
-[Configure the model update frequency](how-to-settings.md#model-update-frequency)
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
The following list presents the set of features which are currently available in
| | Get notified when participants are actively typing a message in a chat thread | ✔️ | ❌ | ❌ | ❌ | ✔️ | ✔️ | | | Get all messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Send Unicode emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Add metadata to chat messages | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Add display name to typing indicator notification | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Add metadata to chat messages | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ✔️ |
+| | Add display name to typing indicator notification | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ✔️ |
|Real-time notifications (enabled by proprietary signaling package**)| Chat clients can subscribe to get real-time updates for incoming messages and other operations occurring in a chat thread. To see a list of supported updates for real-time notifications, see [Chat concepts](concepts.md#real-time-notifications) | ✔️ | ❌ | ❌ | ❌ | ✔️ | ✔️ | | Integration with Azure Event Grid | Use the chat events available in Azure Event Grid to plug custom notification services or post that event to a webhook to execute business logic like updating CRM records after a chat is finished | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | Reporting </br>(This info is available under Monitoring tab for your Communication Services resource on Azure portal) | Understand API traffic from your chat app by monitoring the published metrics in Azure Metrics Explorer and set alerts to detect abnormalities | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
zone_pivot_groups: acs-web-ios-android
# Quickstart: Join your chat app to a Teams meeting - Get started with Azure Communication Services by connecting your chat solution to Microsoft Teams.
container-instances Container Instances Container Group Ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-container-group-ssl.md
properties:
containers: - name: nginx-with-ssl properties:
- image: nginx
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
ports: - port: 443 protocol: TCP
container-instances Container Instances Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-environment-variables.md
properties:
value: 'my-exposed-value' - name: 'SECRET' secureValue: 'my-secret-value'
- image: nginx
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
ports: [] resources: requests:
The JSON response shows both the insecure environment variable's key and value,
With the [az container exec][az-container-exec] command, which enables executing a command in a running container, you can verify that the secure environment variable has been set. Run the following command to start an interactive bash session in the container: ```azurecli-interactive
-az container exec --resource-group myResourceGroup --name securetest --exec-command "/bin/bash"
+az container exec --resource-group myResourceGroup --name securetest --exec-command "/bin/sh"
``` Once you've opened an interactive shell within the container, you can access the `SECRET` variable's value:
container-instances Container Instances Init Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-init-container.md
This article shows how to use an Azure Resource Manager template to configure a
Start by copying the following JSON into a new file named `azuredeploy.json`. The template sets up a container group with one init container and two application containers:
-* The *init1* container runs the [busybox](https://hub.docker.com/_/busybox) image from Docker Hub. It sleeps for 60 seconds and then writes a command-line string to a file in an [emptyDir volume](container-instances-volume-emptydir.md).
+* The *init1* container runs the [busybox](https://hub.docker.com/_/busybox) image. It sleeps for 60 seconds and then writes a command-line string to a file in an [emptyDir volume](container-instances-volume-emptydir.md).
* Both application containers run the Microsoft `aci-wordcount` container image: * The *hamlet* container runs the wordcount app in its default configuration, counting word frequencies in Shakespeare's play *Hamlet*. * The *juliet* app container reads the command-line string from the emptyDir volume to run the wordcount app instead on Shakespeare's *Romeo and Juliet*.
For more information and examples using the `aci-wordcount` image, see [Set envi
{ "name": "init1", "properties": {
- "image": "busybox",
+ "image": "mcr.microsoft.com/aks/e2e/library-busybox:master.210714.1",
"environmentVariables": [], "volumeMounts": [ {
container-instances Container Instances Liveness Probe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-liveness-probe.md
properties:
containers: - name: mycontainer properties:
- image: nginx
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
command: - "/bin/sh" - "-c"
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
cosmos-db Audit Control Plane Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/audit-control-plane-logs.md
description: Learn how to audit the control plane operations such as add a regio
Previously updated : 10/05/2020 Last updated : 08/13/2021
After you turn on logging, use the following steps to track down operations for
| where TimeGenerated >= ago(1h) ```
-The following screenshots capture logs when a consistency level is changed for an Azure Cosmos account:
+ The following screenshots capture logs when a consistency level is changed for an Azure Cosmos account. The `activityId_g` value from results is different from the activity ID of an operation:
+ :::image type="content" source="./media/audit-control-plane-logs/add-ip-filter-logs.png" alt-text="Control plane logs when a VNet is added":::
-The following screenshots capture logs when the keyspace or a table of a Cassandra account are created and when the throughput is updated. The control plane logs for create and update operations on the database and the container are logged separately as shown in the following screenshot:
+ The following screenshots capture logs when the keyspace or a table of a Cassandra account are created and when the throughput is updated. The control plane logs for create and update operations on the database and the container are logged separately as shown in the following screenshot:
+ :::image type="content" source="./media/audit-control-plane-logs/throughput-update-logs.png" alt-text="Control plane logs when throughput is updated":::
## Identify the identity associated to a specific operation
-If you want to debug further, you can identify a specific operation in the **Activity log** by using the Activity ID or by the timestamp of the operation. Timestamp is used for some Resource Manager clients where the activity ID is not explicitly passed. The Activity log gives details about the identity with which the operation was initiated. The following screenshot shows how to use the activity ID and find the operations associated with it in the Activity log:
+If you want to debug further, you can identify a specific operation in the **Activity log** by using the `activityId_g` or by the timestamp of the operation. Timestamp is used for some Resource Manager clients where the activity ID is not explicitly passed. The Activity log gives details about the identity with which the operation was initiated. The following screenshot shows how to use the `activityId_g` to find the operations associated with it in the Activity log:
:::image type="content" source="./media/audit-control-plane-logs/find-operations-with-activity-id.png" alt-text="Use the activity ID and find the operations":::
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator release notes with a list of fea
## Release notes
+### 2.14.2 (12 August 2021)
+
+ - This release updates the local Data Explorer content to the latest Azure portal version and resets the base for the Linux Cosmos emulator Docker image.
+ ### 2.14.1 (18 June 2021) - This release improves the start-up time for the emulator while reducing the footprint of its data on the disk. This new optimization is activated by "/EnablePreview" argument.
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/16/2021 Last updated : 08/13/2021
cosmos-db Sql Api Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-get-started.md
ms.devlang: dotnet Previously updated : 11/05/2019 Last updated : 08/12/2021
That's it, build it, and you're on your way!
* Want a more complex ASP.NET MVC tutorial? See [Tutorial: Develop an ASP.NET Core MVC web application with Azure Cosmos DB by using .NET SDK](sql-api-dotnet-application.md). * Want to do scale and performance testing with Azure Cosmos DB? See [Performance and scale testing with Azure Cosmos DB](performance-testing.md). * To learn how to monitor Azure Cosmos DB requests, usage, and storage, see [Monitor performance and storage metrics in Azure Cosmos DB](./monitor-cosmos-db.md).
-* To run queries against our sample dataset, see the [Query Playground](https://www.documentdb.com/sql/demo).
* To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](./introduction.md). [cosmos-db-create-account]: create-sql-api-java.md#create-a-database-account
cosmos-db Sql Query Abs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-abs.md
+
+ Title: ABS in Azure Cosmos DB query language
+description: Learn about how the Absolute(ABS) SQL system function in Azure Cosmos DB returns the positive value of the specified numeric expression
++++ Last updated : 03/04/2020+++
+# ABS (Azure Cosmos DB)
+
+ Returns the absolute (positive) value of the specified numeric expression.
+
+## Syntax
+
+```sql
+ABS (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example shows the results of using the `ABS` function on three different numbers.
+
+```sql
+SELECT ABS(-1) AS abs1, ABS(0) AS abs2, ABS(1) AS abs3
+```
+
+ Here is the result set.
+
+```json
+[{abs1: 1, abs2: 0, abs3: 1}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Acos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-acos.md
+
+ Title: ACOS in Azure Cosmos DB query language
+description: Learn about how the ACOS (arccosine) SQL system function in Azure Cosmos DB returns the angle, in radians, whose cosine is the specified numeric expression
++++ Last updated : 03/03/2020+++
+# ACOS (Azure Cosmos DB)
+
+ Returns the angle, in radians, whose cosine is the specified numeric expression; also called arccosine.
+
+## Syntax
+
+```sql
+ACOS(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the `ACOS` of -1.
+
+```sql
+SELECT ACOS(-1) AS acos
+```
+
+ Here is the result set.
+
+```json
+[{"acos": 3.1415926535897931}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Aggregate Avg https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-aggregate-avg.md
+
+ Title: AVG in Azure Cosmos DB query language
+description: Learn about the Average (AVG) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020+++
+# AVG (Azure Cosmos DB)
+
+This aggregate function returns the average of the values in the expression.
+
+## Syntax
+
+```sql
+AVG(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+The following example returns the average value of `propertyA`:
+
+```sql
+SELECT AVG(c.propertyA)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy). If any arguments in `AVG` are string, boolean, or null, the entire aggregate system function will return `undefined`. If any argument has an `undefined` value, it will not impact the `AVG` calculation.
+
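+If a property can hold non-numeric values, you can guard the aggregate with a type check so that stray strings or booleans don't turn the whole result into `undefined`. The following query is a minimal sketch; it assumes a container `c` whose items may store `propertyA` with mixed types, and uses the built-in `IS_NUMBER` type-checking function:
+
+```sql
+SELECT VALUE AVG(c.propertyA)
+FROM c
+WHERE IS_NUMBER(c.propertyA)
+```
+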
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Aggregate Count https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-aggregate-count.md
+
+ Title: COUNT in Azure Cosmos DB query language
+description: Learn about the Count (COUNT) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020+++
+# COUNT (Azure Cosmos DB)
+
+This system function returns the count of the values in the expression.
+
+## Syntax
+
+```sql
+COUNT(<scalar_expr>)
+```
+
+## Arguments
+
+*scalar_expr*
+ Is any scalar expression.
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+The following example returns the total count of items in a container:
+
+```sql
+SELECT COUNT(1)
+FROM c
+```
+COUNT can take any scalar expression as input. The following query produces an equivalent result:
+
+```sql
+SELECT COUNT(2)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy) for any properties in the query's filter.
+
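+As a hedged illustration of the remark above, the following sketch counts only the items that match a filter; `city` is an illustrative property name, and the equality filter is what benefits from a range index on that path:
+
+```sql
+SELECT VALUE COUNT(1)
+FROM c
+WHERE c.city = "Seattle"
+```
+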
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-aggregate-functions.md
+
+ Title: Aggregate functions in Azure Cosmos DB
+description: Learn about SQL aggregate function syntax, types of aggregate functions supported by Azure Cosmos DB.
++++ Last updated : 12/02/2020+++
+# Aggregate functions in Azure Cosmos DB
+
+Aggregate functions perform a calculation on a set of values in the `SELECT` clause and return a single value. For example, the following query returns the count of items within a container:
+
+```sql
+ SELECT COUNT(1)
+ FROM c
+```
+
+## Types of aggregate functions
+
+The SQL API supports the following aggregate functions. `SUM` and `AVG` operate on numeric values, and `COUNT`, `MIN`, and `MAX` work on numbers, strings, Booleans, and nulls.
+
+| Function | Description |
+|-|-|
+| [AVG](sql-query-aggregate-avg.md) | Returns the average of the values in the expression. |
+| [COUNT](sql-query-aggregate-count.md) | Returns the number of items in the expression. |
+| [MAX](sql-query-aggregate-max.md) | Returns the maximum value in the expression. |
+| [MIN](sql-query-aggregate-min.md) | Returns the minimum value in the expression. |
+| [SUM](sql-query-aggregate-sum.md) | Returns the sum of all the values in the expression. |
++
+You can also return only the scalar value of the aggregate by using the VALUE keyword. For example, the following query returns the count of values as a single number:
+
+```sql
+ SELECT VALUE COUNT(1)
+ FROM Families f
+```
+
+The results are:
+
+```json
+ [ 2 ]
+```
+
+You can also combine aggregations with filters. For example, the following query returns the count of items with the address state of `WA`.
+
+```sql
+ SELECT VALUE COUNT(1)
+ FROM Families f
+ WHERE f.address.state = "WA"
+```
+
+The results are:
+
+```json
+ [ 1 ]
+```
+
+## Remarks
+
+These aggregate system functions will benefit from a [range index](../index-policy.md#includeexclude-strategy). If you expect to do an `AVG`, `COUNT`, `MAX`, `MIN`, or `SUM` on a property, you should [include the relevant path in the indexing policy](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../introduction.md)
+- [System functions](sql-query-system-functions.md)
+- [User defined functions](sql-query-udfs.md)
cosmos-db Sql Query Aggregate Max https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-aggregate-max.md
+
+ Title: MAX in Azure Cosmos DB query language
+description: Learn about the Max (MAX) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020+++
+# MAX (Azure Cosmos DB)
+
+This aggregate function returns the maximum of the values in the expression.
+
+## Syntax
+
+```sql
+MAX(<scalar_expr>)
+```
+
+## Arguments
+
+*scalar_expr*
+ Is a scalar expression.
+
+## Return types
+
+Returns a scalar expression.
+
+## Examples
+
+The following example returns the maximum value of `propertyA`:
+
+```sql
+SELECT MAX(c.propertyA)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy). The arguments in `MAX` can be number, string, boolean, or null. Any undefined values will be ignored.
+
+When comparing data of different types, the following priority order is used (in descending order; see the sketch after this list):
+
+- string
+- number
+- boolean
+- null
+
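+For example, assuming a container `c` in which `propertyA` is `"apples"` (string) on one item, `5` (number) on another, and `true` (boolean) on a third, the ordering above means this sketch would return the string value `"apples"`:
+
+```sql
+SELECT VALUE MAX(c.propertyA)
+FROM c
+```
+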
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Aggregate Min https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-aggregate-min.md
+
+ Title: MIN in Azure Cosmos DB query language
+description: Learn about the Min (MIN) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020+++
+# MIN (Azure Cosmos DB)
+
+This aggregate function returns the minimum of the values in the expression.
+
+## Syntax
+
+```sql
+MIN(<scalar_expr>)
+```
+
+## Arguments
+
+*scalar_expr*
+ Is a scalar expression.
+
+## Return types
+
+Returns a scalar expression.
+
+## Examples
+
+The following example returns the minimum value of `propertyA`:
+
+```sql
+SELECT MIN(c.propertyA)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy). The arguments in `MIN` can be number, string, boolean, or null. Any undefined values will be ignored.
+
+When comparing data of different types, the following priority order is used (in ascending order; see the sketch after this list):
+
+- null
+- boolean
+- number
+- string
+
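+For example, assuming a container `c` in which `propertyA` is `2` (number) on one item, `false` (boolean) on another, and `null` on a third, the ordering above means this sketch would return `null`, since null orders lowest:
+
+```sql
+SELECT VALUE MIN(c.propertyA)
+FROM c
+```
+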
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Aggregate Sum https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-aggregate-sum.md
+
+ Title: SUM in Azure Cosmos DB query language
+description: Learn about the Sum (SUM) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020+++
+# SUM (Azure Cosmos DB)
+
+This aggregate function returns the sum of the values in the expression.
+
+## Syntax
+
+```sql
+SUM(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+The following example returns the sum of `propertyA`:
+
+```sql
+SELECT SUM(c.propertyA)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy). If any arguments in `SUM` are string, boolean, or null, the entire aggregate system function will return `undefined`. If any argument has an `undefined` value, it will not impact the `SUM` calculation.
+
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Array Concat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-array-concat.md
+
+ Title: ARRAY_CONCAT in Azure Cosmos DB query language
+description: Learn about how the Array Concat SQL system function in Azure Cosmos DB returns an array that is the result of concatenating two or more array values
++++ Last updated : 03/03/2020+++
+# ARRAY_CONCAT (Azure Cosmos DB)
+
+ Returns an array that is the result of concatenating two or more array values.
+
+## Syntax
+
+```sql
+ARRAY_CONCAT (<arr_expr1>, <arr_expr2> [, <arr_exprN>])
+```
+
+## Arguments
+
+*arr_expr*
+ Is an array expression to concatenate to the other values. The `ARRAY_CONCAT` function requires at least two *arr_expr* arguments.
+
+## Return types
+
+ Returns an array expression.
+
+## Examples
+
+ The following example shows how to concatenate two arrays.
+
+```sql
+SELECT ARRAY_CONCAT(["apples", "strawberries"], ["bananas"]) AS arrayConcat
+```
+
+ Here is the result set.
+
+```json
+[{"arrayConcat": ["apples", "strawberries", "bananas"]}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Array functions Azure Cosmos DB](sql-query-array-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Array Contains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-array-contains.md
+
+ Title: ARRAY_CONTAINS in Azure Cosmos DB query language
+description: Learn about how the Array Contains SQL system function in Azure Cosmos DB returns a Boolean indicating whether the array contains the specified value
++++ Last updated : 09/13/2019+++
+# ARRAY_CONTAINS (Azure Cosmos DB)
+
+Returns a Boolean indicating whether the array contains the specified value. You can check for a partial or full match of an object by using a boolean expression within the command.
+
+## Syntax
+
+```sql
+ARRAY_CONTAINS (<arr_expr>, <expr> [, bool_expr])
+```
+
+## Arguments
+
+*arr_expr*
+ Is the array expression to be searched.
+
+*expr*
+ Is the expression to be found.
+
+*bool_expr*
+ Is a boolean expression. If it evaluates to 'true' and if the specified search value is an object, the command checks for a partial match (the search object is a subset of one of the objects). If it evaluates to 'false', the command checks for a full match of all objects within the array. The default value if not specified is false.
+
+## Return types
+
+ Returns a Boolean value.
+
+## Examples
+
+ The following example shows how to check for membership in an array using `ARRAY_CONTAINS`.
+
+```sql
+SELECT
+ ARRAY_CONTAINS(["apples", "strawberries", "bananas"], "apples") AS b1,
+ ARRAY_CONTAINS(["apples", "strawberries", "bananas"], "mangoes") AS b2
+```
+
+ Here is the result set.
+
+```json
+[{"b1": true, "b2": false}]
+```
+
+The following example shows how to check for a partial match of a JSON object in an array using `ARRAY_CONTAINS`.
+
+```sql
+SELECT
+ ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "apples"}, true) AS b1,
+ ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "apples"}) AS b2,
+ ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "mangoes"}, true) AS b3
+```
+
+ Here is the result set.
+
+```json
+[{
+ "b1": true,
+ "b2": false,
+ "b3": false
+}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Array functions Azure Cosmos DB](sql-query-array-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Array Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-array-functions.md
+
+ Title: Array functions in Azure Cosmos DB query language
+description: Learn about how the array functions let you perform operations on arrays in Azure Cosmos DB
++++ Last updated : 09/13/2019+++
+# Array functions (Azure Cosmos DB)
+
+The array functions let you perform operations on arrays in Azure Cosmos DB.
+
+## Functions
+
+The following scalar functions perform an operation on an array input value and return a numeric, boolean, or array value (a combined sketch follows this list):
+
+* [ARRAY_CONCAT](sql-query-array-concat.md)
+* [ARRAY_CONTAINS](sql-query-array-contains.md)
+* [ARRAY_LENGTH](sql-query-array-length.md)
+* [ARRAY_SLICE](sql-query-array-slice.md)
++
+
+
+
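+As a combined sketch, the array functions compose naturally; the following query concatenates two array constants and then measures the result:
+
+```sql
+SELECT ARRAY_LENGTH(ARRAY_CONCAT(["apples"], ["strawberries", "bananas"])) AS len
+```
+
+ Here is the result set.
+
+```json
+[{"len": 3}]
+```
+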
+## Next steps
+
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
+- [User Defined Functions](sql-query-udfs.md)
+- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Array Length https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-array-length.md
+
+ Title: ARRAY_LENGTH in Azure Cosmos DB query language
+description: Learn about how the Array length SQL system function in Azure Cosmos DB returns the number of elements of the specified array expression
++++ Last updated : 03/03/2020+++
+# ARRAY_LENGTH (Azure Cosmos DB)
+
+ Returns the number of elements of the specified array expression.
+
+## Syntax
+
+```sql
+ARRAY_LENGTH(<arr_expr>)
+```
+
+## Arguments
+
+*arr_expr*
+ Is an array expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example shows how to get the length of an array using `ARRAY_LENGTH`.
+
+```sql
+SELECT ARRAY_LENGTH(["apples", "strawberries", "bananas"]) AS len
+```
+
+ Here is the result set.
+
+```json
+[{"len": 3}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Array functions Azure Cosmos DB](sql-query-array-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Array Slice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-array-slice.md
+
+ Title: ARRAY_SLICE in Azure Cosmos DB query language
+description: Learn about how the Array slice SQL system function in Azure Cosmos DB returns part of an array expression
++++ Last updated : 03/03/2020+++
+# ARRAY_SLICE (Azure Cosmos DB)
+
+ Returns part of an array expression.
+
+## Syntax
+
+```sql
+ARRAY_SLICE (<arr_expr>, <num_expr> [, <num_expr>])
+```
+
+## Arguments
+
+*arr_expr*
+ Is any array expression.
+
+*num_expr*
+ Zero-based numeric index at which to begin the array. Negative values may be used to specify the starting index relative to the last element of the array; for example, -1 references the last element in the array.
+
+*num_expr*
+ Optional numeric expression that sets the maximum number of elements in the resulting array.
+
+## Return types
+
+ Returns an array expression.
+
+## Examples
+
+ The following example shows how to get different slices of an array using `ARRAY_SLICE`.
+
+```sql
+SELECT
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1) AS s1,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 1) AS s2,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], -2, 1) AS s3,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], -2, 2) AS s4,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 0) AS s5,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 1000) AS s6,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, -100) AS s7
+
+```
+
+ Here is the result set.
+
+```json
+[{
+ "s1": ["strawberries", "bananas"],
+ "s2": ["strawberries"],
+ "s3": ["strawberries"],
+ "s4": ["strawberries", "bananas"],
+ "s5": [],
+ "s6": ["strawberries", "bananas"],
+ "s7": []
+}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Array functions Azure Cosmos DB](sql-query-array-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Asin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-asin.md
+
+ Title: ASIN in Azure Cosmos DB query language
+description: Learn about how the Arcsine (ASIN) SQL system function in Azure Cosmos DB returns the angle, in radians, whose sine is the specified numeric expression
++++ Last updated : 03/04/2020+++
+# ASIN (Azure Cosmos DB)
+
+ Returns the angle, in radians, whose sine is the specified numeric expression. This is also called arcsine.
+
+## Syntax
+
+```sql
+ASIN(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the `ASIN` of -1.
+
+```sql
+SELECT ASIN(-1) AS asin
+```
+
+ Here is the result set.
+
+```json
+[{"asin": -1.5707963267948966}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Atan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-atan.md
+
+ Title: ATAN in Azure Cosmos DB query language
+description: Learn about how the Arctangent (ATAN ) SQL system function in Azure Cosmos DB returns the angle, in radians, whose tangent is the specified numeric expression
++++ Last updated : 03/04/2020+++
+# ATAN (Azure Cosmos DB)
+
+ Returns the angle, in radians, whose tangent is the specified numeric expression. This is also called arctangent.
+
+## Syntax
+
+```sql
+ATAN(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the `ATAN` of the specified value.
+
+```sql
+SELECT ATAN(-45.01) AS atan
+```
+
+ Here is the result set.
+
+```json
+[{"atan": -1.5485826962062663}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Atn2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-atn2.md
+
+ Title: ATN2 in Azure Cosmos DB query language
+description: Learn about how the ATN2 SQL system function in Azure Cosmos DB returns the principal value of the arc tangent of y/x, expressed in radians
++++ Last updated : 03/03/2020+++
+# ATN2 (Azure Cosmos DB)
+
+ Returns the principal value of the arc tangent of y/x, expressed in radians.
+
+## Syntax
+
+```sql
+ATN2(<numeric_expr>, <numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example calculates the ATN2 for the specified x and y components.
+
+```sql
+SELECT ATN2(35.175643, 129.44) AS atn2
+```
+
+ Here is the result set.
+
+```json
+[{"atn2": 1.3054517947300646}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Ceiling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-ceiling.md
+
+ Title: CEILING in Azure Cosmos DB query language
+description: Learn about how the CEILING SQL system function in Azure Cosmos DB returns the smallest integer value greater than, or equal to, the specified numeric expression.
++++ Last updated : 09/13/2019+++
+# CEILING (Azure Cosmos DB)
+
+ Returns the smallest integer value greater than, or equal to, the specified numeric expression.
+
+## Syntax
+
+```sql
+CEILING (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example shows positive numeric, negative, and zero values with the `CEILING` function.
+
+```sql
+SELECT CEILING(123.45) AS c1, CEILING(-123.45) AS c2, CEILING(0.0) AS c3
+```
+
+ Here is the result set.
+
+```json
+[{c1: 124, c2: -123, c3: 0}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Concat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-concat.md
+
+ Title: CONCAT in Azure Cosmos DB query language
+description: Learn about how the CONCAT SQL system function in Azure Cosmos DB returns a string that is the result of concatenating two or more string values
++++ Last updated : 03/03/2020+++
+# CONCAT (Azure Cosmos DB)
+
+ Returns a string that is the result of concatenating two or more string values.
+
+## Syntax
+
+```sql
+CONCAT(<str_expr1>, <str_expr2> [, <str_exprN>])
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression to concatenate to the other values. The `CONCAT` function requires at least two *str_expr* arguments.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example returns the concatenated string of the specified values.
+
+```sql
+SELECT CONCAT("abc", "def") AS concat
+```
+
+ Here is the result set.
+
+```json
+[{"concat": "abcdef"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Constants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-constants.md
+
+ Title: SQL constants in Azure Cosmos DB
+description: Learn about how the SQL query constants in Azure Cosmos DB are used to represent a specific data value
++++ Last updated : 05/31/2019++++
+# Azure Cosmos DB SQL query constants
+
+ A constant, also known as a literal or a scalar value, is a symbol that represents a specific data value. The format of a constant depends on the data type of the value it represents.
+
+ **Supported scalar data types:**
+
+|**Type**|**Values order**|
+|-|-|
+|**Undefined**|Single value: **undefined**|
+|**Null**|Single value: **null**|
+|**Boolean**|Values: **false**, **true**.|
+|**Number**|A double-precision floating-point number, IEEE 754 standard.|
+|**String**|A sequence of zero or more Unicode characters. Strings must be enclosed in single or double quotes.|
+|**Array**|A sequence of zero or more elements. Each element can be a value of any scalar data type, except **Undefined**.|
+|**Object**|An unordered set of zero or more name/value pairs. Name is a Unicode string, value can be of any scalar data type, except **Undefined**.|
+
+## <a name="bk_syntax"></a>Syntax
+
+```sql
+<constant> ::=
+ <undefined_constant>
+ | <null_constant>
+ | <boolean_constant>
+ | <number_constant>
+ | <string_constant>
+ | <array_constant>
+ | <object_constant>
+
+<undefined_constant> ::= undefined
+
+<null_constant> ::= null
+
+<boolean_constant> ::= false | true
+
+<number_constant> ::= decimal_literal | hexadecimal_literal
+
+<string_constant> ::= string_literal
+
+<array_constant> ::=
+ '[' [<constant>][,...n] ']'
+
+<object_constant> ::=
+ '{' [{property_name | "property_name"} : <constant>][,...n] '}'
+
+```
+
+## <a name="bk_arguments"></a> Arguments
+
+* `<undefined_constant>; Undefined`
+
+ Represents the undefined value of type Undefined.
+
+* `<null_constant>; null`
+
+ Represents **null** value of type **Null**.
+
+* `<boolean_constant>`
+
+ Represents constant of type Boolean.
+
+* `false`
+
+ Represents **false** value of type Boolean.
+
+* `true`
+
+ Represents **true** value of type Boolean.
+
+* `<number_constant>`
+
+ Represents a constant.
+
+* `decimal_literal`
+
+ Decimal literals are numbers represented using either decimal notation, or scientific notation.
+
+* `hexadecimal_literal`
+
+ Hexadecimal literals are numbers represented using prefix '0x' followed by one or more hexadecimal digits.
+
+* `<string_constant>`
+
+ Represents a constant of type String.
+
+* `string_literal`
+
+ String literals are Unicode strings represented by a sequence of zero or more Unicode characters or escape sequences. String literals are enclosed in single quotes (apostrophe: ' ) or double quotes (quotation mark: ").
+
+ The following escape sequences are allowed:
+
+|**Escape sequence**|**Description**|**Unicode character**|
+|-|-|-|
+|\\'|apostrophe (')|U+0027|
+|\\"|quotation mark (")|U+0022|
+|\\\ |reverse solidus (\\)|U+005C|
+|\\/|solidus (/)|U+002F|
+|\b|backspace|U+0008|
+|\f|form feed|U+000C|
+|\n|line feed|U+000A|
+|\r|carriage return|U+000D|
+|\t|tab|U+0009|
+|\uXXXX|A Unicode character defined by 4 hexadecimal digits.|U+XXXX|
+
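+The following sketch combines several of the literal forms described above in a single query; the aliases are illustrative:
+
+```sql
+SELECT 0xFF AS hexNumber,
+       1.5e2 AS scientificNumber,
+       'single-quoted string' AS str1,
+       "a \"quoted\" word and a line feed\n" AS str2,
+       [1, null, true] AS arrayConstant,
+       {"nested": "value"} AS objectConstant
+```
+
+Single and double quotes are interchangeable for string literals, and the escape sequences listed above apply inside either form.
+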
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Model document data](../modeling-data.md)
cosmos-db Sql Query Contains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-contains.md
+
+ Title: Contains in Azure Cosmos DB query language
+description: Learn about how the CONTAINS SQL system function in Azure Cosmos DB returns a Boolean indicating whether the first string expression contains the second
++++ Last updated : 04/01/2021+++
+# CONTAINS (Azure Cosmos DB)
+
+Returns a Boolean indicating whether the first string expression contains the second.
+
+## Syntax
+
+```sql
+CONTAINS(<str_expr1>, <str_expr2> [, <bool_expr>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the string expression to find.
+
+*bool_expr*
+ Optional value for ignoring case. When set to true, CONTAINS will do a case-insensitive search. When unspecified, this value is false.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks if "abc" contains "ab" and if "abc" contains "A".
+
+```sql
+SELECT CONTAINS("abc", "ab", false) AS c1, CONTAINS("abc", "A", false) AS c2, CONTAINS("abc", "A", true) AS c3
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "c1": true,
+ "c2": false,
+ "c3": true
+ }
+]
+```
+
+## Remarks
+
+Learn about [how this string system function uses the index](sql-query-string-functions.md).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Cos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-cos.md
+
+ Title: COS in Azure Cosmos DB query language
+description: Learn about how the Cosine (COS) SQL system function in Azure Cosmos DB returns the trigonometric cosine of the specified angle, in radians, in the specified expression
++++ Last updated : 03/03/2020+++
+# COS (Azure Cosmos DB)
+
+ Returns the trigonometric cosine of the specified angle, in radians, in the specified expression.
+
+## Syntax
+
+```sql
+COS(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example calculates the `COS` of the specified angle.
+
+```sql
+SELECT COS(14.78) AS cos
+```
+
+ Here is the result set.
+
+```json
+[{"cos": -0.59946542619465426}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Cot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-cot.md
+
+ Title: COT in Azure Cosmos DB query language
+description: Learn about how the Cotangent(COT) SQL system function in Azure Cosmos DB returns the trigonometric cotangent of the specified angle, in radians, in the specified numeric expression
++++ Last updated : 03/03/2020+++
+# COT (Azure Cosmos DB)
+
+ Returns the trigonometric cotangent of the specified angle, in radians, in the specified numeric expression.
+
+## Syntax
+
+```sql
+COT(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example calculates the `COT` of the specified angle.
+
+```sql
+SELECT COT(124.1332) AS cot
+```
+
+ Here is the result set.
+
+```json
+[{"cot": -0.040311998371148884}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Date Time Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-date-time-functions.md
+
+ Title: Date and time functions in Azure Cosmos DB query language
+description: Learn about date and time SQL system functions in Azure Cosmos DB to perform DateTime and timestamp operations.
++++ Last updated : 08/18/2020+++
+# Date and time functions (Azure Cosmos DB)
+
+The date and time functions let you perform DateTime and timestamp operations in Azure Cosmos DB.
+
+## Functions to obtain the date and time
+
+The following scalar functions allow you to get the current UTC date and time in three forms: a string which conforms to the ISO 8601 format,
+a numeric timestamp whose value is the number of milliseconds which have elapsed since the Unix epoch,
+or numeric ticks whose value is the number of 100-nanosecond ticks that have elapsed since the Unix epoch (a combined sketch follows this list):
+
+* [GetCurrentDateTime](sql-query-getcurrentdatetime.md)
+* [GetCurrentTimestamp](sql-query-getcurrenttimestamp.md)
+* [GetCurrentTicks](sql-query-getcurrentticks.md)
+
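+As a combined sketch, the following query calls all three functions; the aliases are illustrative, and the returned values depend on when the query runs:
+
+```sql
+SELECT GetCurrentDateTime() AS currentUtcDateTime,
+       GetCurrentTimestamp() AS currentUtcTimestamp,
+       GetCurrentTicks() AS currentUtcTicks
+```
+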
+## Functions to work with DateTime values
+
+The following functions allow you to easily manipulate DateTime, timestamp, and tick values:
+
+* [DateTimeAdd](sql-query-datetimeadd.md)
+* [DateTimeDiff](sql-query-datetimediff.md)
+* [DateTimeFromParts](sql-query-datetimefromparts.md)
+* [DateTimePart](sql-query-datetimepart.md)
+* [DateTimeToTicks](sql-query-datetimetoticks.md)
+* [DateTimeToTimestamp](sql-query-datetimetotimestamp.md)
+* [TicksToDateTime](sql-query-tickstodatetime.md)
+* [TimestampToDateTime](sql-query-timestamptodatetime.md)
+
+## Next steps
+
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
+- [User Defined Functions](sql-query-udfs.md)
+- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Datetimeadd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-datetimeadd.md
+
+ Title: DateTimeAdd in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeAdd in Azure Cosmos DB.
++++ Last updated : 07/09/2020+++
+# DateTimeAdd (Azure Cosmos DB)
+
+Returns a DateTime string value resulting from adding a specified numeric value (as a signed integer) to a specified DateTime string.
+
+## Syntax
+
+```sql
+DateTimeAdd (<DateTimePart> , <numeric_expr> ,<DateTime>)
+```
+
+## Arguments
+
+*DateTimePart*
+ The part of the date to which DateTimeAdd adds the specified integer. This table lists all valid DateTimePart arguments:
+
+| DateTimePart | abbreviations |
+| | -- |
+| Year | "year", "yyyy", "yy" |
+| Month | "month", "mm", "m" |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*numeric_expr*
+ Is a signed integer value that will be added to the DateTimePart of the specified DateTime
+
+*DateTime*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Return types
+
+Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+## Remarks
+
+DateTimeAdd will return `undefined` for the following reasons:
+
+- The DateTimePart value specified is invalid
+- The numeric_expr specified is not a valid integer
+- The DateTime in the argument or result is not a valid ISO 8601 DateTime.
+
+## Examples
+
+The following example adds 1 month to the DateTime: `2020-07-09T23:20:13.4575530Z`
+
+```sql
+SELECT DateTimeAdd("mm", 1, "2020-07-09T23:20:13.4575530Z") AS OneMonthLater
+```
+
+```json
+[
+ {
+ "OneMonthLater": "2020-08-09T23:20:13.4575530Z"
+ }
+]
+```
+
+The following example subtracts 2 hours from the DateTime: `2020-07-09T23:20:13.4575530Z`
+
+```sql
+SELECT DateTimeAdd("hh", -2, "2020-07-09T23:20:13.4575530Z") AS TwoHoursEarlier
+```
+
+```json
+[
+ {
+ "TwoHoursEarlier": "2020-07-09T21:20:13.4575530Z"
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimediff https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-datetimediff.md
+
+ Title: DateTimeDiff in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeDiff in Azure Cosmos DB.
++++ Last updated : 07/09/2020+++
+# DateTimeDiff (Azure Cosmos DB)
+Returns the count (as a signed integer value) of the specified DateTimePart boundaries crossed between the specified *StartDate* and *EndDate*.
+
+## Syntax
+
+```sql
+DateTimeDiff (<DateTimePart> , <StartDate> , <EndDate>)
+```
+
+## Arguments
+
+*DateTimePart*
+ The part of the date used when counting the boundaries crossed between *StartDate* and *EndDate*. This table lists all valid DateTimePart arguments:
+
+| DateTimePart | abbreviations |
+| | -- |
+| Year | "year", "yyyy", "yy" |
+| Month | "month", "mm", "m" |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*StartDate*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+*EndDate*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
+
+## Return types
+
+Returns a signed integer value.
+
+## Remarks
+
+DateTimeDiff will return `undefined` for the following reasons:
+
+- The DateTimePart value specified is invalid
+- The StartDate or EndDate is not a valid ISO 8601 DateTime
+
+DateTimeDiff will always return a signed integer value and is a measurement of the number of DateTimePart boundaries crossed, not a measurement of the time interval.
+
+## Examples
+
+The following example computes the number of day boundaries crossed between `2020-01-01T01:02:03.1234527Z` and `2020-01-03T01:02:03.1234567Z`.
+
+```sql
+SELECT DateTimeDiff("day", "2020-01-01T01:02:03.1234527Z", "2020-01-03T01:02:03.1234567Z") AS DifferenceInDays
+```
+
+```json
+[
+ {
+ "DifferenceInDays": 2
+ }
+]
+```
+
+The following example computes the number of year boundaries crossed between `2028-01-01T01:02:03.1234527Z` and `2020-01-03T01:02:03.1234567Z`.
+
+```sql
+SELECT DateTimeDiff("yyyy", "2028-01-01T01:02:03.1234527Z", "2020-01-03T01:02:03.1234567Z") AS DifferenceInYears
+```
+
+```json
+[
+ {
+ "DifferenceInYears": -8
+ }
+]
+```
+
+The following example computes the number of hour boundaries crossed between `2020-01-01T01:00:00.1234527Z` and `2020-01-01T01:59:59.1234567Z`. Even though these DateTime values are over 0.99 hours apart, `DateTimeDiff` returns 0 because no hour boundaries were crossed.
+
+```sql
+SELECT DateTimeDiff("hh", "2020-01-01T01:00:00.1234527Z", "2020-01-01T01:59:59.1234567Z") AS DifferenceInHours
+```
+
+```json
+[
+ {
+ "DifferenceInHours": 0
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimefromparts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-datetimefromparts.md
+
+ Title: DateTimeFromParts in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeFromParts in Azure Cosmos DB.
++++ Last updated : 07/09/2020+++
+# DateTimeFromParts (Azure Cosmos DB)
+
+Returns a string DateTime value constructed from input values.
+
+## Syntax
+
+```sql
+DateTimeFromParts(<numberYear>, <numberMonth>, <numberDay> [, numberHour] [, numberMinute] [, numberSecond] [, numberOfFractionsOfSecond])
+```
+
+## Arguments
+
+*numberYear*
+ Integer value for the year in the format `YYYY`
+
+*numberMonth*
+ Integer value for the month in the format `MM`
+
+*numberDay*
+ Integer value for the day in the format `DD`
+
+*numberHour* (optional)
+ Integer value for the hour in the format `hh`
+
+*numberMinute* (optional)
+ Integer value for the minute in the format `mm`
+
+*numberSecond* (optional)
+ Integer value for the second in the format `ss`
+
+*numberOfFractionsOfSecond* (optional)
+ Integer value for the fractional part of a second in the format `.fffffff`
+
+## Return types
+
+Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Remarks
+
+If the specified integers would create an invalid DateTime, DateTimeFromParts will return `undefined`.
+
+If an optional argument isn't specified, its value will be 0.
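+
+As a sketch of the `undefined` case, the following query passes 13 as the month, which can't form a valid DateTime. Because `undefined` values are omitted from query output, the projected property doesn't appear in the result:
+
+```sql
+SELECT DateTimeFromParts(2020, 13, 4) AS InvalidDateTime
+```
+
+```json
+[
+    {}
+]
+```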
+
+## Examples
+
+Here's an example that only includes required arguments to construct a DateTime:
+
+```sql
+SELECT DateTimeFromParts(2020, 9, 4) AS DateTime
+```
+
+```json
+[
+ {
+ "DateTime": "2020-09-04T00:00:00.0000000Z"
+ }
+]
+```
+
+Here's another example that also uses some optional arguments to construct a DateTime:
+
+```sql
+SELECT DateTimeFromParts(2020, 9, 4, 10, 52) AS DateTime
+```
+
+```json
+[
+ {
+ "DateTime": "2020-09-04T10:52:00.0000000Z"
+ }
+]
+```
+
+Here's another example that also uses all optional arguments to construct a DateTime:
+
+```sql
+SELECT DateTimeFromParts(2020, 9, 4, 10, 52, 12, 3456789) AS DateTime
+```
+
+```json
+[
+ {
+ "DateTime": "2020-09-04T10:52:12.3456789Z"
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimepart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-datetimepart.md
+
+ Title: DateTimePart in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimePart in Azure Cosmos DB.
++++ Last updated : 08/14/2020+++
+# DateTimePart (Azure Cosmos DB)
+
+Returns the value of the specified DateTimePart for the specified DateTime.
+
+## Syntax
+
+```sql
+DateTimePart (<DateTimePart> , <DateTime>)
+```
+
+## Arguments
+
+*DateTimePart*
+ The part of the date for which DateTimePart will return the value. This table lists all valid DateTimePart arguments:
+
+| DateTimePart | abbreviations |
+| | -- |
+| Year | "year", "yyyy", "yy" |
+| Month | "month", "mm", "m" |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*DateTime*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
+
+## Return types
+
+Returns a positive integer value.
+
+## Remarks
+
+DateTimePart will return `undefined` for the following reasons:
+
+- The DateTimePart value specified is invalid
+- The DateTime is not a valid ISO 8601 DateTime
+
+This system function will not utilize the index.
+
+## Examples
+
+Here's an example that returns the integer value of the month:
+
+```sql
+SELECT DateTimePart("m", "2020-01-02T03:04:05.6789123Z") AS MonthValue
+```
+
+```json
+[
+ {
+ "MonthValue": 1
+ }
+]
+```
+
+Here's an example that returns the number of microseconds:
+
+```sql
+SELECT DateTimePart("mcs", "2020-01-02T03:04:05.6789123Z") AS MicrosecondsValue
+```
+
+```json
+[
+ {
+ "MicrosecondsValue": 678912
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimetoticks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-datetimetoticks.md
+
+ Title: DateTimeToTicks in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeToTicks in Azure Cosmos DB.
++++ Last updated : 08/18/2020+++
+# DateTimeToTicks (Azure Cosmos DB)
+
+Converts the specified DateTime to ticks. A single tick represents one hundred nanoseconds or one ten-millionth of a second.
+
+## Syntax
+
+```sql
+DateTimeToTicks (<DateTime>)
+```
+
+## Arguments
+
+*DateTime*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
+
+## Return types
+
+Returns a signed numeric value: the number of 100-nanosecond ticks that have elapsed between the Unix epoch (00:00:00 Thursday, 1 January 1970) and the specified DateTime.
+
+## Remarks
+
+DateTimeToTicks will return `undefined` if the DateTime is not a valid ISO 8601 DateTime.
+
+This system function will not utilize the index.
+
+## Examples
+
+Here's an example that returns the number of ticks:
+
+```sql
+SELECT DateTimeToTicks("2020-01-02T03:04:05.6789123Z") AS Ticks
+```
+
+```json
+[
+ {
+ "Ticks": 15779342456789124
+ }
+]
+```
+
+Here's an example that returns the number of ticks without specifying the number of fractional seconds:
+
+```sql
+SELECT DateTimeToTicks("2020-01-02T03:04:05Z") AS Ticks
+```
+
+```json
+[
+ {
+ "Ticks": 15779342450000000
+ }
+]
+```
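+
+Because a single tick is 100 nanoseconds, dividing a tick count by 10,000 gives the equivalent number of milliseconds since the Unix epoch. A minimal sketch, reusing the value from the preceding example:
+
+```sql
+SELECT DateTimeToTicks("2020-01-02T03:04:05Z") / 10000 AS Milliseconds
+```
+
+```json
+[
+    {
+        "Milliseconds": 1577934245000
+    }
+]
+```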
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimetotimestamp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-datetimetotimestamp.md
+
+ Title: DateTimeToTimestamp in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeToTimestamp in Azure Cosmos DB.
++++ Last updated : 08/18/2020+++
+# DateTimeToTimestamp (Azure Cosmos DB)
+
+Converts the specified DateTime to a timestamp.
+
+## Syntax
+
+```sql
+DateTimeToTimestamp (<DateTime>)
+```
+
+## Arguments
+
+*DateTime*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Return types
+
+Returns a signed numeric value: the number of milliseconds that have elapsed between the Unix epoch (00:00:00 Thursday, 1 January 1970) and the specified DateTime.
+
+## Remarks
+
+DateTimeToTimestamp will return `undefined` if the DateTime value specified is invalid.
+
+## Examples
+
+The following example converts the DateTime to a timestamp:
+
+```sql
+SELECT DateTimeToTimestamp("2020-07-09T23:20:13.4575530Z") AS Timestamp
+```
+
+```json
+[
+ {
+ "Timestamp": 1594336813457
+ }
+]
+```
+
+Here's another example:
+
+```sql
+SELECT DateTimeToTimestamp("2020-07-09") AS Timestamp
+```
+
+```json
+[
+ {
+ "Timestamp": 1594252800000
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Degrees https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-degrees.md
+
+ Title: DEGREES in Azure Cosmos DB query language
+description: Learn about the DEGREES SQL system function in Azure Cosmos DB to return the corresponding angle in degrees for an angle specified in radians
++++ Last updated : 03/03/2020+++
+# DEGREES (Azure Cosmos DB)
+
+ Returns the corresponding angle in degrees for an angle specified in radians.
+
+## Syntax
+
+```sql
+DEGREES (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the number of degrees in an angle of PI/2 radians.
+
+```sql
+SELECT DEGREES(PI()/2) AS degrees
+```
+
+ Here is the result set.
+
+```json
+[{"degrees": 90}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Endswith https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-endswith.md
+
+ Title: EndsWith in Azure Cosmos DB query language
+description: Learn about the ENDSWITH SQL system function in Azure Cosmos DB to return a Boolean indicating whether the first string expression ends with the second
++++ Last updated : 06/02/2020+++
+# ENDSWITH (Azure Cosmos DB)
+
+Returns a Boolean indicating whether the first string expression ends with the second.
+
+## Syntax
+
+```sql
+ENDSWITH(<str_expr1>, <str_expr2> [, <bool_expr>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is a string expression.
+
+*str_expr2*
+ Is a string expression to be compared to the end of *str_expr1*.
+
+*bool_expr*
+ Optional value for ignoring case. When set to true, ENDSWITH will do a case-insensitive search. When unspecified, this value is false.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+The following example checks if the string "abc" ends with "b" and "bC".
+
+```sql
+SELECT ENDSWITH("abc", "b", false) AS e1, ENDSWITH("abc", "bC", false) AS e2, ENDSWITH("abc", "bC", true) AS e3
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "e1": false,
+ "e2": false,
+ "e3": true
+ }
+]
+```
+
+## Remarks
+
+Learn about [how this string system function uses the index](sql-query-string-functions.md).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Exp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-exp.md
+
+ Title: EXP in Azure Cosmos DB query language
+description: Learn about the Exponent (EXP) SQL system function in Azure Cosmos DB to return the exponential value of the specified numeric expression
++++ Last updated : 09/13/2019+++
+# EXP (Azure Cosmos DB)
+
+ Returns the exponential value of the specified numeric expression.
+
+## Syntax
+
+```sql
+EXP (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+ The constant **e** (2.718281…) is the base of natural logarithms.
+
+ The exponent of a number is the constant **e** raised to the power of the number. For example, EXP(1.0) = e^1.0 = 2.71828182845905 and EXP(10) = e^10 = 22026.4657948067.
+
+ The exponential of the natural logarithm of a number is the number itself: EXP (LOG (n)) = n. And the natural logarithm of the exponential of a number is the number itself: LOG (EXP (n)) = n.
+
+## Examples
+
+ The following example declares a variable and returns the exponential value of the specified variable (10).
+
+```sql
+SELECT EXP(10) AS exp
+```
+
+ Here is the result set.
+
+```json
+[{exp: 22026.465794806718}]
+```
+
+ The following example returns the exponential value of the natural logarithm of 20 and the natural logarithm of the exponential of 20. Because these functions are inverse functions of one another, the return value with rounding for floating point math in both cases is 20.
+
+```sql
+SELECT EXP(LOG(20)) AS exp1, LOG(EXP(20)) AS exp2
+```
+
+ Here is the result set.
+
+```json
+[{exp1: 19.999999999999996, exp2: 20}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Floor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-floor.md
+
+ Title: FLOOR in Azure Cosmos DB query language
+description: Learn about the FLOOR SQL system function in Azure Cosmos DB to return the largest integer less than or equal to the specified numeric expression
++++ Last updated : 09/13/2019+++
+# FLOOR (Azure Cosmos DB)
+
+ Returns the largest integer less than or equal to the specified numeric expression.
+
+## Syntax
+
+```sql
+FLOOR (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example shows positive numeric, negative, and zero values with the `FLOOR` function.
+
+```sql
+SELECT FLOOR(123.45) AS fl1, FLOOR(-123.45) AS fl2, FLOOR(0.0) AS fl3
+```
+
+ Here is the result set.
+
+```json
+[{fl1: 123, fl2: -124, fl3: 0}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query From https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-from.md
+
+ Title: FROM clause in Azure Cosmos DB
+description: Learn about the SQL syntax, and example for FROM clause for Azure Cosmos DB. This article also shows examples to scope results, and get sub items by using the FROM clause.
++++ Last updated : 05/08/2020+++
+# FROM clause in Azure Cosmos DB
+
+The FROM (`FROM <from_specification>`) clause is optional, unless the source is filtered or projected later in the query. A query like `SELECT * FROM Families` enumerates over the entire `Families` container. You can also use the special identifier ROOT for the container instead of using the container name.
+
+The `FROM` clause enforces the following rules per query:
+
+* The container can be aliased, such as `SELECT f.id FROM Families AS f` or simply `SELECT f.id FROM Families f`. Here `f` is the alias for `Families`. AS is an optional keyword to [alias](sql-query-working-with-json.md#aliasing) the identifier.
+
+* Once aliased, the original source name cannot be bound. For example, `SELECT Families.id FROM Families f` is syntactically invalid because the identifier `Families` has been aliased and can't be resolved anymore.
+
+* All referenced properties must be fully qualified, to avoid any ambiguous bindings in the absence of strict schema adherence. For example, `SELECT id FROM Families f` is syntactically invalid because the property `id` isn't bound.
+
+## Syntax
+
+```sql
+FROM <from_specification>
+
+<from_specification> ::=
+ <from_source> {[ JOIN <from_source>][,...n]}
+
+<from_source> ::=
+ <container_expression> [[AS] input_alias]
+ | input_alias IN <container_expression>
+
+<container_expression> ::=
+ ROOT
+ | container_name
+ | input_alias
+ | <container_expression> '.' property_name
+ | <container_expression> '[' "property_name" | array_index ']'
+```
+
+## Arguments
+
+- `<from_source>`
+
+ Specifies a data source, with or without an alias. If an alias is not specified, it will be inferred from the `<container_expression>` using the following rules:
+
+- If the expression is a container_name, then container_name will be used as an alias.
+
+- If the expression is `<container_expression>.property_name`, then property_name will be used as an alias.
+
+- AS `input_alias`
+
+ Specifies that the `input_alias` is a set of values returned by the underlying container expression.
+
+- `input_alias` IN
+
+ Specifies that the `input_alias` should represent the set of values obtained by iterating over all array elements of each array returned by the underlying container expression. Any value returned by the underlying container expression that is not an array is ignored.
+
+- `<container_expression>`
+
+ Specifies the container expression to be used to retrieve the documents.
+
+- `ROOT`
+
+ Specifies that document should be retrieved from the default, currently connected container.
+
+- `container_name`
+
+ Specifies that document should be retrieved from the provided container. The name of the container must match the name of the container currently connected to.
+
+- `input_alias`
+
+ Specifies that document should be retrieved from the other source defined by the provided alias.
+
+- `<container_expression> '.' property_name`
+
+ Specifies that document should be retrieved by accessing the `property_name` property.
+
+- `<container_expression> '[' "property_name" | array_index ']'`
+
+ Specifies that document should be retrieved by accessing the `property_name` property or array_index array element for all documents retrieved by specified container expression.
+
+## Remarks
+
+All aliases provided or inferred in the `<from_source>`(s) must be unique. The syntax `<container_expression>.property_name` is the same as `<container_expression>["property_name"]`. However, the latter syntax can be used if a property name contains a non-identifier character.
+
+### Handling missing properties, missing array elements, and undefined values
+
+If a container expression accesses properties or array elements and that value does not exist, that value will be ignored and not processed further.
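+
+For example, assuming none of the items in the `Families` container define a `postalCode` property under `address`, a query like the following would simply return no results rather than an error:
+
+```sql
+    SELECT *
+    FROM Families.address.postalCode
+```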
+
+### Container expression context scoping
+
+A container expression may be container-scoped or document-scoped:
+
+- An expression is container-scoped, if the underlying source of the container expression is either ROOT or `container_name`. Such an expression represents a set of documents retrieved from the container directly, and is not dependent on the processing of other container expressions.
+
+- An expression is document-scoped, if the underlying source of the container expression is `input_alias` introduced earlier in the query. Such an expression represents a set of documents obtained by evaluating the container expression in the scope of each document belonging to the set associated with the aliased container. The resulting set will be a union of sets obtained by evaluating the container expression for each of the documents in the underlying set.
+
+## Examples
+
+### Get subitems by using the FROM clause
+
+The FROM clause can reduce the source to a smaller subset. To enumerate only a subtree in each item, the subroot can become the source, as shown in the following example:
+
+```sql
+ SELECT *
+ FROM Families.children
+```
+
+The results are:
+
+```json
+ [
+ [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [
+ {
+ "givenName": "Fluffy"
+ }
+ ]
+ }
+ ],
+ [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+ ]
+ ]
+```
+
+The preceding query used an array as the source, but you can also use an object as the source. The query considers any valid, defined JSON value in the source for inclusion in the result. The following example would exclude `Families` that don't have an `address.state` value.
+
+```sql
+ SELECT *
+ FROM Families.address.state
+```
+
+The results are:
+
+```json
+ [
+ "WA",
+ "NY"
+ ]
+```
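+
+The `IN` keyword described earlier can also be used to flatten the arrays instead of returning them as-is. A sketch of this form, using the same sample data, returns each child as a separate result rather than one array per family:
+
+```sql
+    SELECT *
+    FROM c IN Families.children
+```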
+
+## Next steps
+
+- [Getting started](sql-query-getting-started.md)
+- [SELECT clause](sql-query-select.md)
+- [WHERE clause](sql-query-where.md)
cosmos-db Sql Query Geospatial Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-geospatial-index.md
+
+ Title: Index geospatial data with Azure Cosmos DB
+description: Index spatial data with Azure Cosmos DB
++++ Last updated : 11/03/2020+++
+# Index geospatial data with Azure Cosmos DB
+
+We designed Azure Cosmos DB's database engine to be truly schema agnostic and provide first class support for JSON. The write optimized database engine of Azure Cosmos DB natively understands spatial data represented in the GeoJSON standard.
+
+In a nutshell, the geometry is projected from geodetic coordinates onto a 2D plane and then divided progressively into cells using a **quadtree**. These cells are mapped to 1D based on the location of the cell within a **Hilbert space filling curve**, which preserves locality of points. Additionally, when location data is indexed, it goes through a process known as **tessellation**, that is, all the cells that intersect a location are identified and stored as keys in the Azure Cosmos DB index. At query time, arguments like points and Polygons are also tessellated to extract the relevant cell ID ranges, which are then used to retrieve data from the index.
+
+If you specify an indexing policy that includes a spatial index for `/*` (all paths), then all data found within the container is indexed for efficient spatial queries.
+
+> [!NOTE]
+> Azure Cosmos DB supports indexing of Points, LineStrings, Polygons, and MultiPolygons. If you index any one of these types, we will automatically index all other types. In other words, if you index Polygons, we'll also index Points, LineStrings, and MultiPolygons. Indexing a new spatial type does not affect the write RU charge or index size unless you have valid GeoJSON data of that type.
+
+## Modifying geospatial configuration
+
+In your container, the **Geospatial Configuration** specifies how the spatial data will be indexed. Specify one **Geospatial Configuration** per container: geography or geometry.
+
+You can toggle between the **geography** and **geometry** spatial type in the Azure portal. It's important that you create a [valid spatial geometry indexing policy with a bounding box](#geometry-data-indexing-examples) before switching to the geometry spatial type.
+
+You can set the **Geospatial Configuration** in **Data Explorer** within the Azure portal.
+You can also modify the `geospatialConfig` in the .NET SDK to adjust the **Geospatial Configuration**:
+
+If not specified, the `geospatialConfig` will default to the geography data type. When you modify the `geospatialConfig`, all existing geospatial data in the container will be reindexed.
+
+Here is an example for modifying the geospatial data type to `geometry` by setting the `geospatialConfig` property and adding a **boundingBox**:
+
+```csharp
+ //Retrieve the container's details
+ ContainerResponse containerResponse = await client.GetContainer("db", "spatial").ReadContainerAsync();
+ //Set GeospatialConfig to Geometry
+ GeospatialConfig geospatialConfig = new GeospatialConfig(GeospatialType.Geometry);
+ containerResponse.Resource.GeospatialConfig = geospatialConfig;
+ // Add a spatial index including the required boundingBox
+ SpatialPath spatialPath = new SpatialPath
+ {
+ Path = "/locations/*",
+ BoundingBox = new BoundingBoxProperties(){
+ Xmin = 0,
+ Ymin = 0,
+ Xmax = 10,
+ Ymax = 10
+ }
+ };
+ spatialPath.SpatialTypes.Add(SpatialType.Point);
+ spatialPath.SpatialTypes.Add(SpatialType.LineString);
+ spatialPath.SpatialTypes.Add(SpatialType.Polygon);
+ spatialPath.SpatialTypes.Add(SpatialType.MultiPolygon);
+
+ containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(spatialPath);
+
+ // Update container with changes
+ await client.GetContainer("db", "spatial").ReplaceContainerAsync(containerResponse.Resource);
+```
+
+## Geography data indexing examples
+
+The following JSON snippet shows an indexing policy with spatial indexing enabled for the **geography** data type. It is valid for spatial data with the geography data type and will index any GeoJSON Point, Polygon, MultiPolygon, or LineString found within documents for spatial querying. If you are modifying the indexing policy using the Azure portal, you can specify the following JSON for indexing policy to enable spatial indexing on your container:
+
+**Container indexing policy JSON with geography spatial indexing**
+
+```json
+{
+ "automatic": true,
+ "indexingMode": "Consistent",
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "spatialIndexes": [
+ {
+ "path": "/*",
+ "types": [
+ "Point",
+ "Polygon",
+ "MultiPolygon",
+ "LineString"
+ ]
+ }
+ ],
+ "excludedPaths": []
+}
+```
+
+> [!NOTE]
+> If the location GeoJSON value within the document is malformed or invalid, then it will not get indexed for spatial querying. You can validate location values using ST_ISVALID and ST_ISVALIDDETAILED.
+
+You can also [modify indexing policy](../how-to-manage-indexing-policy.md) using the Azure CLI, PowerShell, or any SDK.
+
+## Geometry data indexing examples
+
+With the **geometry** data type, similar to the geography data type, you must specify relevant paths and types to index. In addition, you must also specify a `boundingBox` within the indexing policy to indicate the desired area to be indexed for that specific path. Each geospatial path requires its own `boundingBox`.
+
+The bounding box consists of the following properties:
+
+- **xmin**: the minimum indexed x coordinate
+- **ymin**: the minimum indexed y coordinate
+- **xmax**: the maximum indexed x coordinate
+- **ymax**: the maximum indexed y coordinate
+
+A bounding box is required because geometric data occupies a plane that can be infinite. Spatial indexes, however, require a finite space. For the **geography** data type, the Earth is the boundary and you do not need to set a bounding box.
+
+Create a bounding box that contains all (or most) of your data. Only operations computed on the objects that are entirely inside the bounding box will be able to utilize the spatial index. Making the bounding box larger than necessary will negatively impact query performance.
+
+Here is an example indexing policy that indexes **geometry** data with **geospatialConfig** set to `geometry`:
+
+```json
+{
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/\"_etag\"/?"
+ }
+ ],
+ "spatialIndexes": [
+ {
+ "path": "/locations/*",
+ "types": [
+ "Point",
+ "LineString",
+ "Polygon",
+ "MultiPolygon"
+ ],
+ "boundingBox": {
+ "xmin": -10,
+ "ymin": -20,
+ "xmax": 10,
+ "ymax": 20
+ }
+ }
+ ]
+}
+```
+
+The above indexing policy has a **boundingBox** of (-10, 10) for x coordinates and (-20, 20) for y coordinates. The container with the above indexing policy will index all Points, Polygons, MultiPolygons, and LineStrings that are entirely within this region.
+
+> [!NOTE]
+> If you try to add an indexing policy with a **boundingBox** to a container with `geography` data type, it will fail. You should modify the container's **geospatialConfig** to be `geometry` before adding a **boundingBox**. You can add data and modify the remainder of
+> your indexing policy (such as the paths and types) either before or after selecting the geospatial data type for the container.
+
+## Next steps
+
+Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
+
+* Learn more about [Azure Cosmos DB Query](sql-query-getting-started.md)
+* Learn more about [Querying spatial data with Azure Cosmos DB](sql-query-geospatial-query.md)
+* Learn more about [Geospatial and GeoJSON location data in Azure Cosmos DB](sql-query-geospatial-intro.md)
cosmos-db Sql Query Geospatial Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-geospatial-intro.md
+
+ Title: Geospatial and GeoJSON location data in Azure Cosmos DB
+description: Understand how to create spatial objects with Azure Cosmos DB and the SQL API.
++++ Last updated : 02/25/2021+++
+# Geospatial and GeoJSON location data in Azure Cosmos DB
+
+This article is an introduction to the geospatial functionality in Azure Cosmos DB. After reading our documentation on geospatial indexing you will be able to answer the following questions:
+
+* How do I store spatial data in Azure Cosmos DB?
+* How can I query spatial data in Azure Cosmos DB in SQL and LINQ?
+* How do I enable or disable spatial indexing in Azure Cosmos DB?
+
+## Spatial Data Use Cases
+
+Geospatial data often involve proximity queries, for example, "find all coffee shops near my current location". Common use cases are:
+
+* Geolocation Analytics, driving specific located marketing initiatives.
+* Location based personalization, for multiple industries like Retail and Healthcare.
+* Logistics enhancement, for transport optimization.
+* Risk Analysis, especially for insurance and finance companies.
+* Situational awareness, for alerts and notifications.
+
+## Introduction to spatial data
+
+Spatial data describes the position and shape of objects in space. In most applications, these correspond to objects on the earth and geospatial data. Spatial data can be used to represent the location of a person, a place of interest, or the boundary of a city or a lake.
+
+Azure Cosmos DB's SQL API supports two spatial data types: the **geometry** data type and the **geography** data type.
+
+- The **geometry** type represents data in a Euclidean (flat) coordinate system
+- The **geography** type represents data in a round-earth coordinate system.
+
+## Supported data types
+
+Azure Cosmos DB supports indexing and querying of geospatial point data that's represented using the [GeoJSON specification](https://tools.ietf.org/html/rfc7946). GeoJSON data structures are always valid JSON objects, so they can be stored and queried using Azure Cosmos DB without any specialized tools or libraries.
+
+Azure Cosmos DB supports the following spatial data types:
+
+- Point
+- LineString
+- Polygon
+- MultiPolygon
+
+### Points
+
+A **Point** denotes a single position in space. In geospatial data, a Point represents the exact location, which could be a street address of a grocery store, a kiosk, an automobile, or a city. A point is represented in GeoJSON (and Azure Cosmos DB) using its coordinate pair of longitude and latitude.
+
+Here's an example JSON for a point:
+
+**Points in Azure Cosmos DB**
+
+```json
+{
+ "type":"Point",
+ "coordinates":[ 31.9, -4.8 ]
+}
+```
+
+Spatial data types can be embedded in an Azure Cosmos DB document as shown in this example of a user profile containing location data:
+
+**User Profile with Location stored in Azure Cosmos DB**
+
+```json
+{
+ "id":"cosmosdb-profile",
+ "screen_name":"@CosmosDB",
+ "city":"Redmond",
+ "topics":[ "global", "distributed" ],
+ "location":{
+ "type":"Point",
+ "coordinates":[ 31.9, -4.8 ]
+ }
+}
+```
+
+### Points in a geometry coordinate system
+
+For the **geometry** data type, the GeoJSON specification specifies the horizontal axis first and the vertical axis second.
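+
+For instance, an illustrative geometry Point with an x value of 2 and a y value of 3 looks like this:
+
+```json
+{
+    "type":"Point",
+    "coordinates":[ 2, 3 ]
+}
+```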
+
+### Points in a geography coordinate system
+
+For the **geography** data type, the GeoJSON specification specifies longitude first and latitude second. Like in other mapping applications, longitude and latitude are angles and represented in terms of degrees. Longitude values are measured from the Prime Meridian and are between -180 degrees and 180.0 degrees, and latitude values are measured from the equator and are between -90.0 degrees and 90.0 degrees.
+
+Azure Cosmos DB interprets coordinates as represented per the WGS-84 reference system. See below for more details about coordinate reference systems.
+
+### LineStrings
+
+**LineStrings** represent a series of two or more points in space and the line segments that connect them. In geospatial data, LineStrings are commonly used to represent highways or rivers.
+
+**LineStrings in GeoJSON**
+
+```json
+ "type":"LineString",
+ "coordinates":[ [
+ [ 31.8, -5 ],
+ [ 31.8, -4.7 ]
+ ] ]
+```
+
+### Polygons
+
+A **Polygon** is a boundary of connected points that forms a closed LineString. Polygons are commonly used to represent natural formations like lakes or political jurisdictions like cities and states. Here's an example of a Polygon in Azure Cosmos DB:
+
+**Polygons in GeoJSON**
+
+```json
+{
+ "type":"Polygon",
+ "coordinates":[ [
+ [ 31.8, -5 ],
+ [ 32, -5 ],
+ [ 32, -4.7 ],
+ [ 31.8, -4.7 ],
+ [ 31.8, -5 ]
+ ] ]
+}
+```
+
+> [!NOTE]
+> The GeoJSON specification requires that for valid Polygons, the last coordinate pair provided should be the same as the first, to create a closed shape.
+>
+> Points within a Polygon must be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+>
+>
+
+### MultiPolygons
+
+A **MultiPolygon** is an array of zero or more Polygons. **MultiPolygons** cannot overlap sides or have any common area. They may touch at one or more points.
+
+**MultiPolygons in GeoJSON**
+
+```json
+{
+ "type":"MultiPolygon",
+ "coordinates":[[[
+ [52.0, 12.0],
+ [53.0, 12.0],
+ [53.0, 13.0],
+ [52.0, 13.0],
+ [52.0, 12.0]
+ ]],
+ [[
+ [50.0, 0.0],
+ [51.0, 0.0],
+ [51.0, 5.0],
+ [50.0, 5.0],
+ [50.0, 0.0]
+ ]]]
+}
+```
+
+## Coordinate reference systems
+
+Since the shape of the earth is irregular, coordinates of geography geospatial data are represented in many coordinate reference systems (CRS), each with their own frames of reference and units of measurement. For example, the "National Grid of Britain" is a reference system that is accurate for the United Kingdom, but not outside it.
+
+The most popular CRS in use today is the World Geodetic System [WGS-84](https://earth-info.nga.mil/GandG/update/index.php). GPS devices and many mapping services, including the Google Maps and Bing Maps APIs, use WGS-84. Azure Cosmos DB supports indexing and querying of geography geospatial data using the WGS-84 CRS only.
+
+## Creating documents with spatial data
+When you create documents that contain GeoJSON values, they are automatically indexed with a spatial index in accordance with the indexing policy of the container. If you're working with an Azure Cosmos DB SDK in a dynamically typed language like Python or Node.js, you must create valid GeoJSON.
+
+**Create Document with Geospatial data in Node.js**
+
+```javascript
+var userProfileDocument = {
+ "id":"cosmosdb",
+ "location":{
+ "type":"Point",
+ "coordinates":[ -122.12, 47.66 ]
+ }
+};
+
+client.createDocument(`dbs/${databaseName}/colls/${collectionName}`, userProfileDocument, (err, created) => {
+ // additional code within the callback
+});
+```
+
+If you're working with the SQL APIs, you can use the `Point`, `LineString`, `Polygon`, and `MultiPolygon` classes within the `Microsoft.Azure.Cosmos.Spatial` namespace to embed location information within your application objects. These classes help simplify the serialization and deserialization of spatial data into GeoJSON.
+
+**Create Document with Geospatial data in .NET**
+
+```csharp
+using Microsoft.Azure.Cosmos.Spatial;
+
+public class UserProfile
+{
+ [JsonProperty("id")]
+ public string id { get; set; }
+
+ [JsonProperty("location")]
+ public Point Location { get; set; }
+
+ // More properties
+}
+
+await container.CreateItemAsync( new UserProfile
+ {
+ id = "cosmosdb",
+ Location = new Point (-122.12, 47.66)
+ });
+```
+
+If you don't have the latitude and longitude information, but have the physical addresses or location name like city or country/region, you can look up the actual coordinates by using a geocoding service like Bing Maps REST Services. Learn more about Bing Maps geocoding [here](/bingmaps/rest-services/).
+
+## Next steps
+
+Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
+
+* Learn more about [Azure Cosmos DB Query](sql-query-getting-started.md)
+* Learn more about [Querying spatial data with Azure Cosmos DB](sql-query-geospatial-query.md)
+* Learn more about [Index spatial data with Azure Cosmos DB](sql-query-geospatial-index.md)
cosmos-db Sql Query Geospatial Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-geospatial-query.md
+
+ Title: Querying geospatial data with Azure Cosmos DB
+description: Querying spatial data with Azure Cosmos DB
++++ Last updated : 02/20/2020+++
+# Querying geospatial data with Azure Cosmos DB
+
+This article will cover how to query geospatial data in Azure Cosmos DB using SQL and LINQ. Currently, storing and accessing geospatial data is supported only by Azure Cosmos DB SQL API accounts. Azure Cosmos DB supports the following Open Geospatial Consortium (OGC) built-in functions for geospatial querying. For more information on the complete set of built-in functions in the SQL language, see [Query System Functions in Azure Cosmos DB](sql-query-system-functions.md).
+
+## Spatial SQL built-in functions
+
+Here is a list of geospatial system functions useful for querying in Azure Cosmos DB:
+
+|**Usage**|**Description**|
+|||
+| ST_DISTANCE (spatial_expr, spatial_expr) | Returns the distance between the two GeoJSON Point, Polygon, or LineString expressions.|
+|ST_WITHIN (spatial_expr, spatial_expr) | Returns a Boolean expression indicating whether the first GeoJSON object (Point, Polygon, or LineString) is within the second GeoJSON object (Point, Polygon, or LineString).|
+|ST_INTERSECTS (spatial_expr, spatial_expr)| Returns a Boolean expression indicating whether the two specified GeoJSON objects (Point, Polygon, or LineString) intersect.|
+|ST_ISVALID| Returns a Boolean value indicating whether the specified GeoJSON Point, Polygon, or LineString expression is valid.|
+| ST_ISVALIDDETAILED| Returns a JSON value containing a Boolean value if the specified GeoJSON Point, Polygon, or LineString expression is valid. If invalid, it returns the reason as a string value.|
+
+Spatial functions can be used to perform proximity queries against spatial data. For example, here's a query that returns all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function.
+
+**Query**
+
+```sql
+ SELECT f.id
+ FROM Families f
+ WHERE ST_DISTANCE(f.location, {"type": "Point", "coordinates":[31.9, -4.8]}) < 30000
+```
+
+**Results**
+
+```json
+ [{
+ "id": "WakefieldFamily"
+ }]
+```
+
+If you include spatial indexing in your indexing policy, then "distance queries" will be served efficiently through the index. For more information on spatial indexing, see [geospatial indexing](sql-query-geospatial-index.md). If you don't have a spatial index for the specified paths, the query will do a scan of the container.
+
+`ST_WITHIN` can be used to check if a point lies within a Polygon. Commonly Polygons are used to represent boundaries like zip codes, state boundaries, or natural formations. Again if you include spatial indexing in your indexing policy, then "within" queries will be served efficiently through the index.
+
+Polygon arguments in `ST_WITHIN` can contain only a single ring, that is, the Polygons must not contain holes in them.
+
+**Query**
+
+```sql
+ SELECT *
+ FROM Families f
+ WHERE ST_WITHIN(f.location, {
+ "type":"Polygon",
+ "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
+ })
+```
+
+**Results**
+
+```json
+ [{
+ "id": "WakefieldFamily",
+ }]
+```
+
+> [!NOTE]
+> Similar to how mismatched types work in Azure Cosmos DB query, if the location value specified in either argument is malformed or invalid, then it evaluates to **undefined** and the evaluated document is skipped from the query results. If your query returns no results, run `ST_ISVALIDDETAILED` to debug why the spatial type is invalid.
+>
+>
+
+Azure Cosmos DB also supports performing inverse queries, that is, you can index polygons or lines in Azure Cosmos DB, then query for the areas that contain a specified point. This pattern is commonly used in logistics to identify, for example, when a truck enters or leaves a designated area.
+
+**Query**
+
+```sql
+ SELECT *
+ FROM Areas a
+ WHERE ST_WITHIN({"type": "Point", "coordinates":[31.9, -4.8]}, a.location)
+```
+
+**Results**
+
+```json
+ [{
+ "id": "MyDesignatedLocation",
+ "location": {
+ "type":"Polygon",
+ "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
+ }
+ }]
+```
+
+`ST_ISVALID` and `ST_ISVALIDDETAILED` can be used to check if a spatial object is valid. For example, the following query checks the validity of a point with an out of range latitude value (-132.8). `ST_ISVALID` returns just a Boolean value, and `ST_ISVALIDDETAILED` returns the Boolean and a string containing the reason why it is considered invalid.
+
+**Query**
+
+```sql
+ SELECT ST_ISVALID({ "type": "Point", "coordinates": [31.9, -132.8] })
+```
+
+**Results**
+
+```json
+ [{
+ "$1": false
+ }]
+```
+
+These functions can also be used to validate Polygons. For example, here we use `ST_ISVALIDDETAILED` to validate a Polygon that is not closed.
+
+**Query**
+
+```sql
+ SELECT ST_ISVALIDDETAILED({ "type": "Polygon", "coordinates": [[
+ [ 31.8, -5 ], [ 31.8, -4.7 ], [ 32, -4.7 ], [ 32, -5 ]
+ ]]})
+```
+
+**Results**
+
+```json
+ [{
+ "$1": {
+ "valid": false,
+ "reason": "The Polygon input is not valid because the start and end points of the ring number 1 are not the same. Each ring of a Polygon must have the same start and end points."
+ }
+ }]
+```
+
+## LINQ querying in the .NET SDK
+
+The SQL .NET SDK also provides stub methods `Distance()` and `Within()` for use within LINQ expressions. The SQL LINQ provider translates these method calls to the equivalent SQL built-in function calls (ST_DISTANCE and ST_WITHIN respectively).
+
+Here's an example of a LINQ query that finds all documents in the Azure Cosmos container whose `location` value is within a radius of 30 km of the specified point.
+
+**LINQ query for Distance**
+
+```csharp
+ foreach (UserProfile user in container.GetItemLinqQueryable<UserProfile>(allowSynchronousQueryExecution: true)
+ .Where(u => u.ProfileType == "Public" && u.Location.Distance(new Point(32.33, -4.66)) < 30000))
+ {
+ Console.WriteLine("\t" + user);
+ }
+```
+
+Similarly, here's a query for finding all the documents whose `location` is within the specified box/Polygon.
+
+**LINQ query for Within**
+
+```csharp
+ Polygon rectangularArea = new Polygon(
+ new[]
+ {
+ new LinearRing(new [] {
+ new Position(31.8, -5),
+ new Position(32, -5),
+ new Position(32, -4.7),
+ new Position(31.8, -4.7),
+ new Position(31.8, -5)
+ })
+ });
+
+ foreach (UserProfile user in container.GetItemLinqQueryable<UserProfile>(allowSynchronousQueryExecution: true)
+ .Where(a => a.Location.Within(rectangularArea)))
+ {
+ Console.WriteLine("\t" + user);
+ }
+```
+
+## Next steps
+
+Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
+
+* Learn more about [Azure Cosmos DB Query](sql-query-getting-started.md)
+* Learn more about [Geospatial and GeoJSON location data in Azure Cosmos DB](sql-query-geospatial-intro.md)
+* Learn more about [Index spatial data with Azure Cosmos DB](sql-query-geospatial-index.md)
cosmos-db Sql Query Getcurrentdatetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-getcurrentdatetime.md
+
+ Title: GetCurrentDateTime in Azure Cosmos DB query language
+description: Learn about SQL system function GetCurrentDateTime in Azure Cosmos DB.
++++ Last updated : 02/03/2021+++
+# GetCurrentDateTime (Azure Cosmos DB)
+
+Returns the current UTC (Coordinated Universal Time) date and time as an ISO 8601 string.
+
+## Syntax
+
+```sql
+GetCurrentDateTime ()
+```
+
+## Return types
+
+Returns the current UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Remarks
+
+GetCurrentDateTime() is a nondeterministic function. The result returned is UTC. Precision is 7 digits, with an accuracy of 100 nanoseconds.
+
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
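+
+A minimal sketch of that pattern, assuming a hypothetical `lastUpdated` property and a constant DateTime string obtained by the client before running the query:
+
+```sql
+SELECT *
+FROM c
+WHERE c.lastUpdated >= "2021-08-01T00:00:00.0000000Z"
+```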
+
+## Examples
+
+The following example shows how to get the current UTC Date Time using the GetCurrentDateTime() built-in function.
+
+```sql
+SELECT GetCurrentDateTime() AS currentUtcDateTime
+```
+
+ Here is an example result set.
+
+```json
+[{
+ "currentUtcDateTime": "2019-05-03T20:36:17.1234567Z"
+}]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Getcurrentticks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-getcurrentticks.md
+
+ Title: GetCurrentTicks in Azure Cosmos DB query language
+description: Learn about SQL system function GetCurrentTicks in Azure Cosmos DB.
++++ Last updated : 02/03/2021+++
+# GetCurrentTicks (Azure Cosmos DB)
+
+Returns the number of 100-nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
+
+## Syntax
+
+```sql
+GetCurrentTicks ()
+```
+
+## Return types
+
+Returns a signed numeric value: the number of 100-nanosecond ticks that have elapsed between the Unix epoch (00:00:00 Thursday, 1 January 1970) and the current time.
+
+## Remarks
+
+GetCurrentTicks() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
+
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
+
+## Examples
+
+Here's an example that returns the current time, measured in ticks:
+
+```sql
+SELECT GetCurrentTicks() AS CurrentTimeInTicks
+```
+
+```json
+[
+ {
+ "CurrentTimeInTicks": 15973607943002652
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Getcurrenttimestamp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-getcurrenttimestamp.md
+
+ Title: GetCurrentTimestamp in Azure Cosmos DB query language
+description: Learn about SQL system function GetCurrentTimestamp in Azure Cosmos DB.
++++ Last updated : 02/03/2021+++
+# GetCurrentTimestamp (Azure Cosmos DB)
+
+ Returns the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
+
+## Syntax
+
+```sql
+GetCurrentTimestamp ()
+```
+
+## Return types
+
+Returns a signed numeric value: the number of milliseconds that have elapsed between the Unix epoch (00:00:00 Thursday, 1 January 1970) and the current time.
+
+## Remarks
+
+GetCurrentTimestamp() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
+
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
+
+## Examples
+
+ The following example shows how to get the current timestamp using the GetCurrentTimestamp() built-in function.
+
+```sql
+SELECT GetCurrentTimestamp() AS currentUtcTimestamp
+```
+
+ Here is an example result set.
+
+```json
+[{
+ "currentUtcTimestamp": 1556916469065
+}]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-getting-started.md
+
+ Title: Getting started with SQL queries in Azure Cosmos DB
+description: Learn how to use SQL queries to query data from Azure Cosmos DB. You can upload sample data to a container in Azure Cosmos DB and query it.
++++ Last updated : 04/09/2021+++
+# Getting started with SQL queries
+
+In Azure Cosmos DB SQL API accounts, there are two ways to read data:
+
+**Point reads** - You can do a key/value lookup on a single *item ID* and partition key. The *item ID* and partition key combination is the key and the item itself is the value. For a 1 KB document, point reads typically cost 1 [request unit](../request-units.md) with a latency under 10 ms. Point reads return a single item.
+
+Here are some examples of how to do **Point reads** with each SDK:
+
+- [.NET SDK](/dotnet/api/microsoft.azure.cosmos.container.readitemasync)
+- [Java SDK](/java/api/com.azure.cosmos.cosmoscontainer.readitem#com_azure_cosmos_CosmosContainer__T_readItem_java_lang_String_com_azure_cosmos_models_PartitionKey_com_azure_cosmos_models_CosmosItemRequestOptions_java_lang_Class_T__)
+- [Node.js SDK](/javascript/api/@azure/cosmos/item#read-requestoptions-)
+- [Python SDK](/python/api/azure-cosmos/azure.cosmos.containerproxy#read-item-item--partition-key--populate-query-metrics-none--post-trigger-include-none-kwargs-)
+
+**SQL queries** - You can query data by writing queries using the Structured Query Language (SQL) as a JSON query language. Queries always cost at least 2.3 request units and, in general, will have a higher and more variable latency than point reads. Queries can return many items.
+
+Most read-heavy workloads on Azure Cosmos DB use a combination of both point reads and SQL queries. If you just need to read a single item, point reads are cheaper and faster than queries. Point reads don't need to use the query engine to access data and can read the data directly. Of course, it's not possible for all workloads to exclusively read data using point reads, so support of SQL as a query language and [schema-agnostic indexing](../index-overview.md) provide a more flexible way to access your data.
+
+Here are some examples of how to do **SQL queries** with each SDK:
+
+- [.NET SDK](../sql-api-dotnet-v3sdk-samples.md#query-examples)
+- [Java SDK](../sql-api-java-sdk-samples.md#query-examples)
+- [Node.js SDK](../sql-api-nodejs-samples.md#item-examples)
+- [Python SDK](../sql-api-python-samples.md#item-examples)
+
+The remainder of this doc shows how to get started writing SQL queries in Azure Cosmos DB. SQL queries can be run through either the SDK or Azure portal.
+
+## Upload sample data
+
+In your SQL API Cosmos DB account, open the [Data Explorer](../data-explorer.md) to create a container called `Families`. After the container is created, use the data structures browser to find and open it. In your `Families` container, you will see the `Items` option right below the name of the container. Open this option and you'll see a button in the menu bar in the center of the screen to create a 'New Item'. You will use this feature to create the JSON items below.
+
+### Create JSON items
+
+The following 2 JSON items are documents about the Andersen and Wakefield families. They include parents, children and their pets, address, and registration information.
+
+The first item has strings, numbers, Booleans, arrays, and nested properties:
+
+```json
+{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+}
+```
+
+The second item uses `givenName` and `familyName` instead of `firstName` and `lastName`:
+
+```json
+{
+ "id": "WakefieldFamily",
+ "parents": [
+ { "familyName": "Wakefield", "givenName": "Robin" },
+ { "familyName": "Miller", "givenName": "Ben" }
+ ],
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8 }
+ ],
+ "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
+
+### Query the JSON items
+
+Try a few queries against the JSON data to understand some of the key aspects of Azure Cosmos DB's SQL query language.
+
+The following query returns the items where the `id` field matches `AndersenFamily`. Since it's a `SELECT *` query, the output of the query is the complete JSON item. For more information about SELECT syntax, see [SELECT statement](sql-query-select.md).
+
+```sql
+ SELECT *
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+The query results are:
+
+```json
+ [{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow", "gender": "female", "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+ }]
+```
+
+The following query reformats the JSON output into a different shape. The query projects a new JSON `Family` object with two selected fields, `Name` and `City`, when the address city is the same as the state. "NY, NY" matches this case.
+
+```sql
+ SELECT {"Name":f.id, "City":f.address.city} AS Family
+ FROM Families f
+ WHERE f.address.city = f.address.state
+```
+
+The query results are:
+
+```json
+ [{
+ "Family": {
+ "Name": "WakefieldFamily",
+ "City": "NY"
+ }
+ }]
+```
+
+The following query returns all the given names of children in the family whose `id` matches `WakefieldFamily`, ordered by city.
+
+```sql
+ SELECT c.givenName
+ FROM Families f
+ JOIN c IN f.children
+ WHERE f.id = 'WakefieldFamily'
+ ORDER BY f.address.city ASC
+```
+
+The results are:
+
+```json
+ [
+ { "givenName": "Jesse" },
+ { "givenName": "Lisa"}
+ ]
+```
+
+## Remarks
+
+The preceding examples show several aspects of the Cosmos DB query language:
+
+* Since SQL API works on JSON values, it deals with tree-shaped entities instead of rows and columns. You can refer to the tree nodes at any arbitrary depth, like `Node1.Node2.Node3…..Nodem`, similar to the two-part reference of `<table>.<column>` in ANSI SQL.
+
+* Because the query language works with schemaless data, the type system must be bound dynamically. The same expression could yield different types on different items. The result of a query is a valid JSON value, but isn't guaranteed to be of a fixed schema.
+
+* Azure Cosmos DB supports strict JSON items only. The type system and expressions are restricted to deal only with JSON types. For more information, see the [JSON specification](https://www.json.org/).
+
+* A Cosmos container is a schema-free collection of JSON items. The relations within and across container items are implicitly captured by containment, not by primary key and foreign key relations. This feature is important for the intra-item joins that are described in [Joins in Azure Cosmos DB](sql-query-join.md).
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../introduction.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Group By https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-group-by.md
+
+ Title: GROUP BY clause in Azure Cosmos DB
+description: Learn about the GROUP BY clause for Azure Cosmos DB.
++++ Last updated : 07/30/2021+++
+# GROUP BY clause in Azure Cosmos DB
+
+The GROUP BY clause divides the query's results according to the values of one or more specified properties.
+
+## Syntax
+
+```sql
+<group_by_clause> ::= GROUP BY <scalar_expression_list>
+
+<scalar_expression_list> ::=
+ <scalar_expression>
+ | <scalar_expression_list>, <scalar_expression>
+```
+
+## Arguments
+
+- `<scalar_expression_list>`
+
+ Specifies the expressions that will be used to divide query results.
+
+- `<scalar_expression>`
+
+ Any scalar expression is allowed except for scalar subqueries and scalar aggregates. Each scalar expression must contain at least one property reference. There is no limit to the number of individual expressions or the cardinality of each expression.
+
+## Remarks
+
+ When a query uses a GROUP BY clause, the SELECT clause can only contain the subset of properties and system functions included in the GROUP BY clause. One exception is [aggregate functions](sql-query-aggregate-functions.md), which can appear in the SELECT clause without being included in the GROUP BY clause. You can also always include literal values in the SELECT clause.
+
+ The GROUP BY clause must come after the SELECT, FROM, and WHERE clauses and before the OFFSET LIMIT clause. You currently can't use GROUP BY with an ORDER BY clause, but support is planned.
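+
+As a sketch of that clause ordering, using the nutrition sample from the Examples section below, a query that groups and then pages through the groups might look like this:
+
+```sql
+SELECT COUNT(1) AS foodGroupCount, f.foodGroup
+FROM Food f
+WHERE IS_DEFINED(f.foodGroup)
+GROUP BY f.foodGroup
+OFFSET 0 LIMIT 5
+```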
+
+ The GROUP BY clause does not allow any of the following:
+
+- Aliasing properties or aliasing system functions (aliasing is still allowed within the SELECT clause)
+- Subqueries
+- Aggregate system functions (these are only allowed in the SELECT clause)
+
+Queries with an aggregate system function and a subquery with `GROUP BY` are not supported. For example, the following query is not supported:
+
+```sql
+SELECT COUNT(UniqueLastNames)
+FROM (
+    SELECT AVG(f.age)
+    FROM f
+    GROUP BY f.lastName
+) AS UniqueLastNames
+```
+
+Additionally, cross-partition `GROUP BY` queries can have a maximum of 21 [aggregate system functions](sql-query-aggregate-functions.md).
+
+## Examples
+
+These examples use a sample [nutrition data set](https://github.com/AzureCosmosDB/labs/blob/master/dotnet/setup/NutritionData.json).
+
+Here's a query that returns the total count of items in each `foodGroup`:
+
+```sql
+SELECT TOP 4 COUNT(1) AS foodGroupCount, f.foodGroup
+FROM Food f
+GROUP BY f.foodGroup
+```
+
+Some results are shown below (the TOP keyword is used to limit the number of results):
+
+```json
+[
+ {
+ "foodGroupCount": 183,
+ "foodGroup": "Cereal Grains and Pasta"
+ },
+ {
+ "foodGroupCount": 133,
+ "foodGroup": "Nut and Seed Products"
+ },
+ {
+ "foodGroupCount": 113,
+ "foodGroup": "Meals, Entrees, and Side Dishes"
+ },
+ {
+ "foodGroupCount": 64,
+ "foodGroup": "Spices and Herbs"
+ }
+]
+```
+
+This query has two expressions used to divide results:
+
+```sql
+SELECT TOP 4 COUNT(1) AS foodGroupCount, f.foodGroup, f.version
+FROM Food f
+GROUP BY f.foodGroup, f.version
+```
+
+Some results are:
+
+```json
+[
+ {
+ "foodGroupCount": 183,
+ "foodGroup": "Cereal Grains and Pasta",
+ "version": 1
+ },
+ {
+ "foodGroupCount": 133,
+ "foodGroup": "Nut and Seed Products",
+ "version": 1
+ },
+ {
+ "foodGroupCount": 113,
+ "foodGroup": "Meals, Entrees, and Side Dishes",
+ "version": 1
+ },
+ {
+ "foodGroupCount": 64,
+ "foodGroup": "Spices and Herbs",
+ "version": 1
+ }
+]
+```
+
+This query has a system function in the GROUP BY clause:
+
+```sql
+SELECT TOP 4 COUNT(1) AS foodGroupCount, UPPER(f.foodGroup) AS upperFoodGroup
+FROM Food f
+GROUP BY UPPER(f.foodGroup)
+```
+
+Some results are:
+
+```json
+[
+ {
+ "foodGroupCount": 183,
+ "upperFoodGroup": "CEREAL GRAINS AND PASTA"
+ },
+ {
+ "foodGroupCount": 133,
+ "upperFoodGroup": "NUT AND SEED PRODUCTS"
+ },
+ {
+ "foodGroupCount": 113,
+ "upperFoodGroup": "MEALS, ENTREES, AND SIDE DISHES"
+ },
+ {
+ "foodGroupCount": 64,
+ "upperFoodGroup": "SPICES AND HERBS"
+ }
+]
+```
+
+This query uses both keywords and system functions in the item property expression:
+
+```sql
+SELECT COUNT(1) AS foodGroupCount, ARRAY_CONTAINS(f.tags, {name: 'orange'}) AS containsOrangeTag, f.version BETWEEN 0 AND 2 AS correctVersion
+FROM Food f
+GROUP BY ARRAY_CONTAINS(f.tags, {name: 'orange'}), f.version BETWEEN 0 AND 2
+```
+
+The results are:
+
+```json
+[
+ {
+ "foodGroupCount": 10,
+ "containsOrangeTag": true,
+ "correctVersion": true
+ },
+ {
+ "foodGroupCount": 8608,
+ "containsOrangeTag": false,
+ "correctVersion": true
+ }
+]
+```
+
+## Next steps
+
+- [Getting started](sql-query-getting-started.md)
+- [SELECT clause](sql-query-select.md)
+- [Aggregate functions](sql-query-aggregate-functions.md)
cosmos-db Sql Query Index Of https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-index-of.md
+
+ Title: INDEX_OF in Azure Cosmos DB query language
+description: Learn about SQL system function INDEX_OF in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# INDEX_OF (Azure Cosmos DB)
+
+ Returns the starting position of the first occurrence of the second string expression within the first specified string expression, or -1 if the string is not found.
+
+## Syntax
+
+```sql
+INDEX_OF(<str_expr1>, <str_expr2> [, <numeric_expr>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the string expression to search for.
+
+*numeric_expr*
+ Optional numeric expression that sets the position where the search starts. The first position in *str_expr1* is 0.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the index of various substrings inside "abc".
+
+```sql
+SELECT INDEX_OF("abc", "ab") AS i1, INDEX_OF("abc", "b") AS i2, INDEX_OF("abc", "c") AS i3
+```
+
+ Here is the result set.
+
+```json
+[{"i1": 0, "i2": 1, "i3": -1}]
+```
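+
+The optional third argument sets where the search begins. As a sketch (the expected output assumes the zero-based positions described above), searching for "b" from position 2 skips the first match:
+
+```sql
+SELECT INDEX_OF("abcb", "b") AS i1, INDEX_OF("abcb", "b", 2) AS i2
+```
+
+This would be expected to return `[{"i1": 1, "i2": 3}]`.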
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Array https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-is-array.md
+
+ Title: IS_ARRAY in Azure Cosmos DB query language
+description: Learn about SQL system function IS_ARRAY in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_ARRAY (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is an array.
+
+## Syntax
+
+```sql
+IS_ARRAY(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_ARRAY` function.
+
+```sql
+SELECT
+ IS_ARRAY(true) AS isArray1,
+ IS_ARRAY(1) AS isArray2,
+ IS_ARRAY("value") AS isArray3,
+ IS_ARRAY(null) AS isArray4,
+ IS_ARRAY({prop: "value"}) AS isArray5,
+ IS_ARRAY([1, 2, 3]) AS isArray6,
+ IS_ARRAY({prop: "value"}.prop2) AS isArray7
+```
+
+ Here is the result set.
+
+```json
+[{"isArray1":false,"isArray2":false,"isArray3":false,"isArray4":false,"isArray5":false,"isArray6":true,"isArray7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Bool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-is-bool.md
+
+ Title: IS_BOOL in Azure Cosmos DB query language
+description: Learn about SQL system function IS_BOOL in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_BOOL (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a Boolean.
+
+## Syntax
+
+```sql
+IS_BOOL(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_BOOL` function.
+
+```sql
+SELECT
+ IS_BOOL(true) AS isBool1,
+ IS_BOOL(1) AS isBool2,
+ IS_BOOL("value") AS isBool3,
+ IS_BOOL(null) AS isBool4,
+ IS_BOOL({prop: "value"}) AS isBool5,
+ IS_BOOL([1, 2, 3]) AS isBool6,
+ IS_BOOL({prop: "value"}.prop2) AS isBool7
+```
+
+ Here is the result set.
+
+```json
+[{"isBool1":true,"isBool2":false,"isBool3":false,"isBool4":false,"isBool5":false,"isBool6":false,"isBool7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Defined https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-is-defined.md
+
+ Title: IS_DEFINED in Azure Cosmos DB query language
+description: Learn about SQL system function IS_DEFINED in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_DEFINED (Azure Cosmos DB)
+
+ Returns a Boolean indicating if the property has been assigned a value.
+
+## Syntax
+
+```sql
+IS_DEFINED(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks for the presence of a property within the specified JSON document. The first returns true since "a" is present, but the second returns false since "b" is absent.
+
+```sql
+SELECT IS_DEFINED({ "a" : 5 }.a) AS isDefined1, IS_DEFINED({ "a" : 5 }.b) AS isDefined2
+```
+
+ Here is the result set.
+
+```json
+[{"isDefined1":true,"isDefined2":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Null https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-is-null.md
+
+ Title: IS_NULL in Azure Cosmos DB query language
+description: Learn about SQL system function IS_NULL in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_NULL (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is null.
+
+## Syntax
+
+```sql
+IS_NULL(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_NULL` function.
+
+```sql
+SELECT
+ IS_NULL(true) AS isNull1,
+ IS_NULL(1) AS isNull2,
+ IS_NULL("value") AS isNull3,
+ IS_NULL(null) AS isNull4,
+ IS_NULL({prop: "value"}) AS isNull5,
+ IS_NULL([1, 2, 3]) AS isNull6,
+ IS_NULL({prop: "value"}.prop2) AS isNull7
+```
+
+ Here is the result set.
+
+```json
+[{"isNull1":false,"isNull2":false,"isNull3":false,"isNull4":true,"isNull5":false,"isNull6":false,"isNull7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-is-number.md
+
+ Title: IS_NUMBER in Azure Cosmos DB query language
+description: Learn about SQL system function IS_NUMBER in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_NUMBER (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a number.
+
+## Syntax
+
+```sql
+IS_NUMBER(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_NUMBER` function.
+
+```sql
+SELECT
+ IS_NUMBER(true) AS isNum1,
+ IS_NUMBER(1) AS isNum2,
+ IS_NUMBER("value") AS isNum3,
+ IS_NUMBER(null) AS isNum4,
+ IS_NUMBER({prop: "value"}) AS isNum5,
+ IS_NUMBER([1, 2, 3]) AS isNum6,
+ IS_NUMBER({prop: "value"}.prop2) AS isNum7
+```
+
+ Here is the result set.
+
+```json
+[{"isNum1":false,"isNum2":true,"isNum3":false,"isNum4":false,"isNum5":false,"isNum6":false,"isNum7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Object https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-is-object.md
+
+ Title: IS_OBJECT in Azure Cosmos DB query language
+description: Learn about SQL system function IS_OBJECT in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_OBJECT (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a JSON object.
+
+## Syntax
+
+```sql
+IS_OBJECT(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_OBJECT` function.
+
+```sql
+SELECT
+ IS_OBJECT(true) AS isObj1,
+ IS_OBJECT(1) AS isObj2,
+ IS_OBJECT("value") AS isObj3,
+ IS_OBJECT(null) AS isObj4,
+ IS_OBJECT({prop: "value"}) AS isObj5,
+ IS_OBJECT([1, 2, 3]) AS isObj6,
+ IS_OBJECT({prop: "value"}.prop2) AS isObj7
+```
+
+ Here is the result set.
+
+```json
+[{"isObj1":false,"isObj2":false,"isObj3":false,"isObj4":false,"isObj5":true,"isObj6":false,"isObj7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Primitive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-is-primitive.md
+
+ Title: IS_PRIMITIVE in Azure Cosmos DB query language
+description: Learn about SQL system function IS_PRIMITIVE in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_PRIMITIVE (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a primitive (string, Boolean, numeric, or null).
+
+## Syntax
+
+```sql
+IS_PRIMITIVE(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array and undefined types using the `IS_PRIMITIVE` function.
+
+```sql
+SELECT
+ IS_PRIMITIVE(true) AS isPrim1,
+ IS_PRIMITIVE(1) AS isPrim2,
+ IS_PRIMITIVE("value") AS isPrim3,
+ IS_PRIMITIVE(null) AS isPrim4,
+ IS_PRIMITIVE({prop: "value"}) AS isPrim5,
+ IS_PRIMITIVE([1, 2, 3]) AS isPrim6,
+ IS_PRIMITIVE({prop: "value"}.prop2) AS isPrim7
+```
+
+ Here is the result set.
+
+```json
+[{"isPrim1": true, "isPrim2": true, "isPrim3": true, "isPrim4": true, "isPrim5": false, "isPrim6": false, "isPrim7": false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is String https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-is-string.md
+
+ Title: IS_STRING in Azure Cosmos DB query language
+description: Learn about SQL system function IS_STRING in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_STRING (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a string.
+
+## Syntax
+
+```sql
+IS_STRING(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_STRING` function.
+
+```sql
+SELECT
+ IS_STRING(true) AS isStr1,
+ IS_STRING(1) AS isStr2,
+ IS_STRING("value") AS isStr3,
+ IS_STRING(null) AS isStr4,
+ IS_STRING({prop: "value"}) AS isStr5,
+ IS_STRING([1, 2, 3]) AS isStr6,
+ IS_STRING({prop: "value"}.prop2) AS isStr7
+```
+
+ Here is the result set.
+
+```json
+[{"isStr1":false,"isStr2":false,"isStr3":true,"isStr4":false,"isStr5":false,"isStr6":false,"isStr7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Join https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-join.md
+
+ Title: SQL JOIN queries for Azure Cosmos DB
+description: Learn how to JOIN multiple tables in Azure Cosmos DB to query the data
++++ Last updated : 08/06/2021+++
+# Joins in Azure Cosmos DB
+
+In a relational database, joins across tables are the logical corollary to designing normalized schemas. In contrast, the SQL API uses the denormalized data model of schema-free items, which is the logical equivalent of a *self-join*.
+
+> [!NOTE]
+> In Azure Cosmos DB, joins are scoped to a single item. Cross-item and cross-container joins are not supported. In NoSQL databases like Azure Cosmos DB, good [data modeling](../modeling-data.md) can help avoid the need for cross-item and cross-container joins.
+
+Joins result in a complete cross product of the sets participating in the join. The result of an N-way join is a set of N-element tuples, where each value in the tuple is associated with the aliased set participating in the join and can be accessed by referencing that alias in other clauses.
+
+## Syntax
+
+The language supports the syntax `<from_source1> JOIN <from_source2> JOIN ... JOIN <from_sourceN>`, where each source defines an alias `input_alias1, input_alias2, ..., input_aliasN`. This FROM clause returns a set of N-tuples (tuples with N values). Each tuple has values produced by iterating all container aliases over their respective sets.
+
+**Example 1** - 2 sources
+
+- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
+
+- Let `<from_source2>` be document-scoped referencing `input_alias1` and represent sets:
+
+ {1, 2} for `input_alias1 = A,`
+
+ {3} for `input_alias1 = B,`
+
+ {4, 5} for `input_alias1 = C,`
+
+- The FROM clause `<from_source1> JOIN <from_source2>` will result in the following tuples:
+
+ (`input_alias1, input_alias2`):
+
+ `(A, 1), (A, 2), (B, 3), (C, 4), (C, 5)`
+
+**Example 2** - 3 sources
+
+- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
+
+- Let `<from_source2>` be document-scoped referencing `input_alias1` and represent sets:
+
+ {1, 2} for `input_alias1 = A,`
+
+ {3} for `input_alias1 = B,`
+
+ {4, 5} for `input_alias1 = C,`
+
+- Let `<from_source3>` be document-scoped referencing `input_alias2` and represent sets:
+
+ {100, 200} for `input_alias2 = 1,`
+
+ {300} for `input_alias2 = 3,`
+
+- The FROM clause `<from_source1> JOIN <from_source2> JOIN <from_source3>` will result in the following tuples:
+
+ (`input_alias1, input_alias2, input_alias3`):
+
+ (A, 1, 100), (A, 1, 200), (B, 3, 300)
+
+ > [!NOTE]
+ > Note the lack of tuples for other values of `input_alias1` and `input_alias2`, for which `<from_source3>` did not return any values.
+
+**Example 3** - 3 sources
+
+- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
+
+- Let `<from_source2>` be document-scoped referencing `input_alias1` and represent sets:
+
+ {1, 2} for `input_alias1 = A,`
+
+ {3} for `input_alias1 = B,`
+
+ {4, 5} for `input_alias1 = C,`
+
+- Let `<from_source3>` be scoped to `input_alias1` and represent sets:
+
+ {100, 200} for `input_alias1 = A,`
+
+ {300} for `input_alias1 = C,`
+
+- The FROM clause `<from_source1> JOIN <from_source2> JOIN <from_source3>` will result in the following tuples:
+
+ (`input_alias1, input_alias2, input_alias3`):
+
+ (A, 1, 100), (A, 1, 200), (A, 2, 100), (A, 2, 200), (C, 4, 300) , (C, 5, 300)
+
+ > [!NOTE]
+ > This results in a cross product between `<from_source2>` and `<from_source3>` because both are scoped to the same `<from_source1>`. The result is 4 (2x2) tuples with value A, 0 (1x0) tuples with value B, and 2 (2x1) tuples with value C.
+
+## Examples
+
+The following examples show how the JOIN clause works. Before you run these examples, upload the sample [family data](sql-query-getting-started.md#upload-sample-data). In the following example, the result is empty, since the cross product of each item from source and an empty set is empty:
+
+```sql
+ SELECT f.id
+ FROM Families f
+ JOIN f.NonExistent
+```
+
+The result is:
+
+```json
+ [{
+ }]
+```
+
+In the following example, the join is a cross product between two JSON objects, the item root `id` and the `children` subroot. The fact that `children` is an array doesn't affect the join, because it deals with a single root, the `children` array. The result contains only two items, because the cross product of each item with the array yields exactly one item.
+
+```sql
+ SELECT f.id
+ FROM Families f
+ JOIN f.children
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "AndersenFamily"
+ },
+ {
+ "id": "WakefieldFamily"
+ }
+ ]
+```
+
+The following example shows a more conventional join:
+
+```sql
+ SELECT f.id
+ FROM Families f
+ JOIN c IN f.children
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "AndersenFamily"
+ },
+ {
+ "id": "WakefieldFamily"
+ },
+ {
+ "id": "WakefieldFamily"
+ }
+ ]
+```
+
+The FROM source of the JOIN clause is an iterator. So, the flow in the preceding example is:
+
+1. Expand each child element `c` in the array.
+2. Apply a cross product with the root of the item `f` with each child element `c` that the first step flattened.
+3. Finally, project the root object `f` `id` property alone.
+
+The first item, `AndersenFamily`, contains only one `children` element, so the result set contains only a single object. The second item, `WakefieldFamily`, contains two `children`, so the cross product produces two objects, one for each `children` element. The root fields in both these items are the same, just as you would expect in a cross product.
+
+The real utility of the JOIN clause is to form tuples from the cross product in a shape that's otherwise difficult to project, and then to filter on any combination of the tuple elements, as the following examples show.
+
+```sql
+ SELECT
+ f.id AS familyName,
+ c.givenName AS childGivenName,
+ c.firstName AS childFirstName,
+ p.givenName AS petName
+ FROM Families f
+ JOIN c IN f.children
+ JOIN p IN c.pets
+```
+
+The results are:
+
+```json
+ [
+ {
+ "familyName": "AndersenFamily",
+ "childFirstName": "Henriette Thaulow",
+ "petName": "Fluffy"
+ },
+ {
+ "familyName": "WakefieldFamily",
+ "childGivenName": "Jesse",
+ "petName": "Goofy"
+ },
+ {
+ "familyName": "WakefieldFamily",
+ "childGivenName": "Jesse",
+ "petName": "Shadow"
+ }
+ ]
+```
+
+You could view the cross product in the preceding double-join example as the following pseudo-code:
+
+```
+ for-each(Family f in Families)
+ {
+ for-each(Child c in f.children)
+ {
+ for-each(Pet p in c.pets)
+ {
+ return (Tuple(f.id AS familyName,
+ c.givenName AS childGivenName,
+ c.firstName AS childFirstName,
+ p.givenName AS petName));
+ }
+ }
+ }
+```
+
+`AndersenFamily` has one child who has one pet, so the cross product yields one row (1\*1\*1) from this family. `WakefieldFamily` has two children, only one of whom has pets, but that child has two pets. The cross product for this family yields 1\*1\*2 = 2 rows.
+
+In the next example, there is an additional filter on `pet`, which excludes all the tuples where the pet name is not `Shadow`. You can build tuples from arrays, filter on any of the elements of the tuple, and project any combination of the elements.
+
+```sql
+ SELECT
+ f.id AS familyName,
+ c.givenName AS childGivenName,
+ c.firstName AS childFirstName,
+ p.givenName AS petName
+ FROM Families f
+ JOIN c IN f.children
+ JOIN p IN c.pets
+ WHERE p.givenName = "Shadow"
+```
+
+The results are:
+
+```json
+ [
+ {
+ "familyName": "WakefieldFamily",
+ "childGivenName": "Jesse",
+ "petName": "Shadow"
+ }
+ ]
+```
+
+If your query has a JOIN and filters, you can rewrite part of the query as a [subquery](sql-query-subquery.md#optimize-join-expressions) to improve performance.
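+
+For example, the preceding `Shadow` query could be rewritten along these lines; this is a sketch of the pattern described in the subquery article, not a benchmarked recommendation:
+
+```sql
+SELECT
+    f.id AS familyName,
+    c.givenName AS childGivenName,
+    p.givenName AS petName
+FROM Families f
+JOIN c IN f.children
+JOIN (SELECT VALUE pet FROM pet IN c.pets WHERE pet.givenName = "Shadow") AS p
+```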
+
+## Next steps
+
+- [Getting started](sql-query-getting-started.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmosdb-dotnet)
+- [Subqueries](sql-query-subquery.md)
cosmos-db Sql Query Keywords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-keywords.md
+
+ Title: SQL keywords for Azure Cosmos DB
+description: Learn about SQL keywords for Azure Cosmos DB.
++++ Last updated : 01/20/2021+++
+# Keywords in Azure Cosmos DB
+
+This article details keywords that can be used in Azure Cosmos DB SQL queries.
+
+## BETWEEN
+
+You can use the `BETWEEN` keyword to express queries against ranges of string or numerical values. For example, the following query returns all items in which the first child's grade is 1-5, inclusive.
+
+```sql
+ SELECT *
+ FROM Families.children[0] c
+ WHERE c.grade BETWEEN 1 AND 5
+```
+
+You can also use the `BETWEEN` keyword in the `SELECT` clause, as in the following example.
+
+```sql
+ SELECT (c.grade BETWEEN 0 AND 10)
+ FROM Families.children[0] c
+```
+
+In the SQL API, unlike ANSI SQL, you can express range queries against properties of mixed types. For example, `grade` might be a number like `5` in some items and a string like `grade4` in others. In these cases, as in JavaScript, the comparison between two different types results in `Undefined`, so the item is skipped.
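+
+If you want to exclude such mixed-type values explicitly rather than relying on the skipped comparison, one approach (a minimal sketch) is to add a type check:
+
+```sql
+SELECT *
+FROM Families.children[0] c
+WHERE IS_NUMBER(c.grade) AND c.grade BETWEEN 1 AND 5
+```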
+
+## DISTINCT
+
+The `DISTINCT` keyword eliminates duplicates in the query's projection.
+
+In this example, the query projects values for each last name:
+
+```sql
+SELECT DISTINCT VALUE f.lastName
+FROM Families f
+```
+
+The results are:
+
+```json
+[
+ "Andersen"
+]
+```
+
+You can also project unique objects. In this case, the `lastName` field does not exist in one of the two documents, so the query returns an empty object.
+
+```sql
+SELECT DISTINCT f.lastName
+FROM Families f
+```
+
+The results are:
+
+```json
+[
+ {
+ "lastName": "Andersen"
+ },
+ {}
+]
+```
+
+`DISTINCT` can also be used in the projection within a subquery:
+
+```sql
+SELECT f.id, ARRAY(SELECT DISTINCT VALUE c.givenName FROM c IN f.children) as ChildNames
+FROM f
+```
+
+This query projects an array that contains each child's `givenName` with duplicates removed. This array is aliased as `ChildNames` and projected in the outer query.
+
+The results are:
+
+```json
+[
+ {
+ "id": "AndersenFamily",
+ "ChildNames": []
+ },
+ {
+ "id": "WakefieldFamily",
+ "ChildNames": [
+ "Jesse",
+ "Lisa"
+ ]
+ }
+]
+```
+
+Queries with an aggregate system function and a subquery with `DISTINCT` are not supported. For example, the following query is not supported:
+
+```sql
+SELECT COUNT(1) FROM (SELECT DISTINCT f.lastName FROM f)
+```
+
+## LIKE
+
+Returns a Boolean value depending on whether a specific character string matches a specified pattern. A pattern can include regular characters and wildcard characters. You can write logically equivalent queries using either the `LIKE` keyword or the [RegexMatch](sql-query-regexmatch.md) system function. You'll observe the same index utilization regardless of which one you choose. Therefore, use `LIKE` if you prefer its syntax to regular expressions.
+
+> [!NOTE]
+> Because `LIKE` can utilize an index, you should [create a range index](./../index-policy.md) for properties you are comparing using `LIKE`.
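+
+For example, the `%fruit%` filter shown later in this article has a logically equivalent RegexMatch form; this is a sketch of the equivalence, not a recommendation of one syntax over the other:
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch(c.description, "fruit")
+```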
+
+You can use the following wildcard characters with LIKE:
+
+| Wildcard character | Description | Example |
+| ------------------ | ----------- | ------- |
+| % | Any string of zero or more characters | WHERE c.description LIKE "%SO%PS%" |
+| _ (underscore) | Any single character | WHERE c.description LIKE "%SO_PS%" |
+| [ ] | Any single character within the specified range ([a-f]) or set ([abcdef]). | WHERE c.description LIKE "%SO[t-z]PS%" |
+| [^] | Any single character not within the specified range ([^a-f]) or set ([^abcdef]). | WHERE c.description LIKE "%SO[^abc]PS%" |
+### Using LIKE with the % wildcard character
+
+The `%` character matches any string of zero or more characters. For example, by placing a `%` at the beginning and end of the pattern, the following query returns all items with a description that contains `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE "%fruit%"
+```
+
+If you only used a `%` character at the end of the pattern, you'd only return items with a description that started with `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE "fruit%"
+```
+
+### Using NOT LIKE
+
+The following example returns all items with a description that does not contain `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description NOT LIKE "%fruit%"
+```
+
+### Using the escape clause
+
+You can search for patterns that include one or more wildcard characters using the ESCAPE clause. For example, if you wanted to search for descriptions that contained the string `20-30%`, you wouldn't want to interpret the `%` as a wildcard character.
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE '%20-30!%%' ESCAPE '!'
+```
+
+### Using wildcard characters as literals
+
+You can enclose wildcard characters in brackets to treat them as literal characters. When you enclose a wildcard character in brackets, you remove any special attributes. Here are some examples:
+
+| Pattern | Meaning |
+| ------- | ------- |
+| LIKE "20-30[%]" | 20-30% |
+| LIKE "[_]n" | _n |
+| LIKE "[ [ ]" | [ |
+| LIKE "]" | ] |
+
+## IN
+
+Use the IN keyword to check whether a specified value matches any value in a list. For example, the following query returns all family items where the `id` is `WakefieldFamily` or `AndersenFamily`.
+
+```sql
+ SELECT *
+ FROM Families
+ WHERE Families.id IN ('AndersenFamily', 'WakefieldFamily')
+```
+
+The following example returns all items where the state is any of the specified values:
+
+```sql
+ SELECT *
+ FROM Families
+ WHERE Families.address.state IN ("NY", "WA", "CA", "PA", "OH", "OR", "MI", "WI", "MN", "FL")
+```
+
+The SQL API provides support for [iterating over JSON arrays](sql-query-object-array.md#Iteration) with the IN keyword in the FROM source.
+
+If you include your partition key in the `IN` filter, your query will automatically filter to only the relevant partitions.
+
+## TOP
+
+The TOP keyword returns the first `N` number of query results in an undefined order. As a best practice, use TOP with the `ORDER BY` clause to limit results to the first `N` number of ordered values. Combining these two clauses is the only way to predictably indicate which rows TOP affects.
+
+You can use TOP with a constant value, as in the following example, or with a variable value using parameterized queries.
+
+```sql
+ SELECT TOP 1 *
+ FROM Families f
+```
+
+The results are:
+
+```json
+ [{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow", "gender": "female", "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+ }]
+```
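+
+TOP also works with a query parameter instead of a constant. As a sketch, `@maxResults` is an illustrative parameter name that you would supply through your SDK's parameterized-query support:
+
+```sql
+SELECT TOP @maxResults *
+FROM Families f
+ORDER BY f.id ASC
+```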
+
+## Next steps
+
+- [Getting started](sql-query-getting-started.md)
+- [Joins](sql-query-join.md)
+- [Subqueries](sql-query-subquery.md)
cosmos-db Sql Query Left https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-left.md
+
+ Title: LEFT in Azure Cosmos DB query language
+description: Learn about SQL system function LEFT in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# LEFT (Azure Cosmos DB)
+
+ Returns the left part of a string with the specified number of characters.
+
+## Syntax
+
+```sql
+LEFT(<str_expr>, <num_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is the string expression to extract characters from.
+
+*num_expr*
+ Is a numeric expression which specifies the number of characters.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example returns the left part of "abc" for various length values.
+
+```sql
+SELECT LEFT("abc", 1) AS l1, LEFT("abc", 2) AS l2
+```
+
+ Here is the result set.
+
+```json
+[{"l1": "a", "l2": "ab"}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Length https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-length.md
+
+ Title: LENGTH in Azure Cosmos DB query language
+description: Learn about SQL system function LENGTH in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# LENGTH (Azure Cosmos DB)
+
+ Returns the number of characters of the specified string expression.
+
+## Syntax
+
+```sql
+LENGTH(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is the string expression to be evaluated.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the length of a string.
+
+```sql
+SELECT LENGTH("abc") AS len
+```
+
+ Here is the result set.
+
+```json
+[{"len": 3}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Linq To Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-linq-to-sql.md
+
+ Title: LINQ to SQL translation in Azure Cosmos DB
+description: Learn the LINQ operators supported and how the LINQ queries are mapped to SQL queries in Azure Cosmos DB.
++++ Last updated : 08/06/2021+++
+# LINQ to SQL translation
+
+The Azure Cosmos DB query provider performs a best effort mapping from a LINQ query into a Cosmos DB SQL query. If you want to get the SQL query that is translated from LINQ, use the `ToString()` method on the generated `IQueryable` object. The following description assumes a basic familiarity with [LINQ](/dotnet/csharp/programming-guide/concepts/linq/introduction-to-linq-queries). In addition to LINQ, Azure Cosmos DB also supports [Entity Framework Core](/ef/core/providers/cosmos/?tabs=dotnet-core-cli), which works with the SQL API.
+
+> [!NOTE]
+> We recommend using the latest [.NET SDK version](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.20.1)
+
+The query provider type system supports only the JSON primitive types: numeric, Boolean, string, and null.
+
+The query provider supports the following scalar expressions:
+
+- Constant values, including constant values of the primitive data types at query evaluation time.
+
+- Property/array index expressions that refer to the property of an object or an array element. For example:
+
+ ```
+ family.Id;
+ family.children[0].familyName;
+ family.children[0].grade;
+ family.children[n].grade; //n is an int variable
+ ```
+
+- Arithmetic expressions, including common arithmetic expressions on numerical and Boolean values. For the complete list, see the [Azure Cosmos DB SQL specification](sql-query-aggregate-functions.md).
+
+ ```
+ 2 * family.children[0].grade;
+ x + y;
+ ```
+
+- String comparison expressions, which include comparing a string value to some constant string value.
+
+ ```
+ mother.familyName == "Wakefield";
+ child.givenName == s; //s is a string variable
+ ```
+
+- Object/array creation expressions, which return an object of compound value type or anonymous type, or an array of such objects. You can nest these values.
+
+ ```
+ new Parent { familyName = "Wakefield", givenName = "Robin" };
+ new { first = 1, second = 2 }; //an anonymous type with two fields
+ new int[] { 3, child.grade, 5 };
+ ```
+
+## Using LINQ
+
+You can create a LINQ query with `GetItemLinqQueryable`. This example shows LINQ query generation and asynchronous execution with a `FeedIterator`:
+
+```csharp
+using (FeedIterator<Book> setIterator = container.GetItemLinqQueryable<Book>()
+ .Where(b => b.Title == "War and Peace")
+ .ToFeedIterator<Book>())
+ {
+ //Asynchronous query execution
+ while (setIterator.HasMoreResults)
+ {
+ foreach(var item in await setIterator.ReadNextAsync()){
+ {
+ Console.WriteLine(item.cost);
+ }
+ }
+ }
+ }
+```
+
+## <a id="SupportedLinqOperators"></a>Supported LINQ operators
+
+The LINQ provider included with the SQL .NET SDK supports the following operators:
+
+- **Select**: Projections translate to [SELECT](sql-query-select.md), including object construction.
+- **Where**: Filters translate to [WHERE](sql-query-where.md), and support translation between `&&`, `||`, and `!` to the SQL operators
+- **SelectMany**: Allows unwinding of arrays to the [JOIN](sql-query-join.md) clause. Use it to chain or nest expressions to filter on array elements.
+- **OrderBy** and **OrderByDescending**: Translate to [ORDER BY](sql-query-order-by.md) with ASC or DESC.
+- **Count**, **Sum**, **Min**, **Max**, and **Average** operators for [aggregation](sql-query-aggregate-functions.md), and their async equivalents **CountAsync**, **SumAsync**, **MinAsync**, **MaxAsync**, and **AverageAsync**.
+- **CompareTo**: Translates to range comparisons. Commonly used for strings, since they can't be compared with the `<` and `>` operators in .NET.
+- **Skip** and **Take**: Translates to [OFFSET and LIMIT](sql-query-offset-limit.md) for limiting results from a query and doing pagination.
+- **Math functions**: Supports translation from .NET `Abs`, `Acos`, `Asin`, `Atan`, `Ceiling`, `Cos`, `Exp`, `Floor`, `Log`, `Log10`, `Pow`, `Round`, `Sign`, `Sin`, `Sqrt`, `Tan`, and `Truncate` to the equivalent [built-in mathematical functions](sql-query-mathematical-functions.md).
+- **String functions**: Supports translation from .NET `Concat`, `Contains`, `Count`, `EndsWith`,`IndexOf`, `Replace`, `Reverse`, `StartsWith`, `SubString`, `ToLower`, `ToUpper`, `TrimEnd`, and `TrimStart` to the equivalent [built-in string functions](sql-query-string-functions.md).
+- **Array functions**: Supports translation from .NET `Concat`, `Contains`, and `Count` to the equivalent [built-in array functions](sql-query-array-functions.md).
+- **Geospatial Extension functions**: Supports translation from stub methods `Distance`, `IsValid`, `IsValidDetailed`, and `Within` to the equivalent [built-in geospatial functions](sql-query-geospatial-query.md).
+- **User-Defined Function Extension function**: Supports translation from the stub method [CosmosLinq.InvokeUserDefinedFunction](/dotnet/api/microsoft.azure.cosmos.linq.cosmoslinq.invokeuserdefinedfunction?view=azure-dotnet&preserve-view=true) to the corresponding [user-defined function](sql-query-udfs.md).
+- **Miscellaneous**: Supports translation of `Coalesce` and conditional [operators](sql-query-operators.md). Can translate `Contains` to String CONTAINS, ARRAY_CONTAINS, or IN, depending on context.
+
+## Examples
+
+The following examples illustrate how some of the standard LINQ query operators translate to queries in Azure Cosmos DB.
+
+### Select operator
+
+The syntax is `input.Select(x => f(x))`, where `f` is a scalar expression. The `input`, in this case, would be an `IQueryable` object.
+
+**Select operator, example 1:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => family.parents[0].familyName);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE f.parents[0].familyName
+ FROM Families f
+ ```
+
+**Select operator, example 2:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => family.children[0].grade + c); // c is an int variable
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE f.children[0].grade + c
+ FROM Families f
+ ```
+
+**Select operator, example 3:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => new
+ {
+ name = family.children[0].familyName,
+ grade = family.children[0].grade + 3
+ });
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE {"name":f.children[0].familyName,
+ "grade": f.children[0].grade + 3 }
+ FROM Families f
+ ```
+
+### SelectMany operator
+
+The syntax is `input.SelectMany(x => f(x))`, where `f` is a scalar expression that returns a container type.
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family => family.children);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE child
+ FROM child IN Families.children
+ ```
+
+### Where operator
+
+The syntax is `input.Where(x => f(x))`, where `f` is a scalar expression, which returns a Boolean value.
+
+**Where operator, example 1:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Where(family=> family.parents[0].familyName == "Wakefield");
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ ```
+
+**Where operator, example 2:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Where(
+ family => family.parents[0].familyName == "Wakefield" &&
+ family.children[0].grade < 3);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ AND f.children[0].grade < 3
+ ```
+
+## Composite SQL queries
+
+You can compose the preceding operators to form more powerful queries. Since Cosmos DB supports nested containers, you can concatenate or nest the composition.
+
+### Concatenation
+
+The syntax is `input(.|.SelectMany())(.Select()|.Where())*`. A concatenated query can start with an optional `SelectMany` query, followed by multiple `Select` or `Where` operators.
+
+**Concatenation, example 1:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => family.parents[0])
+ .Where(parent => parent.familyName == "Wakefield");
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ ```
+
+**Concatenation, example 2:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Where(family => family.children[0].grade > 3)
+ .Select(family => family.parents[0].familyName);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE f.parents[0].familyName
+ FROM Families f
+ WHERE f.children[0].grade > 3
+ ```
+
+**Concatenation, example 3:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => new { grade=family.children[0].grade}).
+ Where(anon=> anon.grade < 3);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE ({grade: f.children[0].grade}.grade < 3)
+ ```
+
+**Concatenation, example 4:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family => family.parents)
+ .Where(parent => parent.familyName == "Wakefield");
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM p IN Families.parents
+ WHERE p.familyName = "Wakefield"
+ ```
+
+### Nesting
+
+The syntax is `input.SelectMany(x=>x.Q())` where `Q` is a `Select`, `SelectMany`, or `Where` operator.
+
+A nested query applies the inner query to each element of the outer container. One important feature is that the inner query can refer to the fields of the elements in the outer container, like a self-join.
+
+**Nesting, example 1:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family=>
+ family.parents.Select(p => p.familyName));
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE p.familyName
+ FROM Families f
+ JOIN p IN f.parents
+ ```
+
+**Nesting, example 2:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family =>
+ family.children.Where(child => child.familyName == "Jeff"));
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ JOIN c IN f.children
+ WHERE c.familyName = "Jeff"
+ ```
+
+**Nesting, example 3:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family => family.children.Where(
+ child => child.familyName == family.parents[0].familyName));
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ JOIN c IN f.children
+ WHERE c.familyName = f.parents[0].familyName
+ ```
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Model document data](../modeling-data.md)
cosmos-db Sql Query Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-log.md
+
+ Title: LOG in Azure Cosmos DB query language
+description: Learn about the LOG SQL system function in Azure Cosmos DB to return the natural logarithm of the specified numeric expression
++++ Last updated : 09/13/2019+++
+# LOG (Azure Cosmos DB)
+
+ Returns the natural logarithm of the specified numeric expression.
+
+## Syntax
+
+```sql
+LOG (<numeric_expr> [, <base>])
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+*base*
+ Optional numeric argument that sets the base for the logarithm.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+ By default, LOG() returns the natural logarithm. You can change the base of the logarithm to another value by using the optional base parameter.
+
+ The natural logarithm is the logarithm to the base **e**, where **e** is an irrational constant approximately equal to 2.718281828.
+
+ The natural logarithm of the exponential of a number is the number itself: LOG( EXP( n ) ) = n. And the exponential of the natural logarithm of a number is the number itself: EXP( LOG( n ) ) = n.
+
+ This system function will not utilize the index.
+
+## Examples
+
+ The following example declares a variable and returns the logarithm value of the specified variable (10).
+
+```sql
+SELECT LOG(10) AS log
+```
+
+ Here is the result set.
+
+```json
+[{log: 2.3025850929940459}]
+```
+
+ The following example calculates the `LOG` for the exponent of a number.
+
+```sql
+SELECT EXP(LOG(10)) AS expLog
+```
+
+ Here is the result set.
+
+```json
+[{expLog: 10.000000000000002}]
+```
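+
+The preceding examples use the natural logarithm; the optional base argument isn't shown. As a sketch (assuming the second argument is the base, per the syntax above), a base-2 logarithm would look like the following and would be expected to return `[{logBase2: 4}]`:
+
+```sql
+SELECT LOG(16, 2) AS logBase2
+```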
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Log10 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-log10.md
+
+ Title: LOG10 in Azure Cosmos DB query language
+description: Learn about the LOG10 SQL system function in Azure Cosmos DB to return the base-10 logarithm of the specified numeric expression
++++ Last updated : 09/13/2019+++
+# LOG10 (Azure Cosmos DB)
+
+ Returns the base-10 logarithm of the specified numeric expression.
+
+## Syntax
+
+```sql
+LOG10 (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+ The LOG10 and POWER functions are inversely related to one another. For example, 10 ^ LOG10(n) = n. This system function will not utilize the index.
+
+## Examples
+
+ The following example declares a variable and returns the LOG10 value of the specified variable (100).
+
+```sql
+SELECT LOG10(100) AS log10
+```
+
+ Here is the result set.
+
+```json
+[{log10: 2}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Lower https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-lower.md
+
+ Title: LOWER in Azure Cosmos DB query language
+description: Learn about the LOWER SQL system function in Azure Cosmos DB to return a string expression after converting uppercase character data to lowercase
++++ Last updated : 04/07/2021+++
+# LOWER (Azure Cosmos DB)
+
+ Returns a string expression after converting uppercase character data to lowercase.
+
+The LOWER system function does not utilize the index. If you plan to do frequent case-insensitive comparisons, the LOWER system function may consume a significant number of RUs. If so, instead of using the LOWER system function to normalize data each time for comparisons, you can normalize the casing upon insertion and compare against the already-lowercased value, as the sketch below shows.
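+
+A minimal sketch of the two forms; the second assumes the `name` property was stored lowercased when the item was written:
+
+```sql
+SELECT * FROM c WHERE LOWER(c.name) = 'bob'
+```
+
+```sql
+SELECT * FROM c WHERE c.name = 'bob'
+```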
+
+## Syntax
+
+```sql
+LOWER(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `LOWER` in a query.
+
+```sql
+SELECT LOWER("Abc") AS lower
+```
+
+ Here is the result set.
+
+```json
+[{"lower": "abc"}]
+
+```
+
+## Remarks
+
+This system function will not [use indexes](../index-overview.md#index-usage).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Ltrim https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-ltrim.md
+
+ Title: LTRIM in Azure Cosmos DB query language
+description: Learn about the LTRIM SQL system function in Azure Cosmos DB to return a string expression after it removes leading blanks
++++ Last updated : 09/13/2019+++
+# LTRIM (Azure Cosmos DB)
+
+ Returns a string expression after it removes leading blanks.
+
+## Syntax
+
+```sql
+LTRIM(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `LTRIM` inside a query.
+
+```sql
+SELECT LTRIM(" abc") AS l1, LTRIM("abc") AS l2, LTRIM("abc ") AS l3
+```
+
+ Here is the result set.
+
+```json
+[{"l1": "abc", "l2": "abc", "l3": "abc "}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Mathematical Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-mathematical-functions.md
+
+ Title: Mathematical functions in Azure Cosmos DB query language
+description: Learn about the mathematical functions in Azure Cosmos DB to perform a calculation, based on input values that are provided as arguments, and return a numeric value.
++++ Last updated : 06/22/2021+++
+# Mathematical functions (Azure Cosmos DB)
+
+The mathematical functions each perform a calculation, based on input values that are provided as arguments, and return a numeric value.
+
+You can run queries like the following example:
+
+```sql
+ SELECT VALUE ABS(-4)
+```
+
+The result is:
+
+```json
+ [4]
+```
+
+## Functions
+
+The following supported built-in mathematical functions perform a calculation, usually based on input arguments, and return a numeric expression. The **index usage** column assumes, where applicable, that you're comparing the mathematical system function to another value with an equality filter.
+
+| System function | Index usage | [Index usage in queries with scalar aggregate functions](../index-overview.md#index-utilization-for-scalar-aggregate-functions) | Remarks |
+| --- | --- | --- | --- |
+| [ABS](sql-query-abs.md) | Index seek | Index seek | |
+| [ACOS](sql-query-acos.md) | Full scan | Full scan | |
+| [ASIN](sql-query-asin.md) | Full scan | Full scan | |
+| [ATAN](sql-query-atan.md) | Full scan | Full scan | |
+| [ATN2](sql-query-atn2.md) | Full scan | Full scan | |
+| [CEILING](sql-query-ceiling.md) | Index seek | Index seek | |
+| [COS](sql-query-cos.md) | Full scan | Full scan | |
+| [COT](sql-query-cot.md) | Full scan | Full scan | |
+| [DEGREES](sql-query-degrees.md) | Index seek | Index seek | |
+| [EXP](sql-query-exp.md) | Full scan | Full scan | |
+| [FLOOR](sql-query-floor.md) | Index seek | Index seek | |
+| [LOG](sql-query-log.md) | Full scan | Full scan | |
+| [LOG10](sql-query-log10.md) | Full scan | Full scan | |
+| [PI](sql-query-pi.md) | N/A | N/A | PI () returns a constant value. Because the result is deterministic, comparisons with PI() can use the index. |
+| [POWER](sql-query-power.md) | Full scan | Full scan | |
+| [RADIANS](sql-query-radians.md) | Index seek | Index seek | |
+| [RAND](sql-query-rand.md) | N/A | N/A | Rand() returns a random number. Because the result is non-deterministic, comparisons that involve Rand() cannot use the index. |
+| [ROUND](sql-query-round.md) | Index seek | Index seek | |
+| [SIGN](sql-query-sign.md) | Index seek | Index seek | |
+| [SIN](sql-query-sin.md) | Full scan | Full scan | |
+| [SQRT](sql-query-sqrt.md) | Full scan | Full scan | |
+| [SQUARE](sql-query-square.md) | Full scan | Full scan | |
+| [TAN](sql-query-tan.md) | Full scan | Full scan | |
+| [TRUNC](sql-query-trunc.md) | Index seek | Index seek | |
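+
+For example, based on the table above, an equality filter over `FLOOR` can be served by an index seek, while the same filter shape over `SQRT` requires a full scan. This is a sketch against a hypothetical numeric `price` property:
+
+```sql
+SELECT *
+FROM c
+WHERE FLOOR(c.price) = 10
+```
+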
+## Next steps
+
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
+- [User Defined Functions](sql-query-udfs.md)
+- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Object Array https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-object-array.md
+
+ Title: Working with arrays and objects in Azure Cosmos DB
+description: Learn the SQL syntax to create arrays and objects in Azure Cosmos DB. This article also provides some examples to perform operations on array objects
++++ Last updated : 02/02/2021+++
+# Working with arrays and objects in Azure Cosmos DB
+
+A key feature of the Azure Cosmos DB SQL API is array and object creation. This document uses examples that can be recreated using the [Family dataset](sql-query-getting-started.md#upload-sample-data).
+
+Here's an example item in this dataset:
+
+```json
+{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+}
+```
+
+## Arrays
+
+You can construct arrays, as shown in the following example:
+
+```sql
+SELECT [f.address.city, f.address.state] AS CityState
+FROM Families f
+```
+
+The results are:
+
+```json
+[
+ {
+ "CityState": [
+ "Seattle",
+ "WA"
+ ]
+ },
+ {
+ "CityState": [
+ "NY",
+ "NY"
+ ]
+ }
+]
+```
+
+You can also use the [ARRAY expression](sql-query-subquery.md#array-expression) to construct an array from [subquery's](sql-query-subquery.md) results. This query gets all the distinct given names of children in an array.
+
+```sql
+SELECT f.id, ARRAY(SELECT DISTINCT VALUE c.givenName FROM c IN f.children) as ChildNames
+FROM f
+```
+
+The results are:
+
+```json
+[
+ {
+ "id": "AndersenFamily",
+ "ChildNames": []
+ },
+ {
+ "id": "WakefieldFamily",
+ "ChildNames": [
+ "Jesse",
+ "Lisa"
+ ]
+ }
+]
+```
+
+## <a id="Iteration"></a>Iteration
+
+The SQL API provides support for iterating over JSON arrays, with the [IN keyword](sql-query-keywords.md#in) in the FROM source. First, consider a query that doesn't use iteration:
+
+```sql
+SELECT *
+FROM Families.children
+```
+
+The results are:
+
+```json
+[
+ [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy"}]
+ }
+ ],
+ [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+ ]
+]
+```
+
+The next query performs iteration over `children` in the `Families` container. The output array is different from the preceding query. This example splits `children`, and flattens the results into a single array:
+
+```sql
+SELECT *
+FROM c IN Families.children
+```
+
+The results are:
+
+```json
+[
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ },
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+]
+```
+
+You can filter further on each individual entry of the array, as shown in the following example:
+
+```sql
+SELECT c.givenName
+FROM c IN Families.children
+WHERE c.grade = 8
+```
+
+The results are:
+
+```json
+[{
+ "givenName": "Lisa"
+}]
+```
+
+You can also aggregate over the result of an array iteration. For example, the following query counts the number of children among all families:
+
+```sql
+SELECT COUNT(1) AS Count
+FROM child IN Families.children
+```
+
+The results are:
+
+```json
+[
+ {
+ "Count": 3
+ }
+]
+```
+
+> [!NOTE]
+> When using the IN keyword for iteration, you cannot filter or project any properties outside of the array. Instead, you should use [JOINs](sql-query-join.md), as sketched below.
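+
+For example, here's a minimal sketch that uses a JOIN to project the parent property `f.id` alongside each matching child, which isn't possible with the IN-only form shown above:
+
+```sql
+SELECT f.id, c.givenName
+FROM Families f
+JOIN c IN f.children
+WHERE c.grade = 8
+```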
+
+For additional examples, read our [blog post on working with arrays in Azure Cosmos DB](https://devblogs.microsoft.com/cosmosdb/understanding-how-to-query-arrays-in-azure-cosmos-db/).
+
+## Next steps
+
+- [Getting started](sql-query-getting-started.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Joins](sql-query-join.md)
cosmos-db Sql Query Offset Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-offset-limit.md
+
+ Title: OFFSET LIMIT clause in Azure Cosmos DB
+description: Learn how to use the OFFSET LIMIT clause to skip and take a specified number of values when querying in Azure Cosmos DB
++++ Last updated : 07/29/2020+++
+# OFFSET LIMIT clause in Azure Cosmos DB
+
+The OFFSET LIMIT clause is an optional clause to skip then take some number of values from the query. The OFFSET count and the LIMIT count are required in the OFFSET LIMIT clause.
+
+When OFFSET LIMIT is used in conjunction with an ORDER BY clause, the result set is produced by doing skip and take on the ordered values. If no ORDER BY clause is used, it will result in a deterministic order of values.
+
+## Syntax
+
+```sql
+OFFSET <offset_amount> LIMIT <limit_amount>
+```
+
+## Arguments
+
+- `<offset_amount>`
+
+ Specifies the integer number of items that the query results should skip.
+
+- `<limit_amount>`
+
+ Specifies the integer number of items that the query results should include.
+
+## Remarks
+
+ Both the `OFFSET` count and the `LIMIT` count are required in the `OFFSET LIMIT` clause. If an optional `ORDER BY` clause is used, the result set is produced by doing the skip over the ordered values. Otherwise, the query will return a fixed order of values.
+
+ The RU charge of a query with `OFFSET LIMIT` will increase as the number of terms being offset increases. For queries that have [multiple pages of results](sql-query-pagination.md), we typically recommend using [continuation tokens](sql-query-pagination.md#continuation-tokens). Continuation tokens are a "bookmark" for the place where the query can later resume. If you use `OFFSET LIMIT`, there is no "bookmark". If you wanted to return the query's next page, you would have to start from the beginning.
+
+ You should use `OFFSET LIMIT` for cases when you would like to skip items entirely and save client resources. For example, you should use `OFFSET LIMIT` if you want to skip to the 1000th query result and have no need to view results 1 through 999. On the backend, `OFFSET LIMIT` still loads each item, including those that are skipped. The performance advantage is a savings in client resources by avoiding processing items that are not needed.
+
+## Examples
+
+For example, here's a query that skips the first value and returns the second value (in order of the resident city's name):
+
+```sql
+ SELECT f.id, f.address.city
+ FROM Families f
+ ORDER BY f.address.city
+ OFFSET 1 LIMIT 1
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "AndersenFamily",
+ "city": "Seattle"
+ }
+ ]
+```
+
+Here's a query that skips the first value and returns the second value (without ordering):
+
+```sql
+ SELECT f.id, f.address.city
+ FROM Families f
+ OFFSET 1 LIMIT 1
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "WakefieldFamily",
+ "city": "Seattle"
+ }
+ ]
+```
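+
+As a sketch of the scenario described in the remarks (assuming a container with well over 1,000 matching items), the following query skips the first 999 results and returns the next 10. Keep in mind that the RU charge grows with the offset, because the skipped items are still loaded on the backend:
+
+```sql
+ SELECT c.id
+ FROM c
+ ORDER BY c.id
+ OFFSET 999 LIMIT 10
+```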
+
+## Next steps
+
+- [Getting started](sql-query-getting-started.md)
+- [SELECT clause](sql-query-select.md)
+- [ORDER BY clause](sql-query-order-by.md)
cosmos-db Sql Query Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-operators.md
+
+ Title: SQL query operators for Azure Cosmos DB
+description: Learn about SQL operators such as equality, comparison, and logical operators supported by Azure Cosmos DB.
++++ Last updated : 07/29/2020+++
+# Operators in Azure Cosmos DB
+
+This article details the various operators supported by Azure Cosmos DB.
+
+## Equality and Comparison Operators
+
+The following table shows the result of equality comparisons in the SQL API between any two JSON types.
+
+| **Op** | **Undefined** | **Null** | **Boolean** | **Number** | **String** | **Object** | **Array** |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| **Undefined** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined |
+| **Null** | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined | Undefined |
+| **Boolean** | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined |
+| **Number** | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined |
+| **String** | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined |
+| **Object** | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined |
+| **Array** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** |
+
+For comparison operators such as `>`, `>=`, `!=`, `<`, and `<=`, comparison across types or between two objects or arrays produces `Undefined`.
+
+If the result of the scalar expression is `Undefined`, the item isn't included in the result, because `Undefined` doesn't equal `true`.
+
+For example, the following query's comparison between a number and string value produces `Undefined`. Therefore, the filter does not include any results.
+
+```sql
+SELECT *
+FROM c
+WHERE 7 = 'a'
+```
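+
+As a sketch, assuming a hypothetical container where `grade` is stored as a number in some items and as a string in others, you can make the intent explicit with the `IS_NUMBER` type-checking function. Items whose `grade` is a string are then excluded by the first condition rather than by a comparison that evaluates to `Undefined`:
+
+```sql
+SELECT c.id, c.grade
+FROM c
+WHERE IS_NUMBER(c.grade) AND c.grade > 5
+```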
+
+## Logical (AND, OR and NOT) operators
+
+Logical operators operate on Boolean values. The following tables show the logical truth tables for these operators:
+
+**OR operator**
+
+Returns `true` when either of the conditions is `true`.
+
+| | **True** | **False** | **Undefined** |
+| | | | |
+| **True** |True |True |True |
+| **False** |True |False |Undefined |
+| **Undefined** |True |Undefined |Undefined |
+
+**AND operator**
+
+Returns `true` when both expressions are `true`.
+
+| | **True** | **False** | **Undefined** |
+| | | | |
+| **True** |True |False |Undefined |
+| **False** |False |False |False |
+| **Undefined** |Undefined |False |Undefined |
+
+**NOT operator**
+
+Reverses the value of any Boolean expression.
+
+| | **NOT** |
+| | |
+| **True** |False |
+| **False** |True |
+| **Undefined** |Undefined |
+
+**Operator Precedence**
+
+The logical operators `OR`, `AND`, and `NOT` have the precedence level shown below:
+
+| **Operator** | **Priority** |
+| | |
+| **NOT** |1 |
+| **AND** |2 |
+| **OR** |3 |
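+
+Because `AND` binds more tightly than `OR`, the following sketch (using the Family dataset from the related query articles) is evaluated as `(f.isRegistered AND f.lastName = "Andersen") OR f.address.state = "NY"`:
+
+```sql
+SELECT f.id
+FROM Families f
+WHERE f.isRegistered AND f.lastName = "Andersen" OR f.address.state = "NY"
+```
+
+To apply the `OR` first, add parentheses: `WHERE f.isRegistered AND (f.lastName = "Andersen" OR f.address.state = "NY")`.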
+
+## * operator
+
+The special operator * projects the entire item as is. When used, it must be the only projected field. A query like `SELECT * FROM Families f` is valid, but `SELECT VALUE * FROM Families f` and `SELECT *, f.id FROM Families f` are not valid.
+
+## ? and ?? operators
+
+You can use the Ternary (?) and Coalesce (??) operators to build conditional expressions, as in programming languages like C# and JavaScript.
+
+You can use the ? operator to construct new JSON properties on the fly. For example, the following query classifies grade levels into `elementary` or `other`:
+
+```sql
+ SELECT (c.grade < 5)? "elementary": "other" AS gradeLevel
+ FROM Families.children[0] c
+```
+
+You can also nest calls to the ? operator, as in the following query:
+
+```sql
+ SELECT (c.grade < 5)? "elementary": ((c.grade < 9)? "junior": "high") AS gradeLevel
+ FROM Families.children[0] c
+```
+
+As with other query operators, the ? operator excludes items if the referenced properties are missing or the types being compared are different.
+
+Use the ?? operator to efficiently check for a property in an item when querying against semi-structured or mixed-type data. For example, the following query returns `lastName` if present, or `surname` if `lastName` isn't present.
+
+```sql
+ SELECT f.lastName ?? f.surname AS familyName
+ FROM Families f
+```
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](sql-query-keywords.md)
+- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Order By https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-order-by.md
+
+ Title: ORDER BY clause in Azure Cosmos DB
+description: Learn about SQL ORDER BY clause for Azure Cosmos DB. Use SQL as an Azure Cosmos DB JSON query language.
++++ Last updated : 06/06/2020+++
+# ORDER BY clause in Azure Cosmos DB
+
+The optional `ORDER BY` clause specifies the sorting order for results returned by the query.
+
+## Syntax
+
+```sql
+ORDER BY <sort_specification>
+<sort_specification> ::= <sort_expression> [, <sort_expression>]
+<sort_expression> ::= {<scalar_expression> [ASC | DESC]} [ ,...n ]
+```
+
+## Arguments
+
+- `<sort_specification>`
+
+ Specifies a property or expression on which to sort the query result set. A sort column can be specified as a name or property alias.
+
+ Multiple properties can be specified. Property names must be unique. The sequence of the sort properties in the `ORDER BY` clause defines the organization of the sorted result set. That is, the result set is sorted by the first property and then that ordered list is sorted by the second property, and so on.
+
+ The property names referenced in the `ORDER BY` clause must correspond to either a property in the select list or to a property defined in the collection specified in the `FROM` clause without any ambiguities.
+
+- `<sort_expression>`
+
+ Specifies one or more properties or expressions on which to sort the query result set.
+
+- `<scalar_expression>`
+
+ See the [Scalar expressions](sql-query-scalar-expressions.md) section for details.
+
+- `ASC | DESC`
+
+ Specifies that the values in the specified column should be sorted in ascending or descending order. `ASC` sorts from the lowest value to highest value. `DESC` sorts from highest value to lowest value. `ASC` is the default sort order. Null values are treated as the lowest possible values.
+
+## Remarks
+
+ The `ORDER BY` clause requires that the indexing policy include an index for the fields being sorted. The Azure Cosmos DB query runtime supports sorting against a property name and not against computed properties. Azure Cosmos DB supports multiple `ORDER BY` properties. In order to run a query with multiple ORDER BY properties, you should define a [composite index](../index-policy.md#composite-indexes) on the fields being sorted.
+
+> [!Note]
+> If the properties being sorted might be undefined for some documents and you want to retrieve them in an ORDER BY query, you must explicitly include this path in the index. The default indexing policy won't allow for the retrieval of the documents where the sort property is undefined. [Review example queries on documents with some missing fields](#documents-with-missing-fields).
+
+## Examples
+
+For example, here's a query that retrieves families in ascending order of the resident city's name:
+
+```sql
+ SELECT f.id, f.address.city
+ FROM Families f
+ ORDER BY f.address.city
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "WakefieldFamily",
+ "city": "NY"
+ },
+ {
+ "id": "AndersenFamily",
+ "city": "Seattle"
+ }
+ ]
+```
+
+The following query retrieves family `id`s in order of their item creation date. Item `creationDate` is a number representing the *epoch time*, or elapsed time since Jan. 1, 1970 in seconds.
+
+```sql
+ SELECT f.id, f.creationDate
+ FROM Families f
+ ORDER BY f.creationDate DESC
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "WakefieldFamily",
+ "creationDate": 1431620462
+ },
+ {
+ "id": "AndersenFamily",
+ "creationDate": 1431620472
+ }
+ ]
+```
+
+Additionally, you can order by multiple properties. A query that orders by multiple properties requires a [composite index](../index-policy.md#composite-indexes). Consider the following query:
+
+```sql
+ SELECT f.id, f.creationDate
+ FROM Families f
+ ORDER BY f.address.city ASC, f.creationDate DESC
+```
+
+This query retrieves the family `id` in ascending order of the city name. If multiple items have the same city name, the query will order by the `creationDate` in descending order.
+
+## Documents with missing fields
+
+Queries with `ORDER BY` that are run against containers with the default indexing policy will not return documents where the sort property is undefined. If you would like to include documents where the sort property is undefined, you should explicitly include this property in the indexing policy.
+
+For example, here's a container with an indexing policy that does not explicitly include any paths besides `"/*"`:
+
+```json
+{
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": []
+}
+```
+
+If you run a query that includes `lastName` in the `Order By` clause, the results will only include documents that have a `lastName` property defined. We have not defined an explicit included path for `lastName` so any documents without a `lastName` will not appear in the query results.
+
+Here is a query that sorts by `lastName` on two documents, one of which does not have a `lastName` defined:
+
+```sql
+ SELECT f.id, f.lastName
+ FROM Families f
+ ORDER BY f.lastName
+```
+
+The results only include the document that has a defined `lastName`:
+
+```json
+ [
+ {
+ "id": "AndersenFamily",
+ "lastName": "Andersen"
+ }
+ ]
+```
+
+If we update the container's indexing policy to explicitly include a path for `lastName`, documents with an undefined sort property will be included in the query results. The path must lead to the scalar value itself and not beyond it: use the `?` character in the path definition to indicate that you are indexing the `lastName` property and no nested paths beyond it. If your `Order By` query uses a [composite index](../index-policy.md#composite-indexes), the results will always include documents with an undefined sort property.
+
+Here is a sample indexing policy which allows you to have documents with an undefined `lastName` appear in the query results:
+
+```json
+{
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/lastName/?"
+ },
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": []
+}
+```
+
+If you run the same query again, documents that are missing `lastName` appear first in the query results:
+
+```sql
+ SELECT f.id, f.lastName
+ FROM Families f
+ ORDER BY f.lastName
+```
+
+The results are:
+
+```json
+[
+ {
+ "id": "WakefieldFamily"
+ },
+ {
+ "id": "AndersenFamily",
+ "lastName": "Andersen"
+ }
+]
+```
+
+If you modify the sort order to `DESC`, documents that are missing `lastName` appear last in the query results:
+
+```sql
+ SELECT f.id, f.lastName
+ FROM Families f
+ ORDER BY f.lastName DESC
+```
+
+The results are:
+
+```json
+[
+ {
+ "id": "AndersenFamily",
+ "lastName": "Andersen"
+ },
+ {
+ "id": "WakefieldFamily"
+ }
+]
+```
+
+> [!Note]
+> Only the .NET SDK version 3.4.0 or later supports ORDER BY with mixed types. Therefore, if you want to sort by a combination of undefined and defined values, you should use this version (or later).
+
+You can't control the order that different types appear in the results. In the above example, we showed how undefined values were sorted before string values. If instead, for example, you wanted more control over the sort order of undefined values, you could assign any undefined properties a string value of "aaaaaaaaa" or "zzzzzzzz" to ensure they were either first or last.
+
+## Next steps
+
+- [Getting started](sql-query-getting-started.md)
+- [Indexing policies in Azure Cosmos DB](../index-policy.md)
+- [OFFSET LIMIT clause](sql-query-offset-limit.md)
cosmos-db Sql Query Pagination https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-pagination.md
+
+ Title: Pagination in Azure Cosmos DB
+description: Learn about paging concepts and continuation tokens
+++++ Last updated : 03/15/2021++
+# Pagination in Azure Cosmos DB
+
+In Azure Cosmos DB, queries may have multiple pages of results. This document explains criteria that Azure Cosmos DB's query engine uses to decide whether to split query results into multiple pages. You can optionally use continuation tokens to manage query results that span multiple pages.
+
+## Understanding query executions
+
+Sometimes query results will be split over multiple pages. Each page's results are generated by a separate query execution. When query results cannot be returned in one single execution, Azure Cosmos DB will automatically split results into multiple pages.
+
+You can specify the maximum number of items returned by a query by setting the `MaxItemCount`. The `MaxItemCount` is specified per request and tells the query engine to return that number of items or fewer. You can set `MaxItemCount` to `-1` if you don't want to place a limit on the number of results per query execution.
+
+In addition, there are other reasons that the query engine might need to split query results into multiple pages. These include:
+
+- The container was throttled and there weren't available RUs to return more query results
+- The query execution's response was too large
+- The query execution's time was too long
+- It was more efficient for the query engine to return results in additional executions
+
+The number of items returned per query execution will always be less than or equal to `MaxItemCount`. However, it is possible that other criteria might have limited the number of results the query could return. If you execute the same query multiple times, the number of pages might not be constant. For example, if a query is throttled there may be fewer available results per page, which means the query will have additional pages. In some cases, it is also possible that your query may return an empty page of results.
+
+## Handling multiple pages of results
+
+To ensure accurate query results, you should progress through all pages. You should continue to execute queries until there are no additional pages.
+
+Here are some examples for processing results from queries with multiple pages:
+
+- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L280)
+- [Java SDK](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L162-L176)
+- [Node.js SDK](https://github.com/Azure/azure-sdk-for-js/blob/83fcc44a23ad771128d6e0f49043656b3d1df990/sdk/cosmosdb/cosmos/samples/IndexManagement.ts#L128-L140)
+- [Python SDK](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/examples.py#L89)
+
+## Continuation tokens
+
+In the .NET SDK and Java SDK, you can optionally use continuation tokens as a bookmark for your query's progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time using the continuation token. Continuation tokens are not supported in the Node.js SDK. For the Python SDK, continuation tokens are supported only for single-partition queries, and the partition key must be specified in the options object because it's not sufficient to have it in the query itself.
+
+Here are some examples of using continuation tokens:
+
+- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/samples/code-samples/Queries/Program.cs#L699-L734)
+- [Java SDK](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L216)
+- [Python SDK](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/test/test_query.py#L533)
+
+If the query returns a continuation token, then there are additional query results.
+
+In Azure Cosmos DB's REST API, you can manage continuation tokens with the `x-ms-continuation` header. As with querying with the .NET or Java SDK, if the `x-ms-continuation` response header is not empty, it means the query has additional results.
+
+As long as you are using the same SDK version, continuation tokens never expire. You can optionally [restrict the size of a continuation token](/dotnet/api/microsoft.azure.documents.client.feedoptions.responsecontinuationtokenlimitinkb#Microsoft_Azure_Documents_Client_FeedOptions_ResponseContinuationTokenLimitInKb). Regardless of the amount of data or number of physical partitions in your container, queries return a single continuation token.
+
+You cannot use continuation tokens for queries with [GROUP BY](sql-query-group-by.md) or [DISTINCT](sql-query-keywords.md#distinct) because these queries would require storing a significant amount of state. For queries with `DISTINCT`, you can use continuation tokens if you add `ORDER BY` to the query.
+
+Here's an example of a query with `DISTINCT` that could use a continuation token:
+
+```sql
+SELECT DISTINCT VALUE c.name
+FROM c
+ORDER BY c.name
+```
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../introduction.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [ORDER BY clause](sql-query-order-by.md)
cosmos-db Sql Query Parameterized Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-parameterized-queries.md
+
+ Title: Parameterized queries in Azure Cosmos DB
+description: Learn how SQL parameterized queries provide robust handling and escaping of user input, and prevent accidental exposure of data through SQL injection.
++++ Last updated : 07/29/2020++
+# Parameterized queries in Azure Cosmos DB
+
+Azure Cosmos DB supports queries with parameters expressed by the familiar @ notation. Parameterized SQL provides robust handling and escaping of user input, and prevents accidental exposure of data through SQL injection.
+
+## Examples
+
+For example, you can write a query that takes `lastName` and `address.state` as parameters, and execute it for various values of `lastName` and `address.state` based on user input.
+
+```sql
+ SELECT *
+ FROM Families f
+ WHERE f.lastName = @lastName AND f.address.state = @addressState
+```
+
+You can then send this request to Azure Cosmos DB as a parameterized JSON query like the following:
+
+```sql
+ {
+ "query": "SELECT * FROM Families f WHERE f.lastName = @lastName AND f.address.state = @addressState",
+ "parameters": [
+ {"name": "@lastName", "value": "Wakefield"},
+ {"name": "@addressState", "value": "NY"},
+ ]
+ }
+```
+
+The following example sets the TOP argument with a parameterized query:
+
+```sql
+ {
+ "query": "SELECT TOP @n * FROM Families",
+ "parameters": [
+ {"name": "@n", "value": 10},
+ ]
+ }
+```
+
+Parameter values can be any valid JSON: strings, numbers, Booleans, null, even arrays or nested JSON. Since Azure Cosmos DB is schemaless, parameters aren't validated against any type.
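+
+For example, here's a sketch of a parameterized request that passes an array as a parameter value and uses the built-in `ARRAY_CONTAINS` function to match items whose `lastName` appears in that array (the parameter name and values are illustrative):
+
+```sql
+ {
+ "query": "SELECT * FROM Families f WHERE ARRAY_CONTAINS(@lastNames, f.lastName)",
+ "parameters": [
+ {"name": "@lastNames", "value": ["Wakefield", "Andersen"]}
+ ]
+ }
+```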
+
+Here are examples for parameterized queries in each Azure Cosmos DB SDK:
+
+- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L195)
+- [Java](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L392-L421)
+- [Node.js](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L58-L79)
+- [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L66-L78)
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Model document data](../modeling-data.md)
cosmos-db Sql Query Pi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-pi.md
+
+ Title: PI in Azure Cosmos DB query language
+description: Learn about SQL system function PI in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# PI (Azure Cosmos DB)
+
+ Returns the constant value of PI.
+
+## Syntax
+
+```sql
+PI ()
+```
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the value of `PI`.
+
+```sql
+SELECT PI() AS pi
+```
+
+ Here is the result set.
+
+```json
+[{"pi": 3.1415926535897931}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Power https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-power.md
+
+ Title: POWER in Azure Cosmos DB query language
+description: Learn about SQL system function POWER in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# POWER (Azure Cosmos DB)
+
+ Returns the value of the specified expression to the specified power.
+
+## Syntax
+
+```sql
+POWER (<numeric_expr1>, <numeric_expr2>)
+```
+
+## Arguments
+
+*numeric_expr1*
+ Is a numeric expression.
+
+*numeric_expr2*
+ Is the power to which to raise *numeric_expr1*.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example demonstrates raising a number to the power of 3 (the cube of the number).
+
+```sql
+SELECT POWER(2, 3) AS pow1, POWER(2.5, 3) AS pow2
+```
+
+ Here is the result set.
+
+```json
+[{"pow1": 8, "pow2": 15.625}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Radians https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-radians.md
+
+ Title: RADIANS in Azure Cosmos DB query language
+description: Learn about SQL system function RADIANS in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# RADIANS (Azure Cosmos DB)
+
+ Returns the angle in radians that corresponds to the specified numeric expression in degrees.
+
+## Syntax
+
+```sql
+RADIANS (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example takes a few angles as input and returns their corresponding radian values.
+
+```sql
+SELECT RADIANS(-45.01) AS r1, RADIANS(-181.01) AS r2, RADIANS(0) AS r3, RADIANS(0.1472738) AS r4, RADIANS(197.1099392) AS r5
+```
+
+ Here is the result set.
+
+```json
+[{
+ "r1": -0.7855726963226477,
+ "r2": -3.1592204790349356,
+ "r3": 0,
+ "r4": 0.0025704127119236249,
+ "r5": 3.4402174274458375
+ }]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Rand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-rand.md
+
+ Title: RAND in Azure Cosmos DB query language
+description: Learn about SQL system function RAND in Azure Cosmos DB.
++++ Last updated : 09/16/2019+++
+# RAND (Azure Cosmos DB)
+
+ Returns a randomly generated numeric value from [0,1).
+
+## Syntax
+
+```sql
+RAND ()
+```
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+ `RAND` is a nondeterministic function. Repetitive calls of `RAND` do not return the same results. This system function will not utilize the index.
++
+## Examples
+
+ The following example returns a randomly generated numeric value.
+
+```sql
+SELECT RAND() AS rand
+```
+
+ Here is the result set.
+
+```json
+[{"rand": 0.87860053195618093}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Regexmatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-regexmatch.md
+
+ Title: RegexMatch in Azure Cosmos DB query language
+description: Learn about the RegexMatch SQL system function in Azure Cosmos DB
++++ Last updated : 08/12/2021+++
+# REGEXMATCH (Azure Cosmos DB)
+
+Provides regular expression capabilities. Regular expressions are a concise and flexible notation for finding patterns of text. Azure Cosmos DB uses [PERL compatible regular expressions (PCRE)](http://www.pcre.org/).
+
+## Syntax
+
+```sql
+RegexMatch(<str_expr1>, <str_expr2> [, <str_expr3>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the regular expression.
+
+*str_expr3*
+ Is the string of selected modifiers to use with the regular expression. This string value is optional. If you'd like to run RegexMatch with no modifiers, you can either pass an empty string or omit the argument entirely.
+
+You can learn about [syntax for creating regular expressions in Perl](https://perldoc.perl.org/perlre).
+
+Azure Cosmos DB supports the following four modifiers:
+
+| Modifier | Description |
+| | -- |
+| `m` | Treat the string expression to be searched as multiple lines. Without this option, "^" and "$" will match at the beginning or end of the string and not each individual line. |
+| `s` | Allow "." to match any character, including a newline character. |
+| `i` | Ignore case when pattern matching. |
+| `x` | Ignore all whitespace characters. |
+
+## Return types
+
+ Returns a Boolean expression. Returns undefined if the string expression to be searched, the regular expression, or the selected modifiers are invalid.
+
+## Examples
+
+The following simple RegexMatch example checks the string "abcd" for a regular expression match, using a few different modifiers.
+
+```sql
+SELECT RegexMatch ("abcd", "ABC", "") AS NoModifiers,
+RegexMatch ("abcd", "ABC", "i") AS CaseInsensitive,
+RegexMatch ("abcd", "ab.", "") AS WildcardCharacter,
+RegexMatch ("abcd", "ab c", "x") AS IgnoreWhiteSpace,
+RegexMatch ("abcd", "aB c", "ix") AS CaseInsensitiveAndIgnoreWhiteSpace
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "NoModifiers": false,
+ "CaseInsensitive": true,
+ "WildcardCharacter": true,
+ "IgnoreWhiteSpace": true,
+ "CaseInsensitiveAndIgnoreWhiteSpace": true
+ }
+]
+```
+
+With RegexMatch, you can use metacharacters to do more complex string searches that wouldn't otherwise be possible with the StartsWith, EndsWith, Contains, or StringEquals system functions. Here are some additional examples:
+
+> [!NOTE]
+> If you need to use a metacharacter in a regular expression and don't want it to have special meaning, you should escape the metacharacter using `\`.
+
+**Check items that have a description that contains the word "salt" exactly once:**
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch (c.description, "salt{1}","")
+```
+
+**Check items that have a description that contains a number between 0 and 99:**
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch (c.description, "[0-99]","")
+```
+
+**Check items that have a description that contains four-letter words starting with "S" or "s":**
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch (c.description, " s... ","i")
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy) if the regular expression can be broken down into either StartsWith, EndsWith, Contains, or StringEquals system functions.
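+
+For example, a pattern anchored to the start of the string, like the sketch below, is the kind of expression that can be evaluated like a StartsWith comparison and can therefore take advantage of the range index (the `description` property is the same hypothetical property used in the earlier examples):
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch (c.description, "^salt", "")
+```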
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Replace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-replace.md
+
+ Title: REPLACE in Azure Cosmos DB query language
+description: Learn about SQL system function REPLACE in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# REPLACE (Azure Cosmos DB)
+
+ Replaces all occurrences of a specified string value with another string value.
+
+## Syntax
+
+```sql
+REPLACE(<str_expr1>, <str_expr2>, <str_expr3>)
+```
+
+## Arguments
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the string expression to be found.
+
+*str_expr3*
+ Is the string expression to replace occurrences of *str_expr2* in *str_expr1*.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `REPLACE` in a query.
+
+```sql
+SELECT REPLACE("This is a Test", "Test", "desk") AS replace
+```
+
+ Here is the result set.
+
+```json
+[{"replace": "This is a desk"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Replicate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-replicate.md
+
+ Title: REPLICATE in Azure Cosmos DB query language
+description: Learn about SQL system function REPLICATE in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# REPLICATE (Azure Cosmos DB)
+
+ Repeats a string value a specified number of times.
+
+## Syntax
+
+```sql
+REPLICATE(<str_expr>, <num_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+*num_expr*
+ Is a numeric expression. If *num_expr* is negative or non-finite, the result is undefined.
+
+## Return types
+
+ Returns a string expression.
+
+## Remarks
+
+ The maximum length of the result is 10,000 characters; that is, (length(*str_expr*) * *num_expr*) <= 10,000. This system function will not utilize the index.
+
+## Examples
+
+ The following example shows how to use `REPLICATE` in a query.
+
+```sql
+SELECT REPLICATE("a", 3) AS replicate
+```
+
+ Here is the result set.
+
+```json
+[{"replicate": "aaa"}]
+```
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Reverse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-reverse.md
+
+ Title: REVERSE in Azure Cosmos DB query language
+description: Learn about SQL system function REVERSE in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# REVERSE (Azure Cosmos DB)
+
+ Returns the reverse order of a string value.
+
+## Syntax
+
+```sql
+REVERSE(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `REVERSE` in a query.
+
+```sql
+SELECT REVERSE("Abc") AS reverse
+```
+
+ Here is the result set.
+
+```json
+[{"reverse": "cbA"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Right https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-right.md
+
+ Title: RIGHT in Azure Cosmos DB query language
+description: Learn about SQL system function RIGHT in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# RIGHT (Azure Cosmos DB)
+
+ Returns the right part of a string with the specified number of characters.
+
+## Syntax
+
+```sql
+RIGHT(<str_expr>, <num_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is the string expression to extract characters from.
+
+*num_expr*
+ Is a numeric expression which specifies the number of characters.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example returns the right part of "abc" for various length values.
+
+```sql
+SELECT RIGHT("abc", 1) AS r1, RIGHT("abc", 2) AS r2
+```
+
+ Here is the result set.
+
+```json
+[{"r1": "c", "r2": "bc"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Round https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-round.md
+
+ Title: ROUND in Azure Cosmos DB query language
+description: Learn about SQL system function ROUND in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# ROUND (Azure Cosmos DB)
+
+ Returns a numeric value, rounded to the closest integer value.
+
+## Syntax
+
+```sql
+ROUND(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+The rounding operation performed follows midpoint rounding away from zero. If the input is a numeric expression which falls exactly between two integers then the result will be the closest integer value away from zero. This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
+
+|<numeric_expr>|Rounded|
+|-|-|
+|-6.5000|-7|
+|-0.5|-1|
+|0.5|1|
+|6.5000|7|
+
+## Examples
+
+The following example rounds positive and negative numbers to the nearest integer.
+
+```sql
+SELECT ROUND(2.4) AS r1, ROUND(2.6) AS r2, ROUND(2.5) AS r3, ROUND(-2.4) AS r4, ROUND(-2.6) AS r5
+```
+
+Here is the result set.
+
+```json
+[{"r1": 2, "r2": 3, "r3": 3, "r4": -2, "r5": -3}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Rtrim https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-rtrim.md
+
+ Title: RTRIM in Azure Cosmos DB query language
+description: Learn about SQL system function RTRIM in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# RTRIM (Azure Cosmos DB)
+
+ Returns a string expression after it removes trailing blanks.
+
+## Syntax
+
+```sql
+RTRIM(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is any valid string expression.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `RTRIM` inside a query.
+
+```sql
+SELECT RTRIM(" abc") AS r1, RTRIM("abc") AS r2, RTRIM("abc ") AS r3
+```
+
+ Here is the result set.
+
+```json
+[{"r1": " abc", "r2": "abc", "r3": "abc"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](sql-query-string-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Scalar Expressions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-scalar-expressions.md
+
+ Title: Scalar expressions in Azure Cosmos DB SQL queries
+description: Learn about the scalar expression SQL syntax for Azure Cosmos DB. This article also describes how to combine scalar expressions into complex expressions by using operators.
++++ Last updated : 05/17/2019+++
+# Scalar expressions in Azure Cosmos DB SQL queries
+
+The [SELECT clause](sql-query-select.md) supports scalar expressions. A scalar expression is a combination of symbols and operators that can be evaluated to obtain a single value. Examples of scalar expressions include: constants, property references, array element references, alias references, or function calls. Scalar expressions can be combined into complex expressions using operators.
+
+## Syntax
+
+```sql
+<scalar_expression> ::=
+ <constant>
+ | input_alias
+ | parameter_name
+ | <scalar_expression>.property_name
+ | <scalar_expression>'['"property_name"|array_index']'
+ | unary_operator <scalar_expression>
+ | <scalar_expression> binary_operator <scalar_expression>
+ | <scalar_expression> ? <scalar_expression> : <scalar_expression>
+ | <scalar_function_expression>
+ | <create_object_expression>
+ | <create_array_expression>
+ | (<scalar_expression>)
+
+<scalar_function_expression> ::=
+ 'udf.' Udf_scalar_function([<scalar_expression>][,…n])
+ | builtin_scalar_function([<scalar_expression>][,…n])
+
+<create_object_expression> ::=
+ '{' [{property_name | "property_name"} : <scalar_expression>][,…n] '}'
+
+<create_array_expression> ::=
+ '[' [<scalar_expression>][,…n] ']'
+
+```
+
+## Arguments
+
+- `<constant>`
+
+ Represents a constant value. See [Constants](sql-query-constants.md) section for details.
+
+- `input_alias`
+
+ Represents a value defined by the `input_alias` introduced in the `FROM` clause.
+ This value is guaranteed not to be **undefined**; **undefined** values in the input are skipped.
+
+- `<scalar_expression>.property_name`
+
+ Represents the value of the property of an object. If the property doesn't exist, or the property is referenced on a value that isn't an object, the expression evaluates to the **undefined** value.
+
+- `<scalar_expression>'['"property_name"|array_index']'`
+
+ Represents the value of the property with name `property_name`, or of the array element with index `array_index`. If the property or array index doesn't exist, or is referenced on a value that isn't an object or array, the expression evaluates to the **undefined** value.
+
+- `unary_operator <scalar_expression>`
+
+ Represents an operator that is applied to a single value. See [Operators](sql-query-operators.md) section for details.
+
+- `<scalar_expression> binary_operator <scalar_expression>`
+
+ Represents an operator that is applied to two values. See [Operators](sql-query-operators.md) section for details.
+
+- `<scalar_function_expression>`
+
+ Represents a value defined by a result of a function call.
+
+- `udf_scalar_function`
+
+ Name of the user-defined scalar function.
+
+- `builtin_scalar_function`
+
+ Name of the built-in scalar function.
+
+- `<create_object_expression>`
+
+ Represents a value obtained by creating a new object with specified properties and their values.
+
+- `<create_array_expression>`
+
+ Represents a value obtained by creating a new array with the specified values as elements.
+
+- `parameter_name`
+
+ Represents a value of the specified parameter name. Parameter names must have a single \@ as the first character.
+
+## Remarks
+
+ When calling a built-in or user-defined scalar function, all arguments must be defined. If any of the arguments is undefined, the function will not be called and the result will be undefined.
+
+ When creating an object, any property that is assigned an **undefined** value is skipped and not included in the created object.
+
+ When creating an array, any element that is assigned an **undefined** value is skipped and not included in the created array. The next defined element takes its place, so the created array has no gaps in its indexes.
+
+## Examples
+
+```sql
+ SELECT ((2 + 11 % 7)-2)/3
+```
+
+The results are:
+
+```json
+ [{
+ "$1": 1.33333
+ }]
+```
+
+In the following query, the result of the scalar expression is a Boolean:
+
+```sql
+ SELECT f.address.city = f.address.state AS AreFromSameCityState
+ FROM Families f
+```
+
+The results are:
+
+```json
+ [
+ {
+ "AreFromSameCityState": false
+ },
+ {
+ "AreFromSameCityState": true
+ }
+ ]
+```
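+
+As a sketch of the object and array creation expressions described above (using the Family dataset; `middleName` is a hypothetical property that the sample items don't define), the following query builds a new object for each item. Because `f.middleName` evaluates to **undefined**, that property is omitted from the constructed object:
+
+```sql
+ SELECT {"familyId": f.id, "middleName": f.middleName, "cityState": [f.address.city, f.address.state]} AS familyInfo
+ FROM Families f
+```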
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../introduction.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Subqueries](sql-query-subquery.md)
cosmos-db Sql Query Select https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-select.md
+
+ Title: SELECT clause in Azure Cosmos DB
+description: Learn about SQL SELECT clause for Azure Cosmos DB. Use SQL as an Azure Cosmos DB JSON query language.
++++ Last updated : 05/08/2020+++
+# SELECT clause in Azure Cosmos DB
+
+Every query consists of a `SELECT` clause and optional [FROM](sql-query-from.md) and [WHERE](sql-query-where.md) clauses, per ANSI SQL standards. Typically, the source in the `FROM` clause is enumerated, and the `WHERE` clause applies a filter on the source to retrieve a subset of JSON items. The `SELECT` clause then projects the requested JSON values in the select list.
+
+## Syntax
+
+```sql
+SELECT <select_specification>
+
+<select_specification> ::=
+ '*'
+ | [DISTINCT] <object_property_list>
+ | [DISTINCT] VALUE <scalar_expression> [[ AS ] value_alias]
+
+<object_property_list> ::=
+{ <scalar_expression> [ [ AS ] property_alias ] } [ ,...n ]
+```
+