Updates from: 01/19/2022 06:45:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Date Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/date-transformations.md
Use this claims transformation to determine if first date plus the `timeSpanInSe
- **operator**: later than - **timeSpanInSeconds**: 7776000 (90 days) - Output claims:
- - **result**: true
-
+ - **result**: true
+
+## IsTermsOfUseConsentRequired
+
+Determine whether a `dateTime` claim type is earlier or later than a specific date. The result is a new Boolean claim with a value of `true` or `false`.
+
+| Item | TransformationClaimType | Data type | Notes |
+| ---- | ----------------------- | --------- | ----- |
+| InputClaim | termsOfUseConsentDateTime | dateTime | The `dateTime` claim type to check whether it is earlier or later than the `termsOfUseTextUpdateDateTime` input parameter. An undefined value returns a `true` result. |
+| InputParameter | termsOfUseTextUpdateDateTime | dateTime | The `dateTime` claim type to check whether it is earlier or later than the `termsOfUseConsentDateTime` input claim. The time part of the date is optional. |
+| OutputClaim | result | boolean | The claim type that's produced after this claims transformation has been invoked. |
+
+Use this claims transformation to determine whether a `dateTime` claim type is earlier or later than a specific date. For example, check whether a user has consented to the latest version of your terms of use (TOU) or terms of service. To check the last time a user consented, store the last time the user accepted the TOU in an [extension attribute](user-profile-attributes.md#extension-attributes). When your TOU wording changes, update the `termsOfUseTextUpdateDateTime` input parameter with the time of the change. Then, call this claims transformation to compare the dates. If the claims transformation returns `true`, the `termsOfUseConsentDateTime` value is earlier than the `termsOfUseTextUpdateDateTime` value, and you can ask the user to accept the updated TOU.
+
+```xml
+<ClaimsTransformation Id="IsTermsOfUseConsentRequired" TransformationMethod="IsTermsOfUseConsentRequired">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="extension_termsOfUseConsentDateTime" TransformationClaimType="termsOfUseConsentDateTime" />
+ </InputClaims>
+ <InputParameters>
+ <InputParameter Id="termsOfUseTextUpdateDateTime" DataType="dateTime" Value="2021-11-15T00:00:00" />
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="termsOfUseConsentRequired" TransformationClaimType="result" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
+
+### IsTermsOfUseConsentRequired example
+
+- Input claims:
+ - **termsOfUseConsentDateTime**: 2020-03-09T09:15:00
+- Input parameters:
+ - **termsOfUseTextUpdateDateTime**: 2021-11-15
+- Output claims:
+ - **result**: true
+ ## GetCurrentDateTime Get the current UTC date and time and add the value to a claim type.
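The transformation under this heading follows the same pattern as the example above. A minimal sketch (the claim type names here are illustrative, not prescriptive) might look like this:

```xml
<ClaimsTransformation Id="GetSystemDateTime" TransformationMethod="GetCurrentDateTime">
  <OutputClaims>
    <!-- The output claim receives the UTC date and time at the moment the transformation runs. -->
    <OutputClaim ClaimTypeReferenceId="systemDateTime" TransformationClaimType="currentDateTime" />
  </OutputClaims>
</ClaimsTransformation>
```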
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-adfs.md
To create an Application Group, follow these steps:
1. Select **Next**. 1. On the Application Group Wizard **Native Application** screen: 1. Copy the **Client Identifier** value. The client identifier is your AD FS **Application ID**. You will need the application ID later in this article.
- 1. In **Redirect URI**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
- 1. Select **Next**, and then **Next** to complete the app registration wizard.
+ 1. In **Redirect URI**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`, and then **Add**. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
+ 1. Select **Next**, and then **Next**, and then **Next** again to complete the app registration wizard.
1. Select **Close**.
In this step, configure the claims AD FS application returns to Azure AD B2C.
1. In the application properties window, under the **Applications**, select the **Web Application**. Then select **Edit**. :::image type="content" source="./media/identity-provider-adfs/ad-fs-edit-app.png" alt-text="Screenshot that shows how to edit a web application."::: 1. Select the **Issuance Transformation Rules** tab. Then select **Add Rule**.
-1. In **Claim rule template**, select **Send LDAP attributes as claims**.
-1. Provide a **Claim rule name**. For the **Attribute store**, select **Select Active Directory**, add the following claims.
+1. In **Claim rule template**, select **Send LDAP attributes as claims**, and then **Next**.
+1. Provide a **Claim rule name**. For the **Attribute store**, select **Active Directory**, and then add the following claims.
| LDAP attribute | Outgoing claim type | | -- | - |
In this step, configure the claims AD FS application returns to Azure AD B2C.
| Given-Name | given_name | | Display-Name | name |
- Note some of the names will not display in the outgoing claim type dropdown. You need to manually type them in. (The dropdown is editable).
+ Note some of the names will not display in the outgoing claim type dropdown. You need to manually type them in (the dropdown is editable).
-1. Select **Finish**, then select **Close**.
+1. Select **Finish**.
+1. Select **Apply**, and then **OK**.
+1. Select **OK** again to finish.
::: zone pivot="b2c-user-flow"
In this step, configure the claims AD FS application returns to Azure AD B2C.
1. For **Client ID**, enter the application ID that you previously recorded. 1. For the **Scope**, enter `openid`.
-1. For **Response type**, select **id_token**.
+1. For **Response type**, select **id_token**, which makes the **Client secret** optional. Learn more about use of [Client ID and secret](identity-provider-generic-openid-connect.md#client-id-and-secret) when adding a generic OpenID Connect identity provider.
1. (Optional) For the **Domain hint**, enter `contoso.com`. For more information, see [Set up direct sign-in using Azure Active Directory B2C](direct-signin.md#redirect-sign-in-to-a-social-provider). 1. Under **Identity provider claims mapping**, select the following claims:
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/userinfo.md
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJub25jZSI6Il…
"name": "Mikah Ollenburg", // names all require the ΓÇ£profileΓÇ¥ scope. "family_name": " Ollenburg", "given_name": "Mikah",
+ "picture": "https://graph.microsoft.com/v1.0/me/photo/$value",
"email": "mikoll@contoso.com" //requires the ΓÇ£emailΓÇ¥ scope. } ```
active-directory Workload Identity Federation Create Trust Gcp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/workload-identity-federation-create-trust-gcp.md
+
+ Title: Access Azure resources from Google Cloud without credentials
+
+description: Access Azure AD protected resources from a service running in Google Cloud without using secrets or certificates. Use workload identity federation to set up a trust relationship between an app in Azure AD and an identity in Google Cloud. The workload running in Google Cloud can get an access token from Microsoft identity platform and access Azure AD protected resources.
+ Last updated: 01/06/2022
+#Customer intent: As an application developer, I want to create a trust relationship with a Google Cloud identity so my service in Google Cloud can access Azure AD protected resources without managing secrets.
++
+# Access Azure AD protected resources from an app in Google Cloud (preview)
+
+Software workloads running in Google Cloud need an Azure Active Directory (Azure AD) application to authenticate and access Azure AD protected resources. A common practice is to configure that application with credentials (a secret or certificate). The credentials are used by a Google Cloud workload to request an access token from Microsoft identity platform. These credentials pose a security risk and have to be stored securely and rotated regularly. You also run the risk of service downtime if the credentials expire.
+
+[Workload identity federation](workload-identity-federation.md) allows you to access Azure AD protected resources from services running in Google Cloud without needing to manage secrets. Instead, you can configure your Azure AD application to trust a token issued by Google and exchange it for an access token from Microsoft identity platform.
+
+## Create an app registration in Azure AD
+
+[Create an app registration](quickstart-register-app.md) in Azure AD.
+
+Take note of the *object ID* of the app (not the application (client) ID), which you need in the following steps. Go to the [list of registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal, select your app registration, and find the **Object ID** in **Overview**->**Essentials**.
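A quick way to look this up from a terminal is a query like the following sketch (it assumes the Azure CLI and a placeholder client ID; in recent CLI versions the property is `id`, while older versions expose it as `objectId`):

```azurecli
# Look up the app registration by its client ID and print its object ID.
az ad app show --id <application-client-id> --query id --output tsv
```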
+
+## Grant your app permissions to resources
+
+Grant your app the permissions necessary to access the Azure AD protected resources targeted by your software workload running in Google Cloud. For example, [assign the Storage Blob Data Contributor role](/azure/storage/blobs/assign-azure-role-data-access) to your app if your application needs to read, write, and delete blob data in [Azure Storage](/azure/storage/blobs/storage-blobs-introduction).
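As an illustration, a role assignment along these lines grants the app blob data access on a single storage account (a sketch assuming the Azure CLI; the subscription, resource group, and storage account names are placeholders):

```azurecli
# Assign the Storage Blob Data Contributor role to the app on one storage account.
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee "<application-client-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```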
+
+## Set up an identity in Google Cloud
+
+You need an identity in Google Cloud that can be associated with your Azure AD application, for example a [service account](https://cloud.google.com/iam/docs/service-accounts) used by an application or compute workload. You can either use the default service account of your Cloud project or create a dedicated service account.
+
+Each service account has a unique ID. To find it, go to the **IAM & Admin** page in the Google Cloud console and select **Service Accounts**. Select the service account you plan to use, and copy its **Unique ID**.
++
+Tokens issued by Google to the service account will have this **Unique ID** as the *subject* claim.
+
+The *issuer* claim in the tokens will be `https://accounts.google.com`.
+
+You need these claim values to configure a trust relationship with an Azure AD application, which allows your application to trust tokens issued by Google to your service account.
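The unique ID can also be read with the gcloud CLI; this is a sketch with a placeholder service account email:

```bash
# List the service accounts in the current project, then print the unique ID
# of the one you plan to use (this value becomes the token's subject claim).
gcloud iam service-accounts list
gcloud iam service-accounts describe sa-name@project-id.iam.gserviceaccount.com \
    --format='value(uniqueId)'
```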
+
+## Configure an Azure AD app to trust a Google Cloud identity
+
+Configure a federated identity credential on your Azure AD application to set up the trust relationship.
+
+The most important fields for creating the federated identity credential are:
+
+- *object ID*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.
+- *subject*: must match the `sub` claim in the token issued by another identity provider, in this case Google. This is the Unique ID of the service account you plan to use.
+- *issuer*: must match the `iss` claim in the token issued by the identity provider. A URL that complies with the [OIDC Discovery spec](https://openid.net/specs/openid-connect-discovery-1_0.html). Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. In the case of Google Cloud, the issuer is `https://accounts.google.com`.
+- *audiences*: must match the `aud` claim in the token. For security reasons, you should pick a value that is unique for tokens meant for Azure AD. The Microsoft recommended value is `api://AzureADTokenExchange`.
+
+The following command configures a federated identity credential:
+
+```azurecli
+az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/41be38fd-caac-4354-aa1e-1fdb20e43bfa/federatedIdentityCredentials' --body '{"name":"GcpFederation","issuer":"https://accounts.google.com","subject":"112633961854638529490","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+```
+
+For more information and examples, see [Create a federated identity credential](workload-identity-federation-create-trust.md).
+
+## Exchange a Google token for an access token
+
+Now that you have configured the Azure AD application to trust the Google service account, you are ready to get a token from Google and exchange it for an access token from Microsoft identity platform. This code runs in an application deployed to Google Cloud and running, for example, on [App Engine](https://cloud.google.com/appengine/docs/standard/).
+
+### Get an ID token for your Google service account
+
+As mentioned earlier, Google Cloud resources such as App Engine automatically use the default service account of your Cloud project. You can also configure the app to use a different service account when you deploy your service. Your service can [request an ID token](https://cloud.google.com/compute/docs/instances/verifying-instance-identity#request_signature) for that service account from the metadata server that handles such requests. With this approach, you don't need any keys for your service account: the keys are managed by Google.
+
+# [TypeScript](#tab/typescript)
+Here's an example in TypeScript of how to request an ID token from the Google metadata server:
+
+```typescript
+async function getGoogleIDToken() {
+ const headers = new Headers();
+
+  // Required header for requests to the Google metadata server.
+  headers.append("Metadata-Flavor", "Google");
+
+ let aadAudience = "api://AzureADTokenExchange";
+
+ const endpoint="http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience="+ aadAudience;
+
+ const options = {
+ method: "GET",
+ headers: headers,
+ };
+
+  // The metadata server returns the raw JWT as plain text; resolve the body so
+  // callers receive the token string rather than the Response object.
+  return fetch(endpoint, options).then((response) => response.text());
+}
+```
+
+# [C#](#tab/csharp)
+Here's an example in C# of how to request an ID token from the Google metadata server:
+```csharp
+private string getGoogleIdToken()
+{
+ const string endpoint = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=api://AzureADTokenExchange";
+
+ var httpWebRequest = (HttpWebRequest)WebRequest.Create(endpoint);
+ //httpWebRequest.ContentType = "application/json";
+ httpWebRequest.Accept = "*/*";
+ httpWebRequest.Method = "GET";
+    // Required header for requests to the Google metadata server.
+    httpWebRequest.Headers.Add("Metadata-Flavor", "Google");
+
+ var httpResponse = (HttpWebResponse)httpWebRequest.GetResponse();
+
+ using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
+ {
+ string result = streamReader.ReadToEnd();
+ return result;
+ }
+}
+```
++
+> [!IMPORTANT]
+> The *audience* here needs to match the *audiences* value you configured on your Azure AD application when [creating the federated identity credential](#configure-an-azure-ad-app-to-trust-a-google-cloud-identity).
+
+### Exchange the identity token for an Azure AD access token
+
+Now that your app running in Google Cloud has an identity token from Google, exchange it for an access token from Microsoft identity platform. Use the [Microsoft Authentication Library (MSAL)](msal-overview.md) to pass the Google token as a client assertion. The following MSAL versions support client assertions:
+- [MSAL Go (Preview)](https://github.com/AzureAD/microsoft-authentication-library-for-go)
+- [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node)
+- [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet)
+- [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
+- [MSAL Java](https://github.com/AzureAD/microsoft-authentication-library-for-java)
+
+Using MSAL, you write a token class (implementing the `TokenCredential` interface) that exchanges the ID token for an access token. The token class is then used with different client libraries to access Azure AD protected resources.
+
+# [TypeScript](#tab/typescript)
+The following TypeScript sample code snippet implements the `TokenCredential` interface, gets an ID token from Google (using the `getGoogleIDToken` method previously defined), and exchanges the ID token for an access token.
+
+```typescript
+const msal = require("@azure/msal-node");
+import {TokenCredential, GetTokenOptions, AccessToken} from "@azure/core-auth"
+
+class ClientAssertionCredential implements TokenCredential {
+
+  // Declare the fields assigned in the constructor so the class compiles under TypeScript.
+  private clientID: string;
+  private tenantID: string;
+  private aadAuthority: string;
+
+ constructor(clientID:string, tenantID:string, aadAuthority:string) {
+ this.clientID = clientID;
+ this.tenantID = tenantID;
+ this.aadAuthority = aadAuthority; // https://login.microsoftonline.com/
+ }
+
+ async getToken(scope: string | string[], _options?: GetTokenOptions):Promise<AccessToken> {
+
+ var scopes:string[] = [];
+
+ if (typeof scope === "string") {
+ scopes[0]=scope;
+ } else if (Array.isArray(scope)) {
+ scopes = scope;
+ }
+
+ // Get the ID token from Google.
+ return getGoogleIDToken() // calling this directly just for clarity,
+ // this should be a callback
+ // pass this as a client assertion to the confidential client app
+ .then((clientAssertion:any)=> {
+ var msalApp: any;
+ msalApp = new msal.ConfidentialClientApplication({
+ auth: {
+ clientId: this.clientID,
+ authority: this.aadAuthority + this.tenantID,
+ clientAssertion: clientAssertion,
+ }
+ });
+ return msalApp.acquireTokenByClientCredential({ scopes })
+ })
+ .then(function(aadToken) {
+ // return in form expected by TokenCredential.getToken
+ let returnToken = {
+ token: aadToken.accessToken,
+ expiresOnTimestamp: aadToken.expiresOn.getTime(),
+ };
+ return (returnToken);
+ })
+    .catch(function(error) {
+      // Surface token acquisition failures to the caller instead of swallowing them.
+      throw error;
+    });
+ }
+}
+export default ClientAssertionCredential;
+```
+
+# [C#](#tab/csharp)
+
+The following C# sample code snippet implements the `TokenCredential` interface, gets an ID token from Google (using the `getGoogleIDToken` method previously defined), and exchanges the ID token for an access token.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Identity.Client;
+using Azure.Core;
+using System.Threading;
+using System.Net;
+using System.IO;
+
+public class ClientAssertionCredential:TokenCredential
+{
+ private readonly string clientID;
+ private readonly string tenantID;
+ private readonly string aadAuthority;
+
+ public ClientAssertionCredential(string clientID, string tenantID, string aadAuthority)
+ {
+ this.clientID = clientID;
+ this.tenantID = tenantID;
+ this.aadAuthority = aadAuthority; // https://login.microsoftonline.com/
+ }
+
+ public override AccessToken GetToken(TokenRequestContext requestContext, CancellationToken cancellationToken = default) {
+
+ return GetTokenImplAsync(false, requestContext, cancellationToken).GetAwaiter().GetResult();
+ }
+
+ public override async ValueTask<AccessToken> GetTokenAsync(TokenRequestContext requestContext, CancellationToken cancellationToken = default)
+ {
+ return await GetTokenImplAsync(true, requestContext, cancellationToken).ConfigureAwait(false);
+ }
+
+ private async ValueTask<AccessToken> GetTokenImplAsync(bool async, TokenRequestContext requestContext, CancellationToken cancellationToken)
+ {
+ // calling this directly just for clarity, this should be a callback
+ string idToken = getGoogleIdToken();
+
+ try
+ {
+ // pass token as a client assertion to the confidential client app
+ var app = ConfidentialClientApplicationBuilder.Create(this.clientID)
+ .WithClientAssertion(idToken)
+ .Build();
+
+            // Await the token request instead of blocking on .Result inside an async method.
+            var authResult = await app.AcquireTokenForClient(requestContext.Scopes)
+                .WithAuthority(this.aadAuthority + this.tenantID)
+                .ExecuteAsync(cancellationToken)
+                .ConfigureAwait(false);
+
+            return new AccessToken(authResult.AccessToken, authResult.ExpiresOn);
+ }
+        catch (Exception)
+        {
+            // Rethrow without resetting the stack trace.
+            throw;
+        }
+ }
+}
+```
+++
+## Access Azure AD protected resources
+
+Your application running in Google Cloud now has an access token issued by Microsoft identity platform. Use the access token to access the Azure AD protected resources that your Azure AD app has permissions to access. As an example, here's how you can access Azure Blob storage using the `ClientAssertionCredential` token class and the Azure Blob Storage client library. When you make requests to the `BlobServiceClient` to access storage, the `BlobServiceClient` calls the `getToken` method on the `ClientAssertionCredential` object to get a fresh ID token and exchange it for an access token.
+
+# [TypeScript](#tab/typescript)
+
+The following TypeScript example initializes a new `ClientAssertionCredential` object and then creates a new `BlobServiceClient` object.
+
+```typescript
+const { BlobServiceClient } = require("@azure/storage-blob");
+
+var storageUrl = "https://<storageaccount>.blob.core.windows.net";
+var clientID:any = "<client-id>";
+var tenantID:any = "<tenant-id>";
+var aadAuthority:any = "https://login.microsoftonline.com/";
+var credential = new ClientAssertionCredential(clientID,
+ tenantID,
+ aadAuthority);
+
+const blobServiceClient = new BlobServiceClient(storageUrl, credential);
+
+// write code to access Blob storage
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+string clientID = "<client-id>";
+string tenantID = "<tenant-id>";
+string authority = "https://login.microsoftonline.com/";
+string storageUrl = "https://<storageaccount>.blob.core.windows.net";
+
+var credential = new ClientAssertionCredential(clientID,
+ tenantID,
+ authority);
+
+BlobServiceClient blobServiceClient = new BlobServiceClient(new Uri(storageUrl), credential);
+
+// write code to access Blob storage
+```
+++
+## Next steps
+
+Learn more about [workload identity federation](workload-identity-federation.md).
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/workload-identity-federation.md
You use workload identity federation to configure an Azure AD app registration t
The following scenarios are supported for accessing Azure AD protected resources using workload identity federation: - GitHub Actions. First, [Configure a trust relationship](workload-identity-federation-create-trust-github.md) between your app in Azure AD and a GitHub repo in the Azure portal or using Microsoft Graph. Then [configure a GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure resources.
+- Google Cloud. First, configure a trust relationship between your app in Azure AD and an identity in Google Cloud. Then configure your software workload running in Google Cloud to get an access token from Microsoft identity provider and access Azure AD protected resources. See [Access Azure AD protected resources from an app in Google Cloud](workload-identity-federation-create-trust-gcp.md).
- Workloads running on Kubernetes. Install the Azure AD workload identity webhook and establish a trust relationship between your app in Azure AD and a Kubernetes workload (described in the [Kubernetes quickstart](https://azure.github.io/azure-workload-identity/docs/quick-start.html)). - Workloads running in compute platforms outside of Azure. [Configure a trust relationship](workload-identity-federation-create-trust.md) between your Azure AD application registration and the external IdP for your compute platform. You can use tokens issued by that platform to authenticate with Microsoft identity platform and call APIs in the Microsoft ecosystem. Use the [client credentials flow](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) to get an access token from Microsoft identity platform, passing in the identity provider's JWT instead of creating one yourself using a stored certificate.
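To make the last scenario concrete, the raw token request might look like the following sketch (the tenant and client IDs are placeholders, and the external identity provider's JWT is passed as the client assertion):

```bash
# Client credentials request that presents a federated token instead of a client secret.
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  --data-urlencode "client_id=<application-client-id>" \
  --data-urlencode "scope=https://graph.microsoft.com/.default" \
  --data-urlencode "client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer" \
  --data-urlencode "client_assertion=<external-idp-jwt>" \
  --data-urlencode "grant_type=client_credentials"
```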
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
IT Admins can start using the new "Hybrid Admin" role as the least privileged ro
In May 2020, we have added the following 36 new applications in our App gallery with Federation support:
-[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/virtual-assistant-digital-workplace/), [TackleBox](http://www.tacklebox.app/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md)
+[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/virtual-assistant-digital-workplace/), [TackleBox](https://tacklebox.in/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md)
You can also find the documentation for all of the applications at https://aka.ms/AppsTutorial.
active-directory Troubleshoot App Publishing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
+
+ Title: Your sign-in was blocked
+description: Troubleshoot a blocked sign-in to the Microsoft Application Network portal.
+ Last updated: 1/18/2022
+#Customer intent: As a publisher of an application, I want to troubleshoot a blocked sign-in to the Microsoft Application Network portal.
++
+# Your sign-in was blocked
+
+This article provides information for resolving a blocked sign-in to the Microsoft Application Network portal.
+
+## Symptoms
+
+The user sees this message when trying to sign in to the Microsoft Application Network portal.
++
+## Cause
+
+The guest user is federated to a home tenant that is also an Azure AD tenant, and the guest user is flagged as high risk. High-risk users aren't allowed to access resources: all high-risk users (employees, guests, or vendors) must remediate their risk before they can access resources. For guest users, the user risk comes from the home tenant and the policy comes from the resource tenant.
+
+## Solutions
+
+- MFA-registered guest users remediate their own user risk. The guest user [resets or changes a secured password](https://aka.ms/sspr) at their home tenant (this requires MFA and SSPR at the home tenant). The secured password change or reset must be initiated in Azure AD, not on-premises.
+
+- Guest users have their administrators remediate their risk. In this case, the administrator resets a password (temporary password generation). The guest user's administrator can go to https://aka.ms/RiskyUsers and select **Reset password**.
+
+- Guest users have their administrators dismiss their risk. The admin can go to https://aka.ms/RiskyUsers and select **Dismiss user risk**. However, the administrator must do the due diligence to make sure the risk assessment was a false positive before dismissing the user risk. Otherwise, resources are put at risk by suppressing a risk assessment without investigation.
+
+If you have any issues with access, contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com).
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
+
+ Title: Publish your application
+description: Learn how to publish your application in the Azure Active Directory application gallery.
+ Last updated: 1/18/2022
+# Publish your application in the Azure Active Directory application gallery
+
+You can publish your application in the Azure Active Directory (Azure AD) application gallery. When your application is published, it's made available as an option for users when they add applications to their tenant. For more information, see [Overview of the Azure Active Directory application gallery](overview-application-gallery.md).
+
+To publish your application in the gallery, you need to complete the following tasks:
+
+- Make sure that you complete the prerequisites.
+- Create and publish documentation.
+- Submit your application.
+- Join the Microsoft partner network.
+
+## Prerequisites
+
+- To publish your application in the gallery, you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).
+- Every application in the gallery must implement one of the supported single sign-on (SSO) options. To learn more about the supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md). To learn more about authentication, see [Authentication vs. authorization](../develop/authentication-vs-authorization.md) and [Azure Active Directory code samples](../develop/sample-v2-code.md). For password SSO, make sure that your application supports form authentication so that password vaulting can be used. For a quick introduction to single sign-on configuration in the portal, see [Enable single sign-on for an enterprise application](add-application-portal-setup-sso.md).
+- For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/) to be listed in the gallery. Enterprise gallery applications must support multiple user configurations and not be tied to any specific user.
+- For OpenID Connect, the application must be multitenant and the [Azure AD consent framework](../develop/consent-framework.md) must be properly implemented for the application. The user can send the sign-in request to a common endpoint so that any user can provide consent to the application. You can control user access based on the tenant ID and the user's UPN received in the token.
+- Supporting provisioning is optional, but highly recommended. Provisioning must be done using the System for Cross-domain Identity Management (SCIM) protocol, which is easy to implement. Using SCIM allows users to automatically create and update accounts in your application without relying on manual processes such as uploading CSV files. To learn more about the Azure AD SCIM implementation, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+
+You can get a free test account with all the premium Azure AD features. It's free for 90 days and can be extended as long as you use it for development work: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
+
+## Create and publish documentation
+
+### Documentation on your site
+
+Ease of adoption is a significant factor in enterprise software decisions. Clear, easy-to-follow documentation supports your users in their adoption journey and reduces support costs.
+
+Your documentation should at a minimum include the following items:
+
+- Introduction to your SSO functionality
+ - Protocols supported
+ - Version and SKU
+ - Supported identity providers list with documentation links
+- Licensing information for your application
+- Role-based access control for configuring SSO
+- SSO Configuration Steps
+ - UI configuration elements for SAML with expected values from the provider
+ - Service provider information to be passed to identity providers
+- If OIDC/OAuth, list of permissions required for consent with business justifications
+- Testing steps for pilot users
+- Troubleshooting information, including error codes and messages
+- Support mechanisms for users
+- Details about your SCIM endpoint, including the resources and attributes supported
+
+### Documentation on the Microsoft site
+
+When your application is added to the gallery, documentation is created that explains the step-by-step process. For an example, see [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). This documentation is created based on your submission to the gallery, and you can easily update it if you make changes to your application using your GitHub account.
+
+## Submit your application
+
+After you've tested that your application integration works with Azure AD, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign in to the portal, you're presented with one of two screens.
+
+- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team will add the account in the Microsoft Application Network portal.
+- If you see a "Request Access" page, then fill in the business justification and select **Request Access**.
+
+After the account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page. If you see the **Your sign-in was blocked** error while logging in, see [Troubleshoot sign-in to the Microsoft Application Network portal](troubleshoot-app-publishing.md).
+
+### Implementation-specific options
+
+On the Application Registration Form, select the feature that you want to enable. Select **OpenID Connect & OAuth 2.0**, **SAML 2.0/WS-Fed**, or **Password SSO (UserName & Password)**, depending on the feature that your application supports.
+
+If you're implementing a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) 2.0 endpoint for user provisioning, select **User Provisioning (SCIM 2.0)**. Download the schema to provide in the onboarding request. For more information, see [Export provisioning configuration and roll back to a known good state](../app-provisioning/export-import-provisioning-configuration.md). The schema that you configured is used when testing the non-gallery application to build the gallery application.
+
+You can track application requests by customer name at the Microsoft Application Network portal. For more information, see [Application requests by Customers](https://microsoft.sharepoint.com/teams/apponboarding/Apps/SitePages/AppRequestsByCustomers.aspx).
+
+### Timelines
+
+The timeline for the process of listing a SAML 2.0 or WS-Fed application in the gallery is 7 to 10 business days.
++
+The timeline for the process of listing an OpenID Connect application in the gallery is 2 to 5 business days.
++
+The timeline for the process of listing a SCIM provisioning application in the gallery is variable and depends on numerous factors.
+
+Not all applications can be onboarded. Per the terms and conditions, the choice may be made to not list an application. Onboarding applications is at the sole discretion of the onboarding team. If your application is declined, you should use the non-gallery provisioning application to satisfy your provisioning needs.
+
+Here's the flow of customer-requested applications.
++
+For any escalations, send email to the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com), and a response is sent as soon as possible.
++
+## Join the Microsoft partner network
+
+The Microsoft Partner Network provides instant access to exclusive resources, programs, tools, and connections. To join the network and create your go-to-market plan, see [Reach commercial customers](https://partner.microsoft.com/explore/commercial#gtm).
+
+## Next steps
+
+- Learn more about managing enterprise applications in [What is application management in Azure Active Directory?](what-is-application-management.md)
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-disk-csi.md
$ kubectl describe volumesnapshot azuredisk-volume-snapshot
Name: azuredisk-volume-snapshot Namespace: default Labels: <none>
-Annotations: API Version: snapshot.storage.k8s.io/v1beta1
+Annotations: API Version: snapshot.storage.k8s.io/v1
Kind: VolumeSnapshot Metadata: Creation Timestamp: 2020-08-27T05:27:58Z
Metadata:
snapshot.storage.kubernetes.io/volumesnapshot-bound-protection Generation: 1 Resource Version: 714582
- Self Link: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/azuredisk-volume-snapshot
+ Self Link: /apis/snapshot.storage.k8s.io/v1/namespaces/default/volumesnapshots/azuredisk-volume-snapshot
UID: dd953ab5-6c24-42d4-ad4a-f33180e0ef87 Spec: Source:
metadata:
name: managed-csi-shared provisioner: disk.csi.azure.com parameters:
- skuname: Premium_LRS # Currently shared disk is only available with premium SSD
+ skuname: Premium_LRS
maxShares: "2" cachingMode: None # ReadOnly cache is not available for premium SSD with maxShares>1 reclaimPolicy: Delete
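A persistent volume claim that consumes this class isn't shown in the snippet above; a hypothetical sketch might look like the following (shared Azure disks are consumed as raw block volumes, so the claim uses `volumeMode: Block` and `ReadWriteMany`; the claim name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-shared
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block                     # shared disks are attached as raw block devices
  storageClassName: managed-csi-shared
  resources:
    requests:
      storage: 256Gi                    # pick a size supported for disk sharing
```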
aks Cis Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cis-kubernetes.md
+
+ Title: Center for Internet Security (CIS) Kubernetes benchmark
+description: Learn how AKS applies the CIS Kubernetes benchmark
+ Last updated: 01/18/2022
+# Center for Internet Security (CIS) Kubernetes benchmark
+
+As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security hardening applied to AKS based on the CIS Kubernetes benchmark. For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md). For more information on the CIS benchmark, see [Center for Internet Security (CIS) Benchmarks][cis-benchmarks].
+
+## Kubernetes CIS benchmark
+
+The following are the results from the [CIS Kubernetes V1.20 Benchmark v1.0.0][cis-benchmark-kubernetes] recommendations on AKS.
+
+*Scored* recommendations affect the benchmark score if they are not applied, while *Not Scored* recommendations don't.
+
+CIS benchmarks provide two levels of security settings:
+
+* *L1*, or Level 1, recommends essential basic security requirements that can be configured on any system and should cause little or no interruption of service or reduced functionality.
+* *L2*, or Level 2, recommends security settings for environments requiring greater security that could result in some reduced functionality.
+
+Recommendations can have one of the following statuses:
+
+* *Pass* - The recommendation has been applied.
+* *Fail* - The recommendation has not been applied.
+* *N/A* - The recommendation relates to manifest file permission requirements that are not relevant to AKS. Kubernetes clusters by default use a manifest model to deploy the control plane pods, which rely on files from the node VM. The CIS Kubernetes benchmark recommends that these files meet certain permission requirements. AKS clusters use a Helm chart to deploy control plane pods and don't rely on files in the node VM.
+* *Depends on Environment* - The recommendation is applied in the user's specific environment and is not controlled by AKS. *Scored* recommendations affect the benchmark score whether the recommendation applies to the user's specific environment or not.
+* *Equivalent Control* - The recommendation has been implemented in a different, equivalent manner.
+
+| CIS ID | Recommendation description | Scoring type | Level | Status |
+|---|---|---|---|---|
+|1|Control Plane Components||||
+|1.1|Control Plane Node Configuration Files||||
+|1.1.1|Ensure that the API server pod specification file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.2|Ensure that the API server pod specification file ownership is set to root:root|Scored|L1|N/A|
+|1.1.3|Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.4|Ensure that the controller manager pod specification file ownership is set to root:root|Scored|L1|N/A|
+|1.1.5|Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.6|Ensure that the scheduler pod specification file ownership is set to root:root|Scored|L1|N/A|
+|1.1.7|Ensure that the etcd pod specification file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.8|Ensure that the etcd pod specification file ownership is set to root:root|Scored|L1|N/A|
+|1.1.9|Ensure that the Container Network Interface file permissions are set to 644 or more restrictive|Not Scored|L1|N/A|
+|1.1.10|Ensure that the Container Network Interface file ownership is set to root:root|Not Scored|L1|N/A|
+|1.1.11|Ensure that the etcd data directory permissions are set to 700 or more restrictive|Scored|L1|N/A|
+|1.1.12|Ensure that the etcd data directory ownership is set to etcd:etcd|Scored|L1|N/A|
+|1.1.13|Ensure that the admin.conf file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.14|Ensure that the admin.conf file ownership is set to root:root|Scored|L1|N/A|
+|1.1.15|Ensure that the scheduler.conf file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.16|Ensure that the scheduler.conf file ownership is set to root:root|Scored|L1|N/A|
+|1.1.17|Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.18|Ensure that the controller-manager.conf file ownership is set to root:root|Scored|L1|N/A|
+|1.1.19|Ensure that the Kubernetes PKI directory and file ownership is set to root:root|Scored|L1|N/A|
+|1.1.20|Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.21|Ensure that the Kubernetes PKI key file permissions are set to 600|Scored|L1|N/A|
+|1.2|API Server||||
+|1.2.1|Ensure that the `--anonymous-auth` argument is set to false|Not Scored|L1|Pass|
+|1.2.2|Ensure that the `--basic-auth-file` argument is not set|Scored|L1|Pass|
+|1.2.3|Ensure that the `--token-auth-file` parameter is not set|Scored|L1|Fail|
+|1.2.4|Ensure that the `--kubelet-https` argument is set to true|Scored|L1|Equivalent Control |
+|1.2.5|Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate|Scored|L1|Pass|
+|1.2.6|Ensure that the `--kubelet-certificate-authority` argument is set as appropriate|Scored|L1|Equivalent Control|
+|1.2.7|Ensure that the `--authorization-mode` argument is not set to AlwaysAllow|Scored|L1|Pass|
+|1.2.8|Ensure that the `--authorization-mode` argument includes Node|Scored|L1|Pass|
+|1.2.9|Ensure that the `--authorization-mode` argument includes RBAC|Scored|L1|Pass|
+|1.2.10|Ensure that the admission control plugin EventRateLimit is set|Not Scored|L1|Fail|
+|1.2.11|Ensure that the admission control plugin AlwaysAdmit is not set|Scored|L1|Pass|
+|1.2.12|Ensure that the admission control plugin AlwaysPullImages is set|Not Scored|L1|Fail|
+|1.2.13|Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used|Not Scored|L1|Fail|
+|1.2.14|Ensure that the admission control plugin ServiceAccount is set|Scored|L1|Pass|
+|1.2.15|Ensure that the admission control plugin NamespaceLifecycle is set|Scored|L1|Pass|
+|1.2.16|Ensure that the admission control plugin PodSecurityPolicy is set|Scored|L1|Fail|
+|1.2.17|Ensure that the admission control plugin NodeRestriction is set|Scored|L1|Fail|
+|1.2.18|Ensure that the `--insecure-bind-address` argument is not set|Scored|L1|Fail|
+|1.2.19|Ensure that the `--insecure-port` argument is set to 0|Scored|L1|Pass|
+|1.2.20|Ensure that the `--secure-port` argument is not set to 0|Scored|L1|Pass|
+|1.2.21|Ensure that the `--profiling` argument is set to false|Scored|L1|Pass|
+|1.2.22|Ensure that the `--audit-log-path` argument is set|Scored|L1|Pass|
+|1.2.23|Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate|Scored|L1|Equivalent Control|
+|1.2.24|Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate|Scored|L1|Equivalent Control|
+|1.2.25|Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate|Scored|L1|Pass|
+|1.2.26|Ensure that the `--request-timeout` argument is set as appropriate|Scored|L1|Pass|
+|1.2.27|Ensure that the `--service-account-lookup` argument is set to true|Scored|L1|Pass|
+|1.2.28|Ensure that the `--service-account-key-file` argument is set as appropriate|Scored|L1|Pass|
+|1.2.29|Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate|Scored|L1|Pass|
+|1.2.30|Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate|Scored|L1|Pass|
+|1.2.31|Ensure that the `--client-ca-file` argument is set as appropriate|Scored|L1|Pass|
+|1.2.32|Ensure that the `--etcd-cafile` argument is set as appropriate|Scored|L1|Pass|
+|1.2.33|Ensure that the `--encryption-provider-config` argument is set as appropriate|Scored|L1|Fail|
+|1.2.34|Ensure that encryption providers are appropriately configured|Scored|L1|Fail|
+|1.2.35|Ensure that the API Server only makes use of Strong Cryptographic Ciphers|Not Scored|L1|Pass|
+|1.3|Controller Manager||||
+|1.3.1|Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate|Scored|L1|Pass|
+|1.3.2|Ensure that the `--profiling` argument is set to false|Scored|L1|Pass|
+|1.3.3|Ensure that the `--use-service-account-credentials` argument is set to true|Scored|L1|Pass|
+|1.3.4|Ensure that the `--service-account-private-key-file` argument is set as appropriate|Scored|L1|Pass|
+|1.3.5|Ensure that the `--root-ca-file` argument is set as appropriate|Scored|L1|Pass|
+|1.3.6|Ensure that the RotateKubeletServerCertificate argument is set to true|Scored|L2|Pass|
+|1.3.7|Ensure that the `--bind-address` argument is set to 127.0.0.1|Scored|L1|Fail|
+|1.4|Scheduler||||
+|1.4.1|Ensure that the `--profiling` argument is set to false|Scored|L1|Pass|
+|1.4.2|Ensure that the `--bind-address` argument is set to 127.0.0.1|Scored|L1|Fail|
+|2|etcd||||
+|2.1|Ensure that the `--cert-file` and `--key-file` arguments are set as appropriate|Scored|L1|Pass|
+|2.2|Ensure that the `--client-cert-auth` argument is set to true|Scored|L1|Pass|
+|2.3|Ensure that the `--auto-tls` argument is not set to true|Scored|L1|Pass|
+|2.4|Ensure that the `--peer-cert-file` and `--peer-key-file` arguments are set as appropriate|Scored|L1|Pass|
+|2.5|Ensure that the `--peer-client-cert-auth` argument is set to true|Scored|L1|Pass|
+|2.6|Ensure that the `--peer-auto-tls` argument is not set to true|Scored|L1|Pass|
+|2.7|Ensure that a unique Certificate Authority is used for etcd|Not Scored|L2|Pass|
+|3|Control Plane Configuration||||
+|3.1|Authentication and Authorization||||
+|3.1.1|Client certificate authentication should not be used for users|Not Scored|L2|Pass|
+|3.2|Logging||||
+|3.2.1|Ensure that a minimal audit policy is created|Scored|L1|Pass|
+|3.2.2|Ensure that the audit policy covers key security concerns|Not Scored|L2|Pass|
+|4|Worker Nodes||||
+|4.1|Worker Node Configuration Files||||
+|4.1.1|Ensure that the kubelet service file permissions are set to 644 or more restrictive|Scored|L1|Pass|
+|4.1.2|Ensure that the kubelet service file ownership is set to root:root|Scored|L1|Pass|
+|4.1.3|Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive|Scored|L1|Pass|
+|4.1.4|Ensure that the proxy kubeconfig file ownership is set to root:root|Scored|L1|Pass|
+|4.1.5|Ensure that the kubelet.conf file permissions are set to 644 or more restrictive|Scored|L1|Pass|
+|4.1.6|Ensure that the kubelet.conf file ownership is set to root:root|Scored|L1|Pass|
+|4.1.7|Ensure that the certificate authorities file permissions are set to 644 or more restrictive|Scored|L1|Pass|
+|4.1.8|Ensure that the client certificate authorities file ownership is set to root:root|Scored|L1|Pass|
+|4.1.9|Ensure that the kubelet configuration file has permissions set to 644 or more restrictive|Scored|L1|Pass|
+|4.1.10|Ensure that the kubelet configuration file ownership is set to root:root|Scored|L1|Pass|
+|4.2|Kubelet||||
+|4.2.1|Ensure that the `--anonymous-auth` argument is set to false|Scored|L1|Pass|
+|4.2.2|Ensure that the `--authorization-mode` argument is not set to AlwaysAllow|Scored|L1|Pass|
+|4.2.3|Ensure that the `--client-ca-file` argument is set as appropriate|Scored|L1|Pass|
+|4.2.4|Ensure that the `--read-only-port` argument is set to 0|Scored|L1|Pass|
+|4.2.5|Ensure that the `--streaming-connection-idle-timeout` argument is not set to 0|Scored|L1|Pass|
+|4.2.6|Ensure that the `--protect-kernel-defaults` argument is set to true|Scored|L1|Pass|
+|4.2.7|Ensure that the `--make-iptables-util-chains` argument is set to true|Scored|L1|Pass|
+|4.2.8|Ensure that the `--hostname-override` argument is not set|Not Scored|L1|Pass|
+|4.2.9|Ensure that the `--event-qps` argument is set to 0 or a level which ensures appropriate event capture|Not Scored|L2|Pass|
+|4.2.10|Ensure that the `--tls-cert-file`and `--tls-private-key-file` arguments are set as appropriate|Scored|L1|Equivalent Control|
+|4.2.11|Ensure that the `--rotate-certificates` argument is not set to false|Scored|L1|Pass|
+|4.2.12|Ensure that the RotateKubeletServerCertificate argument is set to true|Scored|L1|Pass|
+|4.2.13|Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers|Not Scored|L1|Pass|
+|5|Policies||||
+|5.1|RBAC and Service Accounts||||
+|5.1.1|Ensure that the cluster-admin role is only used where required|Not Scored|L1|Depends on Environment|
+|5.1.2|Minimize access to secrets|Not Scored|L1|Depends on Environment|
+|5.1.3|Minimize wildcard use in Roles and ClusterRoles|Not Scored|L1|Depends on Environment|
+|5.1.4|Minimize access to create pods|Not Scored|L1|Depends on Environment|
+|5.1.5|Ensure that default service accounts are not actively used|Scored|L1|Depends on Environment|
+|5.1.6|Ensure that Service Account Tokens are only mounted where necessary|Not Scored|L1|Depends on Environment|
+|5.2|Pod Security Policies||||
+|5.2.1|Minimize the admission of privileged containers|Not Scored|L1|Depends on Environment|
+|5.2.2|Minimize the admission of containers wishing to share the host process ID namespace|Scored|L1|Depends on Environment|
+|5.2.3|Minimize the admission of containers wishing to share the host IPC namespace|Scored|L1|Depends on Environment|
+|5.2.4|Minimize the admission of containers wishing to share the host network namespace|Scored|L1|Depends on Environment|
+|5.2.5|Minimize the admission of containers with allowPrivilegeEscalation|Scored|L1|Depends on Environment|
+|5.2.6|Minimize the admission of root containers|Not Scored|L2|Depends on Environment|
+|5.2.7|Minimize the admission of containers with the NET_RAW capability|Not Scored|L1|Depends on Environment|
+|5.2.8|Minimize the admission of containers with added capabilities|Not Scored|L1|Depends on Environment|
+|5.2.9|Minimize the admission of containers with capabilities assigned|Not Scored|L2|Depends on Environment|
+|5.3|Network Policies and CNI||||
+|5.3.1|Ensure that the CNI in use supports Network Policies|Not Scored|L1|Pass|
+|5.3.2|Ensure that all Namespaces have Network Policies defined|Scored|L2|Depends on Environment|
+|5.4|Secrets Management||||
+|5.4.1|Prefer using secrets as files over secrets as environment variables|Not Scored|L1|Depends on Environment|
+|5.4.2|Consider external secret storage|Not Scored|L2|Depends on Environment|
+|5.5|Extensible Admission Control||||
+|5.5.1|Configure Image Provenance using ImagePolicyWebhook admission controller|Not Scored|L2|Depends on Environment|
+|5.6|General Policies||||
+|5.6.1|Create administrative boundaries between resources using namespaces|Not Scored|L1|Depends on Environment|
+|5.6.2|Ensure that the seccomp profile is set to docker/default in your pod definitions|Not Scored|L2|Depends on Environment|
+|5.6.3|Apply Security Context to Your Pods and Containers|Not Scored|L2|Depends on Environment|
+|5.6.4|The default namespace should not be used|Scored|L2|Depends on Environment|
+
+> [!NOTE]
+> In addition to the Kubernetes CIS benchmark, there is an [AKS CIS benchmark][cis-benchmark-aks] available as well.
+
+## Additional notes
+
+* The security hardened OS is built and maintained specifically for AKS and is **not** supported outside of the AKS platform.
+* To further reduce the attack surface area, some unnecessary kernel module drivers have been disabled in the OS.
+
+## Next steps
+
+For more information about AKS security, see the following articles:
+
+* [Azure Kubernetes Service (AKS)](./intro-kubernetes.md)
+* [AKS security considerations](./concepts-security.md)
+* [AKS best practices](./best-practices.md)
++
+[cis-benchmarks]: /compliance/regulatory/offering-CIS-Benchmark
+[cis-benchmark-aks]: https://www.cisecurity.org/benchmark/kubernetes/
+[cis-benchmark-kubernetes]: https://www.cisecurity.org/benchmark/kubernetes/
aks Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-identity.md
With the Azure RBAC integration, AKS will use a Kubernetes Authorization webhook
As shown in the above diagram, when using the Azure RBAC integration, all requests to the Kubernetes API will follow the same authentication flow as explained on the [Azure Active Directory integration section](#azure-ad-integration).
-If the identity making the request exists in Azure AD, Azure will team with Kubernetes RBAC to authorize the request. If the identity exists outside of Azure AD (i.e., a Kubernetes service account), authorization will deter to the normal Kubernetes RBAC.
+If the identity making the request exists in Azure AD, Azure will team with Kubernetes RBAC to authorize the request. If the identity exists outside of Azure AD (i.e., a Kubernetes service account), authorization will defer to the normal Kubernetes RBAC.
In this scenario, you use Azure RBAC mechanisms and APIs to assign users built-in roles or create custom roles, just as you would with Kubernetes roles.
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
As mentioned, virtual network peering is one way to access your private cluster.
> [!NOTE] > If you are using [Bring Your Own Route Table with kubenet](./configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet) and Bring Your Own DNS with Private Cluster, the cluster creation will fail. After the failed creation, associate the [RouteTable](./configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet) in the node resource group with the subnet, and then retry so that the creation succeeds.
-## Using a private endpoint connection
+## Use a private endpoint connection
A private endpoint can be set up so that an Azure Virtual Network doesn't need to be peered to communicate to the private cluster. To use a private endpoint, create a new private endpoint in your virtual network then create a link between your virtual network and a new private DNS zone.
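The rough Azure CLI sketch below illustrates that flow. It isn't a tested procedure: every resource name is a placeholder, and the `management` group ID and `privatelink.<region>.azmk8s.io` zone name are assumptions based on the documented pattern for AKS private clusters.

```azurecli
# Untested sketch; every name is a placeholder.
AKS_ID=$(az aks show --resource-group <cluster-rg> --name <cluster-name> --query id --output tsv)

# Create a private endpoint in the virtual network that needs to reach the cluster.
az network private-endpoint create --resource-group <app-rg> --name <pe-name> \
  --vnet-name <app-vnet> --subnet <app-subnet> \
  --private-connection-resource-id $AKS_ID --group-id management \
  --connection-name <connection-name>

# Create a private DNS zone for the cluster API server and link it to the same virtual network.
az network private-dns zone create --resource-group <app-rg> --name "privatelink.<region>.azmk8s.io"
az network private-dns link vnet create --resource-group <app-rg> \
  --zone-name "privatelink.<region>.azmk8s.io" --name <link-name> \
  --virtual-network <app-vnet> --registration-enabled false
```

You also need an A record in that zone that resolves the cluster's private FQDN to the private endpoint's IP address before name resolution from the linked virtual network works.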
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-access-restriction-policies.md
The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth
```xml <quota calls="number" bandwidth="kilobytes" renewal-period="seconds">
- <api name="API name" id="API id" calls="number" renewal-period="seconds" />
+ <api name="API name" id="API id" calls="number" renewal-period="seconds">
<operation name="operation name" id="operation id" calls="number" renewal-period="seconds" /> </api> </quota>
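As an illustration only, a filled-in `quota` policy might look like the following sketch. The API and operation names are hypothetical and the limits are arbitrary.

```xml
<!-- Illustrative values only; "echo-api" and "retrieve-resource" are hypothetical names. -->
<quota calls="10000" bandwidth="40000" renewal-period="3600">
    <api name="echo-api" calls="5000" renewal-period="3600">
        <operation name="retrieve-resource" calls="1000" renewal-period="3600" />
    </api>
</quota>
```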
app-service App Service Hybrid Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-hybrid-connections.md
In addition to the portal experience from within your app, you can create Hybrid
## Hybrid Connections and App Service plans ##
-App Service Hybrid Connections are only available in Basic, Standard, Premium, and Isolated pricing SKUs. There are limits tied to the pricing plan.
+App Service Hybrid Connections are only available in Basic, Standard, Premium, and Isolated pricing SKUs. Hybrid Connections aren't available for function apps in Consumption plans. There are limits tied to the pricing plan.
| Pricing plan | Number of Hybrid Connections usable in the plan | |-|-|
app-service Deploy Content Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-content-sync.md
Because of underlying differences in the APIs, **OneDrive for Business** is not
Start a sync by running the following command and replacing \<group-name> and \<app-name>: ```azurecli-interactive
-az webapp deployment source sync ΓÇô-resource-group <group-name> ΓÇô-name <app-name>
+az webapp deployment source sync --resource-group <group-name> --name <app-name>
``` # [Azure PowerShell](#tab/powershell)
Invoke-AzureRmResourceAction -ResourceGroupName <group-name> -ResourceType Micro
## Next steps > [!div class="nextstepaction"]
-> [Deploy from local Git repo](deploy-local-git.md)
+> [Deploy from local Git repo](deploy-local-git.md)
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-zip.md
This command restarts the app after deploying the ZIP package.
The following example uses the `--src-url` parameter to specify the URL of an Azure Storage account that the site should pull the ZIP from. ```azurecli-interactive
-az webapp deploy --resource-group <grou-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3
+az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3
``` # [Azure PowerShell](#tab/powershell)
az webapp deploy --resource-group <group-name> --name <app-name> --src-path ./<p
The following example uses the `--src-url` parameter to specify the URL of an Azure Storage account that the web app should pull the ZIP from. ```azurecli-interactive
-az webapp deploy --resource-group <grou-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.war?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3
+az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.war?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3
``` The CLI command uses the [Kudu publish API](#kudu-publish-api-reference) to deploy the package and can be fully customized.
app-service App Service App Service Environment Custom Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/app-service-app-service-environment-custom-settings.md
# Custom configuration settings for App Service Environments ## Overview
-Because App Service Environments (ASEs) are isolated to a single customer, there are certain configuration settings that can be applied exclusively to App Service Environments. This article documents the various specific customizations that are available for App Service Environments.
+Because App Service Environments are isolated to a single customer, there are certain configuration settings that can be applied exclusively to App Service Environments. This article documents the various specific customizations that are available for App Service Environments.
-If you do not have an App Service Environment, see [How to Create an ASEv3](./creation.md).
+If you do not have an App Service Environment, see [How to Create an App Service Environment v3](./creation.md).
You can store App Service Environment customizations by using an array in the new **clusterSettings** attribute. This attribute is found in the "Properties" dictionary of the *hostingEnvironments* Azure Resource Manager entity.
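One possible way to apply a **clusterSettings** entry is a generic Azure Resource Manager update from the Azure CLI. The sketch below is an untested illustration with placeholder names; `DisableTls1.0` is one of the settings covered later in this article. Note that `--set` replaces the whole `clusterSettings` array, so include any entries you already have.

```azurecli
# Untested sketch; names are placeholders. --set overwrites the existing clusterSettings array.
ASE_ID=$(az appservice ase show --name <ase-name> --resource-group <group-name> --query id --output tsv)
az resource update --ids $ASE_ID \
  --set properties.clusterSettings='[{"name":"DisableTls1.0","value":"1"}]'
```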
The App Service Environment operates as a black box system where you cannot see
} ], ```
-Setting InternalEncryption to true encrypts internal network traffic in your ASE between the front ends and workers, encrypts the pagefile and also encrypts the worker disks. After the InternalEncryption clusterSetting is enabled, there can be an impact to your system performance. When you make the change to enable InternalEncryption, your ASE will be in an unstable state until the change is fully propagated. Complete propagation of the change can take a few hours to complete, depending on how many instances you have in your ASE. We highly recommend that you do not enable InternalEncryption on an ASE while it is in use. If you need to enable InternalEncryption on an actively used ASE, we highly recommend that you divert traffic to a backup environment until the operation completes.
+Setting InternalEncryption to true encrypts internal network traffic in your App Service Environment between the front ends and workers, encrypts the pagefile, and encrypts the worker disks. After the InternalEncryption clusterSetting is enabled, there can be an impact to your system performance. When you make the change to enable InternalEncryption, your App Service Environment will be in an unstable state until the change is fully propagated. Propagation of the change can take a few hours, depending on how many instances you have in your App Service Environment. We highly recommend that you do not enable InternalEncryption on an App Service Environment while it is in use. If you need to enable InternalEncryption on an actively used App Service Environment, we highly recommend that you divert traffic to a backup environment until the operation completes.
## Disable TLS 1.0 and TLS 1.1 If you want to manage TLS settings on an app by app basis, then you can use the guidance provided with the [Enforce TLS settings](../configure-ssl-bindings.md#enforce-tls-versions) documentation.
-If you want to disable all inbound TLS 1.0 and TLS 1.1 traffic for all of the apps in an ASE, you can set the following **clusterSettings** entry:
+If you want to disable all inbound TLS 1.0 and TLS 1.1 traffic for all of the apps in an App Service Environment, you can set the following **clusterSettings** entry:
```json "clusterSettings": [
If you want to disable all inbound TLS 1.0 and TLS 1.1 traffic for all of the ap
The name of the setting says 1.0 but when configured, it disables both TLS 1.0 and TLS 1.1. ## Change TLS cipher suite order
-The ASE supports changing the cipher suite from the default. The default set of ciphers is the same set that is used in the multi-tenant service. Changing the cipher suites affects an entire App Service deployment making this only possible in the single-tenant ASE. There are two cipher suites required for an ASE; TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, and TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256. If you wish to operate your ASE with the strongest and most minimal set of cipher suites, then use just the two required ciphers. To configure your ASE to use just the ciphers that it requires, modify the **clusterSettings** as shown below.
+The App Service Environment supports changing the cipher suite from the default. The default set of ciphers is the same set that is used in the multi-tenant service. Changing the cipher suites affects an entire App Service deployment, making this only possible in the single-tenant App Service Environment. There are two cipher suites required for an App Service Environment: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 and TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256. If you wish to operate your App Service Environment with the strongest and most minimal set of cipher suites, use just the two required ciphers. To configure your App Service Environment to use just the ciphers that it requires, modify the **clusterSettings** as shown below.
```json "clusterSettings": [
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/how-to-migrate.md
> This article describes a feature that is currently in preview. You should use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page. >
-An App Service Environment (ASE) v2 can be migrated to an [App Service Environment v3](overview.md). To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+An App Service Environment v2 can be migrated to an [App Service Environment v3](overview.md). To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
## Prerequisites
az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name
## 3. Validate migration is supported
-The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. For an estimate of when you can migrate, see the [timeline](migrate.md#preview-limitations). If your environment [won't be supported for migration](migrate.md#migration-feature-limitations) or you want to migrate to ASEv3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. For an estimate of when you can migrate, see the [timeline](migrate.md#preview-limitations). If your environment [won't be supported for migration](migrate.md#migration-feature-limitations) or you want to migrate to App Service Environment v3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
```azurecli az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migrate.md
> This article describes a feature that is currently in preview. You should use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page. >
-App Service can now migrate your App Service Environment (ASE) v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+App Service can now migrate your App Service Environment v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
## Supported scenarios
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migration-alternatives.md
> The App Service Environment v3 [migration feature](migrate.md) is now available in preview for a set of supported environment configurations. Consider that feature which provides an automated migration path to [App Service Environment v3](overview.md). >
-If you're currently using App Service Environment (ASE) v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#preview-limitations). Otherwise, you can choose to use one of the alternative migration options given below.
+If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#preview-limitations). Otherwise, you can choose to use one of the alternative migration options given below.
If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the alternative methods to migrate to App Service Environment v3.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview.md
Last updated 11/15/2021 -+ # App Service Environment overview
-> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
- The Azure App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale. This capability can host your: - Windows web apps
The Azure App Service Environment is an Azure App Service feature that provides
- Functions - Logic Apps (Standard)
-App Service Environments (ASEs) are appropriate for application workloads that require:
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with Isolated v2 App Service plans.
+>
+
+App Service Environments are appropriate for application workloads that require:
- High scale. - Isolation and secure network access. - High memory utilization.-- High requests per second (RPS). You can make multiple ASEs in a single Azure region or across multiple Azure regions. This flexibility makes an ASE ideal for horizontally scaling stateless applications with a high RPS requirement.
+- High requests per second (RPS). You can make multiple App Service Environments in a single Azure region or across multiple Azure regions. This flexibility makes an App Service Environment ideal for horizontally scaling stateless applications with a high RPS requirement.
-ASE's host applications from only one customer and do so in one of their virtual networks. Customers have fine-grained control over inbound and outbound application network traffic. Applications can establish high-speed secure connections over VPNs to on-premises corporate resources.
+App Service Environments host applications from only one customer and do so in one of their virtual networks. Customers have fine-grained control over inbound and outbound application network traffic. Applications can establish high-speed secure connections over VPNs to on-premises corporate resources.
## Usage scenarios
The App Service Environment has many use cases including:
- Network isolated application hosting - Multi-tier applications
-There are many networking features that enable apps in the multi-tenant App Service to reach network isolated resources or become network isolated themselves. These features are enabled at the application level. With an ASE, there's no added configuration required for the apps to be in the virtual network. The apps are deployed into a network isolated environment that is already in a virtual network. On top of the ASE hosting network isolated apps, it's also a single-tenant system. There are no other customers using the ASE. If you really need a complete isolation story, you can also get your ASE deployed onto dedicated hardware.
+There are many networking features that enable apps in the multi-tenant App Service to reach network isolated resources or become network isolated themselves. These features are enabled at the application level. With an App Service Environment, there's no added configuration required for the apps to be in the virtual network. The apps are deployed into a network isolated environment that is already in a virtual network. If you really need a complete isolation story, you can also get your App Service Environment deployed onto dedicated hardware.
## Dedicated environment
-The ASE is a single tenant deployment of the Azure App Service that runs in your virtual network.
+The App Service Environment is a single tenant deployment of the Azure App Service that runs in your virtual network.
-Applications are hosted in App Service plans, which are created in an App Service Environment. The App Service plan is essentially a provisioning profile for an application host. As you scale your App Service plan out, you create more application hosts with all of the apps in that App Service plan on each host. A single ASEv3 can have up to 200 total App Service plan instances across all of the App Service plans combined. A single Isolated v2 App Service plan can have up to 100 instances by itself.
-
-> [!NOTE]
-> It is possible for the App Service plans and the apps they host to be provisioned in a different Subscription to the App Service Environment. This can be useful for segregating access. The prerequisties are:
-> - The Subscriptions must share the same Azure AD Tenant
-> - The App Service Environment, App Service plans and apps must be in the same region
-> - Today you cannot create App Service Plans and apps in a different subscription using the Azure Portal. Instead, use either [Azure REST API](/rest/api/appservice/app-service-plans/create-or-update) or [Azure Resource Manager Templates](/azure/templates/microsoft.web/allversions).
+Applications are hosted in App Service plans, which are created in an App Service Environment. The App Service plan is essentially a provisioning profile for an application host. As you scale your App Service plan out, you create more application hosts with all of the apps in that App Service plan on each host. A single App Service Environment v3 can have up to 200 total App Service plan instances across all of the App Service plans combined. A single Isolated v2 App Service plan can have up to 100 instances by itself.
## Virtual network support
-The ASE feature is a deployment of the Azure App Service into a single subnet in a customer's virtual network. When you deploy an app into an ASE, the app will be exposed on the inbound address assigned to the ASE. If your ASE is deployed with an internal virtual IP (VIP), then the inbound address for all of the apps will be an address in the ASE subnet. If your ASE is deployed with an external VIP, then the inbound address will be an internet addressable address and your apps will be in public DNS.
+The App Service Environment feature is a deployment of the Azure App Service into a single subnet in a customer's virtual network. When you deploy an app into an App Service Environment, the app will be exposed on the inbound address assigned to the App Service Environment. If your App Service Environment is deployed with an internal virtual IP (VIP), then the inbound address for all of the apps will be an address in the App Service Environment subnet. If your App Service Environment is deployed with an external VIP, then the inbound address will be an internet addressable address and your apps will be in public DNS.
-The number of addresses used by an ASEv3 in its subnet will vary based on how many instances you have along with how much traffic. There are infrastructure roles that are automatically scaled depending on the number of App Service plans and the load. The recommended size for your ASEv3 subnet is a `/24` CIDR block with 256 addresses in it as that can host an ASEv3 scaled out to its limit.
+The number of addresses used by an App Service Environment v3 in its subnet will vary based on how many instances you have along with how much traffic. There are infrastructure roles that are automatically scaled depending on the number of App Service plans and the load. The recommended size for your App Service Environment v3 subnet is a `/24` CIDR block with 256 addresses in it as that can host an App Service Environment v3 scaled out to its limit.
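As a rough sketch (untested, with placeholder names and an arbitrary address range), creating a `/24` subnet and an internal App Service Environment v3 in it might look like this:

```azurecli
# Untested sketch; names and address ranges are placeholders.
az network vnet subnet create --resource-group <group-name> --vnet-name <vnet-name> \
  --name <ase-subnet> --address-prefixes 10.0.1.0/24 \
  --delegations Microsoft.Web/hostingEnvironments
az appservice ase create --resource-group <group-name> --name <ase-name> \
  --vnet-name <vnet-name> --subnet <ase-subnet> --kind ASEv3 --virtual-ip-type Internal
```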
-The apps in an ASE do not need any features enabled to access resources in the same virtual network that the ASE is in. If the ASE virtual network is connected to another network, then the apps in the ASE can access resources in those extended networks. Traffic can be blocked by user configuration on the network.
+The apps in an App Service Environment do not need any features enabled to access resources in the same virtual network that the App Service Environment is in. If the App Service Environment virtual network is connected to another network, then the apps in the App Service Environment can access resources in those extended networks. Traffic can be blocked by user configuration on the network.
-The multi-tenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. Those networking features enable your apps to act as if they were deployed in a virtual network. The apps in an ASEv3 do not need any configuration to be in the virtual network. A benefit of using an ASE over the multi-tenant service is that any network access controls to the ASE hosted apps is external to the application configuration. With the apps in the multi-tenant service, you must enable the features on an app by app basis and use RBAC or policy to prevent any configuration changes.
+The multi-tenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. Those networking features enable your apps to act as if they were deployed in a virtual network. The apps in an App Service Environment v3 do not need any configuration to be in the virtual network. A benefit of using an App Service Environment over the multi-tenant service is that any network access controls to the App Service Environment-hosted apps are external to the application configuration. With the apps in the multi-tenant service, you must enable the features on an app-by-app basis and use RBAC or policy to prevent any configuration changes.
## Feature differences
-Compared to earlier versions of the ASE, there are some differences with ASEv3. With ASEv3:
+Compared to earlier versions of the App Service Environment, there are some differences with App Service Environment v3. With App Service Environment v3:
- There are no networking dependencies in the customer virtual network. You can secure all inbound and outbound traffic as desired. Outbound traffic can also be routed as desired. -- You can deploy it enabled for zone redundancy. Zone redundancy can only be set during ASEv3 creation and only in regions where all ASEv3 dependencies are zone redundant.
+- You can deploy it enabled for zone redundancy. Zone redundancy can only be set during creation and only in regions where all App Service Environment v3 dependencies are zone redundant.
- You can deploy it on a dedicated host group. Host group deployments are not zone redundant. -- Scaling is much faster than with ASEv2. While scaling still is not immediate as in the multi-tenant service, it is a lot faster.-- Front end scaling adjustments are no longer required. The ASEv3 front ends automatically scale to meet needs and are deployed on better hosts. -- Scaling no longer blocks other scale operations within the ASEv3 instance. Only one scale operation can be in effect for a combination of OS and size. For example, while your Windows small App Service plan was scaling, you could kick off a scale operation to run at the same time on a Windows medium or anything else other than Windows small. -- Apps in an internal VIP ASEv3 can be reached across global peering. Access across global peering was not possible with previous versions.
+- Scaling is much faster than with App Service Environment v2. While scaling still is not immediate as in the multi-tenant service, it is a lot faster.
+- Front end scaling adjustments are no longer required. The App Service Environment v3 front ends automatically scale to meet needs and are deployed on better hosts.
+- Scaling no longer blocks other scale operations within the App Service Environment v3 instance. Only one scale operation can be in effect for a combination of OS and size. For example, while your Windows small App Service plan is scaling, you can kick off a scale operation to run at the same time on a Windows medium or anything else other than Windows small.
+- Apps in an internal VIP App Service Environment v3 can be reached across global peering. Access across global peering was not possible with previous versions.
-There are a few features that are not available in ASEv3 that were available in earlier versions of the ASE. In ASEv3, you can't:
+There are a few features that are not available in App Service Environment v3 that were available in earlier versions of the App Service Environment. In App Service Environment v3, you can't:
- send SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25 - deploy your apps with FTP - use remote debug with your apps-- upgrade yet from ASEv2 - monitor your traffic with Network Watcher or NSG Flow - configure a IP-based TLS/SSL binding with your apps - configure custom domain suffix
There are a few features that are not available in ASEv3 that were available in
## Pricing
-With ASEv3, there is a different pricing model depending on the type of ASE deployment you have. The three pricing models are:
+With App Service Environment v3, there is a different pricing model depending on the type of App Service Environment deployment you have. The three pricing models are:
-- **ASEv3**: If ASE is empty, there is a charge as if you had one instance of Windows I1v2. The one instance charge is not an additive charge but is only applied if the ASE is empty.-- **Zone redundant ASEv3**: There is a minimum charge of nine instances. There is no added charge for availability zone support if you have nine or more App Service plan instances. If you have less than nine instances (of any size) across App Service plans in the zone redundant ASE, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.-- **Dedicated host ASEv3**: With a dedicated host deployment, you are charged for two dedicated hosts per our pricing at ASEv3 creation then a small percentage of the Isolated V2 rate per core charge as you scale.
+- **App Service Environment v3**: If the App Service Environment is empty, there is a charge as if you had one instance of Windows I1v2. The one-instance charge is not additive; it only applies while the App Service Environment is empty.
+- **Zone redundant App Service Environment v3**: There is a minimum charge of nine instances. There is no added charge for availability zone support if you have nine or more App Service plan instances. If you have less than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.
+- **Dedicated host App Service Environment v3**: With a dedicated host deployment, you are charged for two dedicated hosts per our pricing when the App Service Environment v3 is created, and then a small percentage of the Isolated v2 per-core rate as you scale.
Reserved Instance pricing for Isolated v2 is available and is described in [How reservation discounts apply to Azure App Service](../../cost-management-billing/reservations/reservation-discount-app-service.md). The pricing, along with reserved instance pricing, is available at [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) under **Isolated v2 plan**. ## Regions
-The ASEv3 is available in the following regions.
+The App Service Environment v3 is available in the following regions.
-|Normal and dedicated host ASEv3 regions|AZ ASEv3 regions|
+|Normal and dedicated host regions|Availability zone regions|
||| |Australia East|Australia East| |Australia Southeast|Brazil South|
The ASEv3 is available in the following regions.
## App Service Environment v2
-App Service Environment has three versions: ASEv1, ASEv2, and ASEv3. The preceding information was based on ASEv3. To learn more about ASEv2, see [App Service Environment v2 introduction](./intro.md).
+App Service Environment has three versions: App Service Environment v1, App Service Environment v2, and App Service Environment v3. The preceding information was based on App Service Environment v3. To learn more about App Service Environment v2, see [App Service Environment v2 introduction](./intro.md).
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-vnet-integration.md
When regional virtual network integration is enabled, your app makes outbound ca
When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network and outbound traffic to the internet will go through the same channels as normal.
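For example, one documented way to enable all traffic routing on an app is the `WEBSITE_VNET_ROUTE_ALL` app setting; the sketch below uses placeholder names.

```azurecli
# Sketch with placeholder names; WEBSITE_VNET_ROUTE_ALL=1 sends all outbound traffic into the virtual network.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
  --settings WEBSITE_VNET_ROUTE_ALL=1
```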
-The feature supports only one virtual interface per worker. One virtual interface per worker means one regional virtual network integration per App Service plan. All the apps in the same App Service plan can use the same virtual network integration. If you need an app to connect to another virtual network, you need to create another App Service plan. The virtual interface used isn't a resource that customers have direct access to.
+The feature supports only one virtual interface per worker. One virtual interface per worker means one regional virtual network integration per App Service plan. All the apps in the same App Service plan can only use the same virtual network integration to a specific subnet. If you need an app to connect to another virtual network or another subnet in the same virtual network, you need to create another App Service plan. The virtual interface used isn't a resource that customers have direct access to.
Because of the nature of how this technology operates, the traffic that's used with virtual network integration doesn't show up in Azure Network Watcher or NSG flow logs.
When you scale up or down in size, the required address space is doubled for a s
<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point.
-Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses.
+Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you create subnets in the Azure portal as part of integrating with the virtual network, a minimum size of `/27` is required.
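As a rough sketch (untested, with placeholder names and an arbitrary address range), creating a dedicated `/26` integration subnet and connecting an app to it might look like this:

```azurecli
# Untested sketch; names and address ranges are placeholders.
az network vnet subnet create --resource-group <group-name> --vnet-name <vnet-name> \
  --name <integration-subnet> --address-prefixes 10.0.2.0/26 \
  --delegations Microsoft.Web/serverFarms
az webapp vnet-integration add --resource-group <group-name> --name <app-name> \
  --vnet <vnet-name> --subnet <integration-subnet>
```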
When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the preexisting virtual network integration.
application-gateway Application Gateway Autoscaling Zone Redundant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-autoscaling-zone-redundant.md
Title: Autoscaling and Zone-redundant Application Gateway v2
-description: This article introduces the Azure Application Standard_v2 and WAF_v2 SKU, which includes Autoscaling and Zone-redundant features.
+ Title: Scaling and Zone-redundant Application Gateway v2
+description: This article introduces the Azure Application Standard_v2 and WAF_v2 SKU Autoscaling and Zone-redundant features.
Previously updated : 12/17/2021 Last updated : 01/18/2022
-# Autoscaling and Zone-redundant Application Gateway v2
-
-Application Gateway is available under a Standard_v2 SKU. Web Application Firewall (WAF) is available under a WAF_v2 SKU. The v2 SKU offers performance enhancements and adds support for critical new features like autoscaling, zone redundancy, and support for static VIPs. Existing features under the Standard and WAF SKU continue to be supported in the new v2 SKU, with a few exceptions listed in [comparison](#differences-from-v1-sku) section.
-
-The new v2 SKU includes the following enhancements:
--- **Autoscaling**: Application Gateway or WAF deployments under the autoscaling SKU can scale out or in based on changing traffic load patterns. Autoscaling also removes the requirement to choose a deployment size or instance count during provisioning. This SKU offers true elasticity. In the Standard_v2 and WAF_v2 SKU, Application Gateway can operate both in fixed capacity (autoscaling disabled) and in autoscaling enabled mode. Fixed capacity mode is useful for scenarios with consistent and predictable workloads. Autoscaling mode is beneficial in applications that see variance in application traffic.-- **Zone redundancy**: An Application Gateway or WAF deployment can span multiple Availability Zones, removing the need to provision separate Application Gateway instances in each zone with a Traffic Manager. You can choose a single zone or multiple zones where Application Gateway instances are deployed, which makes it more resilient to zone failure. The backend pool for applications can be similarly distributed across availability zones.-
- Zone redundancy is available only where Azure Zones are available. In other regions, all other features are supported. For more information, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md)
-- **Static VIP**: Application Gateway v2 SKU supports the static VIP type exclusively. This ensures that the VIP associated with the application gateway doesn't change for the lifecycle of the deployment, even after a restart. There isn't a static VIP in v1, so you must use the application gateway URL instead of the IP address for domain name routing to App Services via the application gateway.-- **Header Rewrite**: Application Gateway allows you to add, remove, or update HTTP request and response headers with v2 SKU. For more information, see [Rewrite HTTP headers with Application Gateway](./rewrite-http-headers-url.md)-- **Key Vault Integration**: Application Gateway v2 supports integration with Key Vault for server certificates that are attached to HTTPS enabled listeners. For more information, see [TLS termination with Key Vault certificates](key-vault-certs.md).-- **Azure Kubernetes Service Ingress Controller**: The Application Gateway v2 Ingress Controller allows the Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) known as AKS Cluster. For more information, see [What is Application Gateway Ingress Controller?](ingress-controller-overview.md).-- **Performance enhancements**: The v2 SKU offers up to 5X better TLS offload performance as compared to the Standard/WAF SKU.-- **Faster deployment and update time** The v2 SKU provides faster deployment and update time as compared to Standard/WAF SKU. This also includes WAF configuration changes.-
-![Diagram of auto-scaling zone.](./media/application-gateway-autoscaling-zone-redundant/application-gateway-autoscaling-zone-redundant.png)
-
-## Supported regions
-
-The Standard_v2 and WAF_v2 SKU is available in the following regions: North Central US, South Central US, West US, West US 2, East US, East US 2, Central US, North Europe, West Europe, Southeast Asia, France Central, UK West, Japan East, Japan West, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, East Asia, Korea Central, Korea South, UK South, Central India, West India, South India,Jio India West,Norway East,Switzerland North,UAE North,South Arica North,Germany West Central.
-
-## Pricing
-
-With the v2 SKU, the pricing model is driven by consumption and is no longer attached to instance counts or sizes. The v2 SKU pricing has two components:
--- **Fixed price** - This is hourly (or partial hour) price to provision a Standard_v2 or WAF_v2 Gateway. Please note that 0 additional minimum instances still ensures high availability of the service which is always included with fixed price.-- **Capacity Unit price** - This is a consumption-based cost that is charged in addition to the fixed cost. Capacity unit charge is also computed hourly or partial hourly. There are three dimensions to capacity unit - compute unit, persistent connections, and throughput. Compute unit is a measure of processor capacity consumed. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing. Persistent connection is a measure of established TCP connections to the application gateway in a given billing interval. Throughput is average Megabits/sec processed by the system in a given billing interval. The billing is done at a Capacity Unit level for anything above the reserved instance count.-
-Each capacity unit is composed of at most: 1 compute unit, 2500 persistent connections, and 2.22-Mbps throughput.
-
-To learn more, see [Understanding pricing](understanding-pricing.md).
-
-## Scaling Application Gateway and WAF v2
+# Scaling Application Gateway v2 and WAF v2
Application Gateway and WAF can be configured to scale in two modes: -- **Autoscaling** - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale up or down based on application traffic requirements. This mode offers better elasticity to your application and eliminates the need to guess the application gateway size or instance count. This mode also allows you to save cost by not requiring the gateway to run at peak provisioned capacity for anticipated maximum traffic load. You must specify a minimum and optionally maximum instance count. Minimum capacity ensures that Application Gateway and WAF v2 don't fall below the minimum instance count specified, even in the absence of traffic. Each instance is roughly equivalent to 10 additional reserved Capacity Units. Zero signifies no reserved capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which ensures that the Application Gateway doesn't scale beyond the specified number of instances. You will only be billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The default value for maximum instance count is 20 if not specified.-- **Manual** - You can alternatively choose Manual mode where the gateway won't autoscale. In this mode, if there is more traffic than what Application Gateway or WAF can handle, it could result in traffic loss. With manual mode, specifying instance count is mandatory. Instance count can vary from 1 to 125 instances.
+- **Autoscaling** - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale up or down based on application traffic requirements. This mode offers better elasticity to your application and eliminates the need to guess the application gateway size or instance count. This mode also allows you to save cost by not requiring the gateway to run at peak-provisioned capacity for expected maximum traffic load. You must specify a minimum and optionally maximum instance count. Minimum capacity ensures that Application Gateway and WAF v2 don't fall below the minimum instance count specified, even without traffic. Each instance is roughly equivalent to 10 more reserved Capacity Units. Zero signifies no reserved capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which ensures that the Application Gateway doesn't scale beyond the specified number of instances. You'll only be billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The default value for maximum instance count is 20 if not specified.
+- **Manual** - You can also choose Manual mode where the gateway won't autoscale. In this mode, if there's more traffic than what Application Gateway or WAF can handle, it could result in traffic loss. With manual mode, specifying instance count is mandatory. Instance count can vary from 1 to 125 instances.
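For an existing v2 gateway, one possible way to adjust these bounds is a generic property update from the Azure CLI; the sketch below is untested and uses placeholder names.

```azurecli
# Untested sketch; names are placeholders. Sets autoscaling bounds on an existing v2 gateway.
az network application-gateway update --resource-group <group-name> --name <gateway-name> \
  --set autoscaleConfiguration.minCapacity=2 autoscaleConfiguration.maxCapacity=10
```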
## Autoscaling and High Availability
-Azure Application Gateways are always deployed in a highly available fashion. The service is made out of multiple instances that are created as configured (if autoscaling is disabled) or required by the application load (if autoscaling is enabled). Note that from the user's perspective you do not necessarily have visibility into the individual instances, but just into the Application Gateway service as a whole. If a certain instance has a problem and stops being functional, Azure Application Gateway will transparently create a new instance.
-
-Please note that even if you configure autoscaling with zero minimum instances the service will still be highly available, which is always included with the fixed price.
-
-However, creating a new instance can take some time (around six or seven minutes). Hence, if you do not want to cope with this downtime you can configure a minimum instance count of 2, ideally with Availability Zone support. This way you will have at least two instances inside of your Azure Application Gateway under normal circumstances, so if one of them had a problem the other will try to cope with the traffic, during the time a new instance is being created. Note that an Azure Application Gateway instance can support around 10 Capacity Units, so depending on how much traffic you typically have you might want to configure your minimum instance autoscaling setting to a value higher than 2.
-
-## Feature comparison between v1 SKU and v2 SKU
+Azure Application Gateways are always deployed in a highly available fashion. The service is made out of multiple instances that are created as configured (if autoscaling is disabled) or required by the application load (if autoscaling is enabled). Note that from the user's perspective you don't necessarily have visibility into the individual instances, but just into the Application Gateway service as a whole. If a certain instance has a problem and stops being functional, Azure Application Gateway will transparently create a new instance.
-The following table compares the features available with each SKU.
+Even if you configure autoscaling with zero minimum instances, the service will still be highly available, which is always included with the fixed price.
-| Feature | v1 SKU | v2 SKU |
-| - | -- | -- |
-| Autoscaling | | &#x2713; |
-| Zone redundancy | | &#x2713; |
-| Static VIP | | &#x2713; |
-| Azure Kubernetes Service (AKS) Ingress controller | | &#x2713; |
-| Azure Key Vault integration | | &#x2713; |
-| Rewrite HTTP(S) headers | | &#x2713; |
-| URL-based routing | &#x2713; | &#x2713; |
-| Multiple-site hosting | &#x2713; | &#x2713; |
-| Traffic redirection | &#x2713; | &#x2713; |
-| Web Application Firewall (WAF) | &#x2713; | &#x2713; |
-| WAF custom rules | | &#x2713; |
-| WAF policy associations | | &#x2713; |
-| Transport Layer Security (TLS)/Secure Sockets Layer (SSL) termination | &#x2713; | &#x2713; |
-| End-to-end TLS encryption | &#x2713; | &#x2713; |
-| Session affinity | &#x2713; | &#x2713; |
-| Custom error pages | &#x2713; | &#x2713; |
-| WebSocket support | &#x2713; | &#x2713; |
-| HTTP/2 support | &#x2713; | &#x2713; |
-| Connection draining | &#x2713; | &#x2713; |
+However, creating a new instance can take some time (around six or seven minutes). If you don't want to have this downtime, you can configure a minimum instance count of two, ideally with Availability Zone support. This way you'll have at least two instances in your Azure Application Gateway under normal circumstances. So if one of them has a problem, the other will try to handle the traffic while a new instance is being created. An Azure Application Gateway instance can support around 10 Capacity Units, so depending on how much traffic you typically have, you might want to configure your minimum instance autoscaling setting to a value higher than two.
-> [!NOTE]
-> The autoscaling v2 SKU now supports [default health probes](application-gateway-probe-overview.md#default-health-probe) to automatically monitor the health of all resources in its back-end pool and highlight those backend members that are considered unhealthy. The default health probe is automatically configured for backends that don't have any custom probe configuration. To learn more, see [health probes in application gateway](application-gateway-probe-overview.md).
-## Differences from v1 SKU
-
-This section describes features and limitations of the v2 SKU that differ from the v1 SKU.
-
-|Difference|Details|
-|--|--|
-|Authentication certificate|Not supported.<br>For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku).|
-|Mixing Standard_v2 and Standard Application Gateway on the same subnet|Not supported|
-|User-Defined Route (UDR) on Application Gateway subnet|Supported (specific scenarios). In preview.<br> For more information about supported scenarios, see [Application Gateway configuration overview](configuration-infrastructure.md#supported-user-defined-routes).|
-|NSG for Inbound port range| - 65200 to 65535 for Standard_v2 SKU<br>- 65503 to 65534 for Standard SKU.<br>For more information, see the [FAQ](application-gateway-faq.yml#are-network-security-groups-supported-on-the-application-gateway-subnet).|
-|Performance logs in Azure diagnostics|Not supported.<br>Azure metrics should be used.|
-|Billing|Billing scheduled to start on July 1, 2019.|
-|FIPS mode|These are currently not supported.|
-|ILB only mode|This is currently not supported. Public and ILB mode together is supported.|
-|Net watcher integration|Not supported.|
-|Microsoft Defender for Cloud integration|Not yet available.
-
-## Migrate from v1 to v2
-
-An Azure PowerShell script is available in the PowerShell gallery to help you migrate from your v1 Application Gateway/WAF to the v2 Autoscaling SKU. This script helps you copy the configuration from your v1 gateway. Traffic migration is still your responsibility. For more information, see [Migrate Azure Application Gateway from v1 to v2](migrate-v1-v2.md).
## Next steps -- [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md)
+- Learn more about [Application Gateway v2](overview-v2.md)
- [Create an autoscaling, zone redundant application gateway with a reserved virtual IP address using Azure PowerShell](tutorial-autoscale-ps.md)-- Learn more about [Application Gateway](overview.md).-- Learn more about [Azure Firewall](../firewall/overview.md).+
application-gateway Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/features.md
Previously updated : 09/25/2020 Last updated : 01/18/2022
Application Gateway includes the following features: -- [Secure Sockets Layer (SSL/TLS) termination](#secure-sockets-layer-ssltls-termination)-- [Autoscaling](#autoscaling)-- [Zone redundancy](#zone-redundancy)-- [Static VIP](#static-vip)-- [Web Application Firewall](#web-application-firewall)-- [Ingress Controller for AKS](#ingress-controller-for-aks)-- [URL-based routing](#url-based-routing)-- [Multiple-site hosting](#multiple-site-hosting)-- [Redirection](#redirection)-- [Session affinity](#session-affinity)-- [Websocket and HTTP/2 traffic](#websocket-and-http2-traffic)-- [Connection draining](#connection-draining)-- [Custom error pages](#custom-error-pages)-- [Rewrite HTTP headers and URL](#rewrite-http-headers-and-url)-- [Sizing](#sizing) ## Secure Sockets Layer (SSL/TLS) termination
For more information, see [Overview of SSL termination and end to end SSL with A
Application Gateway Standard_v2 supports autoscaling and can scale up or down based on changing traffic load patterns. Autoscaling also removes the requirement to choose a deployment size or instance count during provisioning.
-For more information about the Application Gateway Standard_v2 features, see [Autoscaling v2 SKU](application-gateway-autoscaling-zone-redundant.md).
+For more information about the Application Gateway Standard_v2 features, see [What is Azure Application Gateway v2?](overview-v2.md).
## Zone redundancy
The following table shows an average performance throughput for each application
## Version feature comparison
-For an Application Gateway v1-v2 feature comparison, see [Autoscaling and Zone-redundant Application Gateway v2](application-gateway-autoscaling-zone-redundant.md#feature-comparison-between-v1-sku-and-v2-sku)
+For an Application Gateway v1-v2 feature comparison, see [What is Azure Application Gateway v2?](overview-v2.md#feature-comparison-between-v1-sku-and-v2-sku).
## Next steps
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/overview-v2.md
+
+ Title: What is Azure Application Gateway v2?
+description: Learn about Azure Application Gateway v2 features
++++ Last updated : 01/18/2022++++
+# What is Azure Application Gateway v2?
+
+Application Gateway is available under a Standard_v2 SKU. Web Application Firewall (WAF) is available under a WAF_v2 SKU. The v2 SKU offers performance enhancements and adds support for critical new features like autoscaling, zone redundancy, and support for static VIPs. Existing features under the Standard and WAF SKU continue to be supported in the new v2 SKU, with a few exceptions listed in the [comparison](#differences-from-v1-sku) section.
+
+The new v2 SKU includes the following enhancements:
+
+- **Autoscaling**: Application Gateway or WAF deployments under the autoscaling SKU can scale out or in based on changing traffic load patterns. Autoscaling also removes the requirement to choose a deployment size or instance count during provisioning. This SKU offers true elasticity. In the Standard_v2 and WAF_v2 SKU, Application Gateway can operate both in fixed capacity (autoscaling disabled) and in autoscaling enabled mode. Fixed capacity mode is useful for scenarios with consistent and predictable workloads. Autoscaling mode is beneficial in applications that see variance in application traffic.
+- **Zone redundancy**: An Application Gateway or WAF deployment can span multiple Availability Zones, removing the need to provision separate Application Gateway instances in each zone with a Traffic Manager. You can choose a single zone or multiple zones where Application Gateway instances are deployed, which makes it more resilient to zone failure. The backend pool for applications can be similarly distributed across availability zones.
+
+ Zone redundancy is available only where Azure Zones are available. In other regions, all other features are supported. For more information, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md).
+- **Static VIP**: Application Gateway v2 SKU supports the static VIP type exclusively. This ensures that the VIP associated with the application gateway doesn't change for the lifecycle of the deployment, even after a restart. There isn't a static VIP in v1, so you must use the application gateway URL instead of the IP address for domain name routing to App Services via the application gateway.
+- **Header Rewrite**: Application Gateway allows you to add, remove, or update HTTP request and response headers with the v2 SKU. For more information, see [Rewrite HTTP headers with Application Gateway](./rewrite-http-headers-url.md).
+- **Key Vault Integration**: Application Gateway v2 supports integration with Key Vault for server certificates that are attached to HTTPS enabled listeners. For more information, see [TLS termination with Key Vault certificates](key-vault-certs.md).
+- **Azure Kubernetes Service Ingress Controller**: The Application Gateway v2 Ingress Controller allows the Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) cluster. For more information, see [What is Application Gateway Ingress Controller?](ingress-controller-overview.md).
+- **Performance enhancements**: The v2 SKU offers up to 5X better TLS offload performance as compared to the Standard/WAF SKU.
+- **Faster deployment and update time**: The v2 SKU provides faster deployment and update time as compared to the Standard/WAF SKU. This also includes WAF configuration changes.
+
+![Diagram of auto-scaling zone.](./media/application-gateway-autoscaling-zone-redundant/application-gateway-autoscaling-zone-redundant.png)
+
+## Supported regions
+
+The Standard_v2 and WAF_v2 SKUs are available in the following regions: North Central US, South Central US, West US, West US 2, East US, East US 2, Central US, North Europe, West Europe, Southeast Asia, France Central, UK West, Japan East, Japan West, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, East Asia, Korea Central, Korea South, UK South, Central India, West India, South India, Jio India West, Norway East, Switzerland North, UAE North, South Africa North, Germany West Central.
+
+## Pricing
+
+With the v2 SKU, the pricing model is driven by consumption and is no longer attached to instance counts or sizes. The v2 SKU pricing has two components:
+
+- **Fixed price** - This is the hourly (or partial-hour) price to provision a Standard_v2 or WAF_v2 gateway. Note that specifying 0 additional minimum instances still ensures high availability of the service, which is always included with the fixed price.
+- **Capacity Unit price** - This is a consumption-based cost that is charged in addition to the fixed cost. The capacity unit charge is also computed hourly or for partial hours. There are three dimensions to a capacity unit: compute unit, persistent connections, and throughput. Compute unit is a measure of processor capacity consumed. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing. Persistent connections is a measure of established TCP connections to the application gateway in a given billing interval. Throughput is the average Megabits/sec processed by the system in a given billing interval. The billing is done at a capacity unit level for anything above the reserved instance count.
+
+Each capacity unit is composed of at most: 1 compute unit, 2500 persistent connections, and 2.22-Mbps throughput.
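+
+As an illustrative, hypothetical example: assuming capacity unit consumption is taken from the dimension with the highest utilization (as described in [Understanding pricing](understanding-pricing.md)), a gateway that averages 4.44 Mbps of throughput and 5,000 persistent connections in a given hour, with negligible compute consumption, would consume max(4.44/2.22, 5000/2500) = 2 capacity units for that hour, billed in addition to the fixed price.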
+
+To learn more, see [Understanding pricing](understanding-pricing.md).
+
+## Feature comparison between v1 SKU and v2 SKU
+
+The following table compares the features available with each SKU.
+
+| Feature | v1 SKU | v2 SKU |
+| - | -- | -- |
+| Autoscaling | | &#x2713; |
+| Zone redundancy | | &#x2713; |
+| Static VIP | | &#x2713; |
+| Azure Kubernetes Service (AKS) Ingress controller | | &#x2713; |
+| Azure Key Vault integration | | &#x2713; |
+| Rewrite HTTP(S) headers | | &#x2713; |
+| URL-based routing | &#x2713; | &#x2713; |
+| Multiple-site hosting | &#x2713; | &#x2713; |
+| Traffic redirection | &#x2713; | &#x2713; |
+| Web Application Firewall (WAF) | &#x2713; | &#x2713; |
+| WAF custom rules | | &#x2713; |
+| WAF policy associations | | &#x2713; |
+| Transport Layer Security (TLS)/Secure Sockets Layer (SSL) termination | &#x2713; | &#x2713; |
+| End-to-end TLS encryption | &#x2713; | &#x2713; |
+| Session affinity | &#x2713; | &#x2713; |
+| Custom error pages | &#x2713; | &#x2713; |
+| WebSocket support | &#x2713; | &#x2713; |
+| HTTP/2 support | &#x2713; | &#x2713; |
+| Connection draining | &#x2713; | &#x2713; |
+
+> [!NOTE]
+> The autoscaling v2 SKU now supports [default health probes](application-gateway-probe-overview.md#default-health-probe) to automatically monitor the health of all resources in its back-end pool and highlight those backend members that are considered unhealthy. The default health probe is automatically configured for backends that don't have any custom probe configuration. To learn more, see [health probes in application gateway](application-gateway-probe-overview.md).
+
+## Differences from v1 SKU
+
+This section describes features and limitations of the v2 SKU that differ from the v1 SKU.
+
+|Difference|Details|
+|--|--|
+|Authentication certificate|Not supported.<br>For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku).|
+|Mixing Standard_v2 and Standard Application Gateway on the same subnet|Not supported|
+|User-Defined Route (UDR) on Application Gateway subnet|Supported (specific scenarios). In preview.<br> For more information about supported scenarios, see [Application Gateway configuration overview](configuration-infrastructure.md#supported-user-defined-routes).|
+|NSG for Inbound port range| - 65200 to 65535 for Standard_v2 SKU<br>- 65503 to 65534 for Standard SKU.<br>For more information, see the [FAQ](application-gateway-faq.yml#are-network-security-groups-supported-on-the-application-gateway-subnet).|
+|Performance logs in Azure diagnostics|Not supported.<br>Azure metrics should be used.|
+|Billing|Billing for the v2 SKU started on July 1, 2019.|
+|FIPS mode|This is currently not supported.|
+|ILB only mode|This is currently not supported. Public and ILB mode together is supported.|
+|Network Watcher integration|Not supported.|
+|Microsoft Defender for Cloud integration|Not yet available.|
+
+## Migrate from v1 to v2
+
+An Azure PowerShell script is available in the PowerShell gallery to help you migrate from your v1 Application Gateway/WAF to the v2 Autoscaling SKU. This script helps you copy the configuration from your v1 gateway. Traffic migration is still your responsibility. For more information, see [Migrate Azure Application Gateway from v1 to v2](migrate-v1-v2.md).
+
+## Next steps
+
+Depending on your requirements and environment, you can create a test Application Gateway by using the Azure portal, Azure PowerShell, or the Azure CLI.
+
+- [Tutorial: Create an application gateway that improves web application access](tutorial-autoscale-ps.md)
application-gateway Troubleshoot App Service Redirection App Service Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/troubleshoot-app-service-redirection-app-service-url.md
In the previous example, notice that the response header has a status code of 30
Set the host name in the location header to the application gateway's domain name. To do this, create a [rewrite rule](./rewrite-http-headers-url.md) with a condition that evaluates if the location header in the response contains azurewebsites.net. It must also perform an action to rewrite the location header to have the application gateway's host name. For more information, see instructions on [how to rewrite the location header](./rewrite-http-headers-url.md#modify-a-redirection-url). > [!NOTE]
-> The HTTP header rewrite support is only available for the [Standard_v2 and WAF_v2 SKU](./application-gateway-autoscaling-zone-redundant.md) of Application Gateway. We recommend [migrating to v2](./migrate-v1-v2.md) for Header Rewrite and other [advanced capabilities](./application-gateway-autoscaling-zone-redundant.md#feature-comparison-between-v1-sku-and-v2-sku) that are available with v2 SKU.
+> The HTTP header rewrite support is only available for the [Standard_v2 and WAF_v2 SKU](./application-gateway-autoscaling-zone-redundant.md) of Application Gateway. We recommend [migrating to v2](./migrate-v1-v2.md) for Header Rewrite and other [advanced capabilities](./overview-v2.md#feature-comparison-between-v1-sku-and-v2-sku) that are available with v2 SKU.
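+
+The linked article walks through the portal steps; as a rough Azure CLI sketch of the same idea, you could create a rewrite rule set, a rule that rewrites the `Location` response header to your gateway's host name, and a condition that only matches responses redirecting to `azurewebsites.net`. All gateway, resource group, rule, and host names below are placeholders.
+
+```azurecli
+# Create a rewrite rule set on the application gateway.
+az network application-gateway rewrite-rule set create \
+  --gateway-name MyAppGateway --resource-group MyResourceGroup \
+  --name LocationHeaderRewrite
+
+# Add a rule that rewrites the Location header to the gateway's host name,
+# keeping the original path via the {http_resp_Location_2} capture group.
+az network application-gateway rewrite-rule create \
+  --gateway-name MyAppGateway --resource-group MyResourceGroup \
+  --rule-set-name LocationHeaderRewrite --name RewriteLocationHeader \
+  --response-headers "Location=https://appgw.contoso.com{http_resp_Location_2}"
+
+# Only apply the rule when the Location header points at azurewebsites.net.
+az network application-gateway rewrite-rule condition create \
+  --gateway-name MyAppGateway --resource-group MyResourceGroup \
+  --rule-set-name LocationHeaderRewrite --rule-name RewriteLocationHeader \
+  --variable "http_resp_Location" \
+  --pattern "(https?):\/\/.*azurewebsites\.net(.*)$"
+```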
## Alternate solution: Use a custom domain name
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-virtual-machines.md
There are several prerequisites to consider before trying to enable Azure Automa
- VMs must be in a supported region (see below) - User must have correct permissions (see below) - Automanage does not support Sandbox subscriptions at this time-- Automanage does not support Windows 10 at this time
+- Automanage does not support Windows client images at this time
### Supported regions Automanage only supports VMs located in the following regions:
For the complete list of participating Azure services, as well as their supporte
In the Azure portal, you can enable Automanage on an existing virtual machine. For concise steps to this process, check out the [Automanage for virtual machines quickstart](quick-create-virtual-machines-portal.md).
-If it is your first time enabling Automanage for your VM, you can search in the Azure portal for **Automanage – Azure machine best practices**. Click **Enable on existing VM**, select the [configuration profile](#configuration-profile) you wish to use and then select the machines you would like to onboard. Click **Enable**, and you're done.
+If it is your first time enabling Automanage for your VM, you can search in the Azure portal for **Automanage – Azure machine best practices**. Click **Enable on existing VM**, select the [configuration profile](#configuration-profile) you wish to use and then select the machines you would like to onboard.
+
+In the Machine selection pane in the portal, you will notice the **Eligibility** column. You can click **Show ineligible machines** to see machines ineligible for Automanage. Currently, machines can be ineligible for the following reasons:
+- Machine is not using one of the supported images: [Windows Server versions](automanage-windows-server.md#supported-windows-server-versions) and [Linux distros](automanage-linux.md#supported-linux-distributions-and-versions)
+- Machine is not located in a supported [region](#supported-regions)
+- Machine's log analytics workspace is not located in a supported [region](#supported-regions)
+- User does not have permissions to the log analytics workspace's subscription. Check out the [required permissions](#required-rbac-permissions)
+- The Automanage resource provider is not registered on the subscription. Check out [how to register a Resource Provider](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider-1) with the Automanage resource provider: *Microsoft.Automanage*. A CLI sketch for this step follows this list.
+- Machine does not have necessary VM agents installed which the Automanage service requires. Check out the [Windows agent installation](/azure/virtual-machines/extensions/agent-windows) and the [Linux agent installation](/azure/virtual-machines/extensions/agent-linux)
+- Arc machine is not connected. Learn more about the [Arc agent status](/azure/azure-arc/servers/overview#agent-status) and [how to connect](/azure/azure-arc/servers/agent-overview#connected-machine-agent-technical-overview)
+
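+If the Automanage resource provider isn't registered, you can register it yourself. The following Azure CLI sketch assumes you have sufficient permissions (for example, Contributor) on the subscription.
+
+```azurecli
+# Register the Automanage resource provider on the current subscription.
+az provider register --namespace Microsoft.Automanage
+
+# Optionally check the registration state afterward.
+az provider show --namespace Microsoft.Automanage --query registrationState
+```
+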
+Once you have selected your eligible machines, click **Enable**, and you're done.
The only time you might need to interact with this machine to manage these services is in the event we attempted to remediate your VM, but failed to do so. If we successfully remediate your VM, we will bring it back into compliance without even alerting you. For more details, see [Status of VMs](#status-of-vms).
The **Status** column can display the following states:
- *Conformant* - the VM is configured and no drift is detected - *Not conformant* - the VM has drifted and we were unable to remediate or the machine is powered off and Automanage will attempt to onboard or remediate the VM when it is next running - *Needs upgrade* - the VM is onboarded to an earlier version of Automanage and needs to be [upgraded](automanage-upgrade.md) to the latest version
+- *Error* - the Automanage service is unable to monitor one or more resources
-If you see the **Status** as *Not conformant*, you can troubleshoot by clicking on the status in the portal and using the troubleshooting links provided
+If you see the **Status** as *Not conformant* or *Error*, you can troubleshoot by clicking on the status in the portal and using the troubleshooting links provided.
## Disabling Automanage for VMs
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/extension-based-hybrid-runbook-worker-install.md
If you use a firewall to restrict access to the Internet, you must configure the
|Port | 443 for outbound internet access| |Global URL |*.azure-automation.net| |Global URL of US Gov Virginia |*.azure-automation.us|
-|Agent service |`https://<workspaceId>.agentsvc.azure-automation.net`|
## Create hybrid worker group
azure-arc Upgrade Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md
az sql mi-arc upgrade --name <instance name> --desired-version <version> --k8s-n
Example: ````cli
-az sql mi-arc upgrade --name instance1 --target v1.0.0.20211028 --k8s-namespace arc1 --use-k8s
+az sql mi-arc upgrade --name instance1 --desired-version v1.0.0.20211028 --k8s-namespace arc1 --use-k8s
```` ## Monitor
azure-functions Durable Functions Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-orchestrations.md
A few notes on the column values:
* **PartitionKey**: Contains the instance ID of the orchestration. * **EventType**: Represents the type of the event. May be one of the following types:
- * **OrchestrationStarted**: The orchestrator function resumed from an await or is running for the first time. The `Timestamp` column is used to populate the deterministic value for the `CurrentUtcDateTime` (.NET), `currentUtcDateTime` (JavaScript), and `current_utc_datetime` (Python) APIs.
+ * **OrchestratorStarted**: The orchestrator function resumed from an await or is running for the first time. The `Timestamp` column is used to populate the deterministic value for the `CurrentUtcDateTime` (.NET), `currentUtcDateTime` (JavaScript), and `current_utc_datetime` (Python) APIs.
* **ExecutionStarted**: The orchestrator function started executing for the first time. This event also contains the function input in the `Input` column. * **TaskScheduled**: An activity function was scheduled. The name of the activity function is captured in the `Name` column. * **TaskCompleted**: An activity function completed. The result of the function is in the `Result` column.
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/quickstart-python-vscode.md
To complete this tutorial:
* Durable Functions require an Azure storage account. You need an Azure subscription.
-* Make sure that you have version 3.6, 3.7, or 3.8 of [Python](https://www.python.org/) installed.
+* Make sure that you have version 3.7, 3.8, or 3.9 of [Python](https://www.python.org/) installed.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-core-tools-reference.md
func azure storage fetch-connection-string <STORAGE_ACCOUNT_NAME>
## func deploy
-Deploys a function app in a custom Linux container to a Kubernetes cluster without KEDA.
-
-```command
-func deploy --name <FUNCTION_APP> --platform kubernetes --registry <DOCKER_USER>
-```
-
-This command builds your project as a custom container and publishes it to a Kubernetes cluster using a default scaler or using KNative. To publish to a cluster using KEDA for dynamic scale, instead use the [`func kubernetes deploy` command](#func-kubernetes-deploy). Custom containers must have a Dockerfile. To create an app with a Dockerfile, use the `--dockerfile` option with the [`func init` command](#func-init).
-
-The `deploy` action supports the following options:
-
-| Option | Description |
-| | -- |
-| **`--config`** | Sets an optional deployment configuration file. |
-| **`--max`** | Optionally, sets the maximum number of function app instances to deploy to. |
-| **`--min`** | Optionally, sets the minimum number of function app instances to deploy to. |
-| **`--name`** | Function app name (required). |
-| **`--platform`** | Hosting platform for the function app (required). Valid options are: `kubernetes` and `knative`.|
-| **`--registry`** | The name of a Docker Registry the current user signed-in to (required). |
-
-Core Tools uses the local Docker CLI to build and publish the image.
-
-Make sure your Docker is already installed locally. Run the `docker login` command to connect to your account.
+The `func deploy` command is deprecated. Use the [`func kubernetes deploy` command](#func-kubernetes-deploy) instead.
## func durable delete-task-hub
Regenerates a missing extensions.csproj file. No action is taken when an extensi
## func kubernetes deploy
-Deploys a Functions project as a custom docker container to a Kubernetes cluster using KEDA.
+Deploys a Functions project as a custom docker container to a Kubernetes cluster.
```command func kubernetes deploy ```
-This command builds your project as a custom container and publishes it to a Kubernetes cluster using KEDA for dynamic scale. To publish to a cluster using a default scaler or using KNative, instead use the [`func deploy` command](#func-deploy). Custom containers must have a Dockerfile. To create an app with a Dockerfile, use the `--dockerfile` option with the [`func init` command](#func-init).
+This command builds your project as a custom container and publishes it to a Kubernetes cluster. Custom containers must have a Dockerfile. To create an app with a Dockerfile, use the `--dockerfile` option with the [`func init` command](#func-init).
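+
+For example, a minimal invocation might look like the following sketch. The function app name and Docker registry user are placeholders, and the deployment targets the cluster in your current `kubectl` context.
+
+```command
+func kubernetes deploy --name <FUNCTION_APP> --registry <DOCKER_USER>
+```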
The following Kubernetes deployment options are available:
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-networking-options.md
When you scale up or down in size, the required address space is doubled for a s
<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point.
-Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Premium plans, you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux.
+Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Premium plans, you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux. When creating subnets in the Azure portal as part of integrating with the virtual network, a minimum size of /24 for Windows and /26 for Linux is required.
When you want your apps in another plan to reach a VNet that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing VNet Integration.
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-run-local.md
When you call an administrator endpoint on your function app in Azure, you must
## <a name="publish"></a>Publish to Azure
-The Azure Functions Core Tools supports three types of deployment:
+The Azure Functions Core Tools supports two types of deployment:
| Deployment type | Command | Description | | -- | -- | -- | | Project files | [`func azure functionapp publish`](functions-core-tools-reference.md#func-azure-functionapp-publish) | Deploys function project files directly to your function app using [zip deployment](functions-deployment-technologies.md#zip-deploy). |
-| Custom container | `func deploy` | Deploys your project to a Linux function app as a custom Docker container. |
| Kubernetes cluster | `func kubernetes deploy` | Deploys your Linux function app as a custom Docker container to a Kubernetes cluster. | ### Before you publish
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/set-runtime-version.md
You can change the runtime version used by your function app. Because of the pot
You can also view and set the `FUNCTIONS_EXTENSION_VERSION` from the Azure CLI.
-Using the Azure CLI, view the current runtime version with the [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings) command.
+Using the Azure CLI, view the current runtime version with the [az functionapp config appsettings list](/cli/azure/functionapp/config/appsettings) command.
```azurecli-interactive az functionapp config appsettings list --name <function_app> \
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-map-control.md
You can embed a map in a web page by using the Map Control client-side JavaScrip
``` > [!NOTE]
- > Typescript definitions can be imported into your application by adding the following code:
+ > TypeScript definitions can be imported into your application by adding the following code:
> > ```javascript > import * as atlas from 'azure-maps-control';
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services coverage
description: Learn about Microsoft Azure Maps Weather services coverage Previously updated : 12/07/2020 Last updated : 01/18/2021
The following table provides information about what kind of weather information
|--|| |* |Covers Current Conditions, Hourly Forecast, Quarter-day Forecast, Daily Forecast, Weather Along Route and Daily Indices. | - ## Americas
-| Country/region | Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
-|--|:-:|:--:|:--:|:--:|
-| Anguilla | Γ£ô | | | Γ£ô|
-| Antarctica | Γ£ô | | |Γ£ô|
-| Antigua and Barbuda | Γ£ô | | |Γ£ô|
-| Argentina | Γ£ô | | |Γ£ô|
-| Aruba | Γ£ô | | |Γ£ô|
-| Bahamas | Γ£ô | | |Γ£ô|
-| Barbados | Γ£ô | | |Γ£ô|
-| Belize | Γ£ô | | |Γ£ô|
-| Bermuda | Γ£ô | | |Γ£ô|
-| Bolivia | Γ£ô | | |Γ£ô|
-| Bonaire | Γ£ô | | |Γ£ô|
-| Brazil | Γ£ô | | Γ£ô |Γ£ô|
-| British Virgin Islands | Γ£ô | | |Γ£ô|
-| Canada | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Cayman Islands | Γ£ô | | |Γ£ô|
-| Chile | Γ£ô | | |Γ£ô|
-| Colombia | Γ£ô | | |Γ£ô|
-| Costa Rica | Γ£ô | | |Γ£ô|
-| Cuba | Γ£ô | | |Γ£ô|
-| Curaçao | ✓ | | |✓|
-| Dominica | Γ£ô | | |Γ£ô|
-| Dominican Republic | Γ£ô | | |Γ£ô|
-| Ecuador | Γ£ô | | |Γ£ô|
-| El Salvador | Γ£ô | | |Γ£ô|
-| Falkland Islands | Γ£ô | | |Γ£ô|
-| French Guiana | Γ£ô | | |Γ£ô|
-| Greenland | Γ£ô | | |Γ£ô|
-| Grenada | Γ£ô | | |Γ£ô|
-| Guadeloupe | Γ£ô | | |Γ£ô|
-| Guatemala | Γ£ô | | |Γ£ô|
-| Guyana | Γ£ô | | |Γ£ô|
-| Haiti | Γ£ô | | |Γ£ô|
-| Honduras | Γ£ô | | |Γ£ô|
-| Jamaica | Γ£ô | | |Γ£ô|
-| Martinique | Γ£ô | | |Γ£ô|
-| Mexico | Γ£ô | | |Γ£ô|
-| Montserrat | Γ£ô | | |Γ£ô|
-| Nicaragua | Γ£ô | | |Γ£ô|
-| Panama | Γ£ô | | |Γ£ô|
-| Paraguay | Γ£ô | | |Γ£ô|
-| Peru | Γ£ô | | |Γ£ô|
-| Puerto Rico | Γ£ô | | Γ£ô |Γ£ô|
-| Saint Barthélemy | ✓ | | |✓|
-| Saint Kitts and Nevis | Γ£ô | | |Γ£ô|
-| Saint Lucia | Γ£ô | | |Γ£ô|
-| Saint Martin | Γ£ô | | |Γ£ô|
-| Saint Pierre and Miquelon | Γ£ô | | |Γ£ô|
-| Saint Vincent and the Grenadines | Γ£ô | | |Γ£ô|
-| Sint Eustatius | Γ£ô | | |Γ£ô|
-| Sint Maarten | Γ£ô | | |Γ£ô|
-| South Georgia and South Sandwich Islands | Γ£ô | | |Γ£ô|
-| Suriname | Γ£ô | | |Γ£ô|
-| Trinidad and Tobago | Γ£ô | | |Γ£ô|
-| Turks and Caicos Islands | Γ£ô | | |Γ£ô|
-| U.S. Outlying Islands | Γ£ô | | |Γ£ô|
-| U.S. Virgin Islands | Γ£ô | | Γ£ô|Γ£ô|
-| United States | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Uruguay | Γ£ô | | |Γ£ô|
-| Venezuela | Γ£ô | | |Γ£ô|
--
-## Middle East and Africa
-
-| Country/region | Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
-|--|:-:|:--:|:--:|:--:|
-| Algeria | Γ£ô | | | Γ£ô|
-| Angola | Γ£ô | | | Γ£ô|
-| Bahrain | Γ£ô | | | Γ£ô|
-| Benin | Γ£ô | | | Γ£ô|
-| Botswana | Γ£ô | | | Γ£ô|
-| Bouvet Island | Γ£ô | | | Γ£ô|
-| Burkina Faso | Γ£ô | | | Γ£ô|
-| Burundi | Γ£ô | | | Γ£ô|
-| Cameroon | Γ£ô | | | Γ£ô|
-| Cabo Verde | Γ£ô | | | Γ£ô|
-| Central African Republic | Γ£ô | | | Γ£ô|
-| Chad | Γ£ô | | | Γ£ô|
-| Comoros | Γ£ô | | | Γ£ô|
-| Congo (DRC) | Γ£ô | | | Γ£ô|
-| C├┤te d'Ivoire | Γ£ô | | | Γ£ô|
-| Djibouti | Γ£ô | | | Γ£ô|
-| Egypt | Γ£ô | | | Γ£ô|
-| Equatorial Guinea | Γ£ô | | | Γ£ô|
-| Eritrea | Γ£ô | | | Γ£ô|
-| eSwatini | Γ£ô | | | Γ£ô|
-| Ethiopia | Γ£ô | | | Γ£ô|
-| French Southern Territories | Γ£ô | | | Γ£ô|
-| Gabon | Γ£ô | | | Γ£ô|
-| Gambia | Γ£ô | | | Γ£ô|
-| Ghana | Γ£ô | | | Γ£ô|
-| Guinea | Γ£ô | | | Γ£ô|
-| Guinea-Bissau | Γ£ô | | | Γ£ô|
-| Iran | Γ£ô | | | Γ£ô|
-| Iraq | Γ£ô | | | Γ£ô|
-| Israel | Γ£ô | | Γ£ô | Γ£ô|
-| Jordan | Γ£ô | | | Γ£ô|
-| Kenya | Γ£ô | | | Γ£ô|
-| Kuwait | Γ£ô | | | Γ£ô|
-| Lebanon | Γ£ô | | | Γ£ô|
-| Lesotho | Γ£ô | | | Γ£ô|
-| Liberia | Γ£ô | | | Γ£ô|
-| Libya | Γ£ô | | | Γ£ô|
-| Madagascar | Γ£ô | | | Γ£ô|
-| Malawi | Γ£ô | | | Γ£ô|
-| Mali | Γ£ô | | | Γ£ô|
-| Mauritania | Γ£ô | | | Γ£ô|
-| Mauritius | Γ£ô | | | Γ£ô|
-| Mayotte | Γ£ô | | | Γ£ô|
-| Morocco | Γ£ô | | | Γ£ô|
-| Mozambique | Γ£ô | | | Γ£ô|
-| Namibia | Γ£ô | | | Γ£ô|
-| Niger | Γ£ô | | | Γ£ô|
-| Nigeria | Γ£ô | | | Γ£ô|
-| Oman | Γ£ô | | | Γ£ô|
-| Palestinian Authority | Γ£ô | | | Γ£ô|
-| Qatar | Γ£ô | | | Γ£ô|
-| Réunion | ✓ | | | ✓|
-| Rwanda | Γ£ô | | | Γ£ô|
-| St Helena, Ascension, Tristan da Cunha | Γ£ô | | | Γ£ô|
-| São Tomé and Príncipe | ✓ | | | ✓|
-| Saudi Arabia | Γ£ô | | | Γ£ô|
-| Senegal | Γ£ô | | | Γ£ô|
-| Seychelles | Γ£ô | | | Γ£ô|
-| Sierra Leone | Γ£ô | | | Γ£ô|
-| Somalia | Γ£ô | | | Γ£ô|
-| South Africa | Γ£ô | | | Γ£ô|
-| South Sudan | Γ£ô | | | Γ£ô|
-| Sudan | Γ£ô | | | Γ£ô|
-| Syria | Γ£ô | | | Γ£ô|
-| Tanzania | Γ£ô | | | Γ£ô|
-| Togo | Γ£ô | | | Γ£ô|
-| Tunisia | Γ£ô | | | Γ£ô|
-| Uganda | Γ£ô | | | Γ£ô|
-| United Arab Emirates | Γ£ô | | | Γ£ô|
-| Yemen | Γ£ô | | | Γ£ô|
-| Zambia | Γ£ô | | | Γ£ô|
-| Zimbabwe | Γ£ô | | | Γ£ô|
-
+| Country/region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+||::|:-:|::|::|
+| Anguilla | Γ£ô | | | Γ£ô |
+| Antarctica | Γ£ô | | | Γ£ô |
+| Antigua & Barbuda | Γ£ô | | | Γ£ô |
+| Argentina | Γ£ô | | | Γ£ô |
+| Aruba | Γ£ô | | | Γ£ô |
+| Bahamas | Γ£ô | | | Γ£ô |
+| Barbados | Γ£ô | | | Γ£ô |
+| Belize | Γ£ô | | | Γ£ô |
+| Bermuda | Γ£ô | | | Γ£ô |
+| Bolivia | Γ£ô | | | Γ£ô |
+| Bonaire | Γ£ô | | | Γ£ô |
+| Brazil | Γ£ô | | Γ£ô | Γ£ô |
+| British Virgin Islands | Γ£ô | | | Γ£ô |
+| Canada | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Cayman Islands | Γ£ô | | | Γ£ô |
+| Chile | Γ£ô | | | Γ£ô |
+| Colombia | Γ£ô | | | Γ£ô |
+| Costa Rica | Γ£ô | | | Γ£ô |
+| Cuba | Γ£ô | | | Γ£ô |
+| Curaçao | ✓ | | | ✓ |
+| Dominica | Γ£ô | | | Γ£ô |
+| Dominican Republic | Γ£ô | | | Γ£ô |
+| Ecuador | Γ£ô | | | Γ£ô |
+| El Salvador | Γ£ô | | | Γ£ô |
+| Falkland Islands | Γ£ô | | | Γ£ô |
+| French Guiana | Γ£ô | | | Γ£ô |
+| Greenland | Γ£ô | | | Γ£ô |
+| Grenada | Γ£ô | | | Γ£ô |
+| Guadeloupe | Γ£ô | | | Γ£ô |
+| Guatemala | Γ£ô | | | Γ£ô |
+| Guyana | Γ£ô | | | Γ£ô |
+| Haiti | Γ£ô | | | Γ£ô |
+| Honduras | Γ£ô | | | Γ£ô |
+| Jamaica | Γ£ô | | | Γ£ô |
+| Martinique | Γ£ô | | | Γ£ô |
+| Mexico | Γ£ô | | | Γ£ô |
+| Montserrat | Γ£ô | | | Γ£ô |
+| Nicaragua | Γ£ô | | | Γ£ô |
+| Panama | Γ£ô | | | Γ£ô |
+| Paraguay | Γ£ô | | | Γ£ô |
+| Peru | Γ£ô | | | Γ£ô |
+| Puerto Rico | Γ£ô | | Γ£ô | Γ£ô |
+| Saint Barthélemy | ✓ | | | ✓ |
+| Saint Kitts & Nevis | Γ£ô | | | Γ£ô |
+| Saint Lucia | Γ£ô | | | Γ£ô |
+| Saint Martin | Γ£ô | | | Γ£ô |
+| Saint Pierre & Miquelon | Γ£ô | | | Γ£ô |
+| Saint Vincent & the Grenadines | Γ£ô | | | Γ£ô |
+| Sint Eustatius | Γ£ô | | | Γ£ô |
+| Sint Maarten | Γ£ô | | | Γ£ô |
+| South Georgia & South Sandwich Islands | Γ£ô | | | Γ£ô |
+| Suriname | Γ£ô | | | Γ£ô |
+| Trinidad & Tobago | Γ£ô | | | Γ£ô |
+| Turks & Caicos Islands | Γ£ô | | | Γ£ô |
+| U.S. Outlying Islands | Γ£ô | | | Γ£ô |
+| U.S. Virgin Islands | Γ£ô | | Γ£ô | Γ£ô |
+| United States | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Uruguay | Γ£ô | | | Γ£ô |
+| Venezuela | Γ£ô | | | Γ£ô |
## Asia Pacific
-| Country/region | Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
-|--|:-:|:--:|:--:| :--:|
-| Afghanistan | Γ£ô | | | Γ£ô|
-| American Samoa | Γ£ô | | Γ£ô| Γ£ô|
-| Australia | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Bangladesh | Γ£ô | | | Γ£ô|
-| Bhutan | Γ£ô | | | Γ£ô|
-| British Indian Ocean Territory | Γ£ô | | | Γ£ô|
-| Brunei | Γ£ô | | | Γ£ô|
-| Cambodia | Γ£ô | | | Γ£ô|
-| China | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Christmas Island | Γ£ô | | | Γ£ô|
-| Cocos (Keeling) Islands | Γ£ô | | | Γ£ô|
-| Cook Islands | Γ£ô | | | Γ£ô|
-| Fiji | Γ£ô | | | Γ£ô|
-| French Polynesia | Γ£ô | | | Γ£ô|
-| Guam | Γ£ô | | Γ£ô| Γ£ô|
-| Heard Island and McDonald Islands | Γ£ô | | | Γ£ô|
-| Hong Kong SAR | Γ£ô | | | Γ£ô|
-| India | Γ£ô | | | Γ£ô|
-| Indonesia | Γ£ô | | | Γ£ô|
-| Japan | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Kazakhstan | Γ£ô | | | Γ£ô|
-| Kiribati | Γ£ô | | | Γ£ô|
-| Korea | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Kyrgyzstan | Γ£ô | | | Γ£ô|
-| Laos | Γ£ô | | | Γ£ô|
-| Macao SAR | Γ£ô | | | Γ£ô|
-| Malaysia | Γ£ô | | | Γ£ô|
-| Maldives | Γ£ô | | | Γ£ô|
-| Marshall Islands | Γ£ô | | Γ£ô | Γ£ô|
-| Micronesia | Γ£ô | | Γ£ô | Γ£ô|
-| Mongolia | Γ£ô | | | Γ£ô|
-| Myanmar | Γ£ô | | | Γ£ô|
-| Nauru | Γ£ô | | | Γ£ô|
-| Nepal | Γ£ô | | | Γ£ô|
-| New Caledonia | Γ£ô | | | Γ£ô|
-| New Zealand | Γ£ô | | Γ£ô | Γ£ô|
-| Niue | Γ£ô | | | Γ£ô|
-| Norfolk Island | Γ£ô | | | Γ£ô|
-| North Korea | Γ£ô | | | Γ£ô|
-| Northern Mariana Islands | Γ£ô | | Γ£ô | Γ£ô|
-| Pakistan | Γ£ô | | | Γ£ô|
-| Palau | Γ£ô | | Γ£ô | Γ£ô|
-| Papua New Guinea | Γ£ô | | | Γ£ô|
-| Philippines | Γ£ô | | Γ£ô | Γ£ô|
-| Pitcairn Islands | Γ£ô | | | Γ£ô|
-| Samoa | Γ£ô | | | Γ£ô|
-| Singapore | Γ£ô | | | Γ£ô|
-| Solomon Islands | Γ£ô | | | Γ£ô|
-| Sri Lanka | Γ£ô | | | Γ£ô|
-| Taiwan | Γ£ô | | | Γ£ô|
-| Tajikistan | Γ£ô | | | Γ£ô|
-| Thailand | Γ£ô | | | Γ£ô|
-| Timor-Leste | Γ£ô | | | Γ£ô|
-| Tokelau | Γ£ô | | | Γ£ô|
-| Tonga | Γ£ô | | | Γ£ô|
-| Turkmenistan | Γ£ô | | | Γ£ô|
-| Tuvalu | Γ£ô | | | Γ£ô|
-| Uzbekistan | Γ£ô | | | Γ£ô|
-| Vanuatu | Γ£ô | | | Γ£ô|
-| Vietnam | Γ£ô | | | Γ£ô|
-| Wallis and Futuna | Γ£ô | | | Γ£ô|
-
+| Country/region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+|-|::|:-:|::|::|
+| Afghanistan | Γ£ô | | | Γ£ô |
+| American Samoa | Γ£ô | | Γ£ô | Γ£ô |
+| Australia | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Bangladesh | Γ£ô | | | Γ£ô |
+| Bhutan | Γ£ô | | | Γ£ô |
+| British Indian Ocean Territory | Γ£ô | | | Γ£ô |
+| Brunei | Γ£ô | | | Γ£ô |
+| Cambodia | Γ£ô | | | Γ£ô |
+| China | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Christmas Island | Γ£ô | | | Γ£ô |
+| Cocos (Keeling) Islands | Γ£ô | | | Γ£ô |
+| Cook Islands | Γ£ô | | | Γ£ô |
+| Fiji | Γ£ô | | | Γ£ô |
+| French Polynesia | Γ£ô | | | Γ£ô |
+| Guam | Γ£ô | | Γ£ô | Γ£ô |
+| Heard Island & McDonald Islands | Γ£ô | | | Γ£ô |
+| Hong Kong SAR | Γ£ô | | | Γ£ô |
+| India | Γ£ô | | | Γ£ô |
+| Indonesia | Γ£ô | | | Γ£ô |
+| Japan | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Kazakhstan | Γ£ô | | | Γ£ô |
+| Kiribati | Γ£ô | | | Γ£ô |
+| Korea | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Kyrgyzstan | Γ£ô | | | Γ£ô |
+| Laos | Γ£ô | | | Γ£ô |
+| Macao SAR | Γ£ô | | | Γ£ô |
+| Malaysia | Γ£ô | | | Γ£ô |
+| Maldives | Γ£ô | | | Γ£ô |
+| Marshall Islands | Γ£ô | | Γ£ô | Γ£ô |
+| Micronesia | Γ£ô | | Γ£ô | Γ£ô |
+| Mongolia | Γ£ô | | | Γ£ô |
+| Myanmar | Γ£ô | | | Γ£ô |
+| Nauru | Γ£ô | | | Γ£ô |
+| Nepal | Γ£ô | | | Γ£ô |
+| New Caledonia | Γ£ô | | | Γ£ô |
+| New Zealand | Γ£ô | | Γ£ô | Γ£ô |
+| Niue | Γ£ô | | | Γ£ô |
+| Norfolk Island | Γ£ô | | | Γ£ô |
+| North Korea | Γ£ô | | | Γ£ô |
+| Northern Mariana Islands | Γ£ô | | Γ£ô | Γ£ô |
+| Pakistan | Γ£ô | | | Γ£ô |
+| Palau | Γ£ô | | Γ£ô | Γ£ô |
+| Papua New Guinea | Γ£ô | | | Γ£ô |
+| Philippines | Γ£ô | | Γ£ô | Γ£ô |
+| Pitcairn Islands | Γ£ô | | | Γ£ô |
+| Samoa | Γ£ô | | | Γ£ô |
+| Singapore | Γ£ô | | | Γ£ô |
+| Solomon Islands | Γ£ô | | | Γ£ô |
+| Sri Lanka | Γ£ô | | | Γ£ô |
+| Taiwan | Γ£ô | | | Γ£ô |
+| Tajikistan | Γ£ô | | | Γ£ô |
+| Thailand | Γ£ô | | | Γ£ô |
+| Timor-Leste | Γ£ô | | | Γ£ô |
+| Tokelau | Γ£ô | | | Γ£ô |
+| Tonga | Γ£ô | | | Γ£ô |
+| Turkmenistan | Γ£ô | | | Γ£ô |
+| Tuvalu | Γ£ô | | | Γ£ô |
+| Uzbekistan | Γ£ô | | | Γ£ô |
+| Vanuatu | Γ£ô | | | Γ£ô |
+| Vietnam | Γ£ô | | | Γ£ô |
+| Wallis & Futuna | Γ£ô | | | Γ£ô |
## Europe
-| Country/region | Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
-|--|:-:|:--:|:--:|:--:|
-| Albania | Γ£ô | | | Γ£ô|
-| Andorra | Γ£ô | | Γ£ô | Γ£ô|
-| Armenia | Γ£ô | | | Γ£ô|
-| Austria | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Azerbaijan | Γ£ô | | | Γ£ô|
-| Belarus | Γ£ô | | | Γ£ô|
-| Belgium | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Bosnia and Herzegovina | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Bulgaria | Γ£ô | | Γ£ô| Γ£ô|
-| Croatia | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Cyprus | Γ£ô | | Γ£ô | Γ£ô|
-| Czechia | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Denmark | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Estonia | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Faroe Islands | Γ£ô | | | Γ£ô|
-| Finland | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| France | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Georgia | Γ£ô | | | Γ£ô|
-| Germany | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Gibraltar | Γ£ô | Γ£ô | | Γ£ô|
-| Greece | Γ£ô | | Γ£ô| Γ£ô|
-| Guernsey | Γ£ô | | | Γ£ô|
-| Hungary | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Iceland | Γ£ô | | Γ£ô | Γ£ô|
-| Ireland | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Italy | Γ£ô | | Γ£ô| Γ£ô|
-| Isle of Man | Γ£ô | | | Γ£ô|
-| Jan Mayen | Γ£ô | | | Γ£ô|
-| Jersey | Γ£ô | | | Γ£ô|
-| Kosovo | Γ£ô | | Γ£ô| Γ£ô|
-| Latvia | Γ£ô | | Γ£ô | Γ£ô|
-| Liechtenstein | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Lithuania | Γ£ô | | Γ£ô | Γ£ô|
-| Luxembourg | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| North Macedonia | Γ£ô | | Γ£ô | Γ£ô|
-| Malta | Γ£ô | | Γ£ô | Γ£ô|
-| Moldova | Γ£ô | Γ£ô | Γ£ô | Γ£ô|
-| Monaco | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Montenegro | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Netherlands | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Norway | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Poland | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Portugal | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Romania | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Russia | Γ£ô | | Γ£ô| Γ£ô|
-| San Marino | Γ£ô | | Γ£ô| Γ£ô|
-| Serbia | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Slovakia | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Slovenia | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Spain | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Svalbard | Γ£ô | | | Γ£ô|
-| Sweden | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Switzerland | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Turkey | Γ£ô | | | Γ£ô|
-| Ukraine | Γ£ô | | | Γ£ô|
-| United Kingdom | Γ£ô | Γ£ô | Γ£ô| Γ£ô|
-| Vatican City | Γ£ô | |Γ£ô | Γ£ô|
+| Country/region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+|-|::|:-:|::|::|
+| Albania | Γ£ô | | | Γ£ô |
+| Andorra | Γ£ô | | Γ£ô | Γ£ô |
+| Armenia | Γ£ô | | | Γ£ô |
+| Austria | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Azerbaijan | Γ£ô | | | Γ£ô |
+| Belarus | Γ£ô | | | Γ£ô |
+| Belgium | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Bosnia & Herzegovina | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Bulgaria | Γ£ô | | Γ£ô | Γ£ô |
+| Croatia | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Cyprus | Γ£ô | | Γ£ô | Γ£ô |
+| Czechia | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Denmark | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Estonia | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Faroe Islands | Γ£ô | | | Γ£ô |
+| Finland | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| France | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Georgia | Γ£ô | | | Γ£ô |
+| Germany | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Gibraltar | Γ£ô | Γ£ô | | Γ£ô |
+| Greece | Γ£ô | | Γ£ô | Γ£ô |
+| Guernsey | Γ£ô | | | Γ£ô |
+| Hungary | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Iceland | Γ£ô | | Γ£ô | Γ£ô |
+| Ireland | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Isle of Man | Γ£ô | | | Γ£ô |
+| Italy | Γ£ô | | Γ£ô | Γ£ô |
+| Jan Mayen | Γ£ô | | | Γ£ô |
+| Jersey | Γ£ô | | | Γ£ô |
+| Kosovo | Γ£ô | | Γ£ô | Γ£ô |
+| Latvia | Γ£ô | | Γ£ô | Γ£ô |
+| Liechtenstein | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Lithuania | Γ£ô | | Γ£ô | Γ£ô |
+| Luxembourg | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| North Macedonia | Γ£ô | | Γ£ô | Γ£ô |
+| Malta | Γ£ô | | Γ£ô | Γ£ô |
+| Moldova | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Monaco | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Montenegro | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Netherlands | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Norway | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Poland | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Portugal | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Romania | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Russia | Γ£ô | | Γ£ô | Γ£ô |
+| San Marino | Γ£ô | | Γ£ô | Γ£ô |
+| Serbia | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Slovakia | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Slovenia | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Spain | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Svalbard | Γ£ô | | | Γ£ô |
+| Sweden | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Switzerland | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Turkey | Γ£ô | | | Γ£ô |
+| Ukraine | Γ£ô | | | Γ£ô |
+| United Kingdom | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+| Vatican City | Γ£ô | | Γ£ô | Γ£ô |
+
+## Middle East & Africa
+
+| Country/region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+|-|::|:-:|::|::|
+| Algeria | Γ£ô | | | Γ£ô |
+| Angola | Γ£ô | | | Γ£ô |
+| Bahrain | Γ£ô | | | Γ£ô |
+| Benin | Γ£ô | | | Γ£ô |
+| Botswana | Γ£ô | | | Γ£ô |
+| Bouvet Island | Γ£ô | | | Γ£ô |
+| Burkina Faso | Γ£ô | | | Γ£ô |
+| Burundi | Γ£ô | | | Γ£ô |
+| Cabo Verde | Γ£ô | | | Γ£ô |
+| Cameroon | Γ£ô | | | Γ£ô |
+| Central African Republic | Γ£ô | | | Γ£ô |
+| Chad | Γ£ô | | | Γ£ô |
+| Comoros | Γ£ô | | | Γ£ô |
+| Congo (DRC) | Γ£ô | | | Γ£ô |
+| C├┤te d'Ivoire | Γ£ô | | | Γ£ô |
+| Djibouti | Γ£ô | | | Γ£ô |
+| Egypt | Γ£ô | | | Γ£ô |
+| Equatorial Guinea | Γ£ô | | | Γ£ô |
+| Eritrea | Γ£ô | | | Γ£ô |
+| eSwatini | Γ£ô | | | Γ£ô |
+| Ethiopia | Γ£ô | | | Γ£ô |
+| French Southern Territories | Γ£ô | | | Γ£ô |
+| Gabon | Γ£ô | | | Γ£ô |
+| Gambia | Γ£ô | | | Γ£ô |
+| Ghana | Γ£ô | | | Γ£ô |
+| Guinea | Γ£ô | | | Γ£ô |
+| Guinea-Bissau | Γ£ô | | | Γ£ô |
+| Iran | Γ£ô | | | Γ£ô |
+| Iraq | Γ£ô | | | Γ£ô |
+| Israel | Γ£ô | | Γ£ô | Γ£ô |
+| Jordan | Γ£ô | | | Γ£ô |
+| Kenya | Γ£ô | | | Γ£ô |
+| Kuwait | Γ£ô | | | Γ£ô |
+| Lebanon | Γ£ô | | | Γ£ô |
+| Lesotho | Γ£ô | | | Γ£ô |
+| Liberia | Γ£ô | | | Γ£ô |
+| Libya | Γ£ô | | | Γ£ô |
+| Madagascar | Γ£ô | | | Γ£ô |
+| Malawi | Γ£ô | | | Γ£ô |
+| Mali | Γ£ô | | | Γ£ô |
+| Mauritania | Γ£ô | | | Γ£ô |
+| Mauritius | Γ£ô | | | Γ£ô |
+| Mayotte | Γ£ô | | | Γ£ô |
+| Morocco | Γ£ô | | | Γ£ô |
+| Mozambique | Γ£ô | | | Γ£ô |
+| Namibia | Γ£ô | | | Γ£ô |
+| Niger | Γ£ô | | | Γ£ô |
+| Nigeria | Γ£ô | | | Γ£ô |
+| Oman | Γ£ô | | | Γ£ô |
+| Palestinian Authority | Γ£ô | | | Γ£ô |
+| Qatar | Γ£ô | | | Γ£ô |
+| Réunion | ✓ | | | ✓ |
+| Rwanda | Γ£ô | | | Γ£ô |
+| Saint Helena, Ascension, Tristan da Cunha | Γ£ô | | | Γ£ô |
+| São Tomé & Príncipe | ✓ | | | ✓ |
+| Saudi Arabia | Γ£ô | | | Γ£ô |
+| Senegal | Γ£ô | | | Γ£ô |
+| Seychelles | Γ£ô | | | Γ£ô |
+| Sierra Leone | Γ£ô | | | Γ£ô |
+| Somalia | Γ£ô | | | Γ£ô |
+| South Africa | Γ£ô | | | Γ£ô |
+| South Sudan | Γ£ô | | | Γ£ô |
+| Sudan | Γ£ô | | | Γ£ô |
+| Syria | Γ£ô | | | Γ£ô |
+| Tanzania | Γ£ô | | | Γ£ô |
+| Togo | Γ£ô | | | Γ£ô |
+| Tunisia | Γ£ô | | | Γ£ô |
+| Uganda | Γ£ô | | | Γ£ô |
+| United Arab Emirates | Γ£ô | | | Γ£ô |
+| Yemen | Γ£ô | | | Γ£ô |
+| Zambia | Γ£ô | | | Γ£ô |
+| Zimbabwe | Γ£ô | | | Γ£ô |
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
Title: Create and manage action groups in the Azure portal
description: Learn how to create and manage action groups in the Azure portal. Previously updated : 11/18/2021 Last updated : 01/15/2022 + # Create and manage action groups in the Azure portal An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor, Service Health and Azure Advisor alerts use action groups to notify users that an alert has been triggered. Various alerts may use the same action group or different action groups depending on the user's requirements.
Under **Instance details**:
> [!NOTE] > When you configure an action to notify a person by email or SMS, they receive a confirmation indicating they have been added to the action group.
+### Test an action group in the Azure portal (Preview)
+
+When creating or updating an action group in the Azure portal, you can **test** the action group.
+1. After creating an action group, select **Review + create**, and then select *Test action group*.
+
+ ![The Test Action Group](./media/action-groups/test-action-group.png)
+
+1. Select the *sample type* and the notification and action types that you want to test, and then select **Test**.
+
+ ![Select Sample Type + notification + action type](./media/action-groups/test-sample-action-group.png)
+
+1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you will not get test results.
+
+ ![Stop running test](./media/action-groups/stop-running-test.png)
+
+1. When the test is complete, either a **Success** or **Failed** test status is displayed. If the test failed, select *View details* to get more information.
+ ![Test sample failed](./media/action-groups/test-sample-failed.png)
+
+You can use the information in the **Error details** section to understand the issue, and then edit and test the action group again.
+So that you can verify that action groups work as expected before you enable them in a production environment, test email and SMS alerts are sent with the subject: Test.
+
+All the details and links in test email notifications for the fired alerts are a sample set provided for reference.
+
+> [!NOTE]
+> You may have a limited number of actions in a test Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article.
+>
+> You can opt in to or out of the common alert schema through action groups in the portal. You can [find common schema samples for test action groups for all the sample types](./alerts-common-schema-test-action-definitions.md).
## Manage your action groups
While setting up *Email ARM Role* you need to make sure below 3 conditions are m
> [!NOTE] > It can take up to **24 hours** for the customer to start receiving notifications after they add a new ARM Role to their subscription.
-### Event Hub (Preview)
+### Event hub (preview)
> [!NOTE]
-> The Event Hub action type is currently in *Preview*. During the preview there may be bugs and disruptions in availability of the functionality.
+> The event hub action type is currently in *Preview*. During the preview there may be bugs and disruptions in availability of the functionality.
-An Event Hub action publishes notifications to an [Azure Event Hub](~/articles/event-hubs/event-hubs-about.md). You may then subscribe to the alert notification stream from your event receiver.
+An event hub action publishes notifications to [Azure Event Hubs](~/articles/event-hubs/event-hubs-about.md). You may then subscribe to the alert notification stream from your event receiver.
### Function Calls an existing HTTP trigger endpoint in [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
azure-monitor Alerts Common Schema Test Action Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema-test-action-definitions.md
+
+ Title: Alert schema definitions in Azure Monitor for Test Action Group
+description: Understanding the common alert schema definitions for Azure Monitor for Test Action group
++ Last updated : 01/14/2022++
+# Common alert schema definitions for Test Action Group (Preview)
+
+This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor, including those for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks.
+
+Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
+* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description). Definitions of severity can be found in the [alerts overview](alerts-overview.md#overview).
+* **Alert context**: A set of fields that describes the cause of the alert, with fields that vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context, whereas an activity log alert has information about the event that generated the alert.
+
+**Sample alert payload**
+```json
+{
+ "schemaId": "azureMonitorCommonAlertSchema",
+ "data": {
+ "essentials": {
+ "alertId": "/subscriptions/<subscription ID>/providers/Microsoft.AlertsManagement/alerts/b9569717-bc32-442f-add5-83a997729330",
+ "alertRule": "WCUS-R2-Gen2",
+ "severity": "Sev3",
+ "signalType": "Metric",
+ "monitorCondition": "Resolved",
+ "monitoringService": "Platform",
+ "alertTargetIDs": [
+ "/subscriptions/<subscription ID>/resourcegroups/pipelinealertrg/providers/microsoft.compute/virtualmachines/wcus-r2-gen2"
+ ],
+ "configurationItems": [
+ "wcus-r2-gen2"
+ ],
+ "originAlertId": "3f2d4487-b0fc-4125-8bd5-7ad17384221e_PipeLineAlertRG_microsoft.insights_metricAlerts_WCUS-R2-Gen2_-117781227",
+ "firedDateTime": "2019-03-22T13:58:24.3713213Z",
+ "resolvedDateTime": "2019-03-22T14:03:16.2246313Z",
+ "description": "",
+ "essentialsVersion": "1.0",
+ "alertContextVersion": "1.0"
+ },
+ "alertContext": {
+ "properties": null,
+ "conditionType": "SingleResourceMultipleMetricCriteria",
+ "condition": {
+ "windowSize": "PT5M",
+ "allOf": [
+ {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "operator": "GreaterThan",
+ "threshold": "25",
+ "timeAggregation": "Average",
+ "dimensions": [
+ {
+ "name": "ResourceId",
+ "value": "3efad9dc-3d50-4eac-9c87-8b3fd6f97e4e"
+ }
+ ],
+ "metricValue": 7.727
+ }
+ ]
+ }
+ }
+ }
+}
+```
+
+## Essentials
+
+| Field | Description|
+|:|:|
+| alertId | The unique resource ID identifying the alert instance. |
+| alertRule | The name of the alert rule that generated the alert instance. |
+| Severity | The severity of the alert. Possible values: Sev0, Sev1, Sev2, Sev3, or Sev4. |
+| signalType | Identifies the signal on which the alert rule was defined. Possible values: Metric, Log, or Activity Log. |
+| monitorCondition | When an alert fires, the alert's monitor condition is set to **Fired**. When the underlying condition that caused the alert to fire clears, the monitor condition is set to **Resolved**. |
+| monitoringService | The monitoring service or solution that generated the alert. The fields for the alert context are dictated by the monitoring service. |
+| alertTargetIds | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
+| configurationItems | The list of affected resources of an alert. The configuration items can be different from the alert targets in some cases, e.g. in metric-for-log or log alerts defined on a Log Analytics workspace, where the configuration items are the actual resources sending the telemetry, and not the workspace. This field is used by ITSM systems to correlate alerts to resources in a CMDB. |
+| originAlertId | The ID of the alert instance, as generated by the monitoring service generating it. |
+| firedDateTime | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). |
+| resolvedDateTime | The date and time when the monitor condition for the alert instance is set to **Resolved** in UTC. Currently only applicable for metric alerts.|
+| description | The description, as defined in the alert rule. |
+|essentialsVersion| The version number for the essentials section.|
+|alertContextVersion | The version number for the `alertContext` section. |
+
+**Sample values**
+```json
+{
+ "essentials": {
+ "alertId": "/subscriptions/<subscription ID>/providers/Microsoft.AlertsManagement/alerts/b9569717-bc32-442f-add5-83a997729330",
+ "alertRule": "Contoso IT Metric Alert",
+ "severity": "Sev3",
+ "signalType": "Metric",
+ "monitorCondition": "Fired",
+ "monitoringService": "Platform",
+ "alertTargetIDs": [
+ "/subscriptions/<subscription ID>/resourceGroups/aimon-rg/providers/Microsoft.Insights/components/ai-orion-int-fe"
+ ],
+ "originAlertId": "74ff8faa0c79db6084969cf7c72b0710e51aec70b4f332c719ab5307227a984f",
+ "firedDateTime": "2019-03-26T05:25:50.4994863Z",
+ "description": "Test Metric alert",
+ "essentialsVersion": "1.0",
+ "alertContextVersion": "1.0"
+ }
+}
+```
+
+## Alert context
+
+### Metric alerts - Static threshold
+
+#### `monitoringService` = `Platform`
+
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/12345678-1234-1234-1234-1234567890ab",
+ "alertRule":"test-metricAlertRule",
+ "severity":"Sev3",
+ "signalType":"Metric",
+ "monitorCondition":"Fired",
+ "monitoringService":"Platform",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/Microsoft.Storage/storageAccounts/test-storageAccount"
+ ],
+ "configurationItems":[
+ "test-storageAccount"
+ ],
+ "originAlertId":"11111111-1111-1111-1111-111111111111_test-RG_microsoft.insights_metricAlerts_test-metricAlertRule_1234567890",
+ "firedDateTime":"2021-11-15T09:35:24.3468506Z",
+ "description":"Alert rule description",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.0"
+ },
+ "alertContext":{
+ "properties":{
+ "customKey1":"value1",
+ "customKey2":"value2"
+ },
+ "conditionType":"SingleResourceMultipleMetricCriteria",
+ "condition":{
+ "windowSize":"PT15M",
+ "allOf":[
+ {
+ "metricName":"Transactions",
+ "metricNamespace":"Microsoft.Storage/storageAccounts",
+ "operator":"GreaterThan",
+ "threshold":"0.3",
+ "timeAggregation":"Average",
+ "dimensions":[
+
+ ],
+ "metricValue":78.09,
+ "webTestName":null
+ }
+ ],
+ "windowStartTime":"2021-12-15T01:04:11.719Z",
+ "windowEndTime":"2021-12-15T01:19:11.719Z"
+ }
+ },
+ "customProperties":{
+ "customKey1":"value1",
+ "customKey2":"value2"
+ }
+ }
+}
+```
+
+### Metric alerts - Dynamic threshold
+
+#### `monitoringService` = `Platform`
+
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/12345678-1234-1234-1234-1234567890ab",
+ "alertRule":"test-metricAlertRule",
+ "severity":"Sev3",
+ "signalType":"Metric",
+ "monitorCondition":"Fired",
+ "monitoringService":"Platform",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/Microsoft.Storage/storageAccounts/test-storageAccount"
+ ],
+ "configurationItems":[
+ "test-storageAccount"
+ ],
+ "originAlertId":"11111111-1111-1111-1111-111111111111_test-RG_microsoft.insights_metricAlerts_test-metricAlertRule_1234567890",
+ "firedDateTime":"2021-11-15T09:35:24.3468506Z",
+ "description":"Alert rule description",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.0"
+ },
+ "alertContext":{
+ "properties":{
+ "customKey1":"value1",
+ "customKey2":"value2"
+ },
+ "conditionType":"DynamicThresholdCriteria",
+ "condition":{
+ "windowSize":"PT15M",
+ "allOf":[
+ {
+ "alertSensitivity":"Low",
+ "failingPeriods":{
+ "numberOfEvaluationPeriods":3,
+ "minFailingPeriodsToAlert":3
+ },
+ "ignoreDataBefore":null,
+ "metricName":"Transactions",
+ "metricNamespace":"Microsoft.Storage/storageAccounts",
+ "operator":"GreaterThan",
+ "threshold":"0.3",
+ "timeAggregation":"Average",
+ "dimensions":[
+
+ ],
+ "metricValue":78.09,
+ "webTestName":null
+ }
+ ],
+ "windowStartTime":"2021-12-15T01:04:11.719Z",
+ "windowEndTime":"2021-12-15T01:19:11.719Z"
+ }
+ },
+ "customProperties":{
+ "customKey1":"value1",
+ "customKey2":"value2"
+ }
+ }
+}
+```
+
+### Log alerts
+
+> [!NOTE]
+> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook cannot use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludedSearchResults`. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
+
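+As a rough sketch of retrieving results through those links, you can call the Log Analytics API directly. The workspace ID, the query, and the token acquisition below are placeholders and assumptions, not values from a real alert payload.
+
+```bash
+# Query the Log Analytics API directly; $TOKEN is assumed to be a valid
+# Azure AD access token for https://api.loganalytics.io.
+curl -H "Authorization: Bearer $TOKEN" \
+  "https://api.loganalytics.io/v1/workspaces/<WORKSPACE_ID>/query?query=Heartbeat&timespan=P1D"
+```
+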
+#### `monitoringService` = `Log Alerts V1 – Metric`
+
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/12345678-1234-1234-1234-1234567890ab",
+ "alertRule":"test-logAlertRule-v1-metricMeasurement",
+ "severity":"Sev3",
+ "signalType":"Log",
+ "monitorCondition":"Fired",
+ "monitoringService":"Log Analytics",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.operationalinsights/workspaces/test-logAnalyticsWorkspace"
+ ],
+ "configurationItems":[
+
+ ],
+ "originAlertId":"12345678-4444-4444-4444-1234567890ab",
+ "firedDateTime":"2021-11-16T15:17:21.9232467Z",
+ "description":"Alert rule description",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.1"
+ },
+ "alertContext":{
+ "SearchQuery":"Heartbeat | summarize AggregatedValue=count() by bin(TimeGenerated, 5m)",
+ "SearchIntervalStartTimeUtc":"2021-11-15T15:16:49Z",
+ "SearchIntervalEndtimeUtc":"2021-11-16T15:16:49Z",
+ "ResultCount":2,
+ "LinkToSearchResults":"https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHi%2BWqUSguzc1NLMqsSlVwTE8vSk1PLElNCUvMKU21Tc4vzSvRaBcDeFgHiaBcDeFgHiaBcDeFgHiaBcDeFgHi/prettify/1/timespan/2021-11-15T15%3a16%3a49.0000000Z%2f2021-11-16T15%3a16%3a49.0000000Z",
+ "LinkToFilteredSearchResultsUI":"https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHiaBcDeFgHiaBcDeFgHiaBcDeFgHiaBcDeFgHidp%2BOPOhDKsHR%2FFeJXsTgzGJRmVui3KF3RpLyEJCX9A2iMl6jgxMn6jRevng3JmIHLdYtKP4DRI9mhc%3D/prettify/1/timespan/2021-11-15T15%3a16%3a49.0000000Z%2f2021-11-16T15%3a16%3a49.0000000Z",
+ "LinkToSearchResultsAPI":"https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%20%0A%7C%20summarize%20AggregatedValue%3Dcount%28%29%20by%20bin%28TimeGenerated%2C%205m%29&timespan=2021-11-15T15%3a16%3a49.0000000Z%2f2021-11-16T15%3a16%3a49.0000000Z",
+ "LinkToFilteredSearchResultsAPI":"https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%20%0A%7C%20summarize%20AggregatedValue%3Dcount%28%29%20by%20bin%28TimeGenerated%2C%205m%29%7C%20where%20todouble%28AggregatedValue%29%20%3E%200&timespan=2021-11-15T15%3a16%3a49.0000000Z%2f2021-11-16T15%3a16%3a49.0000000Z",
+ "SeverityDescription":"Informational",
+ "WorkspaceId":"bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "SearchIntervalDurationMin":"1440",
+ "AffectedConfigurationItems":[
+
+ ],
+ "AlertType":"Metric measurement",
+ "IncludeSearchResults":true,
+ "Dimensions":[
+
+ ],
+ "SearchIntervalInMinutes":"1440",
+ "SearchResults":{
+ "tables":[
+ {
+ "name":"PrimaryResult",
+ "columns":[
+ {
+ "name":"TimeGenerated",
+ "type":"datetime"
+ },
+ {
+ "name":"AggregatedValue",
+ "type":"long"
+ }
+ ],
+ "rows":[
+ [
+ "2021-11-16T10:56:49Z",
+ 11
+ ],
+ [
+ "2021-11-16T11:56:49Z",
+ 11
+ ]
+ ]
+ }
+ ],
+ "dataSources":[
+ {
+ "resourceId":"/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.operationalinsights/workspaces/test-logAnalyticsWorkspace",
+ "region":"eastus",
+ "tables":[
+ "Heartbeat"
+ ]
+ }
+ ]
+ },
+ "Threshold":0,
+ "Operator":"Greater Than",
+ "IncludedSearchResults":"True"
+ }
+ }
+}
+```
+
+#### `monitoringService` = `Log Alerts V1 - Numresults`
+
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/12345678-1234-1234-1234-1234567890ab",
+ "alertRule":"test-logAlertRule-v1-numResults",
+ "severity":"Sev3",
+ "signalType":"Log",
+ "monitorCondition":"Fired",
+ "monitoringService":"Log Analytics",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.operationalinsights/workspaces/test-logAnalyticsWorkspace"
+ ],
+ "configurationItems":[
+ "test-computer"
+ ],
+ "originAlertId":"22222222-2222-2222-2222-222222222222",
+ "firedDateTime":"2021-11-16T15:15:58.3302205Z",
+ "description":"Alert rule description",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.1"
+ },
+ "alertContext":{
+ "SearchQuery":"Heartbeat",
+ "SearchIntervalStartTimeUtc":"2021-11-15T15:15:24Z",
+ "SearchIntervalEndtimeUtc":"2021-11-16T15:15:24Z",
+ "ResultCount":1,
+ "LinkToSearchResults":"https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHi%2ABCDE%3D%3D/prettify/1/timespan/2021-11-15T15%3a15%3a24.0000000Z%2f2021-11-16T15%3a15%3a24.0000000Z",
+ "LinkToFilteredSearchResultsUI":"https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHi%2ABCDE%3D%3D/prettify/1/timespan/2021-11-15T15%3a15%3a24.0000000Z%2f2021-11-16T15%3a15%3a24.0000000Z",
+ "LinkToSearchResultsAPI":"https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%0A&timespan=2021-11-15T15%3a15%3a24.0000000Z%2f2021-11-16T15%3a15%3a24.0000000Z",
+ "LinkToFilteredSearchResultsAPI":"https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%0A&timespan=2021-11-15T15%3a15%3a24.0000000Z%2f2021-11-16T15%3a15%3a24.0000000Z",
+ "SeverityDescription":"Informational",
+ "WorkspaceId":"bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "SearchIntervalDurationMin":"1440",
+ "AffectedConfigurationItems":[
+ "test-computer"
+ ],
+ "AlertType":"Number of results",
+ "IncludeSearchResults":true,
+ "SearchIntervalInMinutes":"1440",
+ "SearchResults":{
+ "tables":[
+ {
+ "name":"PrimaryResult",
+ "columns":[
+ {
+ "name":"TenantId",
+ "type":"string"
+ },
+ {
+ "name":"Computer",
+ "type":"string"
+ },
+ {
+ "name":"TimeGenerated",
+ "type":"datetime"
+ }
+ ],
+ "rows":[
+ [
+ "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "test-computer",
+ "2021-11-16T12:00:00Z"
+ ]
+ ]
+ }
+ ],
+ "dataSources":[
+ {
+ "resourceId":"/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.operationalinsights/workspaces/test-logAnalyticsWorkspace",
+ "region":"eastus",
+ "tables":[
+ "Heartbeat"
+ ]
+ }
+ ]
+ },
+ "Threshold":0,
+ "Operator":"Greater Than",
+ "IncludedSearchResults":"True"
+ }
+ }
+}
+```
+
+#### `monitoringService` = `Log Alerts V2`
+
+> [!NOTE]
+> Log alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when using this version. You should use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
+
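Because results aren't embedded in this payload, a webhook receiver that needs them typically follows one of the API links instead. The following is a minimal PowerShell sketch (an illustration, not part of the schema): it assumes the raw webhook body is available in `$body`, that the signed-in Az session can read the workspace, and that the alert uses the V2 payload shape shown in the sample below.

```PowerShell
# Minimal sketch: follow linkToSearchResultsAPI from a Log Alerts V2 payload.
# $body is assumed to hold the raw webhook JSON received by your endpoint.
$payload = $body | ConvertFrom-Json
$link    = $payload.data.alertContext.condition.allOf[0].linkToSearchResultsAPI

# Acquire a token for the Log Analytics API (requires an authenticated Az session
# with read access to the workspace).
$token = (Get-AzAccessToken -ResourceUrl "https://api.loganalytics.io").Token

# Call the link; the response uses the same tables/columns/rows shape as the
# SearchResults objects in the earlier samples.
$results = Invoke-RestMethod -Uri $link -Headers @{ Authorization = "Bearer $token" }
$results.tables[0].rows
```
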
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/12345678-1234-1234-1234-1234567890ab",
+ "alertRule":"test-logAlertRule-v2",
+ "severity":"Sev3",
+ "signalType":"Log",
+ "monitorCondition":"Fired",
+ "monitoringService":"Log Alerts V2",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.operationalinsights/workspaces/test-logAnalyticsWorkspace"
+ ],
+ "configurationItems":[
+ "test-computer"
+ ],
+ "originAlertId":"22222222-2222-2222-2222-222222222222",
+ "firedDateTime":"2021-11-16T11:47:41.4728231Z",
+ "description":"Alert rule description",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.0"
+ },
+ "alertContext":{
+ "properties":{
+ "customKey1":"value1",
+ "customKey2":"value2"
+ },
+ "conditionType":"LogQueryCriteria",
+ "condition":{
+ "windowSize":"PT1H",
+ "allOf":[
+ {
+ "searchQuery":"Heartbeat",
+ "metricMeasureColumn":null,
+ "targetResourceTypes":"['Microsoft.OperationalInsights/workspaces']",
+ "operator":"GreaterThan",
+ "threshold":"0",
+ "timeAggregation":"Count",
+ "dimensions":[
+ {
+ "name":"Computer",
+ "value":"test-computer"
+ }
+ ],
+ "metricValue":3.0,
+ "failingPeriods":{
+ "numberOfEvaluationPeriods":1,
+ "minFailingPeriodsToAlert":1
+ },
+ "linkToSearchResultsUI":"https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHiJkLmNaBcDeFgHiJkLmNaBcDeFgHiJkLmNaBcDeFgHiJkLmN1234567890ZAZBZiaGBlaG5lbKlnAAFRmnp6WNUZoqvTBAA%3D/prettify/1/timespan/2021-11-16T10%3a17%3a39.0000000Z%2f2021-11-16T11%3a17%3a39.0000000Z",
+ "linkToFilteredSearchResultsUI":"https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHiJkLmN%2Fl35oOTZoKioEOouaBcDeFgHiJkLmN%2BaBcDeFgHiJkLmN%2BaBcDeFgHiJkLmN7HHgOCZTR0Ak%2FaBcDeFgHiJkLmN1234567890Ltcw%2FOqZS%2FuX0L5d%2Bx3iMHNzQiu3Y%2BzsjpFSWlOzgA87vAxeHW2MoAtQxe6OUvVrZR3XYZPXrd%2FIE/prettify/1/timespan/2021-11-16T10%3a17%3a39.0000000Z%2f2021-11-16T11%3a17%3a39.0000000Z",
+ "linkToSearchResultsAPI":"https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%7C%20where%20TimeGenerated%20between%28datetime%282021-11-16T10%3A17%3A39.0000000Z%29..datetime%282021-11-16T11%3A17%3A39.0000000Z%29%29&timespan=2021-11-16T10%3a17%3a39.0000000Z%2f2021-11-16T11%3a17%3a39.0000000Z",
+ "linkToFilteredSearchResultsAPI":"https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%7C%20where%20TimeGenerated%20between%28datetime%282021-11-16T10%3A17%3A39.0000000Z%29..datetime%282021-11-16T11%3A17%3A39.0000000Z%29%29%7C%20where%20tostring%28Computer%29%20%3D%3D%20%27test-computer%27&timespan=2021-11-16T10%3a17%3a39.0000000Z%2f2021-11-16T11%3a17%3a39.0000000Z"
+ }
+ ],
+ "windowStartTime":"2021-11-16T10:17:39Z",
+ "windowEndTime":"2021-11-16T11:17:39Z"
+ }
+ }
+ }
+}
+```
+
+### Activity log alerts
+
+#### `monitoringService` = `Activity Log - Administrative`
+
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/12345678-1234-1234-1234-1234567890ab",
+ "alertRule":"test-activityLogAlertRule",
+ "severity":"Sev4",
+ "signalType":"Activity Log",
+ "monitorCondition":"Fired",
+ "monitoringService":"Activity Log - Administrative",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.compute/virtualmachines/test-VM"
+ ],
+ "configurationItems":[
+ "test-VM"
+ ],
+ "originAlertId":"bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb_123456789012345678901234567890ab",
+ "firedDateTime":"2021-11-16T08:29:01.2932462Z",
+ "description":"Alert rule description",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.0"
+ },
+ "alertContext":{
+ "authorization":{
+ "action":"Microsoft.Compute/virtualMachines/restart/action",
+ "scope":"/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Compute/virtualMachines/test-VM"
+ },
+ "channels":"Operation",
+ "claims":"{}",
+ "caller":"user-email@domain.com",
+ "correlationId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+ "eventSource":"Administrative",
+ "eventTimestamp":"2021-11-16T08:27:36.1836909+00:00",
+ "eventDataId":"bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "level":"Informational",
+ "operationName":"Microsoft.Compute/virtualMachines/restart/action",
+ "operationId":"cccccccc-cccc-cccc-cccc-cccccccccccc",
+ "properties":{
+ "eventCategory":"Administrative",
+ "entity":"/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Compute/virtualMachines/test-VM",
+ "message":"Microsoft.Compute/virtualMachines/restart/action",
+ "hierarchy":"22222222-2222-2222-2222-222222222222/CnAIOrchestrationServicePublicCorpprod/33333333-3333-3333-3333-3333333333333/44444444-4444-4444-4444-444444444444/55555555-5555-5555-5555-555555555555/11111111-1111-1111-1111-111111111111"
+ },
+ "status":"Succeeded",
+ "subStatus":"",
+ "submissionTimestamp":"2021-11-16T08:29:00.141807+00:00",
+ "Activity Log Event Description":""
+ }
+ }
+}
+```
+
+#### `monitoringService` = `ServiceHealth`
+
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/1234abcd5678efgh1234abcd5678efgh1234abcd5678efgh1234abcd5678efgh",
+ "alertRule":"test-ServiceHealthAlertRule",
+ "severity":"Sev4",
+ "signalType":"Activity Log",
+ "monitorCondition":"Fired",
+ "monitoringService":"ServiceHealth",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111"
+ ],
+ "originAlertId":"12345678-1234-1234-1234-1234567890ab",
+ "firedDateTime":"2021-11-17T05:34:48.0623172",
+ "description":"Alert rule description",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.0"
+ },
+ "alertContext":{
+ "authorization":null,
+ "channels":1,
+ "claims":null,
+ "caller":null,
+ "correlationId":"12345678-abcd-efgh-ijkl-abcd12345678",
+ "eventSource":2,
+ "eventTimestamp":"2021-11-17T05:34:44.5778226+00:00",
+ "httpRequest":null,
+ "eventDataId":"12345678-1234-1234-1234-1234567890ab",
+ "level":3,
+ "operationName":"Microsoft.ServiceHealth/incident/action",
+ "operationId":"12345678-abcd-efgh-ijkl-abcd12345678",
+ "properties":{
+ "title":"Test Action Group - Test Service Health Alert",
+ "service":"Azure Service Name",
+ "region":"Global",
+ "communication":"<p><strong>Summary of impact</strong>:&nbsp;This is the impact summary.</p>\n<p><br></p>\n<p><strong>Preliminary Root Cause</strong>: This is the preliminary root cause.</p>\n<p><br></p>\n<p><strong>Mitigation</strong>:&nbsp;Mitigation description.</p>\n<p><br></p>\n<p><strong>Next steps</strong>: These are the next steps.&nbsp;</p>\n<p><br></p>\n<p>Stay informed about Azure service issues by creating custom service health alerts: <a href=\"https://aka.ms/ash-videos\" rel=\"noopener noreferrer\" target=\"_blank\">https://aka.ms/ash-videos</a> for video tutorials and <a href=\"https://aka.ms/ash-alerts%20for%20how-to%20documentation\" rel=\"noopener noreferrer\" target=\"_blank\">https://aka.ms/ash-alerts for how-to documentation</a>.</p>\n<p><br></p>",
+ "incidentType":"Incident",
+ "trackingId":"ABC1-DEF",
+ "impactStartTime":"2021-11-16T20:00:00Z",
+ "impactMitigationTime":"2021-11-17T01:00:00Z",
+ "impactedServices":"[{\"ImpactedRegions\":[{\"RegionName\":\"Global\"}],\"ServiceName\":\"Azure Service Name\"}]",
+ "impactedServicesTableRows":"<tr>\r\n<td align='center' style='padding: 5px 10px; border-right:1px solid black; border-bottom:1px solid black'>Azure Service Name</td>\r\n<td align='center' style='padding: 5px 10px; border-bottom:1px solid black'>Global<br></td>\r\n</tr>\r\n",
+ "defaultLanguageTitle":"Test Action Group - Test Service Health Alert",
+ "defaultLanguageContent":"<p><strong>Summary of impact</strong>:&nbsp;This is the impact summary.</p>\n<p><br></p>\n<p><strong>Preliminary Root Cause</strong>: This is the preliminary root cause.</p>\n<p><br></p>\n<p><strong>Mitigation</strong>:&nbsp;Mitigation description.</p>\n<p><br></p>\n<p><strong>Next steps</strong>: These are the next steps.&nbsp;</p>\n<p><br></p>\n<p>Stay informed about Azure service issues by creating custom service health alerts: <a href=\"https://aka.ms/ash-videos\" rel=\"noopener noreferrer\" target=\"_blank\">https://aka.ms/ash-videos</a> for video tutorials and <a href=\"https://aka.ms/ash-alerts%20for%20how-to%20documentation\" rel=\"noopener noreferrer\" target=\"_blank\">https://aka.ms/ash-alerts for how-to documentation</a>.</p>\n<p><br></p>",
+ "stage":"Resolved",
+ "communicationId":"11223344556677",
+ "isHIR":"false",
+ "IsSynthetic":"True",
+ "impactType":"SubscriptionList",
+ "version":"0.1.1"
+ },
+ "status":"Resolved",
+ "subStatus":null,
+ "submissionTimestamp":"2021-11-17T01:23:45.0623172+00:00",
+ "ResourceType":null
+ }
+ }
+}
+```
+
+#### `monitoringService` = `Resource Health`
+
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/12345678-1234-1234-1234-1234567890ab",
+ "alertRule":"test-ResourceHealthAlertRule",
+ "severity":"Sev4",
+ "signalType":"Activity Log",
+ "monitorCondition":"Fired",
+ "monitoringService":"Resource Health",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.compute/virtualmachines/test-VM"
+ ],
+ "configurationItems":[
+ "test-VM"
+ ],
+ "originAlertId":"bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb_123456789012345678901234567890ab",
+ "firedDateTime":"2021-11-16T09:54:08.9938123Z",
+ "description":"Alert rule description",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.0"
+ },
+ "alertContext":{
+ "channels":"Admin, Operation",
+ "correlationId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+ "eventSource":"ResourceHealth",
+ "eventTimestamp":"2021-11-16T09:50:20.406+00:00",
+ "eventDataId":"bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "level":"Informational",
+ "operationName":"Microsoft.Resourcehealth/healthevent/Activated/action",
+ "operationId":"bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "properties":{
+ "title":"Rebooted by user",
+ "details":null,
+ "currentHealthStatus":"Unavailable",
+ "previousHealthStatus":"Available",
+ "type":"Downtime",
+ "cause":"UserInitiated"
+ },
+ "status":"Active",
+ "submissionTimestamp":"2021-11-16T09:54:08.5303319+00:00",
+ "Activity Log Event Description":null
+ }
+ }
+}
+```
+
+#### `monitoringService` = `Budget`
+
+**Sample values**
+```json
+{
+ "schemaId":"AIP Budget Notification",
+ "data":{
+ "SubscriptionName":"test-subscription",
+ "SubscriptionId":"11111111-1111-1111-1111-111111111111",
+ "EnrollmentNumber":"",
+ "DepartmentName":"test-budgetDepartmentName",
+ "AccountName":"test-budgetAccountName",
+ "BillingAccountId":"",
+ "BillingProfileId":"",
+ "InvoiceSectionId":"",
+ "ResourceGroup":"test-RG",
+ "SpendingAmount":"1111.32",
+ "BudgetStartDate":"11/17/2021 5:40:29 PM -08:00",
+ "Budget":"10000",
+ "Unit":"USD",
+ "BudgetCreator":"email@domain.com",
+ "BudgetName":"test-budgetName",
+ "BudgetType":"Cost",
+ "NotificationThresholdAmount":"8000.0"
+ }
+}
+```
+
+#### `monitoringService` = `Smart Alert`
+
+**Sample values**
+```json
+{
+ "schemaId":"azureMonitorCommonAlertSchema",
+ "data":{
+ "essentials":{
+ "alertId":"/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.AlertsManagement/alerts/12345678-1234-1234-1234-1234567890ab",
+ "alertRule":"Dependency Latency Degradation - test-applicationInsights",
+ "severity":"Sev3",
+ "signalType":"Log",
+ "monitorCondition":"Fired",
+ "monitoringService":"SmartDetector",
+ "alertTargetIDs":[
+ "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.insights/components/test-applicationInsights"
+ ],
+ "configurationItems":[
+ "test-applicationInsights"
+ ],
+ "originAlertId":"1234abcd5678efgh1234abcd5678efgh1234abcd5678efgh1234abcd5678efgh",
+ "firedDateTime":"2021-10-28T19:09:09.1115084Z",
+ "description":"Dependency Latency Degradation notifies you of an unusual increase in response by a dependency your app is calling (e.g. REST API or database)",
+ "essentialsVersion":"1.0",
+ "alertContextVersion":"1.0"
+ },
+ "alertContext":{
+ "DetectionSummary":"A degradation in the dependency duration over the last 24 hours",
+ "FormattedOccurrenceTime":"2021-10-27T23:59:59Z",
+ "DetectedValue":"0.45 sec",
+ "NormalValue":"0.27 sec (over the last 7 days)",
+ "PresentationInsightEventRequest":"/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/microsoft.insights/components/test-applicationInsights/query?query=systemEvents%0d%0a++++++++++++++++%7c+where+timestamp+%3e%3d+datetime(%272021-10-27T23%3a29%3a59.0000000Z%27)+%0d%0a++++++++++++++++%7c+where+itemType+%3d%3d+%27systemEvent%27+and+name+%3d%3d+%27ProactiveDetectionInsight%27+%0d%0a++++++++++++++++%7c+where+dimensions.InsightType+%3d%3d+3+%0d%0a++++++++++++++++%7c+where+dimensions.InsightVersion+%3d%3d+%27SmartAlert%27%0d%0a++++++++++++++++%7c+where+dimensions.InsightDocumentId+%3d%3d+%2712345678-abcd-1234-5678-abcd12345678%27+%0d%0a++++++++++++++++%7c+project+dimensions.InsightPropertiesTable%2cdimensions.InsightDegradationChart%2cdimensions.InsightCountChart%2cdimensions.InsightLinksTable%0d%0a++++++++++++++++&api-version=2018-04-20",
+ "SmartDetectorId":"DependencyPerformanceDegradationDetector",
+ "SmartDetectorName":"Dependency Performance Degradation Detector",
+ "AnalysisTimestamp":"2021-10-28T19:09:09.1115084Z"
+ }
+ }
+}
+```
azure-monitor Alerts Rate Limiting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-rate-limiting.md
Last updated 3/12/2018
# Rate limiting for Voice, SMS, emails, Azure App push notifications and webhook posts Rate limiting is a suspension of notifications that occurs when too many are sent to a particular phone number, email address or device. Rate limiting ensures that alerts are manageable and actionable.
-The rate limit thresholds are:
+The rate limit thresholds in **production** are:
- **SMS**: No more than 1 SMS every 5 minutes. - **Voice**: No more than 1 Voice call every 5 minutes.
The rate limit thresholds are:
Other actions are not rate limited.
+The rate limit thresholds for the **test action group** feature are:
+
+- **SMS**: No more than 1 SMS every 1 minute.
+- **Voice**: No more than 1 Voice call every 1 minute.
+- **Email**: No more than 2 emails every 1 minute.
+
+ Other actions are not rate limited.
+ ## Rate limit rules - A particular phone number or email is rate limited when it receives more messages than the threshold allows. - A phone number or email can be part of action groups across many subscriptions. Rate limiting applies across all subscriptions. It applies as soon as the threshold is reached, even if messages are sent from multiple subscriptions.
The rate limit thresholds are:
## Next steps ## * Learn more about [SMS alert behavior](alerts-sms-behavior.md). * Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.
-* Learn how to [configure alerts whenever a service health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
+* Learn how to [configure alerts whenever a service health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis-visualizations.md
description: Learn how to use visualizations in Application Change Analysis in A
Previously updated : 02/11/2021 Last updated : 01/11/2022
Last updated 02/11/2021
## Standalone UI
-In Azure Monitor, there is a standalone pane for Change Analysis to view all changes with insights into application dependencies and resources.
+Change Analysis lives in a standalone pane under Azure Monitor, where you can view all changes and application dependency/resource insights.
-Search for Change Analysis in the search bar on Azure portal to launch the experience.
+In the Azure portal, search for Change Analysis to launch the experience.
-![Screenshot of searching Change Analysis in Azure portal](./media/change-analysis/search-change-analysis.png)
-All resources under a selected subscription are displayed with changes from the past 24 hours. All changes are displayed with old value and new value to provide insights at one glance.
+Select one or more subscriptions to view:
+- All of their resources' changes from the past 24 hours.
+- Old and new values to provide insights at one glance.
-![Screenshot of Change Analysis blade in Azure portal](./media/change-analysis/change-analysis-standalone-blade.png)
-Clicking into a change to view full Resource Manager snippet and other properties.
+Select a change to view the full Resource Manager snippet and other properties.
-![Screenshot of change details](./media/change-analysis/change-details.png)
-For any feedback, use the send feedback button or email changeanalysisteam@microsoft.com.
+Send any feedback to the [Change Analysis team](mailto:changeanalysisteam@microsoft.com) from the Change Analysis blade:
-![Screenshot of feedback button in Change Analysis tab](./media/change-analysis/change-analysis-feedback.png)
### Multiple subscription support The UI supports selecting multiple subscriptions to view resource changes. Use the subscription filter:
-![Screenshot of subscription filter that supports selecting multiple subscriptions](./media/change-analysis/multiple-subscriptions-support.png)
- ## Application Change Analysis in the Diagnose and solve problems tool
-Application Change Analysis is a standalone detector in the Web App diagnose and solve problems tools. It is also aggregated in **Application Crashes** and **Web App Down detectors**. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered. Follow these instructions to enable web app in-guest change tracking.
+Application Change Analysis is:
+- A standalone detector in the Web App **Diagnose and solve problems** tool.
+- Aggregated in **Application Crashes** and **Web App Down detectors**.
+
+From your app service's overview page in the Azure portal, select **Diagnose and solve problems** from the left menu. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered. Enable web app in-guest change tracking with the following instructions:
1. Select **Availability and Performance**.
- ![Screenshot of the "Availability and Performance" troubleshooting options](./media/change-analysis/availability-and-performance.png)
+ :::image type="content" source="./media/change-analysis/availability-and-performance.png" alt-text="Screenshot of the Availability and Performance troubleshooting options":::
+
+2. Select **Application Changes (Preview)**. The feature is also available in **Application Crashes**.
-2. Select **Application Changes**. The feature is also available in **Application Crashes**.
+ :::image type="content" source="./media/change-analysis/application-changes.png" alt-text="Screenshot of the Application Crashes button":::
- ![Screenshot of the "Application Crashes" button](./media/change-analysis/application-changes.png)
+ The link leads to the Application Change Analysis UI scoped to the web app.
-3. The link leads to Application Change Analysis UI scoped to the web app. If web app in-guest change tracking is not enabled, follow the banner to get file and app settings changes.
+3. Enable web app in-guest change tracking if you haven't already.
- ![Screenshot of "Application Crashes" options](./media/change-analysis/enable-changeanalysis.png)
+ :::image type="content" source="./media/change-analysis/enable-changeanalysis.png" alt-text="Screenshot of the Application Crashes options":::
-4. Turn on **Change Analysis** and select **Save**. The tool displays all web apps under an App Service plan. You can use the plan level switch to turn on Change Analysis for all web apps under a plan.
+4. Toggle on **Change Analysis** status and select **Save**.
- ![Screenshot of the "Enable Change Analysis" user interface](./media/change-analysis/change-analysis-on.png)
+ :::image type="content" source="./media/change-analysis/change-analysis-on.png" alt-text="Screenshot of the Enable Change Analysis user interface":::
+
+ - The tool displays all web apps under an App Service plan, which you can toggle on and off individually.
-5. Change data is also available in select **Web App Down** and **Application Crashes** detectors. You'll see a graph that summarizes the type of changes over time along with details on those changes. By default, changes in the past 24 hours are displayed to help with immediate problems.
+ :::image type="content" source="./media/change-analysis/change-analysis-on-2.png" alt-text="Screenshot of the Enable Change Analysis user interface expanded":::
- ![Screenshot of the change diff view](./media/change-analysis/change-view.png)
-## Diagnose and Solve Problems tool
-Change Analysis is available as an insight card in Diagnose and Solve Problem tool. If a resource experiences issues and there are changes discovered in the past 72 hours, the insights card will display the number of changes. Clicking on view change details link will lead to the filtered view from Change Analysis standalone UI.
+You can also view change data via the **Web App Down** and **Application Crashes** detectors. The graph summarizes:
+- The change types over time.
+- Details on those changes.
+
+By default, the graph displays changes from within the past 24 hours to help with immediate problems.
+
-![Screenshot of viewing change insight in Diagnose and Solve Problems tool.](./media/change-analysis/change-insight-diagnose-and-solve.png)
+## Diagnose and Solve Problems tool
+Change Analysis displays as an insight card in a virtual machine's **Diagnose and solve problems** tool. The insight card displays the number of changes or issues a resource experiences within the past 72 hours.
+Under **Common problems**, select **View change details** to open the filtered view in the Change Analysis standalone UI.
## Virtual Machine Diagnose and Solve Problems
-Go to Diagnose and Solve Problems tool for a Virtual Machine. Go to **Troubleshooting Tools**, browse down the page and select **Analyze recent changes** to view changes on the Virtual Machine.
+1. Within your virtual machine, select **Diagnose and solve problems** from the left menu.
+1. Go to **Troubleshooting tools**.
+1. Scroll to the end of the troubleshooting options and select **Analyze recent changes** to view changes on the virtual machine.
+
-![Screenshot of the VM Diagnose and Solve Problems](./media/change-analysis/vm-dnsp-troubleshootingtools.png)
-![Change analyzer in troubleshooting tools](./media/change-analysis/analyze-recent-changes.png)
+## Activity Log change history
-## Activity Log Change History
+Use the [View change history](../essentials/activity-log.md#view-change-history) feature to call the Application Change Analysis service backend to view changes associated with an operation. Changes returned include:
+- Resource level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md).
+- Resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md).
+- In-guest changes from PaaS services, such as App Services web app.
-The [View change history](../essentials/activity-log.md#view-change-history) feature in Activity Log calls Application Change Analysis service backend to get changes associated with an operation. **Change history** used to call [Azure Resource Graph](../../governance/resource-graph/overview.md) directly, but swapped the backend to call Application Change Analysis so changes returned will include resource level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md), resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md), and in-guest changes from PaaS services such as App Services web app.
-In order for the Application Change Analysis service to be able to scan for changes in users' subscriptions, a resource provider needs to be registered. The first time entering **Change History** tab, the tool will automatically start to register **Microsoft.ChangeAnalysis** resource provider. After registered, changes from **Azure Resource Graph** will be available immediately and cover the past 14 days. Changes from other sources will be available after ~4 hours after subscription is onboard.
+1. From within your resource, select **Activity Log** from the side menu.
+1. Select a change from the list.
+1. Select the **Change history (Preview)** tab.
+1. For the Application Change Analysis service to scan for changes in users' subscriptions, a resource provider needs to be registered. When you select the **Change history (Preview)** tab, the tool automatically registers the **Microsoft.ChangeAnalysis** resource provider.
+1. Once registered, you can immediately view changes from **Azure Resource Graph** for the past 14 days.
+    - Changes from other sources will be available ~4 hours after the subscription is onboarded.
-![Activity Log change history integration](./media/change-analysis/activity-log-change-history.png)
## VM Insights integration
-Users having [VM Insights](../vm/vminsights-overview.md) enabled can view what changed in their virtual machines that might of caused any spikes in a metrics chart such as CPU or Memory. Change data is integrated in the VM Insights side navigation bar. User can view if any changes happened in the VM and select **Investigate Changes** to view change details in Application Change Analysis standalone UI.
+If you've enabled [VM Insights](../vm/vminsights-overview.md), you can view changes in your virtual machines that may have caused any spikes in a metric chart, such as CPU or Memory.
+
+1. Within your virtual machine, select **Insights** from under **Monitoring** in the left menu.
+1. Select the **Performance** tab.
+1. Expand the property panel.
+
+ :::image type="content" source="./media/change-analysis/vm-insights.png" alt-text="Virtual machine insights performance and property panel.":::
+
+1. Select the **Changes** tab.
+1. Select the **Investigate Changes** button to view change details in the Application Change Analysis standalone UI.
-[![VM insights integration](./media/change-analysis/vm-insights.png)](./media/change-analysis/vm-insights.png#lightbox)
+ :::image type="content" source="./media/change-analysis/vm-insights-2.png" alt-text="View of the property panel, selecting Investigate Changes button.":::
## Next steps
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis.md
description: Use Application Change Analysis in Azure Monitor to troubleshoot ap
Previously updated : 05/04/2020 Last updated : 01/11/2022
-# Use Application Change Analysis (preview) in Azure Monitor
+# Use Application Change Analysis in Azure Monitor (preview)
-When a live site issue or outage occurs, quickly determining the root cause is critical. Standard monitoring solutions might alert you to a problem. They might even indicate which component is failing. But this alert won't always immediately explain the failure's cause. You know your site worked five minutes ago, and now it's broken. What changed in the last five minutes? This is the question that Application Change Analysis is designed to answer in Azure Monitor.
+While standard monitoring solutions might alert you to a live site issue, outage, or component failure, they often don't explain the cause. For example, your site worked five minutes ago, and now it's broken. What changed in the last five minutes?
-Building on the power of [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis provides insights into your Azure application changes to increase observability and reduce MTTR (mean time to repair).
+We've designed Application Change Analysis to answer that question in Azure Monitor.
+
+Building on the power of [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis:
+- Provides insights into your Azure application changes.
+- Increases observability.
+- Reduces mean time to repair (MTTR).
> [!IMPORTANT]
-> Change Analysis is currently in preview. This preview version is provided without a service-level agreement. This version is not recommended for production workloads. Some features might not be supported or might have constrained capabilities. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Change Analysis is currently in preview. This version:
+>
+> - Is provided without a service-level agreement.
+> - Is not recommended for production workloads.
+> - Might not support some features or might have constrained capabilities.
+>
+> For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Overview
-Change Analysis detects various types of changes, from the infrastructure layer all the way to application deployment. It's a subscription-level Azure resource provider that checks resource changes in the subscription. Change Analysis provides data for various diagnostic tools to help users understand what changes might have caused issues.
+Change Analysis detects various types of changes, from the infrastructure layer through application deployment. Change Analysis is a subscription-level Azure resource provider that:
+- Checks resource changes in the subscription.
+- Provides data for various diagnostic tools to help users understand what changes might have caused issues.
The following diagram illustrates the architecture of Change Analysis:
Application Change Analysis service supports resource property level changes in
- App Service - Azure Kubernetes service - Azure Function-- Networking resources: Network Security Group, Virtual Network, Application Gateway, etc.-- Data
+- Networking resources:
+ - Network Security Group
+ - Virtual Network
+ - Application Gateway, etc.
+- Data
+ - Storage
+ - SQL
+ - Redis Cache
+ - Cosmos DB, etc.
## Data sources
-Application change analysis queries for Azure Resource Manager tracked properties, proxied configurations, and web app in-guest changes. In addition, the service tracks resource dependency changes to diagnose and monitor an application end-to-end.
+Application Change Analysis queries for:
+- Azure Resource Manager tracked properties.
+- Proxied configurations.
+- Web app in-guest changes.
+
+Change Analysis also tracks resource dependency changes to diagnose and monitor an application end-to-end.
### Azure Resource Manager tracked properties changes
-Using [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis provides a historical record of how the Azure resources that host your application have changed over time. Tracked settings such as managed identities, Platform OS upgrade, and hostnames can be detected.
+Using [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis provides a historical record of how the Azure resources that host your application have changed over time. The following tracked settings can be detected:
+- Managed identities
+- Platform OS upgrade
+- Hostnames
### Azure Resource Manager proxied setting changes
-Settings such as IP Configuration rule, TLS settings, and extension versions are not yet available in Azure Resource Graph, so Change Analysis queries and computes these changes securely to provide more details in what changed in the app.
+Unlike Azure Resource Graph, Change Analysis securely queries and computes IP Configuration rules, TLS settings, and extension versions to provide more change details in the app.
### Changes in web app deployment and configuration (in-guest changes)
-Change Analysis captures the deployment and configuration state of an application every 4 hours. It can detect, for example, changes in the application environment variables. The tool computes the differences and presents what has changed. Unlike Resource Manager changes, code deployment change information might not be available immediately in the tool. To view the latest changes in Change Analysis, select **Refresh**.
+Every 4 hours, Change Analysis captures the deployment and configuration state of an application. For example, it can detect changes in the application environment variables. The tool computes the differences and presents the changes.
+
+Unlike Azure Resource Manager changes, code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**.
-![Screenshot of the "Scan changes now" button](./media/change-analysis/scan-changes.png)
Currently all text-based files under site root **wwwroot** with the following extensions are supported: - *.json
Currently all text-based files under site root **wwwroot** with the following ex
### Dependency changes
-Changes to resource dependencies can also cause issues in a resource. For example, if a web app calls into a Redis cache, the Redis cache SKU could affect the web app performance. Another example is if port 22 was closed in a Virtual Machine's Network Security Group, it will cause connectivity errors.
+Changes to resource dependencies can also cause issues in a resource. For example, if a web app calls into a Redis cache, the Redis cache SKU could affect the web app performance.
+
+As another example, if port 22 was closed in a virtual machine's Network Security Group, it will cause connectivity errors.
#### Web App diagnose and solve problems navigator (Preview) To detect changes in dependencies, Change Analysis checks the web app's DNS record. In this way, it identifies changes in all app components that could cause issues.+ Currently the following dependencies are supported in **Web App Diagnose and solve problems | Navigator (Preview)**: - Web Apps
Currently the following dependencies are supported in **Web App Diagnose and sol
#### Related resources
-Application Change Analysis detects related resources. Common examples are Network Security Group, Virtual Network, Application Gateway, and Load Balancer related to a Virtual Machine.
-The network resources are usually automatically provisioned in the same resource group as the resources using it, so filtering the changes by resource group will show all changes for the Virtual Machine and related networking resources.
+Change Analysis detects related resources. Common examples are:
-![Screenshot of Networking changes](./media/change-analysis/network-changes.png)
+- Network Security Group
+- Virtual Network
+- Application Gateway
+- Load Balancer related to a Virtual Machine.
+
+Network resources are usually provisioned in the same resource group as the resources that use them. Filter the changes by resource group to show all changes for the virtual machine and its related networking resources.
+ ## Application Change Analysis service enablement
-The Application Change Analysis service computes and aggregates change data from the data sources mentioned above. It provides a set of analytics for users to easily navigate through all resource changes and to identify which change is relevant in the troubleshooting or monitoring context.
-"Microsoft.ChangeAnalysis" resource provider needs to be registered with a subscription for the Azure Resource Manager tracked properties and proxied settings change data to be available. As you enter the Web App diagnose and solve problems tool or bring up the Change Analysis standalone tab, this resource provider is automatically registered.
-For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#application-change-analysis-in-the-diagnose-and-solve-problems-tool) section later in this article for more details.
+The Application Change Analysis service:
+- Computes and aggregates change data from the data sources mentioned earlier.
+- Provides a set of analytics for users to:
+ - Easily navigate through all resource changes.
+ - Identify relevant changes in the troubleshooting or monitoring context.
-## Cost
-Application Change Analysis is a free service - it does not incur any billing cost to subscriptions with it enabled. The service also does not have any performance impact for scanning Azure Resource properties changes. When you enable Change Analysis for web apps in-guest file changes (or enable the Diagnose and Solve problems tool), it will have negligible performance impact on the web app and no billing cost.
+You'll need to register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription to make the tracked properties and proxied settings change data available (a manual registration sketch follows this list). The `Microsoft.ChangeAnalysis` resource provider is automatically registered as you either:
+- Enter the Web App **Diagnose and Solve Problems** tool, or
+- Bring up the Change Analysis standalone tab.
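If you'd rather register the resource provider yourself (for example, from an automation pipeline) instead of waiting for the automatic registration, here's a minimal Azure PowerShell sketch:

```PowerShell
# Register the Change Analysis resource provider on the current subscription.
Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"

# Check the registration state; it can take a few minutes to reach "Registered".
Get-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis" |
    Select-Object ProviderNamespace, RegistrationState
```
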
+For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see the [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#application-change-analysis-in-the-diagnose-and-solve-problems-tool) section.
+
+## Cost
+Application Change Analysis is a free service. Once enabled, the Change Analysis **Diagnose and solve problems** tool does not:
+- Incur any billing cost to subscriptions.
+- Have any performance impact for scanning Azure Resource properties changes.
## Enable Change Analysis at scale for Web App in-guest file and environment variable changes
-If your subscription includes numerous web apps, enabling the service at the level of the web app would be inefficient. Run the following script to enable all web apps in your subscription.
+If your subscription includes several web apps, enabling the service at the web app level would be inefficient. Instead, run the following script to enable all web apps in your subscription.
-Pre-requisites:
+### Prerequisites
-- PowerShell Az Module. Follow instructions at [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
+PowerShell Az module. Follow the instructions at [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
-Run the following script:
+### Run the following script:
```PowerShell # Log in to your Azure subscription
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/codeless-overview.md
As we're adding new integrations, the auto-instrumentation capability matrix bec
|Environment/Resource Provider | .NET | .NET Core | Java | Node.js | Python | ||--|--|--|--|--| |Azure App Service on Windows | GA, OnBD* | GA, opt-in | Public Preview | Public Preview | Not supported |
-|Azure App Service on Linux | N/A | Not supported | GA | GA | Not supported |
+|Azure App Service on Linux | N/A | Public Preview | GA | GA | Not supported |
|Azure Functions - basic | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | |Azure Functions - dependencies | Not supported | Not supported | Public Preview | Not supported | Through [extension](monitor-functions.md#distributed-tracing-for-python-function-apps) | |Azure Spring Cloud | Not supported | Not supported | Public Preview | Not supported | Not supported | |Azure Kubernetes Service | N/A | Not supported | Through agent | Not supported | Not supported |
-|Azure VMs Windows | Public Preview | Not supported | Through agent | Not supported | Not supported |
-|On-Premises VMs Windows | GA, opt-in | Not supported | Through agent | Not supported | Not supported |
+|Azure VMs Windows | Public Preview | Public Preview | Through agent | Not supported | Not supported |
+|On-Premises VMs Windows | GA, opt-in | Public Preview | Through agent | Not supported | Not supported |
|Standalone agent - any env. | Not supported | Not supported | GA | Not supported | Not supported | *OnBD is short for On by Default - the Application Insights will be enabled automatically once you deploy your app in supported environments.
azure-monitor Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/distributed-tracing.md
The Application Insights agents and/or SDKs for .NET, .NET Core, Java, Node.js,
* [.NET](asp-net.md) * [.NET Core](asp-net-core.md) * [Java](./java-in-process-agent.md)
-* [Node.js](../app/nodejs-quick-start.md)
+* [Node.js](../app/nodejs.md)
* [JavaScript](./javascript.md) * [Python](opencensus-python.md)
azure-portal Azure Portal Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-dashboards.md
By default, data will be refreshed every hour. To change this, select **Auto ref
The default time settings are **UTC Time**, showing data for the **Past 24 hours**. To change this, select the button and choose a new time range, time granularity, and/or time zone, then select **Apply**.
-To apply additional filters, select **Add filters**. The options you'll see will vary depending on the tiles in your dashboard. For example, you may be able to show only data for a specific subscription or location. Select the filter you'd like to use and make your selections. The filter will then be applied to your data. To remove a filter, select the **X** in its button.
+To apply additional filters, select **Add filter**. The options you'll see will vary depending on the tiles in your dashboard. For example, you may be able to show only data for a specific subscription or location. Select the filter you'd like to use and make your selections. The filter will then be applied to your data. To remove a filter, select the **X** in its button.
Tiles which support filtering have a ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) filter icon in the top-left corner of the tile. Some tiles allow you to override the global filters with filters specific to that tile. To do so, select **Configure tile data** from the context menu, or select the filter icon, then apply the desired filters.
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
Backup storage redundancy impacts backup costs in the following way:
For more details about backup storage pricing visit [Azure SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/single/) and [Azure SQL Managed Instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/). > [!IMPORTANT]
-> Backup storage redundancy for Hyperscale and SQL Managed Instance can only be set during database creation. This setting cannot be modified once the resource is provisioned. [Database copy](database-copy.md) process can be used to update the backup storage redundancy settings for an existing Hyperscale database.
+> Backup storage redundancy for Hyperscale can only be set during database creation. This setting cannot be modified once the resource is provisioned. The [database copy](database-copy.md) process can be used to update the backup storage redundancy settings for an existing Hyperscale database.
> [!NOTE] > Backup storage redundancy for Hyperscale is currently in preview.
For more information, see [Backup Retention REST API](/rest/api/sql/backupshortt
## Configure backup storage redundancy
-Configurable storage redundancy for SQL Databases can be configured at the time of database creation or can be updated for an existing database; the changes made to an existing database apply to future backups only. For SQL Managed Instance and HyperScale backup storage redundancy can only be specified during the create process. Once the resource is provisioned, you can't change the backup storage redundancy option. The default value is geo-redundant storage. For differences in pricing between locally redundant, zone-redundant and geo-redundant backup storage visit [managed instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/).
+Configurable storage redundancy for SQL Databases can be configured at the time of database creation or can be updated for an existing database; the changes made to an existing database apply to future backups only.
+For SQL Managed Instance, backup storage redundancy is set at the instance level and applies to all managed databases on that instance. It can be configured when the instance is created or updated for existing instances; changing backup storage redundancy then triggers a new full backup per database, and the change applies to all future backups. The default storage redundancy type is geo-redundancy (RA-GRS).
+For Hyperscale, backup storage redundancy can only be specified during the create process. Once the resource is provisioned, you can't change the backup storage redundancy option. The default value is geo-redundant storage. For differences in pricing between locally redundant, zone-redundant and geo-redundant backup storage, visit the [managed instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/).
+
+> [!NOTE]
+> Changing backup storage redundancy for SQL Managed Instance is currently available only in the public cloud, via the Azure portal.
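Outside the portal, the same setting can be scripted for a single database. Here's a minimal Azure PowerShell sketch, assuming the installed Az.Sql version exposes the `-BackupStorageRedundancy` parameter (values `Local`, `Zone`, `Geo`); all resource names are placeholders:

```PowerShell
# Create a new database with locally redundant backup storage (placeholder names).
New-AzSqlDatabase -ResourceGroupName "test-RG" -ServerName "test-server" `
    -DatabaseName "test-db" -BackupStorageRedundancy Local

# Switch an existing database to zone-redundant backup storage.
# The change applies to future backups only.
Set-AzSqlDatabase -ResourceGroupName "test-RG" -ServerName "test-server" `
    -DatabaseName "test-db" -BackupStorageRedundancy Zone
```
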
### Configure backup storage redundancy by using the Azure portal #### [SQL Database](#tab/single-database) In Azure portal, you can configure the backup storage redundancy on the **Create SQL Database** pane. The option is available under the Backup Storage Redundancy section. + ![Open Create SQL Database pane](./media/automated-backups-overview/sql-database-backup-storage-redundancy.png) #### [SQL Managed Instance](#tab/managed-instance)
-In the Azure portal, the option to change backup storage redundancy is located on the **Compute + storage** pane accessible from the **Configure Managed Instance** option on the **Basics** tab when you are creating your SQL Managed Instance.
-![Open Compute+Storage configuration-pane](./media/automated-backups-overview/open-configuration-blade-managedinstance.png)
+In the Azure portal, during instance creation, the default backup storage redundancy option is geo-redundancy. The option to change it is located on the **Compute + storage** pane, accessible from the **Configure Managed Instance** option on the **Basics** tab.
+
+![Open Compute+Storage configuration-pane](./media/automated-backups-overview/open-configuration-blade-managed-instance.png)
Find the option to select backup storage redundancy on the **Compute + storage** pane.
-![Configure backup storage redundancy](./media/automated-backups-overview/select-backup-storage-redundancy-managedinstance.png)
+
+![Configure backup storage redundancy](./media/automated-backups-overview/select-backup-storage-redundancy-managed-instance.png)
+
+To change the backup storage redundancy option for an existing instance, go to the **Compute + storage** pane, choose the new backup option, and select **Apply**. For now, this change is applied only to point-in-time restore (PITR) backups, while long-term retention (LTR) backups retain the old storage redundancy type. The time to perform the backup redundancy change depends on the size of all the databases within a single managed instance. Changing the backup redundancy will take more time for instances that have large databases. It's possible to combine the backup storage redundancy change operation with the UpdateSLO operation. Use the **Notification** pane of the Azure portal to view the status of the change operation.
+
azure-sql Elastic Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-query-overview.md
Once you have defined your external data sources and your external tables, you c
You can use regular SQL Server connection strings to connect your applications and BI or data integration tools to databases that have external tables. Make sure that SQL Server is supported as a data source for your tool. Once connected, refer to the elastic query database and the external tables in that database just like you would do with any other SQL Server database that you connect to with your tool. > [!IMPORTANT]
-> Authentication using Azure Active Directory with elastic queries is not currently supported.
+> Elastic queries are only supported when connecting with SQL Server Authentication.
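For illustration, here's a minimal PowerShell sketch of such a connection, using SQL Server Authentication as the note above requires. The server, database, credentials, and external table name are all placeholders, and `Invoke-Sqlcmd` comes from the SqlServer module:

```PowerShell
# A typical SQL Server Authentication connection string for the elastic query database;
# BI and data integration tools take the same values (all placeholders).
$connectionString = "Server=tcp:test-server.database.windows.net,1433;Database=elasticquery-db;User ID=sqladmin;Password=<password>;Encrypt=True;"

# Query a hypothetical external table the same way you would query a local table.
Invoke-Sqlcmd -ServerInstance "test-server.database.windows.net" -Database "elasticquery-db" `
    -Username "sqladmin" -Password "<password>" `
    -Query "SELECT TOP (10) * FROM dbo.RemoteCustomers;"
```
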
## Cost
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Tutorial: Add an Azure SQL Database elastic pool to a failover group
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Tutorial: Add an Azure SQL Database to an autofailover group
azure-sql Migrate Dtu To Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/migrate-dtu-to-vcore.md
Previously updated : 07/26/2021 Last updated : 01/18/2022 # Migrate Azure SQL Database from the DTU-based model to the vCore-based model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
FROM dtu_vcore_map;
Besides the number of vCores (logical CPUs) and the hardware generation, several other factors may influence the choice of vCore service objective: -- The mapping T-SQL query matches DTU and vCore service objectives in terms of their CPU capacity, therefore the results will be more accurate for CPU-bound workloads.
+- The mapping Transact-SQL query matches DTU and vCore service objectives in terms of their CPU capacity, therefore the results will be more accurate for CPU-bound workloads.
- For the same hardware generation and the same number of vCores, IOPS and transaction log throughput resource limits for vCore databases are often higher than for DTU databases. For IO-bound workloads, it may be possible to lower the number of vCores in the vCore model to achieve the same level of performance. Actual resource limits for DTU and vCore databases are exposed in the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) view. Comparing these values between the DTU database or pool to be migrated, and a vCore database or pool with an approximately matching service objective will help you select the vCore service objective more precisely. - The mapping query also returns the amount of memory per core for the DTU database or elastic pool to be migrated, and for each hardware generation in the vCore model. Ensuring similar or higher total memory after migration to vCore is important for workloads that require a large memory data cache to achieve sufficient performance, or workloads that require large memory grants for query processing. For such workloads, depending on actual performance, it may be necessary to increase the number of vCores to get sufficient total memory. - The [historical resource utilization](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) of the DTU database should be considered when choosing the vCore service objective. DTU databases with consistently under-utilized CPU resources may need fewer vCores than the number returned by the mapping query. Conversely, DTU databases where consistently high CPU utilization causes inadequate workload performance may require more vCores than returned by the query.-- If migrating databases with intermittent or unpredictable usage patterns, consider the use of [Serverless](serverless-tier-overview.md) compute tier. Note that the max number of concurrent workers (requests) in serverless is 75% the limit in provisioned compute for the same number of max vCores configured. Also, the max memory available in serverless is 3 GB times the maximum number of vCores configured, which is less than the per-core memory for provisioned compute. For example, on Gen5 max memory is 120 GB when 40 max vCores are configured in serverless, vs. 204 GB for a 40 vCore provisioned compute.
+- If migrating databases with intermittent or unpredictable usage patterns, consider using the [serverless](serverless-tier-overview.md) compute tier. Note that the max number of concurrent [workers](resource-limits-logical-server.md#sessions-workers-and-requests) in serverless is 75% of the limit in provisioned compute for the same number of max vCores configured. Also, the max memory available in serverless is 3 GB times the maximum number of vCores configured, which is less than the per-core memory for provisioned compute. For example, on Gen5 max memory is 120 GB when 40 max vCores are configured in serverless, vs. 204 GB for a 40 vCore provisioned compute.
- In the vCore model, the supported maximum database size may differ depending on hardware generation. For large databases, check supported maximum sizes in the vCore model for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md). - For elastic pools, the [DTU](resource-limits-dtu-elastic-pools.md) and [vCore](resource-limits-vcore-elastic-pools.md) models have differences in the maximum supported number of databases per pool. This should be considered when migrating elastic pools with many databases. - Some hardware generations may not be available in every region. Check availability under [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations).
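The resource-limit comparison suggested in the list above lends itself to a quick query. The following is a minimal sketch, not part of the source article, that reads the governance limits of the current database from the documented `sys.dm_user_db_resource_governance` view; run it on the DTU database (or pool) to be migrated and again on a candidate vCore database, then compare the returned limits side by side. The exact column set varies by service objective, so the sketch selects everything rather than assuming specific column names.

```sql
-- Minimal sketch, assuming VIEW DATABASE STATE permission on each database.
-- Run once on the DTU database to be migrated and once on a candidate vCore
-- database, then compare limits such as worker, IOPS, and log-rate caps.
SELECT *
FROM sys.dm_user_db_resource_governance;
```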
azure-sql Resource Limits Dtu Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-dtu-elastic-pools.md
Previously updated : 10/12/2021 Last updated : 01/18/2022 # Resources limits for elastic pools using the DTU purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Max storage per pool (GB) | 5 | 10 | 20 | 29 | 39 | 78 | 117 | 156 | | Max In-Memory OLTP storage per pool (GB) | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | | Max number DBs per pool <sup>1</sup> | 100 | 200 | 500 | 500 | 500 | 500 | 500 | 500 |
-| Max concurrent workers (requests) per pool <sup>2</sup> | 100 | 200 | 400 | 600 | 800 | 1600 | 2400 | 3200 |
+| Max concurrent workers per pool <sup>2</sup> | 100 | 200 | 400 | 600 | 800 | 1600 | 2400 | 3200 |
| Max concurrent sessions per pool <sup>2</sup> | 30000 | 30000 | 30000 | 30000 |30000 | 30000 | 30000 | 30000 | | Min DTU per database choices | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | | Max DTU per database choices | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
For the same number of DTUs, resources provided to an elastic pool may exceed th
<sup>1</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
-<sup>2</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>2</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### Standard elastic pool limits
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Max storage per pool (GB) | 500 | 750 | 1024 | 1280 | 1536 | 2048 | | Max In-Memory OLTP storage per pool (GB) | N/A | N/A | N/A | N/A | N/A | N/A | | Max number DBs per pool <sup>2</sup> | 100 | 200 | 500 | 500 | 500 | 500 |
-| Max concurrent workers (requests) per pool <sup>3</sup> | 100 | 200 | 400 | 600 | 800 | 1600 |
+| Max concurrent workers per pool <sup>3</sup> | 100 | 200 | 400 | 600 | 800 | 1600 |
| Max concurrent sessions per pool <sup>3</sup> | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 | | Min DTU per database choices | 0, 10, 20, 50 | 0, 10, 20, 50, 100 | 0, 10, 20, 50, 100, 200 | 0, 10, 20, 50, 100, 200, 300 | 0, 10, 20, 50, 100, 200, 300, 400 | 0, 10, 20, 50, 100, 200, 300, 400, 800 | | Max DTU per database choices | 10, 20, 50 | 10, 20, 50, 100 | 10, 20, 50, 100, 200 | 10, 20, 50, 100, 200, 300 | 10, 20, 50, 100, 200, 300, 400 | 10, 20, 50, 100, 200, 300, 400, 800 |
For the same number of DTUs, resources provided to an elastic pool may exceed th
<sup>2</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
-<sup>3</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>3</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### Standard elastic pool limits (continued)
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Max storage per pool (GB) | 2560 | 3072 | 3584 | 4096 | 4096 | | Max In-Memory OLTP storage per pool (GB) | N/A | N/A | N/A | N/A | N/A | | Max number DBs per pool <sup>2</sup> | 500 | 500 | 500 | 500 | 500 |
-| Max concurrent workers (requests) per pool <sup>3</sup> | 2400 | 3200 | 4000 | 5000 | 6000 |
+| Max concurrent workers per pool <sup>3</sup> | 2400 | 3200 | 4000 | 5000 | 6000 |
| Max concurrent sessions per pool <sup>3</sup> | 30000 | 30000 | 30000 | 30000 | 30000 | | Min DTU per database choices | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000 | | Max DTU per database choices | 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000 |
For the same number of DTUs, resources provided to an elastic pool may exceed th
<sup>2</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
-<sup>3</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>3</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### Premium elastic pool limits
For the same number of DTUs, resources provided to an elastic pool may exceed th
<sup>2</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
-<sup>3</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>3</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### Premium elastic pool limits (continued)
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Max storage per pool (GB) | 2048 | 2560 | 3072 | 3548 | 4096| | Max In-Memory OLTP storage per pool (GB) | 16 | 20 | 24 | 28 | 32 | | Max number DBs per pool <sup>2</sup> | 100 | 100 | 100 | 100 | 100 |
-| Max concurrent workers (requests) per pool <sup>3</sup> | 3200 | 4000 | 4800 | 5600 | 6400 |
+| Max concurrent workers per pool <sup>3</sup> | 3200 | 4000 | 4800 | 5600 | 6400 |
| Max concurrent sessions per pool <sup>3</sup> | 30000 | 30000 | 30000 | 30000 | 30000 | | Min DTU per database choices | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750, 4000 | | Max DTU per database choices | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750, 4000 |
For the same number of DTUs, resources provided to an elastic pool may exceed th
<sup>2</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
-<sup>3</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>3</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
> [!IMPORTANT] > More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North, Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For more information, see [P11-P15 current limitations](single-database-scale.md#p11-and-p15-constraints-when-max-size-greater-than-1-tb).
azure-sql Resource Limits Dtu Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-dtu-single-databases.md
Previously updated : 04/16/2021 Last updated : 01/18/2022 # Resource limits for single databases using the DTU purchasing model - Azure SQL Database
The following tables show the resources available for a single database at each
| Included storage (GB) | 2 | | Max storage (GB) | 2 | | Max in-memory OLTP storage (GB) |N/A |
-| Max concurrent workers (requests) | 30 |
+| Max concurrent workers | 30 |
| Max concurrent sessions | 300 | |||
The following tables show the resources available for a single database at each
| Included storage (GB) <sup>1</sup> | 250 | 250 | 250 | 250 | | Max storage (GB) | 250 | 250 | 250 | 1024 | | Max in-memory OLTP storage (GB) | N/A | N/A | N/A | N/A |
-| Max concurrent workers (requests)| 60 | 90 | 120 | 200 |
+| Max concurrent workers | 60 | 90 | 120 | 200 |
| Max concurrent sessions |600 | 900 | 1200 | 2400 | ||||||
The following tables show the resources available for a single database at each
| Included storage (GB) <sup>1</sup> | 250 | 250 | 250 | 250 | 250 | | Max storage (GB) | 1024 | 1024 | 1024 | 1024 | 1024 | | Max in-memory OLTP storage (GB) | N/A | N/A | N/A | N/A |N/A |
-| Max concurrent workers (requests)| 400 | 800 | 1600 | 3200 |6000 |
+| Max concurrent workers | 400 | 800 | 1600 | 3200 |6000 |
| Max concurrent sessions |4800 | 9600 | 19200 | 30000 |30000 | |||||||
The following tables show the resources available for a single database at each
| Included storage (GB) <sup>1</sup> | 500 | 500 | 500 | 500 | 4096 <sup>2</sup> | 4096 <sup>2</sup> | | Max storage (GB) | 1024 | 1024 | 1024 | 1024 | 4096 <sup>2</sup> | 4096 <sup>2</sup> | | Max in-memory OLTP storage (GB) | 1 | 2 | 4 | 8 | 14 | 32 |
-| Max concurrent workers (requests)| 200 | 400 | 800 | 1600 | 2800 | 6400 |
+| Max concurrent workers | 200 | 400 | 800 | 1600 | 2800 | 6400 |
| Max concurrent sessions | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 | |||||||
azure-sql Resource Limits Logical Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-logical-server.md
Title: Resource management in Azure SQL Database
-description: This article provides an overview of the resource management in Azure SQL Database. It also provides information regarding what happens when those resource limits are reached.
+description: This article provides an overview of resource management in Azure SQL Database with information about what happens when resource limits are reached.
Previously updated : 10/01/2021 Last updated : 01/18/2022 # Resource management in Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-This article provides an overview of resource management in Azure SQL Database. It provides information on what happens when resource limits are reached, and describes resource governance mechanisms that are used to enforce these limits.
+This article provides an overview of resource management in Azure SQL Database. It provides information on what happens when resource limits are reached, and describes resource governance mechanisms that are used to enforce these limits.
For specific resource limits per pricing tier (also known as service objective) for single databases, refer to either [DTU-based single database resource limits](resource-limits-dtu-single-databases.md) or [vCore-based single database resource limits](resource-limits-vcore-single-databases.md). For elastic pool resource limits, refer to either [DTU-based elastic pool resource limits](resource-limits-dtu-elastic-pools.md) or [vCore-based elastic pool resource limits](resource-limits-vcore-elastic-pools.md).
When encountering high space utilization, mitigation options include:
- Shrink a database to reclaim unused space. In elastic pools, shrinking a database provides more storage for other databases in the pool. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md). - Check if high space utilization is due to a spike in the size of Persistent Version Store (PVS). PVS is a part of each database, and is used to implement [Accelerated Database Recovery](../accelerated-database-recovery.md). To determine current PVS size, see [PVS troubleshooting](/sql/relational-databases/accelerated-database-recovery-management#troubleshooting). A common reason for large PVS size is a transaction that is open for a long time (hours), preventing cleanup of older versions in PVS. - For large databases in Premium and Business Critical service tiers, you may receive an out-of-space error even though used space in the database is below its maximum data size limit. This may happen if tempdb or transaction log consume a large amount of storage toward the maximum local storage limit. [Fail over](high-availability-sla.md#testing-application-fault-resiliency) the database or elastic pool to reset tempdb to its initial smaller size, or [shrink](file-space-manage.md#shrinking-transaction-log-file) transaction log to reduce local storage consumption.
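To illustrate the PVS check mentioned above, here is a minimal sketch, not taken from the article, that reports the Persistent Version Store size for the current database. It assumes the documented `sys.dm_tran_persistent_version_store_stats` view and its `persistent_version_store_size_kb` column; see the PVS troubleshooting link above for the authoritative queries.

```sql
-- Minimal sketch, assuming VIEW DATABASE STATE permission:
-- approximate PVS size for the current database, in GB.
SELECT DB_NAME(database_id) AS database_name,
       SUM(persistent_version_store_size_kb) / 1024.0 / 1024.0 AS pvs_size_gb
FROM sys.dm_tran_persistent_version_store_stats
GROUP BY database_id;
```

A PVS size that keeps growing usually points to a long-running open transaction preventing cleanup of older versions, which is the scenario described above.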
+### Sessions, workers, and requests
-### Sessions and workers (requests)
+Sessions, workers, and requests are defined as follows:
+
+- A session represents a process connected to the database engine.
+- A request is the logical representation of a query or batch. A request is issued by a client connected to a session. Over time, multiple requests may be issued on the same session.
+- A worker thread, also known as a worker or thread, is a logical representation of an operating system thread. A request may have many workers when executed with a parallel query execution plan, or a single worker when executed with a serial (single threaded) execution plan. Workers are also required to support activities outside of requests: for example, a worker is required to process a login request as a session connects.
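To make these definitions concrete, the following is a minimal monitoring sketch, added here for illustration rather than taken from the article. It counts current user sessions, the requests executing on them, and the tasks serving those requests (tasks are the scheduling units that map to workers, so a parallel request contributes several). It assumes the `sys.dm_exec_sessions`, `sys.dm_exec_requests`, and `sys.dm_os_tasks` views and `VIEW DATABASE STATE` permission.

```sql
-- Minimal sketch: current user sessions, executing requests, and the tasks
-- (worker-backed scheduling units) serving those requests.
SELECT
    (SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1) AS sessions,
    (SELECT COUNT(*)
       FROM sys.dm_exec_requests AS r
       JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
      WHERE s.is_user_process = 1)                                        AS requests,
    (SELECT COUNT(*)
       FROM sys.dm_os_tasks AS t
       JOIN sys.dm_exec_sessions AS s ON s.session_id = t.session_id
      WHERE s.is_user_process = 1)                                        AS tasks;
```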
The maximum numbers of sessions and workers are determined by the service tier and compute size. New requests are rejected when session or worker limits are reached, and clients receive an error message. While the number of connections available can be controlled by the application, the number of concurrent workers is often harder to estimate and control. This is especially true during peak load periods when database resource limits are reached and workers pile up due to longer running queries, large blocking chains, or excessive query parallelism.
-When encountering high session or worker utilization, mitigation options include:
+> [!NOTE]
+> The initial offering of Azure SQL Database supported only single threaded queries. At that time, the number of requests was always equivalent to the number of workers. Error message 10928 in Azure SQL Database contains the wording "The request limit for the database is *N* and has been reached" for backwards compatibility purposes. The limit reached is actually the number of workers. If your max degree of parallelism (MAXDOP) setting is equal to zero or is greater than one, the number of workers may be much higher than the number of requests, and the limit may be reached much sooner than when MAXDOP is equal to one. Learn more about error 10928 in [Resource governance errors](troubleshoot-common-errors-issues.md#resource-governance-errors).
+
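Because parallelism multiplies the workers consumed per request, checking and, where appropriate, capping the database-scoped MAXDOP is a common mitigation. The following is a minimal sketch, not from the article, using the documented `sys.database_scoped_configurations` view and the `ALTER DATABASE SCOPED CONFIGURATION` statement; the value 4 is only an example, not a recommendation.

```sql
-- Check the current database-scoped MAXDOP (0 means no per-query cap).
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'MAXDOP';

-- Example only: cap parallelism to limit the number of workers per request.
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
```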
+You can mitigate approaching or hitting worker or session limits by:
- Increasing the service tier or compute size of the database or elastic pool. See [Scale single database resources](single-database-scale.md) and [Scale elastic pool resources](elastic-pool-scale.md).-- Optimizing queries to reduce resource utilization of each query if the cause of increased worker utilization is due to contention for compute resources. For more information, see [Query Tuning/Hinting](performance-guidance.md#query-tuning-and-hinting).-- Reducing the [MAXDOP](configure-max-degree-of-parallelism.md) (maximum degree of parallelism) setting.-- Optimizing query workload to reduce the number of occurrences and duration of query blocking. For more information, see [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md).
+- Optimizing queries to reduce resource utilization if the cause of increased worker usage is contention for compute resources. For more information, see [Query Tuning/Hinting](performance-guidance.md#query-tuning-and-hinting).
+- Optimizing the query workload to reduce the number of occurrences and duration of query blocking. For more information, see [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md).
+- Reducing the [MAXDOP](configure-max-degree-of-parallelism.md) setting when appropriate.
+
+Find worker and session limits for Azure SQL Database by service tier and compute size:
+
+- [Resource limits for single databases using the vCore purchasing model](resource-limits-vcore-single-databases.md)
+- [Resource limits for elastic pools using the vCore purchasing model](resource-limits-vcore-elastic-pools.md)
+- [Resource limits for single databases using the DTU purchasing model](resource-limits-dtu-single-databases.md)
+- [Resources limits for elastic pools using the DTU purchasing model](resource-limits-dtu-elastic-pools.md)
++
+Learn more about troubleshooting specific errors for session or worker limits in [Resource governance errors](troubleshoot-common-errors-issues.md#resource-governance-errors).
### Memory
azure-sql Resource Limits Vcore Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
Previously updated : 10/12/2021 Last updated : 01/18/2022 # Resource limits for elastic pools using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS per pool <sup>3</sup> |400|800|1200|1600|2000|2400| |Max log rate per pool (MBps)|6|12|18|24|30|36|
-|Max concurrent workers per pool (requests) <sup>4</sup> |210|420|630|840|1050|1260|
+|Max concurrent workers per pool<sup>4</sup> |210|420|630|840|1050|1260|
|Max concurrent logins per pool <sup>4</sup> |210|420|630|840|1050|1260| |Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1|0, 0.25, 0.5, 1, 2|0, 0.25, 0.5, 1...3|0, 0.25, 0.5, 1...4|0, 0.25, 0.5, 1...5|0, 0.25, 0.5, 1...6|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### General purpose service tier: Generation 4 compute platform (part 2)
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS per pool <sup>3</sup>|2800|3200|3600|4000|6400|9600| |Max log rate per pool (MBps)|42|48|54|60|62.5|62.5|
-|Max concurrent workers per pool (requests) <sup>4</sup>|1470|1680|1890|2100|3360|5040|
-|Max concurrent logins pool (requests) <sup>4</sup>|1470|1680|1890|2100|3360|5040|
+|Max concurrent workers per pool <sup>4</sup>|1470|1680|1890|2100|3360|5040|
+|Max concurrent logins per pool <sup>4</sup>|1470|1680|1890|2100|3360|5040|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1...7|0, 0.25, 0.5, 1...8|0, 0.25, 0.5, 1...9|0, 0.25, 0.5, 1...10|0, 0.25, 0.5, 1...10, 16|0, 0.25, 0.5, 1...10, 16, 24| |Number of replicas|1|1|1|1|1|1|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## General purpose - provisioned compute - Gen5
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS per pool <sup>3</sup>|800|1600|2400|3200|4000|4800|5600| |Max log rate per pool (MBps)|12|24|36|48|60|62.5|62.5|
-|Max concurrent workers per pool (requests) <sup>4</sup>|210|420|630|840|1050|1260|1470|
-|Max concurrent logins per pool (requests) <sup>4</sup>|210|420|630|840|1050|1260|1470|
+|Max concurrent workers per pool <sup>4</sup>|210|420|630|840|1050|1260|1470|
+|Max concurrent logins per pool <sup>4</sup>|210|420|630|840|1050|1260|1470|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1, 2|0, 0.25, 0.5, 1...4|0, 0.25, 0.5, 1...6|0, 0.25, 0.5, 1...8|0, 0.25, 0.5, 1...10|0, 0.25, 0.5, 1...12|0, 0.25, 0.5, 1...14| |Number of replicas|1|1|1|1|1|1|1|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### General purpose service tier: Generation 5 compute platform (part 2)
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS per pool <sup>3</sup> |6,400|7,200|8,000|9,600|12,800|16,000|16,000| |Max log rate per pool (MBps)|62.5|62.5|62.5|62.5|62.5|62.5|62.5|
-|Max concurrent workers per pool (requests) <sup>4</sup>|1680|1890|2100|2520|3360|4200|8400|
-|Max concurrent logins per pool (requests) <sup>4</sup>|1680|1890|2100|2520|3360|4200|8400|
+|Max concurrent workers per pool <sup>4</sup>|1680|1890|2100|2520|3360|4200|8400|
+|Max concurrent logins per pool <sup>4</sup>|1680|1890|2100|2520|3360|4200|8400|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1...16|0, 0.25, 0.5, 1...18|0, 0.25, 0.5, 1...20|0, 0.25, 0.5, 1...20, 24|0, 0.25, 0.5, 1...20, 24, 32|0, 0.25, 0.5, 1...16, 24, 32, 40|0, 0.25, 0.5, 1...16, 24, 32, 40, 80| |Number of replicas|1|1|1|1|1|1|1|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## General purpose - provisioned compute - Fsv2-series
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS per pool <sup>3</sup>|2560|3200|3840|4480|5120| |Max log rate per pool (MBps)|48|60|62.5|62.5|62.5|
-|Max concurrent workers per pool (requests) <sup>4</sup>|400|500|600|700|800|
-|Max concurrent logins per pool (requests) <sup>4</sup>|800|1000|1200|1400|1600|
+|Max concurrent workers per pool <sup>4</sup>|400|500|600|700|800|
+|Max concurrent logins per pool <sup>4</sup>|800|1000|1200|1400|1600|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0-8|0-10|0-12|0-14|0-16| |Number of replicas|1|1|1|1|1|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### Fsv2-series compute generation (part 2)
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS per pool <sup>3</sup>|5760|6400|7680|10240|11520|12800| |Max log rate per pool (MBps)|62.5|62.5|62.5|62.5|62.5|62.5|
-|Max concurrent workers per pool (requests) <sup>4</sup>|900|1000|1200|1600|1800|3600|
-|Max concurrent logins per pool (requests) <sup>4</sup>|1800|2000|2400|3200|3600|7200|
+|Max concurrent workers per pool <sup>4</sup>|900|1000|1200|1600|1800|3600|
+|Max concurrent logins per pool <sup>4</sup>|1800|2000|2400|3200|3600|7200|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0-18|0-20|0-24|0-32|0-36|0-72| |Number of replicas|1|1|1|1|1|1|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## General purpose - provisioned compute - DC-series
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS per pool <sup>3</sup>|800|1600|2400|3200| |Max log rate per pool (MBps)|12|24|36|48|
-|Max concurrent workers per pool (requests) <sup>4</sup>|168|336|504|672|
-|Max concurrent logins per pool (requests) <sup>4</sup>|168|336|504|672|
+|Max concurrent workers per pool <sup>4</sup>|168|336|504|672|
+|Max concurrent logins per pool <sup>4</sup>|168|336|504|672|
|Max concurrent sessions|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|2|2...4|2...6|2...8| |Number of replicas|1|1|1|1|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## Business critical - provisioned compute - Gen4
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS per pool <sup>3</sup>|9,000|13,500|18,000|22,500|27,000| |Max log rate per pool (MBps)|20|30|40|50|60|
-|Max concurrent workers per pool (requests) <sup>4</sup>|420|630|840|1050|1260|
-|Max concurrent logins per pool (requests) <sup>4</sup>|420|630|840|1050|1260|
+|Max concurrent workers per pool <sup>4</sup>|420|630|840|1050|1260|
+|Max concurrent logins per pool <sup>4</sup>|420|630|840|1050|1260|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1, 2|0, 0.25, 0.5, 1...3|0, 0.25, 0.5, 1...4|0, 0.25, 0.5, 1...5|0, 0.25, 0.5, 1...6| |Number of replicas|4|4|4|4|4|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### Business critical service tier: Generation 4 compute platform (part 2)
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS per pool <sup>3</sup>|31,500|36,000|40,500|45,000|72,000|96,000| |Max log rate per pool (MBps)|70|80|80|80|80|80|
-|Max concurrent workers per pool (requests) <sup>4</sup>|1470|1680|1890|2100|3360|5040|
-|Max concurrent logins per pool (requests) <sup>4</sup>|1470|1680|1890|2100|3360|5040|
+|Max concurrent workers per pool <sup>4</sup>|1470|1680|1890|2100|3360|5040|
+|Max concurrent logins per pool <sup>4</sup>|1470|1680|1890|2100|3360|5040|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1...7|0, 0.25, 0.5, 1...8|0, 0.25, 0.5, 1...9|0, 0.25, 0.5, 1...10|0, 0.25, 0.5, 1...10, 16|0, 0.25, 0.5, 1...10, 16, 24| |Number of replicas|4|4|4|4|4|4|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## Business critical - provisioned compute - Gen5
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS per pool <sup>3</sup>|18,000|27,000|36,000|45,000|54,000|63,000| |Max log rate per pool (MBps)|60|90|120|120|120|120|
-|Max concurrent workers per pool (requests) <sup>4</sup>|420|630|840|1050|1260|1470|
-|Max concurrent logins per pool (requests) <sup>4</sup>|420|630|840|1050|1260|1470|
+|Max concurrent workers per pool <sup>4</sup>|420|630|840|1050|1260|1470|
+|Max concurrent logins per pool <sup>4</sup>|420|630|840|1050|1260|1470|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1...4|0, 0.25, 0.5, 1...6|0, 0.25, 0.5, 1...8|0, 0.25, 0.5, 1...10|0, 0.25, 0.5, 1...12|0, 0.25, 0.5, 1...14| |Number of replicas|4|4|4|4|4|4|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### Business critical service tier: Generation 5 compute platform (part 2)
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS per pool <sup>3</sup>|72,000|81,000|90,000|108,000|144,000|180,000|256,000| |Max log rate per pool (MBps)|120|120|120|120|120|120|120|
-|Max concurrent workers per pool (requests) <sup>4</sup>|1680|1890|2100|2520|3360|4200|8400|
-|Max concurrent logins per pool (requests) <sup>4</sup>|1680|1890|2100|2520|3360|4200|8400|
+|Max concurrent workers per pool <sup>4</sup>|1680|1890|2100|2520|3360|4200|8400|
+|Max concurrent logins per pool <sup>4</sup>|1680|1890|2100|2520|3360|4200|8400|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1...16|0, 0.25, 0.5, 1...18|0, 0.25, 0.5, 1...20|0, 0.25, 0.5, 1...20, 24|0, 0.25, 0.5, 1...20, 24, 32|0, 0.25, 0.5, 1...20, 24, 32, 40|0, 0.25, 0.5, 1...20, 24, 32, 40, 80| |Number of replicas|4|4|4|4|4|4|4|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## Business critical - provisioned compute - M-series
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS per pool <sup>3</sup>|12,499|15,624|18,748|21,873|24,998|28,123| |Max log rate per pool (MBps)|48|60|72|84|96|108|
-|Max concurrent workers per pool (requests) <sup>4</sup>|800|1,000|1,200|1,400|1,600|1,800|
-|Max concurrent logins per pool (requests) <sup>4</sup>|800|1,000|1,200|1,400|1,600|1,800|
+|Max concurrent workers per pool <sup>4</sup>|800|1,000|1,200|1,400|1,600|1,800|
+|Max concurrent logins per pool <sup>4</sup>|800|1,000|1,200|1,400|1,600|1,800|
|Max concurrent sessions|30000|30000|30000|30000|30000|30000| |Min/max elastic pool vCore choices per database|0-8|0-10|0-12|0-14|0-16|0-18| |Number of replicas|4|4|4|4|4|4|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
### M-series compute generation (part 2)
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS per pool <sup>3</sup>|31,248|37,497|49,996|99,993|160,000| |Max log rate per pool (MBps)|120|144|192|264|264|
-|Max concurrent workers per pool (requests) <sup>4</sup>|2,000|2,400|3,200|6,400|12,800|
-|Max concurrent logins per pool (requests) <sup>4</sup>|2,000|2,400|3,200|6,400|12,800|
+|Max concurrent workers per pool <sup>4</sup>|2,000|2,400|3,200|6,400|12,800|
+|Max concurrent logins per pool <sup>4</sup>|2,000|2,400|3,200|6,400|12,800|
|Max concurrent sessions|30000|30000|30000|30000|30000| |Number of replicas|4|4|4|4|4| |Multi-AZ|No|No|No|No|No|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there is a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## Business critical - provisioned compute - DC-series
If all vCores of an elastic pool are busy, then each database in the pool receiv
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS per pool <sup>3</sup>|15750|31500|47250|56000| |Max log rate per pool (MBps)|20|60|90|120|
-|Max concurrent workers per pool (requests) <sup>4</sup>|168|336|504|672|
-|Max concurrent logins per pool (requests) <sup>4</sup>|168|336|504|672|
+|Max concurrent workers per pool <sup>4</sup>|168|336|504|672|
+|Max concurrent logins per pool <sup>4</sup>|168|336|504|672|
|Max concurrent sessions|30,000|30,000|30,000|30,000| |Min/max elastic pool vCore choices per database|2|2...4|2...6|2...8| |Number of replicas|4|4|4|4|
If all vCores of an elastic pool are busy, then each database in the pool receiv
<sup>3</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
-<sup>4</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+<sup>4</sup> For the max concurrent workers for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50, because Gen5 provides a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## Database properties for pooled databases
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
Previously updated : 07/21/2021 Last updated : 01/18/2022 # Resource limits for single databases using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>3</sup>|320|640|1280|1920|2560| |Max log rate (MBps)|4.5|9|18|27|36|
-|Max concurrent workers (requests)|75|150|300|450|600|
+|Max concurrent workers|75|150|300|450|600|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1| |Multi-AZ|N/A|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|3200|3840|4480|5120| |Max log rate (MBps)|45|50|50|50|
-|Max concurrent workers (requests)|750|900|1050|1200|
+|Max concurrent workers|750|900|1050|1200|
|Max concurrent sessions|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1| |Multi-AZ|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|5760|6400|7680|10240|12800| |Max log rate (MBps)|50|50|50|50|50|
-|Max concurrent workers (requests)|1350|1500|1800|2400|3000|
+|Max concurrent workers|1350|1500|1800|2400|3000|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1| |Multi-AZ|N/A|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max local SSD IOPS <sup>1</sup>|4000 |8000 |12000 |16000 |20000 |24000 | |Max log rate (MBps)|100 |100 |100 |100 |100 |100 | |IO latency (approximate)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|
-|Max concurrent workers (requests)|200|400|600|800|1000|1200|
+|Max concurrent workers|200|400|600|800|1000|1200|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Secondary replicas|0-4|0-4|0-4|0-4|0-4|0-4| |Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max local SSD IOPS <sup>1</sup>|28000 |32000 |36000 |40000 |64000 |76800 | |Max log rate (MBps)|100 |100 |100 |100 |100 |100 | |IO latency (approximate)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|
-|Max concurrent workers (requests)|1400|1600|1800|2000|3200|4800|
+|Max concurrent workers|1400|1600|1800|2000|3200|4800|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Secondary replicas|0-4|0-4|0-4|0-4|0-4|0-4| |Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max local SSD IOPS <sup>1</sup>|8000 |16000 |24000 |32000 |40000 |48000 |56000 | |Max log rate (MBps)|100 |100 |100 |100 |100 |100 |100 | |IO latency (approximate)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|
-|Max concurrent workers (requests)|200|400|600|800|1000|1200|1400|
+|Max concurrent workers|200|400|600|800|1000|1200|1400|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Secondary replicas|0-4|0-4|0-4|0-4|0-4|0-4|0-4| |Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max local SSD IOPS <sup>1</sup>|64000 |72000 |80000 |96000 |128000 |160000 |204800 | |Max log rate (MBps)|100 |100 |100 |100 |100 |100 |100 | |IO latency (approximate)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|
-|Max concurrent workers (requests)|1600|1800|2000|2400|3200|4000|8000|
+|Max concurrent workers|1600|1800|2000|2400|3200|4000|8000|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Secondary replicas|0-4|0-4|0-4|0-4|0-4|0-4|0-4| |Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max local SSD IOPS <sup>1</sup>|14000|28000|42000|44800| |Max log rate (MBps)|100 |100 |100 |100 | |IO latency (approximate)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|
-|Max concurrent workers (requests)|160|320|480|640|
+|Max concurrent workers|160|320|480|640|
|Max concurrent sessions|30,000|30,000|30,000|30,000| |Secondary replicas|0-4|0-4|0-4|0-4| |Multi-AZ|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|320|640|960|1280|1600|1920| |Max log rate (MBps)|4.5|9|13.5|18|22.5|27|
-|Max concurrent workers (requests)|200|400|600|800|1000|1200|
+|Max concurrent workers|200|400|600|800|1000|1200|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1|1| |Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read) |Max data IOPS <sup>2</sup>|2240|2560|2880|3200|5120|7680| |Max log rate (MBps)|31.5|36|40.5|45|50|50|
-|Max concurrent workers (requests)|1400|1600|1800|2000|3200|4800|
+|Max concurrent workers|1400|1600|1800|2000|3200|4800|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1|1| |Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|640|1280|1920|2560|3200|3840|4480| |Max log rate (MBps)|9|18|27|36|45|50|50|
-|Max concurrent workers (requests)|200|400|600|800|1000|1200|1400|
+|Max concurrent workers|200|400|600|800|1000|1200|1400|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1|1|1| |Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|5120|5760|6400|7680|10240|12800|12800| |Max log rate (MBps)|50|50|50|50|50|50|50|
-|Max concurrent workers (requests)|1600|1800|2000|2400|3200|4000|8000|
+|Max concurrent workers|1600|1800|2000|2400|3200|4000|8000|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1|1|1| |Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|2560|3200|3840|4480|5120| |Max log rate (MBps)|36|45|50|50|50|
-|Max concurrent workers (requests)|400|500|600|700|800|
+|Max concurrent workers|400|500|600|700|800|
|Max concurrent logins|800|1000|1200|1400|1600| |Max concurrent sessions|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|5760|6400|7680|10240|11520|12800| |Max log rate (MBps)|50|50|50|50|50|50|
-|Max concurrent workers (requests)|900|1000|1200|1600|1800|3600|
+|Max concurrent workers|900|1000|1200|1600|1800|3600|
|Max concurrent logins|1800|2000|2400|3200|3600|7200| |Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1|1|1|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|640|1280|1920|2560| |Max log rate (MBps)|9|18|27|36|
-|Max concurrent workers (requests)|160|320|480|640|
+|Max concurrent workers|160|320|480|640|
|Max concurrent sessions|30,000|30,000|30,000|30,000| |Number of replicas|1|1|1|1| |Multi-AZ|N/A|N/A|N/A|N/A|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|4,000|8,000|12,000|16,000|20,000|24,000| |Max log rate (MBps)|8|16|24|32|40|48|
-|Max concurrent workers (requests)|200|400|600|800|1000|1200|
+|Max concurrent workers|200|400|600|800|1000|1200|
|Max concurrent logins|200|400|600|800|1000|1200| |Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|4|4|4|4|4|4|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|28,000|32,000|36,000|40,000|64,000|76,800| |Max log rate (MBps)|56|64|64|64|64|64|
-|Max concurrent workers (requests)|1400|1600|1800|2000|3200|4800|
-|Max concurrent logins (requests)|1400|1600|1800|2000|3200|4800|
+|Max concurrent workers|1400|1600|1800|2000|3200|4800|
+|Max concurrent logins|1400|1600|1800|2000|3200|4800|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|4|4|4|4|4|4| |Multi-AZ|Yes|Yes|Yes|Yes|Yes|Yes|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|8000|16,000|24,000|32,000|40,000|48,000|56,000| |Max log rate (MBps)|24|48|72|96|96|96|96|
-|Max concurrent workers (requests)|200|400|600|800|1000|1200|1400|
+|Max concurrent workers|200|400|600|800|1000|1200|1400|
|Max concurrent logins|200|400|600|800|1000|1200|1400| |Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|4|4|4|4|4|4|4|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|64,000|72,000|80,000|96,000|128,000|160,000|204,800| |Max log rate (MBps)|96|96|96|96|96|96|96|
-|Max concurrent workers (requests)|1600|1800|2000|2400|3200|4000|8000|
+|Max concurrent workers|1600|1800|2000|2400|3200|4000|8000|
|Max concurrent logins|1600|1800|2000|2400|3200|4000|8000| |Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000| |Number of replicas|4|4|4|4|4|4|4|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|12,499|15,624|18,748|21,873|24,998|28,123| |Max log rate (MBps)|48|60|72|84|96|108|
-|Max concurrent workers (requests)|800|1,000|1,200|1,400|1,600|1,800|
+|Max concurrent workers|800|1,000|1,200|1,400|1,600|1,800|
|Max concurrent logins|800|1,000|1,200|1,400|1,600|1,800| |Max concurrent sessions|30000|30000|30000|30000|30000|30000| |Number of replicas|4|4|4|4|4|4|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|31,248|37,497|49,996|99,993|160,000| |Max log rate (MBps)|120|144|192|264|264|
-|Max concurrent workers (requests)|2,000|2,400|3,200|6,400|12,800|
+|Max concurrent workers|2,000|2,400|3,200|6,400|12,800|
|Max concurrent logins|2,000|2,400|3,200|6,400|12,800| |Max concurrent sessions|30000|30000|30000|30000|30000| |Number of replicas|4|4|4|4|4|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|14000|28000|42000|44800| |Max log rate (MBps)|24|48|72|96|
-|Max concurrent workers (requests)|200|400|600|800|
+|Max concurrent workers|200|400|600|800|
|Max concurrent logins|200|400|600|800| |Max concurrent sessions|30,000|30,000|30,000|30,000| |Number of replicas|4|4|4|4|
azure-sql Add Database To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/add-database-to-failover-group-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use Azure CLI to add a database to a failover group
azure-sql Add Elastic Pool To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/add-elastic-pool-to-failover-group-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use CLI to add an Azure SQL Database elastic pool to a failover group
azure-sql Auditing Threat Detection Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/auditing-threat-detection-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use CLI to configure SQL Database auditing and Advanced Threat Protection
azure-sql Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/backup-database-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use CLI to backup an Azure SQL single database to an Azure storage container
azure-sql Copy Database To New Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/copy-database-to-new-server-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use CLI to copy a database in Azure SQL Database to a new server
azure-sql Create And Configure Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/create-and-configure-database-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use Azure CLI to create a single database and configure a firewall rule
azure-sql Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-cli.md
Previously updated : 01/05/2022 Last updated : 01/18/2022 # Use CLI to import a BACPAC file into a database in SQL Database
azure-sql Monitor And Scale Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/monitor-and-scale-database-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use the Azure CLI to monitor and scale a single database in Azure SQL Database
azure-sql Move Database Between Elastic Pools Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/move-database-between-elastic-pools-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use Azure CLI to move a database in SQL Database in a SQL elastic pool
azure-sql Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-cli.md
Previously updated : 01/05/2022 Last updated : 01/18/2022 # Use CLI to restore a single database in Azure SQL Database to an earlier point in time
azure-sql Scale Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/scale-pool-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use the Azure CLI to scale an elastic pool in Azure SQL Database
azure-sql Setup Geodr Failover Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-database-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use CLI to configure active geo-replication for a single database in Azure SQL Database
azure-sql Setup Geodr Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-group-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use CLI to configure a failover group for a group of databases in Azure SQL Database
azure-sql Setup Geodr Failover Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-pool-cli.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Use CLI to configure active geo-replication for a pooled database in Azure SQL Database
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
Previously updated : 01/05/2022 Last updated : 01/17/2022 # Quickstart: Create an Azure SQL Database single database
azure-sql Troubleshoot Common Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-errors-issues.md
Previously updated : 11/04/2021 Last updated : 01/18/2022 # Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance
You receive error messages when the connection to Azure SQL Database or Azure SQ
## Transient fault error messages (40197, 40613 and others)
-The Azure infrastructure has the ability to dynamically reconfigure servers when heavy workloads arise in the SQL Database service. This dynamic behavior might cause your client program to lose its connection to the database or instance. This kind of error condition is called a *transient fault*. Database reconfiguration events occur because of a planned event (for example, a software upgrade) or an unplanned event (for example, a process crash, or load balancing). Most reconfiguration events are generally short-lived and should be completed in less than 60 seconds at most. However, these events can occasionally take longer to finish, such as when a large transaction causes a long-running recovery. The following table lists various transient errors that applications can receive when connecting to SQL Database
+The Azure infrastructure has the ability to dynamically reconfigure servers when heavy workloads arise in the SQL Database service. This dynamic behavior might cause your client program to lose its connection to the database or instance. This kind of error condition is called a *transient fault*. Database reconfiguration events occur because of a planned event (for example, a software upgrade) or an unplanned event (for example, a process crash, or load balancing). Most reconfiguration events are short-lived and complete in less than 60 seconds. However, these events can occasionally take longer to finish, such as when a large transaction causes a long-running recovery. The following table lists various transient errors that applications can receive when connecting to Azure SQL Database.
### List of transient fault error codes
Connection timeouts occur because the application can't connect to the server. T
## Resource governance errors
-### Error 10928: Resource ID: %d
+Azure SQL Database uses a resource governance implementation based on [Resource Governor](/sql/relational-databases/resource-governor/resource-governor) to enforce resource limits. Learn more about [resource management in Azure SQL Database](resource-limits-logical-server.md).
-`10928: Resource ID: %d. The %s limit for the database is %d and has been reached. See http://go.microsoft.com/fwlink/?LinkId=267637 for assistance. The Resource ID value in error message indicates the resource for which limit has been reached. For sessions, Resource ID = 2.`
+The most common resource governance errors are listed first with details, followed by a table of resource governance error messages.
-To work around this issue, try one of the following methods:
+### Error 10928: Resource ID : 1. The request limit for the database is *%d* and has been reached.
-- Verify whether there are long-running queries.
+The detailed error message in this case reads: `Resource ID : 1. The request limit for the database is %d and has been reached. See 'http://go.microsoft.com/fwlink/?LinkId=267637' for assistance.`
- > [!NOTE]
- > This is a minimalist approach that might not resolve the issue. For more thorough information on troubleshooting long running or blocking queries, see [Understand and resolve Azure SQL Database blocking problems](understand-resolve-blocking.md).
+This error message indicates that the worker limit for Azure SQL Database has been reached. A value will be present instead of the placeholder *%d*. This value indicates the worker limit for your database at the time the limit was reached.
-1. Run the following SQL query to check the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) view to see any blocking requests:
+> [!NOTE]
+> The initial offering of Azure SQL Database supported only single threaded queries. At that time, the number of requests was always equivalent to the number of workers. Error message 10928 in Azure SQL Database contains the wording "The request limit for the database is *N* and has been reached" for backwards compatibility purposes. The limit reached is actually the number of workers. If your max degree of parallelism (MAXDOP) setting is equal to zero or is greater than one, the number of workers may be much higher than the number of requests, and the limit may be reached much sooner than when MAXDOP is equal to one.
+>
+> Learn more about [Sessions, workers, and requests](resource-limits-logical-server.md#sessions-workers-and-requests).
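+
+To see where your database currently stands, the following minimal query sketch shows the database-scoped MAXDOP setting for the current database (0 lets the engine use the maximum available parallelism; 1 disables parallel plans):
+
+```sql
+-- Minimal sketch: inspect the database-scoped MAXDOP setting for the current database.
+SELECT [name], [value]
+FROM sys.database_scoped_configurations
+WHERE [name] = N'MAXDOP';
+```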
- ```sql
- SELECT * FROM sys.dm_exec_requests;
- ```
+#### Connect with the Dedicated Admin Connection (DAC) if needed
-1. Determine the **input buffer** for the head blocker using the [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) dynamic management function, and the session_id of the offending query, for example:
+If a live incident is ongoing where the worker limit has been approached or reached, you may receive Error 10928 when you connect using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) (SSMS) or [Azure Data Studio](/sql/azure-data-studio/what-is). One session can connect using the [Diagnostic Connection for Database Administrators (DAC)](/sql/database-engine/configure-windows/diagnostic-connection-for-database-administrators#connecting-with-dac) even when the maximum worker threshold has been reached.
- ```sql
- SELECT * FROM sys.dm_exec_input_buffer (100,0);
- ```
+To establish a connection with the DAC from SSMS:
+
+- From the menu, select **File > New > Database Engine Query**
+- From the connection dialog box in the Server Name field, enter `admin:<fully_qualified_server_name>` (this will be something like `admin:servername.database.windows.net`).
+- Select **Options >>**
+- Select the **Connection Properties** tab
+- In the **Connect to database:** box, type the name of your database
+- Select **Connect**.
+
+If you receive Error 40613, `Database '%.&#x2a;ls' on server '%.&#x2a;ls' is not currently available. Please retry the connection later. If the problem persists, contact customer support, and provide them the session tracing ID of '%.&#x2a;ls'`, this may indicate that another session is already connected to the DAC. Only one session may connect to the DAC for a single database or an elastic pool at a time.
+
+If you encounter the error 'Failed to connect to server' after selecting **Connect**, the DAC session may still have been established successfully if you are using a version of [SSMS prior to 18.9](/sql/ssms/release-notes-ssms#bug-fixes-in-189). Early versions of SSMS attempted to provide IntelliSense for connections to the DAC. This failed because the DAC supports only a single worker and IntelliSense requires a separate worker.
+
+You cannot use a DAC connection with Object Explorer.
+
+#### Review your max_worker_percent usage
+
+To find resource consumption statistics for your database for the last 14 days, query the [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) system catalog view. The `max_worker_percent` column shows the percentage of workers used relative to the worker limit for your database. Connect to the master database on your [logical server](logical-servers.md) to query `sys.resource_stats`.
+
+```sql
+SELECT start_time, end_time, database_name, sku, avg_cpu_percent, max_worker_percent, max_session_percent
+FROM sys.resource_stats;
+```
+
+You can also query resource consumption statistics for the last hour from the
+[sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) dynamic management view. Connect directly to your database to query `sys.dm_db_resource_stats`.
+
+```sql
+SELECT end_time, avg_cpu_percent, max_worker_percent, max_session_percent
+FROM sys.dm_db_resource_stats;
+```
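+
+As a rough illustration, you can convert `max_worker_percent` into an approximate worker count when you know the worker limit for your database. The sketch below assumes a 200-worker limit (the value shown earlier for a database with a max of 2 vCores); substitute the limit that applies to your service objective.
+
+```sql
+-- Illustrative only: approximate workers in use, assuming a 200-worker limit.
+SELECT end_time,
+       max_worker_percent,
+       CAST(max_worker_percent / 100.0 * 200 AS int) AS approx_workers_used
+FROM sys.dm_db_resource_stats
+ORDER BY end_time DESC;
+```
+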
+#### Lower worker usage when possible
-1. Tune the head blocker query.
+Blocking chains can cause a sudden surge in the number of workers in a database. A large volume of concurrent parallel queries may cause a high number of workers. Increasing your [max degree of parallelism (MAXDOP)](configure-max-degree-of-parallelism.md) or setting MAXDOP to zero can increase the number of active workers.
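+
+If you decide to cap parallelism for the database, you can apply a database-scoped configuration change along the lines of the following minimal sketch; the value 4 is purely illustrative and should be validated against your workload before you change MAXDOP in production:
+
+```sql
+-- Illustrative sketch: cap the degree of parallelism for the current database.
+ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
+```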
-If the database consistently reaches its limit despite addressing blocking and long-running queries, consider upgrading to an edition with more resources [Editions](https://azure.microsoft.com/pricing/details/sql-database/).
+Triage an incident with insufficient workers by following these steps:
-For more information about database limits, see [SQL Database resource limits for servers](./resource-limits-logical-server.md).
+1. Investigate if blocking is occurring or if you can identify a large volume of concurrent workers. Run the following query to examine current requests and check for blocking when your database is returning Error 10928. You may need to [connect with the Dedicated Admin Connection (DAC)](#connect-with-the-dedicated-admin-connection-dac-if-needed) to execute the query.
+
+ ```sql
+ SELECT
+ r.session_id, r.request_id, r.blocking_session_id, r.start_time,
+ r.status, r.command, DB_NAME(r.database_id) AS database_name,
+ (SELECT COUNT(*)
+ FROM sys.dm_os_tasks AS t
+ WHERE t.session_id=r.session_id and t.request_id=r.request_id) AS worker_count,
+ i.parameters, i.event_info AS input_buffer,
+ r.last_wait_type, r.open_transaction_count, r.total_elapsed_time, r.cpu_time,
+ r.logical_reads, r.writes, s.login_time, s.login_name, s.program_name, s.host_name
+ FROM sys.dm_exec_requests as r
+ JOIN sys.dm_exec_sessions as s on r.session_id=s.session_id
+ OUTER APPLY sys.dm_exec_input_buffer (r.session_id,r.request_id) AS i
+ WHERE s.is_user_process=1;
+ GO
+ ```
+ 1. Look for rows with a `blocking_session_id` to identify blocked sessions. Find each `blocking_session_id` in the list to determine if that session is also blocked. This will eventually lead you to the head blocker. Tune the head blocker query.
+
+ > [!NOTE]
+ > For more thorough information on troubleshooting long running or blocking queries, see [Understand and resolve Azure SQL Database blocking problems](understand-resolve-blocking.md).
+
+ 1. To identify a large volume of concurrent workers, review the number of requests overall and the `worker_count` column for each request. `Worker_count` is the number of workers at the time sampled and may change over time as the request is executed. Tune queries to reduce resource utilization if the cause of increased workers is concurrent queries that are running at their optimal degree of parallelism. For more information, see [Query Tuning/Hinting](performance-guidance.md#query-tuning-and-hinting).
+
+1. Evaluate the [maximum degree of parallelism (MAXDOP)](configure-max-degree-of-parallelism.md) setting for the database.
+
+#### Increase worker limits
+
+If the database consistently reaches its limit despite addressing blocking, optimizing queries, and validating your MAXDOP setting, consider adding more resources to the database to increase the worker limit.
+
+Find resource limits for Azure SQL Database by service tier and compute size:
+
+- [Resource limits for single databases using the vCore purchasing model](resource-limits-vcore-single-databases.md)
+- [Resource limits for elastic pools using the vCore purchasing model](resource-limits-vcore-elastic-pools.md)
+- [Resource limits for single databases using the DTU purchasing model](resource-limits-dtu-single-databases.md)
+- [Resource limits for elastic pools using the DTU purchasing model](resource-limits-dtu-elastic-pools.md)
+
+Learn more about [Azure SQL Database resource governance of workers](./resource-limits-logical-server.md#sessions-workers-and-requests).
### Error 10929: Resource ID: 1
For more information about resource limits, see [Logical SQL server resource lim
This error occurs when the database has reached its size quota.
-The following steps can either help you work around the problem or provide you with additional options:
+The following steps can either help you work around the problem or provide you with more options:
1. Check the current size of the database by using the dashboard in the Azure portal.
The following steps can either help you work around the problem or provide you w
JOIN sys.dm_db_partition_stats p on p.object_id = o.object_id GROUP BY o.name ORDER BY [Table Size (MB)] DESC;
+ GO
``` 2. If the current size does not exceed the maximum size supported for your edition, you can use ALTER DATABASE to increase the MAXSIZE setting.
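+
+For step 2, the increase can be made with an `ALTER DATABASE` statement along the lines of the following minimal sketch; the database name and the 250 GB value are illustrative, and the size you choose must be supported by your service tier:
+
+```sql
+-- Illustrative sketch: raise the maximum size of a database (name and size are examples).
+ALTER DATABASE [MyDatabase] MODIFY (MAXSIZE = 250 GB);
+```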
The following steps can either help you work around the problem or provide you w
If you repeatedly encounter this error, try to resolve the issue by following these steps:
-1. Check the `sys.dm_exec_requests` view to see any open sessions that have a high value for the `total_elapsed_time` column. Perform this check by running the following SQL script:
-
- ```sql
- SELECT * FROM sys.dm_exec_requests;
- ```
-
-2. Determine the input buffer for the head blocker using the [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) dynamic management function, and the `session_id` of the offending query, for example:
-
- ```sql
- SELECT * FROM sys.dm_exec_input_buffer (100,0);
- ```
-
-3. Tune the query.
+1. Run the following query to see any open sessions that have a high value for the `duration_ms` column:
+
+ ```sql
+ SELECT
+ r.start_time, DATEDIFF(ms,start_time, SYSDATETIME()) as duration_ms,
+ r.session_id, r.request_id, r.blocking_session_id,
+ r.status, r.command, DB_NAME(r.database_id) AS database_name,
+ i.parameters, i.event_info AS input_buffer,
+ r.last_wait_type, r.open_transaction_count, r.total_elapsed_time, r.cpu_time,
+ r.logical_reads, r.writes, s.login_time, s.login_name, s.program_name, s.host_name
+ FROM sys.dm_exec_requests as r
+ JOIN sys.dm_exec_sessions as s on r.session_id=s.session_id
+ OUTER APPLY sys.dm_exec_input_buffer (r.session_id,r.request_id) AS i
+ WHERE s.is_user_process=1
+ ORDER BY start_time ASC;
+ GO
+ ```
+ You may choose to ignore rows where the `input_buffer` column shows a query reading from `sys.fn_MSxe_read_event_stream`: these requests are related to Extended Event sessions.
+1. Review the `blocking_session_id` column to see if blocking is contributing to long-running transactions.
> [!NOTE] > For more information on troubleshooting blocking in Azure SQL Database, see [Understand and resolve Azure SQL Database blocking problems](understand-resolve-blocking.md).
-Also consider batching your queries. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).
+1. Consider batching your queries. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).
### Error 40551: The session has been terminated because of excessive TEMPDB usage
For an in-depth troubleshooting procedure, see [Is my query running fine in the
For more information on other out of memory errors and sample queries, see [Troubleshoot out of memory errors with Azure SQL Database](troubleshoot-memory-errors-issues.md).
-### Table of additional resource governance error messages
+### Table of resource governance error messages
| Error code | Severity | Description | | :| :|: |
-| 10928 |20 |Resource ID: %d. The %s limit for the database is %d and has been reached. For more information, see [SQL Database resource limits for single and pooled databases](resource-limits-logical-server.md).<br/><br/>The Resource ID indicates the resource that has reached the limit. For worker threads, the Resource ID = 1. For sessions, the Resource ID = 2.<br/><br/>For more information about this error and how to resolve it, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). |
+| 10928 |20 |Resource ID: %d. The %s limit for the database is %d and has been reached. See 'http://go.microsoft.com/fwlink/?LinkId=267637' for assistance.<br/><br/>The Resource ID indicates the resource that has reached the limit. When Resource ID = 1, this indicates that the worker limit has been reached. Learn more in [Error 10928: Resource ID : 1. The request limit for the database is *%d* and has been reached.](#error-10928-resource-id--1-the-request-limit-for-the-database-is-d-and-has-been-reached) When Resource ID = 2, this indicates that the session limit has been reached.<br/><br/>Learn more about resource limits: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). |
| 10929 |20 |Resource ID: %d. The %s minimum guarantee is %d, maximum limit is %d, and the current usage for the database is %d. However, the server is currently too busy to support requests greater than %d for this database. The Resource ID indicates the resource that has reached the limit. For worker threads, the Resource ID = 1. For sessions, the Resource ID = 2. For more information, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). <br/>Otherwise, try again later. | | 40544 |20 |The database has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions. For database scaling, see [Scale single database resources](single-database-scale.md) and [Scale elastic pool resources](elastic-pool-scale.md).| | 40549 |16 |Session is terminated because you have a long-running transaction. Try shortening your transaction. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).|
The following errors are related to creating and using elastic pools:
| Error code | Severity | Description | Corrective action | |: |: |: |: | | 1132 | 17 |The elastic pool has reached its storage limit. The storage usage for the elastic pool cannot exceed (%d) MBs. Attempting to write data to a database when the storage limit of the elastic pool has been reached. For information on resource limits, see: <br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md). <br/> |Consider increasing the DTUs of and/or adding storage to the elastic pool if possible in order to increase its storage limit, reduce the storage used by individual databases within the elastic pool, or remove databases from the elastic pool. For elastic pool scaling, see [Scale elastic pool resources](elastic-pool-scale.md). For more information on removing unused space from databases, see [Manage file space for databases in Azure SQL Database](file-space-manage.md).|
-| 10929 | 16 |The %s minimum guarantee is %d, maximum limit is %d, and the current usage for the database is %d. However, the server is currently too busy to support requests greater than %d for this database. For information on resource limits, see: <br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md). <br/> Otherwise, try again later. DTU / vCore min per database; DTU / vCore max per database. The total number of concurrent workers (requests) across all databases in the elastic pool attempted to exceed the pool limit. |Consider increasing the DTUs or vCores of the elastic pool if possible in order to increase its worker limit, or remove databases from the elastic pool. |
+| 10929 | 16 |The %s minimum guarantee is %d, maximum limit is %d, and the current usage for the database is %d. However, the server is currently too busy to support requests greater than %d for this database. For information on resource limits, see: <br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md). <br/> Otherwise, try again later. DTU / vCore min per database; DTU / vCore max per database. The total number of [concurrent workers](resource-limits-logical-server.md#sessions-workers-and-requests) across all databases in the elastic pool attempted to exceed the pool limit. |Consider increasing the DTUs or vCores of the elastic pool if possible in order to increase its worker limit, or remove databases from the elastic pool. |
| 40844 | 16 |Database '%ls' on Server '%ls' is a '%ls' edition database in an elastic pool and cannot have a continuous copy relationship. |N/A | | 40857 | 16 |Elastic pool not found for server: '%ls', elastic pool name: '%ls'. Specified elastic pool does not exist in the specified server. | Provide a valid elastic pool name. | | 40858 | 16 |Elastic pool '%ls' already exists in server: '%ls'. Specified elastic pool already exists in the specified server. | Provide new elastic pool name. |
For additional guidance on fine-tuning performance, see the following resources:
2. Check the application's connection string to make sure it's configured correctly. For example, make sure that the connection string specifies the correct port (1433) and fully qualified server name. See [Get connection information](./connect-query-ssms.md#get-server-connection-information). 3. Try increasing the connection timeout value. We recommend using a connection timeout of at least 30 seconds.
-4. Test the connectivity between the application server and the Azure SQL Database by using [SQL Server management Studio (SSMS)](./connect-query-ssms.md), a UDL file, ping, or telnet. For more information, see [Troubleshooting connectivity issues](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server) and [Diagnostics for connectivity issues](./troubleshoot-common-connectivity-issues.md#diagnostics).
+4. Test the connectivity between the application server and the Azure SQL Database by using [SQL Server Management Studio (SSMS)](./connect-query-ssms.md), a UDL file, ping, or telnet. For more information, see [Troubleshooting connectivity issues](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server) and [Diagnostics for connectivity issues](./troubleshoot-common-connectivity-issues.md#diagnostics).
> [!NOTE] > As a troubleshooting step, you can also test connectivity on a different client computer.
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/resource-limits.md
Previously updated : 10/18/2021 Last updated : 01/18/2022 # Overview of Azure SQL Managed Instance resource limits [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
SQL Managed Instance has two service tiers: [General Purpose](../database/servic
| Storage IO latency (approximate) | 5-10 ms | 1-2 ms | | In-memory OLTP | Not supported | Available, [size depends on number of vCore](#in-memory-oltp-available-space) | | Max sessions | 30000 | 30000 |
-| Max concurrent workers (requests) | 105 * number of vCores + 800 | 105 * vCore count + 800 |
+| Max concurrent workers | 105 * number of vCores + 800 | 105 * vCore count + 800 |
| [Read-only replicas](../database/read-scale-out.md) | 0 | 1 (included in price) | | Compute isolation | Not supported as General Purpose instances may share physical hardware with other instances| **Standard-series (Gen5)**:<br/> Supported for 40, 64, 80 vCores<BR> **Premium-series**: Supported for 64, 80 vCores <BR> **Memory optimized premium-series**: Supported for 64 vCores |
The amount of In-memory OLTP space in [Business Critical](../database/service-ti
| Storage IO latency (approximate) | Gen4: 5-10 ms | Gen4: 1-2 ms | | In-memory OLTP | Gen4: Not supported | Gen4: Available, [size depends on number of vCore](#in-memory-oltp-available-space) | | Max sessions | Gen4: 30000 | Gen4: 30000 |
-| Max concurrent workers (requests) | Gen4: 210 * number of vCores + 800 | Gen4: 210 * vCore count + 800 |
+| Max concurrent workers | Gen4: 210 * number of vCores + 800 | Gen4: 210 * vCore count + 800 |
| [Read-only replicas](../database/read-scale-out.md) | Gen4: 0 | Gen4: 1 (included in price) | | Compute isolation | Gen4: not supported | Gen4: not supported |
azure-sql Create Configure Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/create-configure-managed-instance-cli.md
Previously updated : 01/05/2022 Last updated : 01/18/2022 # Use CLI to create an Azure SQL Managed Instance
azure-sql Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/restore-geo-backup-cli.md
Previously updated : 01/05/2022 Last updated : 01/18/2022 # Use CLI to restore a Managed Instance database to another geo-region
azure-sql Restore Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/restore-geo-backup.md
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| | [New-AzResourceGroup](/powershell/module/az.resources/New-AzResourceGroup) | Creates a resource group in which all resources are stored. |
-| [Get-AzSqlInstanceDatabaseGeoBackup](/powershell/module/az.sql/Get-AzSqlInstanceDatabaseGeoBackup) | Creates a geo-redundant backup of a SQL Managed Instance database. |
+| [Get-AzSqlInstanceDatabaseGeoBackup](/powershell/module/az.sql/Get-AzSqlInstanceDatabaseGeoBackup) | Gets one or more geo-backups from a database within an Azure SQL Managed Instance. |
| [Restore-AzSqlInstanceDatabase](/powershell/module/az.sql/Restore-AzSqlInstanceDatabase) | Creates a database on SQL Managed Instance from geo-backup. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group, including all nested resources. |
azure-sql Transparent Data Encryption Byok Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md
Previously updated : 01/05/2022 Last updated : 01/18/2022 # Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql.md
You can configure backup on multiple databases across multiple Azure PostgreSQL
:::image type="content" source="./media/backup-azure-database-postgresql/create-or-add-backup-policy-inline.png" alt-text="Screenshot showing the option to add a backup policy." lightbox="./media/backup-azure-database-postgresql/create-or-add-backup-policy-expanded.png":::
-1. **Select Azure Postgres databases to back up**: Choose one of the Azure PostgreSQL servers across subscriptions if they're in the same region as that of the vault. Expand the arrow to see the list of databases within a server.
+1. **Select Azure PostgreSQL databases to back up**: Choose one of the Azure PostgreSQL servers across subscriptions if they're in the same region as that of the vault. Expand the arrow to see the list of databases within a server.
>[!Note] >You can't (and don't need to) back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database already backed-up to a Backup vault.
Choose from the list of retention rules that were defined in the associated Back
## Next steps
-[Troubleshoot PostgreSQL database backup by using Azure Backup](backup-azure-database-postgresql-troubleshoot.md)
+[Troubleshoot PostgreSQL database backup by using Azure Backup](backup-azure-database-postgresql-troubleshoot.md)
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-sql-automation.md
MSSQLSERVER/m... Restore InProgress 3/17/2019 10:02:45 AM
### On-demand backup
-Once backup has been enabled for a DB, you can also trigger an on-demand backup for the DB using [Backup-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/backup-azrecoveryservicesbackupitem) PowerShell cmdlet. The following example triggers a full backup on a SQL DB with compression enabled and the full backup should be retained for 60 days.
+Once backup has been enabled for a DB, you can also trigger an on-demand backup for the DB using the [Backup-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/backup-azrecoveryservicesbackupitem) PowerShell cmdlet. The following example triggers a copy-only-full backup on a SQL DB with compression enabled; the copy-only-full backup is retained until the expiry date set in the script (45 days from the current time).
+
+> [!Note]
+> Copy-only-full backups are ideal for long-term retention because they don't have any dependencies on other backup types, such as logs. A 'Full' backup is treated as a parent of subsequent log backups, and hence its retention is tied to the log retention in the policy. Therefore, the customer-provided expiry time is honored for copy-only-full backups and not for 'full' backups. The retention time for a full backup is automatically set to 45 days from the current time. It is also documented in [Run an on-demand backup](manage-monitor-sql-database-backup.md#run-an-on-demand-backup).
```powershell $bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -Name "<backup item name>" -VaultId $testVault.ID $endDate = (Get-Date).AddDays(45).ToUniversalTime()
-Backup-AzRecoveryServicesBackupItem -Item $bkpItem -BackupType Full -EnableCompression -VaultId $testVault.ID -ExpiryDateTimeUTC $endDate
+Backup-AzRecoveryServicesBackupItem -Item $bkpItem -BackupType CopyOnlyFull -EnableCompression -VaultId $testVault.ID -ExpiryDateTimeUTC $endDate
``` The on-demand backup command returns a job to be tracked.
backup Backup Mabs Protection Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-protection-matrix.md
Title: MABS (Azure Backup Server) V3 UR1 protection matrix description: This article provides a support matrix listing all workloads, data types, and installations that Azure Backup Server protects. Previously updated : 12/09/2021 Last updated : 01/18/2022
The following sections details the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** | | -- | | | | |
-| Client computers (64-bit) | Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
+| Client computers (64-bit) | Windows 11, Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
| Servers (64-bit) | Windows Server 2022, 2019, 2016, 2012 R2, 2012 | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br>When protecting a WS 2016 NTFS deduped volume with MABS v3 running on Windows Server 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way that will be part of later versions of MABS. Contact MABS support if you need this fix on MABS v3 UR1.<br><br> When protecting a WS 2019 NTFS deduped volume with MABS v3 on Windows Server 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume. <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) | | Servers (64-bit) | Windows Server 2008 R2 SP1, Windows Server 2008 SP2 (You need to install [Windows Management Framework](https://www.microsoft.com/download/details.aspx?id=54616)) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file, system state/bare metal | | SQL Server | SQL Server 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 and V3 UR2 | All deployment scenarios: database <br><br> MABS v3 UR2 and later supports the backup of SQL database, stored on the Cluster Shared Volume. <br><br> MABS v3 UR1 supports the backup of SQL databases over ReFS volumes <br><br> MABS doesn't support SQL Server databases hosted on Windows Server 2012 Scale-Out File Servers (SOFS). <br><br> MABS can't protect SQL server Distributed Availability Group (DAG) or Availability Group (AG), where the role name on the failover cluster is different than the named AG on SQL. |
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 1/11/2022 Last updated : 1/18/2022
The following tables show the Microsoft Security Response Center (MSRC) updates
[4578955]: https://support.microsoft.com/kb/4578955 [4578953]: https://support.microsoft.com/kb/4578953 [4578956]: https://support.microsoft.com/kb/4578956
-[4578950]: https://support.microsoft.com/kb/4578950  
-[4578954]: https://support.microsoft.com/kb/4578954 
-[5004335]: https://support.microsoft.com/kb/5004335 
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[5004335]: https://support.microsoft.com/kb/5004335
[5008244]: https://support.microsoft.com/kb/5008244 [5008277]: https://support.microsoft.com/kb/5008277 [5008263]: https://support.microsoft.com/kb/5008263
-[5001401]: https://support.microsoft.com/kb/5001401 
-[5001403]: https://support.microsoft.com/kb/5001403 
+[5001401]: https://support.microsoft.com/kb/5001401
+[5001403]: https://support.microsoft.com/kb/5001403
[4578013]: https://support.microsoft.com/kb/4578013 [5005698]: https://support.microsoft.com/kb/5005698 [5006749]: https://support.microsoft.com/kb/5006749
-[5008287]: https://support.microsoft.com/kb/5008287 
-[4494175 ]: https://support.microsoft.com/kb/4494175 
+[5008287]: https://support.microsoft.com/kb/5008287
+[4494175]: https://support.microsoft.com/kb/4494175
[4494174]: https://support.microsoft.com/kb/4494174 [2.117]: ./cloud-services-guestos-update-matrix.md#family-2-releases [3.104]: ./cloud-services-guestos-update-matrix.md#family-3-releases
cognitive-services Multivariate How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/How-to/multivariate-how-to.md
+
+ Title: How to use Multivariate Anomaly Detector APIs on your time series data
+
+description: Learn how to detect anomalies in your data with multivariate anomaly detector.
++++++ Last updated : 01/18/2022+++
+# How to: Use Multivariate Anomaly Detector on your time series data
+
+Unlike the Univariate Anomaly Detector (UVAD), the Multivariate Anomaly Detector (MVAD) involves two primary processes: **training** and **inference**. During the inference process, you can choose to use either an asynchronous API or a synchronous API to trigger inference. Both of these APIs support batch and streaming scenarios.
+
+The following are the basic steps needed to use MVAD:
+ 1. Create an Anomaly Detector resource in the Azure portal.
+ 1. Prepare data for training and inference.
+ 1. Train an MVAD model.
+ 1. Get model status.
+ 1. Detect anomalies during the inference process with the trained MVAD model.
+
+To test out this feature, try this SDK [Notebook](https://github.com/Azure-Samples/AnomalyDetector/blob/master/ipython-notebook/API%20Sample/Multivariate%20API%20Demo%20Notebook.ipynb).
+
+## Multivariate Anomaly Detector APIs overview
+
+The Multivariate Anomaly Detector provides a set of APIs that cover the whole lifecycle of training and inference. For more information, refer to [Anomaly Detector API Operations](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1-preview-1/operations/DetectAnomaly). Here are the **8 APIs** in MVAD:
+
+| APIs | Description |
+| - | - |
+| `/multivariate/models`| Create and train model using training data. |
+| `/multivariate/models/{modelid}`| Get model info including training status and parameters used in the model.|
+| `/multivariate/models[?$skip][&$top]`|List models in a subscription. |
+| `/multivariate/models/{modelid}/detect`| Submit asynchronous inference task with data. |
+| `/multivariate/models/{modelId}/last/detect`| Submit synchronous inference task with data. |
+| `/multivariate/results/{resultid}` | Get inference result with resultID in asynchronous inference. |
+| `/multivariate/models/{modelId}`| Delete an existing multivariate model according to the modelId. |
+| `/multivariate/models/{modelId}/export`| Export model as a Zip file. |
++
+## Create an Anomaly Detector resource in Azure portal
+
+* Create an Azure subscription if you don't have one - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* Once you have your Azure subscription, [create an Anomaly Detector resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) in the Azure portal to get your API key and API endpoint.
+
+> [!NOTE]
+> During the preview stage, MVAD is available in limited regions only. Bookmark [What's new in Anomaly Detector](../whats-new.md) to stay up to date with MVAD region roll-outs. You can also file a GitHub issue or contact us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com) to request information about the timeline for specific regions being supported.
++
+## Data preparation
+
+Next, you need to prepare your training data (and your inference data, if you use the asynchronous API).
+++
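+As a rough illustration, the following Python sketch (not part of the official samples) zips a folder of per-variable CSV files into a single archive before you upload it to Azure Blob Storage and generate a SAS URL for the training request. The folder name, file names, and column layout here are assumptions for illustration only; follow the data preparation guidance above for the exact format your data must use.
+
+```python
+# Minimal sketch: package per-variable CSV files into one zip archive.
+# Assumption: each CSV holds one variable's time series (for example,
+# "timestamp" and "value" columns) and the file name is the variable name.
+import os
+import zipfile
+
+csv_folder = "variables"            # e.g. variables/cpu.csv, variables/data_in_speed.csv
+archive_path = "training_data.zip"
+
+with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
+    for file_name in sorted(os.listdir(csv_folder)):
+        if file_name.endswith(".csv"):
+            archive.write(os.path.join(csv_folder, file_name), arcname=file_name)
+
+print("Created", archive_path)
+# Upload the archive to Azure Blob Storage and generate a SAS URL;
+# that SAS URL becomes the "source" value in the training request body.
+```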
+## Train an MVAD model
+
+Here is a sample request body and the sample code in Python to train an MVAD model.
+
+```json
+// Sample Request Body
+{
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0
+ },
+ // This could be your own ZIP file of training data stored on Azure Blob and a SAS url could be used here
+ "source": "https://aka.ms/AnomalyDetector/MVADSampleData",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T12:00:00Z",
+ "displayName": "Contoso model"
+}
+```
+
+```python
+# Sample Code in Python
+########### Python 3.x #############
+import http.client, urllib.parse
+
+headers = {
+ # Request headers
+ 'Content-Type': 'application/json',
+ 'Ocp-Apim-Subscription-Key': '{API key}',
+}
+
+params = urllib.parse.urlencode({})
+
+try:
+ conn = http.client.HTTPSConnection('{endpoint}')
+ conn.request("POST", "/anomalydetector/v1.1-preview/multivariate/models?%s" % params, "{request body}", headers)
+ response = conn.getresponse()
+ data = response.read()
+ print(data)
+ conn.close()
+except Exception as e:
+ # Print the full exception; not all exceptions expose errno/strerror.
+ print("Request failed: {0}".format(e))
+
+####################################
+```
+
+Response code `201` indicates a successful request.
++
+## Get model status
+Because the training API is asynchronous, you won't get the model immediately after calling the training API. However, you can query the status of models either by API key, which lists all the models, or by model ID, which lists information about the specific model.
++
+### List all the models
+
+You may refer to [this page](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1-preview/operations/ListMultivariateModel) for information about the request URL and request headers. Notice that only 10 models, ordered by update time, are returned per request, but you can page through other models by setting the `$skip` and `$top` parameters in the request URL. For example, if your request URL is `https://{endpoint}/anomalydetector/v1.1-preview/multivariate/models?$skip=10&$top=20`, then the latest 10 models are skipped and the next 20 models are returned.
+
+A sample response is
+
+```json
+{
+ "models": [
+ {
+ "createdTime":"2020-12-01T09:43:45Z",
+ "displayName":"DevOps-Test",
+ "lastUpdatedTime":"2020-12-01T09:46:13Z",
+ "modelId":"b4c1616c-33b9-11eb-824e-0242ac110002",
+ "status":"READY",
+ "variablesCount":18
+ },
+ {
+ "createdTime":"2020-12-01T09:43:30Z",
+ "displayName":"DevOps-Test",
+ "lastUpdatedTime":"2020-12-01T09:45:10Z",
+ "modelId":"ab9d3e30-33b9-11eb-a3f4-0242ac110002",
+ "status":"READY",
+ "variablesCount":18
+ }
+ ],
+ "currentCount": 1,
+ "maxCount": 50,
+ "nextLink": "<link to more models>"
+}
+```
+
+The response contains four fields, `models`, `currentCount`, `maxCount`, and `nextLink`.
+
+* `models` contains the created time, last updated time, model ID, display name, variable counts, and the status of each model.
+* `currentCount` contains the number of trained multivariate models.
+* `maxCount` is the maximum number of models supported by this Anomaly Detector resource.
+* `nextLink` could be used to fetch more models.
+
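+As a minimal sketch (using the third-party `requests` package, which is an assumption rather than part of the official samples), the following Python code pages through trained models with the `$skip` and `$top` parameters described above. Replace the endpoint and key placeholders with your own values.
+
+```python
+# Minimal sketch: list trained MVAD models, skipping the latest 10 and returning the next 20.
+import requests
+
+endpoint = "https://{endpoint}"     # e.g. https://<your-resource>.cognitiveservices.azure.com
+api_key = "{API key}"
+
+url = f"{endpoint}/anomalydetector/v1.1-preview/multivariate/models?$skip=10&$top=20"
+headers = {"Ocp-Apim-Subscription-Key": api_key}
+
+response = requests.get(url, headers=headers)
+response.raise_for_status()
+
+for model in response.json()["models"]:
+    print(model["modelId"], model["status"], model["displayName"])
+```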
+### Get models by model ID
+
+To learn about the request URL for querying a model by model ID, see [Get Multivariate Model](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1-preview/operations/GetMultivariateModel). A sample response looks like this:
+
+```json
+{
+ "modelId": "45aad126-aafd-11ea-b8fb-d89ef3400c5f",
+ "createdTime": "2020-06-30T00:00:00Z",
+ "lastUpdatedTime": "2020-06-30T00:00:00Z",
+ "modelInfo": {
+ "slidingWindow": 300,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0
+ },
+ "source": "<TRAINING_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS>",
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-02T00:00:00Z",
+ "displayName": "Devops-MultiAD",
+ "status": "READY",
+ "errors": [],
+ "diagnosticsInfo": {
+ "modelState": {
+ "epochIds": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100],
+ "trainLosses": [0.6291328072547913, 0.1671326905488968, 0.12354248017072678, 0.1025966405868533,
+ 0.0958492755889896, 0.09069952368736267,0.08686016499996185, 0.0860302299260931,
+ 0.0828735455870684, 0.08235538005828857],
+ "validationLosses": [1.9232804775238037, 1.0645641088485718, 0.6031560301780701, 0.5302737951278687,
+ 0.4698025286197664, 0.4395163357257843, 0.4182931482799006, 0.4057914316654053,
+ 0.4056498706340729, 0.3849248886108984],
+ "latenciesInSeconds": [0.3398594856262207, 0.3659665584564209, 0.37360644340515137,
+ 0.3513407707214355, 0.3370304107666056, 0.31876277923583984,
+ 0.3283309936523475, 0.3503587245941162, 0.30800247192382812,
+ 0.3327946662902832]
+ },
+ "variableStates": [
+ {
+ "variable": "ad_input",
+ "filledNARatio": 0,
+ "effectiveCount": 1441,
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-02T00:00:00Z",
+ "errors": []
+ },
+ {
+ "variable": "ad_ontimer_output",
+ "filledNARatio": 0,
+ "effectiveCount": 1441,
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-02T00:00:00Z",
+ "errors": []
+ },
+ // More variables
+ ]
+ }
+ }
+ }
+```
+
+You will receive more detailed information about the queried model. The response contains meta information about the model, its training parameters, and diagnostic information. Diagnostic Information is useful for debugging and tracing training progress.
+
+* `epochIds` indicates how many epochs the model has been trained out of a total of 100 epochs. For example, if the model is still in training status, `epochId` might be `[10, 20, 30, 40, 50]` , which means that it has completed its 50th training epoch, and therefore is halfway complete.
+* `trainLosses` and `validationLosses` are used to check whether the optimization is converging; if it is, the two losses should decrease gradually.
+* `latenciesInSeconds` contains the time cost for each epoch and is recorded every 10 epochs. In this example, the 10th epoch takes approximately 0.34 seconds. This is helpful for estimating the completion time of training.
+* `variableStates` summarizes information about each variable. It is a list ranked by `filledNARatio` in descending order. It tells how many data points are used for each variable, and `filledNARatio` tells how many points are missing. Usually, `filledNARatio` should be reduced as much as possible, because too many missing data points will degrade model accuracy.
+* Errors during data processing will be included in the `errors` field.
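+For example, a minimal Python sketch (an illustration using the third-party `requests` package, with the same placeholders as elsewhere in this article and an illustrative missing-data threshold) that fetches a model by ID and prints these diagnostics might look like this:
+
+```python
+# Minimal sketch: fetch a model by ID and inspect its training diagnostics.
+import requests
+
+endpoint = "https://{endpoint}"     # e.g. https://<your-resource>.cognitiveservices.azure.com
+api_key = "{API key}"
+model_id = "{modelId}"
+
+response = requests.get(
+    f"{endpoint}/anomalydetector/v1.1-preview/multivariate/models/{model_id}",
+    headers={"Ocp-Apim-Subscription-Key": api_key})
+response.raise_for_status()
+
+info = response.json()["modelInfo"]
+state = info["diagnosticsInfo"]["modelState"]
+
+print("Status:", info["status"])
+print("Epochs completed:", state["epochIds"][-1] if state["epochIds"] else 0)
+print("Latest train/validation loss:", state["trainLosses"][-1], state["validationLosses"][-1])
+
+# Variables with a high ratio of filled (missing) points can hurt accuracy.
+for variable in info["diagnosticsInfo"]["variableStates"]:
+    if variable["filledNARatio"] > 0.1:          # illustrative threshold
+        print("High missing-data ratio:", variable["variable"], variable["filledNARatio"])
+```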
++
+## Inference with asynchronous API
+
+You can choose either the asynchronous API or the synchronous API for inference.
+
+| Asynchronous API | Synchronous API |
+| - | - |
+| More suitable for batch use cases where customers don't need inference results immediately and want to detect anomalies and get results over a longer time period. | Recommended when customers want to get inference results immediately and detect multivariate anomalies in real time. Also suitable for customers who have difficulty with the compressing and uploading process required for asynchronous inference. |
+
+To perform asynchronous inference, provide the blob source path to the zip file containing the inference data, the start time, and end time.
+
+This inference is asynchronous, so the results are not returned immediately. Notice that you need to save the link to the results from the **response header**, which contains the `resultId`, so that you know where to get the results afterwards.
+
+Failures are usually caused by model issues or data issues. You cannot perform inference if the model is not ready or if the data link is invalid. Make sure that the training data and inference data are consistent: they must contain **exactly** the same variables, just with different timestamps. More variables, fewer variables, or inference with a different set of variables will not pass the data verification phase, and errors will occur. Data verification is deferred, so you will get an error message only when you query the results.
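+Putting these steps together, here's a minimal Python sketch (an illustration using the third-party `requests` package, not an official sample) that submits an asynchronous inference request and then polls for the results, as described in the next section. It assumes the link containing the `resultId` is returned in the `Location` response header; adjust the header name if your response differs.
+
+```python
+# Minimal sketch: trigger asynchronous inference, then poll the results endpoint.
+import time
+import requests
+
+endpoint = "https://{endpoint}"     # e.g. https://<your-resource>.cognitiveservices.azure.com
+api_key = "{API key}"
+model_id = "{modelId}"
+headers = {"Ocp-Apim-Subscription-Key": api_key, "Content-Type": "application/json"}
+
+body = {
+    # SAS URL of the zipped inference data in Azure Blob Storage
+    "source": "https://aka.ms/AnomalyDetector/MVADSampleData",
+    "startTime": "2019-04-01T00:15:00Z",
+    "endTime": "2019-04-01T00:40:00Z",
+}
+
+submit = requests.post(
+    f"{endpoint}/anomalydetector/v1.1-preview/multivariate/models/{model_id}/detect",
+    headers=headers, json=body)
+submit.raise_for_status()
+
+# Assumption: the link containing the resultId is returned in the Location header.
+result_url = submit.headers["Location"]
+
+# Poll until the result reaches a terminal status (assumed to include READY and FAILED).
+while True:
+    result = requests.get(result_url, headers=headers).json()
+    if result["summary"]["status"] in ("READY", "FAILED"):
+        break
+    time.sleep(10)
+
+print(result["summary"]["status"])
+```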
+
+### Get inference results (asynchronous only)
+
+You need the `resultId` to get results. `resultId` is obtained from the response header when you submit the inference request. Consult [this page for instructions to query the inference results](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1-preview/operations/GetDetectionResult).
+
+A sample response looks like this:
+
+```json
+ {
+ "resultId": "663884e6-b117-11ea-b3de-0242ac130004",
+ "summary": {
+ "status": "READY",
+ "errors": [],
+ "variableStates": [
+ {
+ "variable": "ad_input",
+ "filledNARatio": 0,
+ "effectiveCount": 26,
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-01T00:25:00Z",
+ "errors": []
+ },
+ {
+ "variable": "ad_ontimer_output",
+ "filledNARatio": 0,
+ "effectiveCount": 26,
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-01T00:25:00Z",
+ "errors": []
+ },
+ // more variables
+ ],
+ "setupInfo": {
+ "source": "https://aka.ms/AnomalyDetector/MVADSampleData",
+ "startTime": "2019-04-01T00:15:00Z",
+ "endTime": "2019-04-01T00:40:00Z"
+ }
+ },
+ "results": [
+ {
+ "timestamp": "2019-04-01T00:15:00Z",
+ "errors": [
+ {
+ "code": "InsufficientHistoricalData",
+ "message": "historical data is not enough."
+ }
+ ]
+ },
+ // more results
+ {
+ "timestamp": "2019-04-01T00:20:00Z",
+ "value": {
+ "contributors": [],
+ "isAnomaly": false,
+ "severity": 0,
+ "score": 0.17805261260751692
+ }
+ },
+ // more results
+ {
+ "timestamp": "2019-04-01T00:27:00Z",
+ "value": {
+ "contributors": [
+ {
+ "contributionScore": 0.0007775013367514271,
+ "variable": "ad_ontimer_output"
+ },
+ {
+ "contributionScore": 0.0007989604079048129,
+ "variable": "ad_series_init"
+ },
+ {
+ "contributionScore": 0.0008900927229851369,
+ "variable": "ingestion"
+ },
+ {
+ "contributionScore": 0.008068144477478554,
+ "variable": "cpu"
+ },
+ {
+ "contributionScore": 0.008222036467507165,
+ "variable": "data_in_speed"
+ },
+ {
+ "contributionScore": 0.008674941549594993,
+ "variable": "ad_input"
+ },
+ {
+ "contributionScore": 0.02232242629793674,
+ "variable": "ad_output"
+ },
+ {
+ "contributionScore": 0.1583773213660846,
+ "variable": "flink_last_ckpt_duration"
+ },
+ {
+ "contributionScore": 0.9816531517495176,
+ "variable": "data_out_speed"
+ }
+ ],
+ "isAnomaly": true,
+ "severity": 0.42135109874230336,
+ "score": 1.213510987423033
+ }
+ },
+ // more results
+ ]
+ }
+```
+
+The response contains the result status, variable information, inference parameters, and inference results.
+
+* `variableStates` lists the information of each variable in the inference request.
+* `setupInfo` is the request body submitted for this inference.
+* `results` contains the detection results. There are three typical types of detection results.
+
+* Error code `InsufficientHistoricalData`. This usually happens only with the first few timestamps, because the model performs inference on data in a window-based manner and needs historical data to make a decision. For the first few timestamps, there is insufficient historical data, so inference cannot be performed on them. In this case, the error message can be ignored.
+
+* `"isAnomaly": false` indicates the current timestamp is not an anomaly.
+ * `severity` indicates the relative severity of the anomaly, and for normal data it is always 0.
+ * `score` is the raw output of the model on which the model makes a decision, which could be non-zero even for normal data points.
+* `"isAnomaly": true` indicates an anomaly at the current timestamp.
+ * `severity` indicates the relative severity of the anomaly, and for abnormal data it is always greater than 0.
+ * `score` is the raw output of the model on which the model makes a decision. `severity` is a derived value from `score`. Every data point has a `score`.
+* `contributors` is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes.
+
+> [!NOTE]
+> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
+> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
+> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
++
+## (NEW) inference with synchronous API
+
+> [!NOTE]
+> In v1.1-preview.1, we support the synchronous API and add more fields to the inference results of both the asynchronous API and the synchronous API. You can upgrade the API version to access these features. Once you upgrade, you'll no longer be able to use a model trained with the old version; you should retrain a model to fit the new fields. [Learn more about v1.1-preview.1](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1-preview-1/operations/DetectAnomaly).
+
+With the synchronous API, you can get inference results point by point in real time, with no need for the compressing and uploading tasks required by training and asynchronous inference. Here are some requirements for the synchronous API:
+* You need to put the data in **JSON format** in the API request body.
+* The inference results are limited to up to 10 data points, which means you can detect **1 to 10 timestamps** with one synchronous API call.
+* Due to payload limitations, the size of the inference data in the request body is limited to at most `2880` timestamps * `300` variables.
+
+### Request schema
+
+You submit the timestamps and values of multiple variables in JSON format in the request body, with an API call like this:
+
+`https://{endpoint}/anomalydetector/v1.1-preview.1/multivariate/models/{modelId}/last/detect`
+
+A sample request looks like the following format. In this case, the last two timestamps (`detectingPoints` is 2) of 3 variables are detected in one synchronous API call.
+
+```json
+{
+ "variables": [
+ {
+ "variableName": "Variable_1",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.4551378545933972,
+ 0.7388603950488748,
+ 0.201088255984052
+ //more values
+ ]
+ },
+ {
+ "variableName": "Variable_2",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.9617871613964145,
+ 0.24903311574778408,
+ 0.4920561254118613
+ //more values
+ ]
+ },
+ {
+ "variableName": "Variable_3",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.4030756879437628,
+ 0.15526889968448554,
+ 0.36352226408981103
+ //more values
+ ]
+ }
+ ],
+ "detectingPoints": 2
+}
+```
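+To send such a request from code, a minimal Python sketch (an illustration using the third-party `requests` package, not an official sample) might look like the following. The placeholders are the same as in the rest of this article, and the variable values are truncated for brevity.
+
+```python
+# Minimal sketch: call the synchronous last/detect endpoint with inline JSON data.
+import requests
+
+endpoint = "https://{endpoint}"     # e.g. https://<your-resource>.cognitiveservices.azure.com
+api_key = "{API key}"
+model_id = "{modelId}"
+
+url = f"{endpoint}/anomalydetector/v1.1-preview.1/multivariate/models/{model_id}/last/detect"
+headers = {"Ocp-Apim-Subscription-Key": api_key, "Content-Type": "application/json"}
+
+body = {
+    "variables": [
+        {
+            "variableName": "Variable_1",
+            "timestamps": ["2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z", "2021-01-01T00:02:00Z"],
+            "values": [0.455, 0.738, 0.201],
+        },
+        # ...add the remaining variables, matching the set used at training time
+    ],
+    "detectingPoints": 2,
+}
+
+response = requests.post(url, headers=headers, json=body)
+response.raise_for_status()
+
+for point in response.json()["results"]:
+    print(point["timestamp"], point["value"]["isAnomaly"], point["value"]["severity"])
+```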
+
+### Response schema
+
+After you call the synchronous API, you get the JSON response of the inference results in real time, which contains the following new fields:
+
+| Field | Description |
+| - | - |
+| `interpretation`| This field appears only when a timestamp is detected as anomalous. It contains `variables`, `contributionScore`, and `correlationChanges`. |
+| `correlationChanges`| This field appears only when a timestamp is detected as anomalous and is included in `interpretation`. It contains `changedVariables` and `changedValues`, which interpret which correlations between variables changed. |
+| `changedVariables`| This field shows which variables have a significant change in correlation with `variable`. |
+| `changedValues`| This field is a number between 0 and 1 showing how much the correlation between variables changed. The bigger the number, the greater the change in correlation. |
++
+See the following example of a JSON response:
+
+```json
+{
+ "variableStates": [
+ {
+ "variable": "variable_1",
+ "filledNARatio": 0,
+ "effectiveCount": 30,
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-01T00:29:00Z"
+ },
+ {
+ "variable": "variable_2",
+ "filledNARatio": 0,
+ "effectiveCount": 30,
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-01T00:29:00Z"
+ },
+ {
+ "variable": "variable_3",
+ "filledNARatio": 0,
+ "effectiveCount": 30,
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-01T00:29:00Z"
+ }
+ ],
+ "results": [
+ {
+ "timestamp": "2021-01-01T00:28:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0,
+ "score": 0.6928471326828003
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-01T00:29:00Z",
+ "value": {
+ "isAnomaly": true,
+ "severity": 0.5337404608726501,
+ "score": 0.9171165823936462,
+ "interpretation": [
+ {
+ "variable": "variable_2",
+ "contributionScore": 0.5371576215,
+ "correlationChanges": {
+ "changedVariables": [
+ "variable_1",
+ "variable_3"
+ ],
+ "changedValues": [
+ 0.1741322,
+ 0.1093203
+ ]
+ }
+ },
+ {
+ "variable": "variable_3",
+ "contributionScore": 0.3324159383,
+ "correlationChanges": {
+ "changedVariables": [
+ "variable_2"
+ ],
+ "changedValues": [
+ 0.1229392
+ ]
+ }
+ },
+ {
+ "variable": "variable_1",
+ "contributionScore": 0.1304264402,
+ "correlationChanges": {
+ "changedVariables": [],
+ "changedValues": []
+ }
+ }
+ ]
+ },
+ "errors": []
+ }
+ ]
+}
+```
+
+## Next steps
+
+* [What is the Multivariate Anomaly Detector API?](../overview-multivariate.md)
+* [Join us to get more support!](https://aka.ms/adadvisorsjoin)
cognitive-services Overview Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview-multivariate.md
Previously updated : 04/01/2021 Last updated : 01/16/2022 keywords: anomaly detection, machine learning, algorithms
If your goal is to detect anomalies out of a normal pattern on each individual t
If your goal is to detect system-level anomalies from a group of time series data, use the multivariate anomaly detection APIs, particularly when any individual time series won't tell you much and you have to look at all signals (a group of time series) holistically to determine a system-level issue. For example, you might have an expensive physical asset like an aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or hundreds of different types of sensors. You would have to look at all those time series signals from those sensors to decide whether there is a system-level issue.
-## Notebook
-
-To learn how to call the Anomaly Detector API (multivariate), try this [Notebook](https://github.com/Azure-Samples/AnomalyDetector/blob/master/ipython-notebook/API%20Sample/Multivariate%20API%20Demo%20Notebook.ipynb). This Jupyter Notebook shows you how to send an API request and visualize the result.
-
-To run the Notebook, you should get a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
+## Sample Notebook
+
+To learn how to call the Multivariate Anomaly Detector API, try this [Notebook](https://github.com/Azure-Samples/AnomalyDetector/blob/master/ipython-notebook/API%20Sample/Multivariate%20API%20Demo%20Notebook.ipynb). To run the Notebook, you only need a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
+
+Multivariate Anomaly Detector includes three main steps: **data preparation**, **training**, and **inference**.
+
+### Data preparation
+For data preparation, you prepare two sets of data: **training data** and **inference data**. For training data, upload your data to Blob Storage and generate a SAS URL, which is used in the training API. For inference data, you can either use the same data format as the training data, or send the data in JSON format in the API request body. Which option you use depends on the API you choose for the inference process.
+
+### Training
+When training a model, you call an asynchronous API on your training data, which means you won't get the model status immediately after calling this API; you call another API to get the model status.
+
+### Inference
+In the inference process, you have two options: an asynchronous API or a synchronous API. If you want to do batch validation, use the asynchronous API. If you want to do streaming at a short granularity and get the inference result immediately after each API request, use the synchronous API.
+* With the asynchronous API, as with the training process, you won't get the inference result immediately; you use another API to request the result after some time. Data preparation is similar to the training process.
+* With the synchronous API, you get the inference result immediately after the request, and you send your data in JSON format in the API request body.
## Region support
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/whats-new.md
description: This article is regularly updated with news about the Azure Cogniti
Previously updated : 06/23/2021 Last updated : 01/16/2022 # What's new in Anomaly Detector
We've also added links to some user-generated content. Those items will be marke
## Release notes
+### January 2022
+* **Multivariate Anomaly Detector API v1.1-preview.1 public preview on 1/18.** In this version, Multivariate Anomaly Detector supports a synchronous API for inference and adds new fields to the API output that interpret the correlation change of variables.
+* Univariate Anomaly Detector added new fields to the API output.
++ ### November 2021 * Multivariate Anomaly Detector available in six more regions: UAE North, France Central, North Central US, Switzerland North, South Africa North, Jio India West. Now in total 26 regions are supported.
cognitive-services Integrate With Power Virtual Assistant Fallback Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Tutorials/integrate-with-power-virtual-assistant-fallback-topic.md
Last updated 11/09/2020
# Tutorial: Add your knowledge base to Power Virtual Agents Create and extend a [Power Virtual Agents](https://powervirtualagents.microsoft.com/) bot to provide answers from your knowledge base.
+> [!NOTE]
+> The integration demonstrated in this tutorial is in preview and is not intended for deployment to production environments.
+ In this tutorial, you learn how to: <!-- green checkmark -->
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Speech Synthesis Markup Language (SSML) is an XML-based markup language that let
The Speech service implementation of SSML is based on World Wide Web Consortium's [Speech Synthesis Markup Language Version 1.0](https://www.w3.org/TR/2004/REC-speech-synthesis-20040907/). > [!IMPORTANT]
-> Chinese, Japanese, and Korean characters count as two characters for billing. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> Each Chinese character is counted as two characters for billing, including Kanji used in Japanese, Hanja used in Korean, and Hanzi used in other languages. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
## Prebuilt neural voice and custom neural voice
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
When using the Text-to-Speech service, you are billed for each character that is
For detailed information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). > [!IMPORTANT]
-> Each Chinese, Japanese, and Korean language character is counted as two characters for billing.
+> Each Chinese character is counted as two characters for billing, including Kanji used in Japanese, Hanja used in Korean, and Hanzi used in other languages.
## Reference docs
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
Previously updated : 12/15/2021 Last updated : 01/12/2022 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
Azure Cognitive Services containers provide the following set of Docker containe
| Service | Container | Description | Availability |
|--|--|--|--|
| [LUIS][lu-containers] | **LUIS** ([image](https://go.microsoft.com/fwlink/?linkid=2043204&clcid=0x409)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available |
-| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://go.microsoft.com/fwlink/?linkid=2018757&clcid=0x409)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Generally available |
-| [Language service][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Generally available |
-| [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available |
+| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://go.microsoft.com/fwlink/?linkid=2018757&clcid=0x409)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Language service][ta-containers-health] | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. | Generally available |
-| [Translator][tr-containers] | **Translator** | Translate text in several languages and dialects. | Gated preview. [Request access](https://aka.ms/csgate-translator). |
+| [Translator][tr-containers] | **Translator** | Translate text in several languages and dialects. | Gated preview - [request access](https://aka.ms/csgate-translator). <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
### Speech containers
Azure Cognitive Services containers provide the following set of Docker containe
| Service | Container | Description | Availability |
|--|--|--|--|
-| [Speech Service API][sp-containers-stt] | **Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available |
+| [Speech Service API][sp-containers-stt] | **Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Speech Service API][sp-containers-cstt] | **Custom Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. | Generally available | | [Speech Service API][sp-containers-tts] | **Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-text-to-speech)) | Converts text to natural-sounding speech. | Generally available | | [Speech Service API][sp-containers-ctts] | **Custom Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-text-to-speech)) | Converts text to natural-sounding speech using a custom model. | Gated preview |
-| [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available |
+| [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. |
| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-language-detection)) | Determines the language of spoken audio. | Gated preview | ### Vision containers
Azure Cognitive Services containers provide the following set of Docker containe
| Service | Container | Description | Availability |
|--|--|--|--|
-| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Generally Available. |
+| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Generally Available. This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Preview | <!--
cognitive-services Commitment Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/commitment-tier.md
For more information, see [Azure Cognitive Services pricing](https://azure.micro
## Request approval to purchase a commitment plan
+> [!CAUTION]
+> The following instructions are for purchasing a commitment tier for web-based APIs and connected containers only. For instructions on purchasing plans for disconnected containers, see [Run containers in disconnected environments](containers/disconnected-containers.md).
+ Before you can purchase a commitment plan, you must [submit an online application](https://aka.ms/csgatecommitment). If your application is approved, you will be able to purchase a commitment tier on the Azure portal, for both new and existing Azure Resources. * On the form, you must use a corporate email address associated with an Azure subscription ID.
Once you are approved, you can use either create a new resource to use a commitm
> * The resource is using the standard pricing tier. > * You have been approved to purchase commitment tier pricing.
-1. Select **Change** to view the available commitments for hosted API and container usage.
+3. Select **Change** to view the available commitments for hosted API and container usage. Choose a commitment plan for one or more of the following offerings:
+ * **Web**: web-based APIs, where you send data to Azure for processing.
+ * **Connected container**: Docker containers that enable you to [deploy Cognitive services on premises](cognitive-services-container-support.md), and maintain an internet connection for billing and metering.
:::image type="content" source="media/commitment-tier/commitment-tier-pricing.png" alt-text="A screenshot showing the commitment tier pricing page on the Azure portal." lightbox="media/commitment-tier/commitment-tier-pricing.png":::
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/disconnected-containers.md
+
+ Title: Use Docker containers in disconnected environments
+
+description: Learn how to run Azure Cognitive Services Docker containers disconnected from the internet.
+++++ Last updated : 01/14/2022+++
+# Use Docker containers in disconnected environments
+
+Containers enable you to run Cognitive Services APIs in your own environment, and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs completely disconnected from the internet. Currently, the following containers can be run in this manner:
+
+* [Speech to Text (Standard)](../speech-service/speech-container-howto.md?tabs=stt)
+* [Text Translation (Standard)](../translator/containers/translator-how-to-install-container.md#host-computer)
+* Azure Cognitive Service for Language
+ * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md)
+ * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md)
+ * [Language Detection](../language-service/language-detection/how-to/use-containers.md)
+* [Computer Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md)
+
+Disconnected container usage is also available for the following Applied AI service:
+* [Form Recognizer – Custom/Invoice](../../applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md)
+
+Before attempting to run a Docker container in an offline environment, make sure you know the steps to successfully download and use the container. For example:
+* Host computer requirements and recommendations.
+* The Docker `pull` command you will use to download the container.
+* How to validate that a container is running.
+* How to send queries to the container's endpoint, once it's running.
+
+## Request access to use containers in disconnected environments
+
+Fill out and submit the [request form](https://aka.ms/csdisconnectedcontainers) to request access to the containers disconnected from the internet.
++
+Access is limited to customers that meet the following requirements:
+* Your organization must have a Microsoft Enterprise Agreement or an equivalent agreement, and should be identified as a strategic customer or partner with Microsoft.
+* Disconnected containers are expected to run fully offline, so your use cases must meet one of the following or similar requirements:
+ * Environment or device(s) with zero connectivity to the internet.
+ * Remote location that occasionally has internet access.
+ * Organization under strict regulations that prohibit sending any kind of data back to the cloud.
+* Application completed as instructed - Pay close attention to the guidance provided throughout the application to ensure you provide all the necessary information required for approval.
+
+## Purchase a commitment plan to use containers in disconnected environments
+
+### Create a new resource
+
+1. Sign into the [Azure portal](https://portal.azure.com/) and select **Create a new resource** for one of the applicable Cognitive Services or Applied AI services listed above.
+
+2. Enter the applicable information to create your resource. Be sure to select **Commitment tier disconnected containers** as your pricing tier.
+
+ > [!NOTE]
+ > * You will only see the option to purchase a commitment tier if you have been approved by Microsoft.
+ > * Pricing details are for example only.
+
+ :::image type="content" source="media/offline-container-signup.png" alt-text="A screenshot showing resource creation on the Azure portal." lightbox="media/offline-container-signup.png":::
+
+3. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
+
+## Gather required parameters
+
+All Cognitive Services containers require three primary parameters. The end-user license agreement (EULA) must be present with a value of *accept*. Additionally, both an endpoint URL and API key are needed when you first run the container, to configure it for disconnected usage.
+
+You can find the key and endpoint on the **Key and endpoint** page for your resource.
+
+> [!IMPORTANT]
+> You will only use your key and endpoint to configure the container to be run in a disconnected environment. After you configure the container, you won't need them to send API requests. Store them securely, for example, using Azure Key Vault. Only one key is necessary for this process.
+
+## Download a Docker container with `docker pull`
+
+Next, download the Docker container that you have approval to run in a disconnected environment. For example:
+
+```Docker
+docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:latest
+```
+
+## Configure the container to be run in a disconnected environment
+
+Now that you've downloaded your container, you need to run it with the `DownloadLicense=True` parameter in your `docker run` command. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. The license file also contains an expiration date, after which it can no longer be used to run the container.
+
+> [!IMPORTANT]
+> * You can only use a license file with the appropriate container that you've been approved for. For example, you cannot use a license file for a speech-to-text container with a form recognizer container.
+> * If you're using the [Translator container](../translator/containers/translator-how-to-install-container.md), using the example below will generate a docker `run` template that you can use to run the container, containing parameters you will need for the downloaded models and configuration file. Make sure you save this template.
+
+The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+| Placeholder | Value | Format or example |
+|-|-|-|
+| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/volume/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={LICENSE_MOUNT}
+
+After you have configured the container, use the next section to run the container in your environment with the license, and appropriate memory and CPU allocations.
+
+## Run the container in a disconnected environment
+
+> [!IMPORTANT]
+> If you're using the Translator or Speech-to-text containers, read the **Additional parameters** section below for information on commands or additional parameters you will need to use.
+
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+
| Placeholder | Value | Format or example |
+|-|-||
+| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
+| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/volume/license:/path/to/license/directory` |
+| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={LICENSE_MOUNT} \
+Mounts:Output={OUTPUT_PATH}
+```
+
+### Additional parameters and commands
+
+See the following sections for additional parameters and commands you may need to run the container.
+
+#### Translator container
+
+If you're using the [Translator container](../translator/containers/translator-how-to-install-container.md), you will need to add parameters for the downloaded translation models and container configuration. These values are generated and displayed in the container output when you [configure the container](#configure-the-container-to-be-run-in-a-disconnected-environment) as described above. For example:
+```bash
+-e MODELS= /path/to/model1/, /path/to/model2/
+-e TRANSLATORSYSTEMCONFIG=/path/to/model/config/translatorsystemconfig.json
+```
+
+#### Speech-to-text container
+
+The [speech-to-text container](../speech-service/speech-container-howto.md?tabs=stt) provides two default directories, `license` and `output`, for writing the license file and billing log at runtime. When you mount these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directories is set to `user:group nonroot:nonroot` before running the container.
+
+Below is a sample command to set file/directory ownership.
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
+
+## Usage records
+
+When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they are collected over time. You can also call a REST endpoint to generate a report about service usage.
+
+### Arguments for storing logs
+
+When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the example below, replacing `{OUTPUT_PATH}` with the path where the logs will be stored:
+
+```Docker
+docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
+```
+
+### Get records using the container endpoints
+
+The container provides two endpoints for returning records about its usage.
+
+#### Get all records
+
+The following endpoint will provide a report summarizing all of the usage collected in the mounted billing record directory.
+
+```http
+https://<service>/records/usage-logs/
+```
+
+It will return JSON similar to the example below.
+
+```json
+{
+ "apiType": "noop",
+ "serviceName": "noop",
+ "meters": [
+ {
+ "name": "Sample.Meter",
+ "quantity": 253
+ }
+ ]
+}
+```
+#### Get records for a specific month
+
+The following endpoint will provide a report summarizing usage over a specific month and year.
+
+```HTTP
+https://<service>/records/usage-logs/{MONTH}/{YEAR}
+```
+
+It will return a JSON response similar to the example below:
+
+```json
+{
+ "apiType": "string",
+ "serviceName": "string",
+ "meters": [
+ {
+ "name": "string",
+ "quantity": 253
+ }
+ ]
+}
+```
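+As a quick illustration, a minimal Python sketch (using the third-party `requests` package; the host name and port below are assumptions) that retrieves the summary of all usage records from a running container could look like this:
+
+```python
+# Minimal sketch: query a disconnected container's usage-records endpoint.
+import requests
+
+# Assumption: the container endpoint is reachable at this host and port.
+service = "http://localhost:5000"
+
+response = requests.get(f"{service}/records/usage-logs/")
+response.raise_for_status()
+
+report = response.json()
+for meter in report["meters"]:
+    print(meter["name"], meter["quantity"])
+```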
+
+## Purchase a different commitment plan for disconnected containers
+
+Commitment plans for disconnected containers have a calendar-year commitment period. When you purchase a plan, you are charged the full price immediately. During the commitment period, you cannot change your commitment plan; however, you can purchase additional unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
+
+You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource.
+
+## End a commitment plan
+
+If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You will be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers, and not be charged for the following year.
+
+## Troubleshooting
+
+If you run the container with an output mount and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
+
+> [!TIP]
+> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](disconnected-container-faq.yml).
+## Next steps
+
+[Azure Cognitive Services containers overview](../cognitive-services-container-support.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/how-to/call-api.md
See the [application development lifecycle](../overview.md#project-development-l
2. Select **Deploy model** from the left side menu.
-3. Select the model you want to deploy, then select **Deploy model**.
+3. Select the model you want to deploy, then select **Deploy model**. If you deploy your model through the Language Studio, your `deployment-name` is `prod`.
> [!TIP] > You can test your model in Language Studio by sending samples of text for it to classify.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
See the [application development lifecycle](../overview.md#application-developme
2. Select **Deploy model** from the left side menu.
-3. Select the model you want to deploy, then select **Deploy model**.
+3. Select the model you want to deploy, then select **Deploy model**. If you deploy your model through the Language Studio, your `deployment-name` is `prod`.
> [!TIP] > You can test your model in Language Studio by sending samples of text for it to classify.
See the [application development lifecycle](../overview.md#application-developme
> 4. Click on **Run the test**. > 5. In the **Result** tab, you can see the extracted entities from your text. You can also view the JSON response under the **JSON** tab.
-## Send a text classification request to your model
+## Send an entity recognition request to your model
# [Using Language Studio](#tab/language-studio)
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/join-teams-meeting.md
Custom applications built with Azure Communication Services to connect and commu
As with Teams anonymous meeting join, your application must have the meeting link to join, which can be retrieved via the Graph API or from the calendar in Microsoft Teams. The name of BYOI users displayed in Teams is configurable via the Communication Services Calling SDK and they're labeled as "external" to let Teams users know they haven't been authenticated using Azure Active Directory. When the first ACS user joins a Teams meeting, the Teams client will display a message indicating that some features might not be available because one of the participants is using a custom client.
+A Communication Services user will not be admitted to a Teams meeting until there is at least one Teams user present in the meeting. Once a Teams user is present, the Communication Services user will wait in the lobby until explicitly admitted by a Teams user, unless the "Who can bypass the lobby?" meeting policy/setting is set to "Everyone".
+ During a meeting, Communication Services users will be able to use core audio, video, screen sharing, and chat functionality via Azure Communication Services SDKs. Once a Communication Services user leaves the meeting or the meeting ends, they can no longer send or receive new chat messages, but they will have access to messages sent and received during the meeting. Anonymous Communication Services users cannot add/remove participants to/from the meeting and they cannot start recording or transcription for the meeting. Additional information on required dataflows for joining Teams meetings is available at the [client and server architecture page](client-and-server-architecture.md). The [Group Calling Hero Sample](../samples/calling-hero-sample.md) provides example code for joining a Teams meeting from a web application.
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/azure-resource-manager-api-spec.md
The following is an example ARM template used to deploy a container app.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "containerappName": {
cosmos-db Convert Vcore To Request Unit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/convert-vcore-to-request-unit.md
Title: 'Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s' description: 'Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s'--++
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-migrationchoices.md
Title: Cosmos DB Migration options description: This doc describes the various options to migrate your on-premises or cloud data to Azure Cosmos DB--++ Last updated 11/03/2021
cosmos-db Migrate Cosmosdb Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/migrate-cosmosdb-data.md
Title: Migrate hundreds of terabytes of data into Azure Cosmos DB description: This doc describes how you can migrate 100s of terabytes of data into Cosmos DB--++
cosmos-db Tutorial Mongotools Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/tutorial-mongotools-cosmos-db.md
Title: Migrate MongoDB offline to Azure Cosmos DB API for MongoDB, using MongoDB native tools description: Learn how MongoDB native tools can be used to migrate small datasets from MongoDB instances to Azure Cosmos DB--++
cosmos-db Partners Migration Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partners-migration-cosmosdb.md
Title: Migration and application development partners for Azure Cosmos DB description: Lists Microsoft partners with migration solutions that support Azure Cosmos DB.--++ Last updated 08/26/2021
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-dotnet-v4.md
Title: Manage Azure Cosmos DB SQL API resources using .NET V4 SDK description: Use this quickstart to build a console app by using the .NET V4 SDK to manage Azure Cosmos DB SQL API account resources.--++ ms.devlang: csharp
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-dotnet.md
Title: Quickstart - Build a .NET console app to manage Azure Cosmos DB SQL API resources description: Learn how to build a .NET console app to manage Azure Cosmos DB SQL API account resources in this quickstart.--++ ms.devlang: csharp
cosmos-db Create Sql Api Java Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-java-changefeed.md
Title: Create an end-to-end Azure Cosmos DB Java SDK v4 application sample by using Change Feed description: This guide walks you through a simple Java SQL API application which inserts documents into an Azure Cosmos DB container, while maintaining a materialized view of the container using Change Feed.-+ ms.devlang: java Last updated 06/11/2020-+
cosmos-db Create Sql Api Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-java.md
Title: Quickstart - Use Java to create a document database using Azure Cosmos DB description: This quickstart presents a Java code sample you can use to connect to and query the Azure Cosmos DB SQL API-+ ms.devlang: java Last updated 08/26/2021-+
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-nodejs.md
Title: Quickstart- Use Node.js to query from Azure Cosmos DB SQL API account description: How to use Node.js to create an app that connects to Azure Cosmos DB SQL API account and queries data.-+ ms.devlang: javascript Last updated 08/26/2021-+
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-spark.md
Title: Quickstart - Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API description: This quickstart presents a code sample for the Azure Cosmos DB Spark 3 OLTP Connector for SQL API that you can use to connect to and query data in your Azure Cosmos DB account-+ ms.devlang: java Last updated 11/23/2021-+
cosmos-db Create Sql Api Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-spring-data.md
Title: Quickstart - Use Spring Data Azure Cosmos DB v3 to create a document database using Azure Cosmos DB description: This quickstart presents a Spring Data Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB SQL API-+ ms.devlang: java Last updated 08/26/2021-+
cosmos-db How To Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/how-to-time-to-live.md
Title: Configure and manage Time to Live in Azure Cosmos DB description: Learn how to configure and manage time to live on a container and an item in Azure Cosmos DB-+ Last updated 12/09/2021-+ ms.devlang: csharp
cosmos-db Migrate Java V4 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/migrate-java-v4-sdk.md
Title: Migrate your application to use the Azure Cosmos DB Java SDK v4 (com.azure.cosmos) description: Learn how to upgrade your existing Java application from using the older Azure Cosmos DB Java SDKs to the newer Java SDK 4.0 (com.azure.cosmos package)for Core (SQL) API.-+ ms.devlang: java -+
cosmos-db Performance Tips Async Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/performance-tips-async-java.md
Title: Performance tips for Azure Cosmos DB Async Java SDK v2 description: Learn client configuration options to improve Azure Cosmos database performance for Async Java SDK v2-+ ms.devlang: java Last updated 05/11/2020-+
cosmos-db Performance Tips Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/performance-tips-java-sdk-v4-sql.md
Title: Performance tips for Azure Cosmos DB Java SDK v4 description: Learn client configuration options to improve Azure Cosmos database performance for Java SDK v4-+ ms.devlang: java Last updated 08/26/2021-+
cosmos-db Performance Tips Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/performance-tips-java.md
Title: Performance tips for Azure Cosmos DB Sync Java SDK v2 description: Learn client configuration options to improve Azure Cosmos database performance for Sync Java SDK v2-+ ms.devlang: java Last updated 05/11/2020-+
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-java-application.md
Title: 'Tutorial: Build a Java web app using Azure Cosmos DB and the SQL API' description: 'Tutorial: This Java web application tutorial shows you how to use the Azure Cosmos DB and the SQL API to store and access data from a Java application hosted on Azure Websites.'-+ ms.devlang: java Last updated 08/26/2021-+
cosmos-db Sql Api Java Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-java-sdk-samples.md
Title: 'Azure Cosmos DB SQL API: Java SDK v4 examples' description: Find Java examples on GitHub for common tasks using the Azure Cosmos DB SQL API, including CRUD operations.-+ Last updated 08/26/2021 ms.devlang: java -+ # Azure Cosmos DB SQL API: Java SDK v4 examples
cosmos-db Sql Api Sdk Async Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-async-java.md
Title: 'Azure Cosmos DB: SQL Async Java API, SDK & resources' description: Learn all about the SQL Async Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.-+ ms.devlang: java Last updated 11/11/2021-+
cosmos-db Sql Api Sdk Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-bulk-executor-dot-net.md
Title: 'Azure Cosmos DB: Bulk executor .NET API, SDK & resources' description: Learn all about the bulk executor .NET API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB bulk executor .NET SDK.-+ ms.devlang: csharp Last updated 04/06/2021-+ # .NET bulk executor library: Download information (Legacy)
cosmos-db Sql Api Sdk Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-bulk-executor-java.md
Title: 'Azure Cosmos DB: Bulk executor Java API, SDK & resources' description: Learn all about the bulk executor Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB bulk executor Java SDK.-+ ms.devlang: java Last updated 04/06/2021-+
cosmos-db Sql Api Sdk Dotnet Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-dotnet-changefeed.md
Title: Azure Cosmos DB .NET change feed Processor API, SDK release notes description: Learn all about the Change Feed Processor API and SDK including release dates, retirement dates, and changes made between each version of the .NET Change Feed Processor SDK.-+ ms.devlang: csharp Last updated 04/06/2021-+ # .NET Change Feed Processor SDK: Download and release notes (Legacy)
cosmos-db Sql Api Sdk Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-dotnet-core.md
Title: 'Azure Cosmos DB: SQL .NET Core API, SDK & resources' description: Learn all about the SQL .NET Core API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET Core SDK.-+ ms.devlang: csharp Last updated 11/11/2021-+
cosmos-db Sql Api Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-dotnet-standard.md
Title: 'Azure Cosmos DB: SQL .NET Standard API, SDK & resources' description: Learn all about the SQL API and .NET SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET SDK.-+ ms.devlang: csharp Last updated 04/06/2021-+
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-dotnet.md
Title: 'Azure Cosmos DB: SQL .NET API, SDK & resources' description: Learn all about the SQL .NET API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET SDK.-+ ms.devlang: csharp Last updated 11/11/2021-+
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-java-spark-v3.md
Title: 'Azure Cosmos DB Apache Spark 3 OLTP Connector for SQL API (Preview) release notes and resources' description: Learn about the Azure Cosmos DB Apache Spark 3 OLTP Connector for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.-+ ms.devlang: java Last updated 11/12/2021-+
cosmos-db Sql Api Sdk Java Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-java-spark.md
Title: 'Azure Cosmos DB Apache Spark 2 OLTP Connector for SQL API release notes and resources' description: Learn about the Azure Cosmos DB Apache Spark 2 OLTP Connector for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.-+ ms.devlang: java Last updated 04/06/2021-+
cosmos-db Sql Api Sdk Java Spring V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-java-spring-v2.md
Title: 'Spring Data Azure Cosmos DB v2 for SQL API release notes and resources' description: Learn about the Spring Data Azure Cosmos DB v2 for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.-+ ms.devlang: java Last updated 04/06/2021-+
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-java-spring-v3.md
Title: 'Spring Data Azure Cosmos DB v3 for SQL API release notes and resources' description: Learn about the Spring Data Azure Cosmos DB v3 for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.-+ ms.devlang: java Last updated 04/06/2021-+
cosmos-db Sql Api Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-java-v4.md
Title: 'Azure Cosmos DB Java SDK v4 for SQL API release notes and resources' description: Learn all about the Azure Cosmos DB Java SDK v4 for SQL API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.-+ ms.devlang: java Last updated 04/06/2021-+
cosmos-db Sql Api Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-java.md
Title: 'Azure Cosmos DB: SQL Java API, SDK & resources' description: Learn all about the SQL Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.-+ ms.devlang: java Last updated 04/06/2021-+
cosmos-db Sql Api Sdk Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-node.md
Title: 'Azure Cosmos DB: SQL Node.js API, SDK & resources' description: Learn all about the SQL Node.js API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB Node.js SDK.-+ ms.devlang: javascript Last updated 12/09/2021-+
While it was possible to use the v2 SDK in the browser, it was not an ideal expe
Not always the most visible changes, but they help our team ship better code, faster. * Use rollup for production builds (#104)
-* Update to Typescript 3.5 (#327)
+* Update to TypeScript 3.5 (#327)
* Convert to TS project references. Extract test folder (#270) * Enable noUnusedLocals and noUnusedParameters (#275) * Azure Pipelines YAML for CI builds (#298)
cosmos-db Sql Api Spring Data Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-spring-data-sdk-samples.md
Title: 'Azure Cosmos DB SQL API: Spring Data v3 examples' description: Find Spring Data v3 examples on GitHub for common tasks using the Azure Cosmos DB SQL API, including CRUD operations.-+ Last updated 08/26/2021 -+ # Azure Cosmos DB SQL API: Spring Data Azure Cosmos DB v3 examples
cosmos-db Troubleshoot Dot Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-dot-net-sdk.md
Title: Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using .NET SDK.-+ Last updated 03/05/2021-+
cosmos-db Troubleshoot Java Async Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-java-async-sdk.md
Title: Diagnose and troubleshoot Azure Cosmos DB Async Java SDK v2 description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Async Java SDK v2.-+ Last updated 05/11/2020-+ ms.devlang: java
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-java-sdk-v4-sql.md
Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4 description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4.-+ Last updated 06/11/2020-+ ms.devlang: java
cost-management-billing Save Share Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/save-share-views.md
Title: Save and share customized views - Azure Cost Management and Billing
description: This article explains how to save and share a customized view with others. Previously updated : 12/07/2021 Last updated : 01/18/2022
# Save and share customized views
-You can save and share customized views with others by pinning cost analysis to the Azure portal dashboard or by copying a link to cost analysis. You can also download a snapshot of the data or views and manually share it with others.
+Cost analysis is used to explore costs and get quick answers for things like finding the top cost contributors or understanding how you're charged for the services you use. As you analyze cost, you may find specific views you want to save or share with others.
-Watch the video [Sharing and saving views in Cost Management](https://www.youtube.com/watch?v=kQkXXj-SmvQ) to learn more about how to use the portal to share cost knowledge around your organization. To watch other videos, visit the [Cost Management YouTube channel](https://www.youtube.com/c/AzureCostManagement).
+## Save and share cost views
-## Save and share a view
+A *view* is a saved query in Cost Management. When you save a view, all settings in cost analysis are saved, including filters, grouping, granularity, the main chart type, and donut charts. Underlying data isn't saved. Only you can see private views, while everyone with Cost Management Reader access or greater to the scope can see shared views.
-When you save a view, all settings in cost analysis are saved, including filters, grouping, granularity, the main chart type, and donut charts. Underlying data isn't saved.
+Check out the [Sharing and saving views](https://www.youtube.com/watch?v=kQkXXj-SmvQ) video.
-After you save a view, you can share the URL to it with others using the **Share** command. The URL is specific to your current scope. Sharing only shares the view configuration and doesn't grant others access to the underlying data. If you don't have access to the scope, you'll see an `access denied` message.
+After you save a view, you can share a link to it with others using the **Share** command. The link is specific to your current scope and view configuration. The link doesn't grant others access to the view itself, which may change over time, or the underlying data. If you don't have access to the scope, you'll see an `access denied` message. We recommend the Cost Management Contributor role if you want others to be able to save and share views.
-You can also pin the current view to an Azure portal dashboard. The pinned view is a condensed view of the main chart or table and doesn't update when the view is updated. A pinned dashboard isn't the same thing as a saved view.
+You can also pin the current view to an Azure portal dashboard. This only includes a snapshot of the main chart or table and doesn't update when the view is updated. A pinned dashboard isn't the same thing as a saved view.
### To save a view
-1. In Cost analysis, make sure that the settings for your current view are the ones that you want saved.
-2. Under your billing scope or subscription name, select **Save** to update your current view or **Save as** to save a new view.
+1. In cost analysis, make sure the settings you want to save are selected.
+1. Select the **Save** command at the top of the page to update your current view or **Save as** to save a new view.
:::image type="content" source="./media/save-share-views/save-options.png" alt-text="Screen shot showing the view save options." lightbox="./media/save-share-views/save-options.png" ::: 1. Enter a name for the view and then select **Save**. :::image type="content" source="./media/save-share-views/save-box.png" alt-text="Screen shot showing Save box where you enter a name to save." lightbox="./media/save-share-views/save-box.png" :::
-1. After you save a view, it's available to select from the **View** list.
- :::image type="content" source="./media/save-share-views/view-list.png" alt-text="Screen shot showing the View list." lightbox="./media/save-share-views/view-list.png" :::
+1. After you save a view, it's available to select from the **View** menu.
+ :::image type="content" source="./media/save-share-views/view-list.png" alt-text="Screen shot showing the View list." lightbox="./media/save-share-views/view-list.png" :::
### To share a view
-1. In Cost analysis, ensure that the currently selected view is the one that you want to share.
-2. Under your billing scope or subscription name, select **Share**.
-3. In the **Share** box, select **Copy to clipboard** to copy the URL and then select **OK**.
- :::image type="content" source="./media/save-share-views/share.png" alt-text="Screen shot showing the Share box." lightbox="./media/save-share-views/share.png" :::
-1. Paste the URL using any application that you like to send to others.
+1. In cost analysis, ensure that the currently selected view is the one that you want to share.
+1. Select the **Share** command at the top of the page.
+1. In the **Share** box, copy the URL and then select **OK**.
+ :::image type="content" source="./media/save-share-views/share.png" alt-text="Screen shot showing the Share box." lightbox="./media/save-share-views/share.png" :::
+1. Paste the URL into any application you like to send it to others.
-## Pin to dashboard
+If you need to generate a link to a view programmatically, use one of the following formats:
-As mentioned previously, a pinned dashboard is only a saved main chart or table view. It's essentially a thumbnail view of the main chart you can select to get back to the view where the dashboard was originally pinned from.
+- View configuration – `https://<portal-domain>/@<directory-domain>/#blade/Microsoft_Azure_CostManagement/Menu/open/costanalysis/scope/<scope-id>/view/<view-config>`
+- Saved view – `https://<portal-domain>/@<directory-domain>/#blade/Microsoft_Azure_CostManagement/Menu/open/costanalysis/scope/<scope-id>/viewId/<view-id>`
-To pin cost analysis to a dashboard
-1. In Cost analysis, ensure that the currently selected view is the one that you want to pin.
-2. To the right of your billing scope or subscription name, select the **Pin** symbol.
-3. In the Pin to dashboard window, choose **Existing** to pin the current view to the existing dashboard or choose **Create new** to pin the current view to a new dashboard.
- :::image type="content" source="./media/save-share-views/pin-dashboard.png" alt-text="Screen shot showing the Pin to dashboard page." lightbox="./media/save-share-views/pin-dashboard.png" :::
-1. Select **Private** to if you don't want to share the dashboard and then select Pin or select **Shared to share** the dashboard with multiple subscriptions and then select **Pin**.
-1. To view the dashboard after you've pinned it, from the Azure portal menu, select **Dashboard**.
- :::image type="content" source="./media/save-share-views/saved-dashboard.png" alt-text="Screen shot showing the saved Dashboard page." lightbox="./media/save-share-views/saved-dashboard.png" :::
+The following table describes each property in the URL.
-## Download data
+| URL property | Description|
+| --- | --- |
+| **portal-domain** | Primary domain for the Azure portal. For example, `portal.azure.com` or `portal.azure.us`. |
+| **directory-domain** | Domain used by your Azure Active Directory. You can also use the tenant ID. If it is omitted, the portal tries to use the default directory for the user that selected the link - it might differ from the scope. |
+| **scope-id** | Full Resource Manager ID for the resource group, subscription, management group, or billing account you want to view cost for. If not specified, Cost Management uses the last view the user used in the Azure portal. The value must be URL encoded. |
+| **view-config** | Encoded view configuration. See details below. If not specified, cost analysis uses the `view-id` parameter. If neither are specified, cost analysis uses the built-in Accumulated cost view. |
+| **view-id** | Full Resource Manager ID for the private or shared view to load. This value must be URL encoded. If not specified, cost analysis uses the `view` parameter. If neither are specified, cost analysis uses the built-in Accumulated cost view. |
-When you want to share information with others that don't have access to Cost analysis, you can **Download** the current view in PNG, Excel, and CSV formats. Then you can share it with them by email or other means. The downloaded data is a snapshot, so it isn't automatically updated.
+The `view-config` parameter is an encoded version of the JSON view configuration. For more information about the view body, see the [Views API reference](/rest/api/cost-management/views). To learn how to build specific customizations, pin the desired view to an empty Azure portal dashboard, then download the dashboard JSON to review the JSON view configuration.
+
+After you have the desired view configuration:
+
+1. Use Base 64 encode for the JSON view configuration.
+1. Use Gzip to compress the encoded string.
+1. URL encode the compressed string.
+1. Add the final encoded string to the URL after the `/view/` parameter.
+
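As a rough sketch of the steps above, the following PowerShell encodes a hypothetical view configuration and builds the link. The sample JSON, the placeholder subscription scope, and the extra Base 64 step used in step 3 to turn the compressed bytes into URL-safe text are assumptions for illustration only; validate the generated link against one produced by the **Share** command.

```azurepowershell
# Hypothetical view configuration (see the Views API reference for the real schema)
$viewConfigJson = '{"properties":{"displayName":"Example accumulated cost view"}}'

# 1. Base 64 encode the JSON view configuration
$base64Config = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($viewConfigJson))

# 2. Gzip compress the encoded string
$output = [IO.MemoryStream]::new()
$gzip = [IO.Compression.GZipStream]::new($output, [IO.Compression.CompressionMode]::Compress)
$bytes = [Text.Encoding]::UTF8.GetBytes($base64Config)
$gzip.Write($bytes, 0, $bytes.Length)
$gzip.Dispose()

# 3. URL encode the compressed string (re-encoded as Base 64 here so it is URL-safe text; an assumption)
$viewConfig = [Uri]::EscapeDataString([Convert]::ToBase64String($output.ToArray()))

# 4. Add the final encoded string to the URL after the /view/ parameter
#    (the directory domain is omitted, so the portal uses the user's default directory)
$scopeId = [Uri]::EscapeDataString('/subscriptions/00000000-0000-0000-0000-000000000000')  # placeholder scope
"https://portal.azure.com/#blade/Microsoft_Azure_CostManagement/Menu/open/costanalysis/scope/$scopeId/view/$viewConfig"
```

The same approach works for the saved-view format; substitute `/viewId/<view-id>` for the `/view/` segment.
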
+## Pin a view to the Azure portal dashboard
+
+As mentioned previously, pinning a view to an Azure portal dashboard only saves the main chart or table. It's essentially a thumbnail you can select to get back to the view configuration in cost analysis. Keep in mind the dashboard tile is a copy of your view configuration; if you save a view that was previously pinned, the pinned tile doesn't update. To update the tile, pin the saved view again.
+
+### To pin cost analysis to a dashboard
+
+1. In cost analysis, ensure that the currently selected view is the one that you want to pin.
+1. To the right of your billing scope or subscription name, select the **Pin** symbol.
+1. In the Pin to dashboard window, choose **Existing** to pin the current view to the existing dashboard or choose **Create new** to pin the current view to a new dashboard.
+ :::image type="content" source="./media/save-share-views/pin-dashboard.png" alt-text="Screen shot showing the Pin to dashboard page." lightbox="./media/save-share-views/pin-dashboard.png" :::
+1. Select **Private** if you don't want to share the dashboard and then select **Pin**, or select **Shared** to share the dashboard with others and then select **Pin**.
+
+To view the dashboard after you've pinned it, from the Azure portal menu, select **Dashboard**.
++
+### To rename a tile
+
+1. From the dashboard where your tile is pinned, select the title of the tile you want to rename. This action opens cost analysis with that view.
+1. Select the **Save** command at the top of the page.
+1. Enter the name of the tile you want to use.
+1. Select **Save**.
+1. Select the **Pin** symbol to the right of the page header.
+1. From the dashboard, you can now remove the original tile.
+
+For more advanced dashboard customizations, you can also export the dashboard, customize the dashboard JSON, and upload a new dashboard. This can include additional tile sizes or names without saving new views. For more information, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md).
+
+## Download data or charts
+
+When you want to share information with others that don't have access to the scope, you can download the view in PNG, Excel, and CSV formats. Then you can share it with them by email or other means. The downloaded data is a snapshot, so it isn't automatically updated.
:::image type="content" source="./media/save-share-views/download.png" alt-text="Screen shot showing the Download page." lightbox="./media/save-share-views/download.png" :::
+When downloading data, cost analysis includes summarized data as it's shown in the table. The cost by resource view includes all resource meters in addition to the resource details. If you want a download of only resources and not the nested meters, use the cost analysis preview. You can access the preview from the **Cost by resource** menu at the top of the page, where you can select the Resources, Resource groups, Subscriptions, Services, or Reservations view.
+
+If you need more advanced summaries or you're interested in raw data that hasn't been summarized, schedule an export to publish raw data to a storage account on a recurring basis.
+
+## Subscribe to cost alerts
+
+In addition to saving and opening views repeatedly or sharing them with others manually, you can also subscribe to updates or a recurring schedule to get alerted as costs change. You can also set up alerts to be shared with others who may not have direct access to costs in the portal.
+
+### To subscribe to cost alerts
+
+1. In cost analysis, select a private or shared view you want to subscribe to alerts for or create and save a new chart view.
+1. Select **Subscribe** at the top of the page.
+1. Select **+ Add** at the top of the list of alerts.
+1. Specify the desired email settings and select **Save**.
+ - The **Name** helps you distinguish the different emails set up for the current view. Use it to indicate the audience or purpose of this specific email.
+ - The **Subject** is what people will see when they receive the email.
+ - You can include up to 20 recipients. Consider using a distribution list if you have a large audience. To see how the email looks, start by sending it only to yourself. You can update it later.
+ - The **Message** is shown in the email to give people some additional context about why they're receiving the email. You may want to include what it covers, who requested it, or who to contact to make changes.
+ - If you want to include an unauthenticated link to the data (for people who don't have access to the scope/view), select **CSV** in the **Include link to data** list.
+ - If you want to allow people who have write access to the scope to change the email configuration settings, check the **Allow contributors to change these settings** option. For example, you might want to allow billing account admins or Cost Management Contributors. By default it is unselected and only you can see or edit the scheduled email.
+ - The **Start date** is when you'll start receiving the email. It defaults to the current day.
+ - The **End date** is when you'll receive the last email. It can be up to one year from the current day, which is the default. You can update this later.
+ - The **Frequency** indicates how often you want the email to be sent. It's based on the start date, so if you want a weekly email on a different day of the week, change the start date first. To get an email after the month is closed, select **After invoice finalized**. Ensure your view is looking at last month. If you use the current month, it will only include the first few days of the month. By default, all emails are sent at 8:00 AM local time. To customize any of the options, select **Custom**.
+1. After saving your alert, you'll see a list of configured alerts for the current view. If you want to see a preview of the email, select the row and select **Send now** at the top to send the email to all recipients.
+
+Keep in mind that if you choose to include a link to data, anyone who receives the email will have access to the data included in that email. Data expires after seven days.
+ ## Next steps - For more information about creating dashboards, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md).
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/pay-bill.md
On 1 October 2021, automatic payments in India may block some credit card transa
[Learn more about the Reserve Bank of India regulation for recurring payments](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=11668&Mode=0)
-On 1 January 2022, Microsoft and other online merchants will no longer be storing credit card information. To comply with this regulation Microsoft will be removing all stored card details from Microsoft Azure. To avoid service interruption, you will need to add a payment method and make a one-time payment for all invoices.
-
-[Learn about the Reserve Bank of India regulation for card storage](https://rbi.org.in/scripts/NotificationUser.aspx?Mode=0&Id=12159)
- ## Pay by default payment method The default payment method of your billing profile can either be a credit or debit card, or a check or wire transfer.
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
You can refer to the troubleshooting pages for each connector to see problems sp
- [Azure Blob Storage](connector-troubleshoot-azure-blob-storage.md) - [Azure Cosmos DB (including SQL API connector)](connector-troubleshoot-azure-cosmos-db.md) - [Azure Data Lake (Gen1 and Gen2)](connector-troubleshoot-azure-data-lake.md)-- [Azure database for PostgreSQL](connector-troubleshoot-postgresql.md)
+- [Azure Database for PostgreSQL](connector-troubleshoot-postgresql.md)
- [Azure Files storage](connector-troubleshoot-azure-files.md) - [Azure Synapse Analytics, Azure SQL Database, and SQL Server](connector-troubleshoot-synapse-sql.md) - [DB2](connector-troubleshoot-db2.md)
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/whats-new.md
The Azure Data Factory service is improved on an ongoing basis. To stay up to da
This page is updated monthly, so revisit it regularly.
+## December 2021
+<br>
+<table>
+<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
++
+<tr><td rowspan=9><b>Data Flow</b></td><td>Dynamics connector as native source and sink for mapping data flows</td><td>The Dynamics connector is now supported as both a source and sink for mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
+<tr><td>Native change data capture (CDC) now natively supported</td><td>CDC is now natively supported in Azure Data Factory for CosmosDB, Blob Store, Azure Data Lake Storage Gen1 and Gen2, and CRM.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/cosmosdb-change-feed-is-supported-in-adf-now/ba-p/3037011">Learn more</a></td></tr>
+<tr><td>Flowlets public preview</td><td>The flowlets public preview allows data flow developers to build reusable components to easily build composable data transformation logic.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-flowlets-preview-for-adf-and-synapse/ba-p/3030699">Learn more</a></td></tr>
+
+<tr><td>Map Data public preview</td><td>The Map Data preview enables business users to define column mapping and transformations to load Synapse Lake Databases.<br><a href="../synapse-analytics/database-designer/overview-map-data.md">Learn more</a></td></tr>
+
+<tr><td>Multiple output destinations from Power Query</td><td>You can now map multiple output destinations from Power Query in Azure Data Factory for flexible ETL patterns for citizen data integrators.<br><a href="control-flow-power-query-activity.md#sink">Learn more</a></td></tr>
+
+<tr><td>External Call transformation support</td><td>Extend the functionality of Mapping Data Flows by using the External Call transformation. You can now add your own custom code as a REST endpoint or call a curated third party service row-by-row.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
+
+<tr><td>Enable quick re-use by Synapse Mapping Data Flows with TTL support</td><td>Synapse Mapping Data Flows now support quick re-use by setting a TTL in the Azure Integration Runtime. This will enable your subsequent data flow activities to execute in under 5 seconds.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
+
+<tr><td>Assert transformation</td><td>Easily add data quality, data domain validation, and metadata checks to your Azure Data Factory pipelines by using the Assert transformation in Mapping Data Flows.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
+
+<tr><td>IntelliSense support in expression builder for more productive pipeline authoring experiences</td><td>We have introduced IntelliSense support in expression builder / dynamic content authoring to make Azure Data Factory / Synapse pipeline developers more productive while writing complex expressions in their data pipelines.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459">Learn more</a></td></tr>
+
+</table>
+ ## November 2021 <br> <table>
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 01/12/2022 Last updated : 01/18/2022 # What's new in Microsoft Defender for Cloud?
Updates in January include:
- [Communication with suspicious domain alert expanded to included known Log4Shell-related domains](#communication-with-suspicious-domain-alert-expanded-to-included-known-log4shell-related-domains) - ['Copy alert JSON' button added to security alert details pane](#copy-alert-json-button-added-to-security-alert-details-pane) - [Renamed two recommendations](#renamed-two-recommendations)-
+- [Deprecate Kubernetes cluster containers should only listen on allowed ports policy](#deprecate-kubernetes-cluster-containers-should-only-listen-on-allowed-ports-policy)
### Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix
For consistency with other recommendation names, we've renamed the following two
- Previous name: Diagnostic logs should be enabled in App Service - New name: Diagnostic logs in App Service should be enabled
+### Deprecate Kubernetes cluster containers should only listen on allowed ports policy
+
+We have deprecated the **Kubernetes cluster containers should only listen on allowed ports** recommendation.
+
+| Policy name | Description | Effect(s) | Version |
+|--|--|--|--|
+| [Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) | Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). | audit, deny, disabled | [6.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedPorts.json) |
+
+The **[Services should listen on allowed ports only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/add45209-73f6-4fa5-a5a5-74a451b07fbe)** recommendation should be used to limit ports that an application exposes to the internet.
## December 2021
education-hub Create Lab Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/create-lab-education-hub.md
+
+ Title: Create a lab in Azure Education Hub through REST APIs
+description: Learn how to set up a lab in education hub
++++ Last updated : 12/21/2021+++
+
+# Create a lab in Azure Education Hub through REST APIs
+
+This article will walk you through how to create a lab, add students to that lab and verify that the lab has been created.
+
+## Prerequisites
+
+- Know your billing account ID, billing profile ID, and invoice section ID
+- Have an Edu-approved Azure account
+
+## Create a lab
+
+```json
+PUT https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default?api-version=2021-12-01-preview
+```
+
+Call the above API with a body similar to the one below. Include your own details, such as the display name and the budget you will allocate for this lab.
+
+```json
+{
+ "properties": {
+ "displayName": "string",
+ "budgetPerStudent": {
+ "currency": "string",
+ "value": 0
+ },
+ "description": "string",
+ "expirationDate": "2021-12-21T22:56:17.314Z",
+ "totalBudget": {
+ "currency": "string",
+ "value": 0
+ },
+ "totalAllocatedBudget": {
+ "currency": "string",
+ "value": 0
+ }
+ }
+}
+```
+
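If you prefer to script this call rather than use a raw HTTP client, the following is a minimal Azure PowerShell sketch using `Invoke-AzRestMethod` (after signing in with `Connect-AzAccount`). The IDs, display name, dates, and budget values are placeholders, and the body is a subset of the schema shown above; the same pattern applies to the other PUT and GET calls in this article.

```azurepowershell
# Placeholder IDs - replace with the values described in the prerequisites
$billingAccountId = "<BillingAccountID>"
$billingProfileId = "<BillingProfileID>"
$invoiceSectionId = "<InvoiceSectionID>"

$labPath = "/providers/Microsoft.Billing/billingAccounts/$billingAccountId" +
           "/billingProfiles/$billingProfileId/invoiceSections/$invoiceSectionId" +
           "/providers/Microsoft.Education/labs/default?api-version=2021-12-01-preview"

# A subset of the request body shown above, with example values
$labBody = @{
    properties = @{
        displayName      = "Cloud 101 lab"
        description      = "Lab for the spring term"
        expirationDate   = "2022-06-30T00:00:00Z"
        budgetPerStudent = @{ currency = "USD"; value = 100 }
        totalBudget      = @{ currency = "USD"; value = 3000 }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Path $labPath -Payload $labBody
```
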
+The API response returns details of the newly created lab. Congratulations, you have created a lab in education hub.
+
+```json
+{
+ "id": "string",
+ "name": "string",
+ "type": "string",
+ "systemData": {
+ "createdBy": "string",
+ "createdByType": "User",
+ "createdAt": "2021-12-21T22:56:17.338Z",
+ "lastModifiedBy": "string",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2021-12-21T22:56:17.338Z"
+ },
+ "properties": {
+ "displayName": "string",
+ "budgetPerStudent": {
+ "currency": "string",
+ "value": 0
+ },
+ "description": "string",
+ "expirationDate": "2021-12-21T22:56:17.339Z",
+ "effectiveDate": "2021-12-21T22:56:17.339Z",
+ "status": "Active",
+ "maxStudentCount": 0,
+ "invitationCode": "string",
+ "totalBudget": {
+ "currency": "string",
+ "value": 0
+ },
+ "totalAllocatedBudget": {
+ "currency": "string",
+ "value": 0
+ }
+ }
+}
+```
+
+## Add students to the lab
+
+Now that the lab has been successfully created, you can begin to add students to the lab.
+
+Call the endpoint below and make sure to replace the sections that are surrounded by <>.
+
+```json
+PUT https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students/<StudentID>?api-version=2021-12-01-preview
+```
+
+Call the above API with a body similar to the one below. Change the body to include details of the student you want to add to the lab.
+
+```json
+{
+ "properties": {
+ "firstName": "string",
+ "lastName": "string",
+ "email": "string",
+ "role": "Student",
+ "budget": {
+ "currency": "string",
+ "value": 0
+ },
+ "expirationDate": "2021-12-21T23:01:41.943Z",
+ "subscriptionAlias": "string",
+ "subscriptionInviteLastSentDate": "string"
+ }
+}
+```
+
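Continuing the hypothetical PowerShell sketch from the create-lab step, the same `Invoke-AzRestMethod` pattern adds a student. The student ID is assumed here to be a new GUID that you supply as `<StudentID>`, and the name, email, dates, and budget are example values.

```azurepowershell
$studentId = [guid]::NewGuid()   # assumed: you supply a new GUID as <StudentID>
$studentPath = "/providers/Microsoft.Billing/billingAccounts/$billingAccountId" +
               "/billingProfiles/$billingProfileId/invoiceSections/$invoiceSectionId" +
               "/providers/Microsoft.Education/labs/default/students/$studentId" +
               "?api-version=2021-12-01-preview"

$studentBody = @{
    properties = @{
        firstName      = "Ada"
        lastName       = "Lovelace"
        email          = "ada@contoso.edu"
        role           = "Student"
        budget         = @{ currency = "USD"; value = 100 }
        expirationDate = "2022-06-30T00:00:00Z"
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Path $studentPath -Payload $studentBody
```
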
+The API response returns details of the newly added student.
+
+```json
+{
+ "id": "string",
+ "name": "string",
+ "type": "string",
+ "systemData": {
+ "createdBy": "string",
+ "createdByType": "User",
+ "createdAt": "2021-12-21T23:02:20.163Z",
+ "lastModifiedBy": "string",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2021-12-21T23:02:20.163Z"
+ },
+ "properties": {
+ "firstName": "string",
+ "lastName": "string",
+ "email": "string",
+ "role": "Student",
+ "budget": {
+ "currency": "string",
+ "value": 0
+ },
+ "subscriptionId": "string",
+ "expirationDate": "2021-12-21T23:02:20.163Z",
+ "status": "Active",
+ "effectiveDate": "2021-12-21T23:02:20.163Z",
+ "subscriptionAlias": "string",
+ "subscriptionInviteLastSentDate": "string"
+ }
+}
+```
+
+## Check the details of a lab
+
+Now that the lab has been created and a student has been added to the lab, let's get the details for the lab. Getting the lab details will provide you with metadata like when the lab was created and how much budget it has. It will not include information about students in the lab.
+
+```json
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default?includeBudget=true&api-version=2021-12-01-preview
+```
+
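Using the same placeholder variables as the earlier sketches, a quick PowerShell version of this GET call looks like the following; `includeBudget=true` is passed on the query string and the JSON response is parsed with `ConvertFrom-Json`.

```azurepowershell
# Reuses the placeholder IDs from the create-lab sketch
$labDetailsPath = "/providers/Microsoft.Billing/billingAccounts/$billingAccountId" +
                  "/billingProfiles/$billingProfileId/invoiceSections/$invoiceSectionId" +
                  "/providers/Microsoft.Education/labs/default" +
                  "?includeBudget=true&api-version=2021-12-01-preview"

(Invoke-AzRestMethod -Method GET -Path $labDetailsPath).Content | ConvertFrom-Json
```
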
+The API response will include information about the lab and budget information (if the `includeBudget` flag is set to true).
+
+```json
+{
+ "id": "string",
+ "name": "string",
+ "type": "string",
+ "systemData": {
+ "createdBy": "string",
+ "createdByType": "User",
+ "createdAt": "2021-12-21T23:10:10.867Z",
+ "lastModifiedBy": "string",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2021-12-21T23:10:10.867Z"
+ },
+ "properties": {
+ "displayName": "string",
+ "budgetPerStudent": {
+ "currency": "string",
+ "value": 0
+ },
+ "description": "string",
+ "expirationDate": "2021-12-21T23:10:10.867Z",
+ "effectiveDate": "2021-12-21T23:10:10.867Z",
+ "status": "Active",
+ "maxStudentCount": 0,
+ "invitationCode": "string",
+ "totalBudget": {
+ "currency": "string",
+ "value": 0
+ },
+ "totalAllocatedBudget": {
+ "currency": "string",
+ "value": 0
+ }
+ }
+}
+```
+
+## Check the details of the students in a lab
+
+Calling this API allows you to see all of the students in the specified lab.
+
+```json
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students?includeDeleted=true&api-version=2021-12-01-preview
+```
+
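Listing students follows the same hypothetical PowerShell pattern, with `includeDeleted=true` on the query string.

```azurepowershell
$studentsPath = "/providers/Microsoft.Billing/billingAccounts/$billingAccountId" +
                "/billingProfiles/$billingProfileId/invoiceSections/$invoiceSectionId" +
                "/providers/Microsoft.Education/labs/default/students" +
                "?includeDeleted=true&api-version=2021-12-01-preview"

(Invoke-AzRestMethod -Method GET -Path $studentsPath).Content | ConvertFrom-Json
```
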
+The API response will include information about the students in the lab and will even show students that have been deleted from the lab (if the `includeDeleted` flag is set to true).
+
+```json
+{
+ "value": [
+ {
+ "id": "string",
+ "name": "string",
+ "type": "string",
+ "systemData": {
+ "createdBy": "string",
+ "createdByType": "User",
+ "createdAt": "2021-12-21T23:15:45.430Z",
+ "lastModifiedBy": "string",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2021-12-21T23:15:45.430Z"
+ },
+ "properties": {
+ "firstName": "string",
+ "lastName": "string",
+ "email": "string",
+ "role": "Student",
+ "budget": {
+ "currency": "string",
+ "value": 0
+ },
+ "subscriptionId": "string",
+ "expirationDate": "2021-12-21T23:15:45.430Z",
+ "status": "Active",
+ "effectiveDate": "2021-12-21T23:15:45.430Z",
+ "subscriptionAlias": "string",
+ "subscriptionInviteLastSentDate": "string"
+ }
+ }
+ ],
+ "nextLink": "string"
+}
+```
+
+## Next steps
+- [Manage your Academic Grant using the Overview page](hub-overview-page.md)
+
+- [Support options](educator-service-desk.md)
education-hub Find Ids https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/find-ids.md
You must have an Azure account linked with education hub.
While in the Azure portal, search for Cost Management + Billing and click on the service from the dropdown menu. ## Get Billing account ID
This section will show you how to get your Billing Profile ID.
1. Click on "Billing Profiles" tab under the Billing section 2. Click on the desired billing profile 3. Click on the "Properties" tab under the Settings section 4. This page will display your billing profile ID at the top of the page 5. Copy this and save it for later. You can also see your Billing Account ID at the bottom of the page. ## Get Invoice section ID
This section will show you how to get your Invoice Section ID.
1. Click on "Invoice sections" tab under the Billing tab. Note you must be in Billing Profile to see Invoice Sections 2. Click on the desired Invoice Section
+
+ 3. Click on the "Properties" tab under the Settings section 4. This page will display your invoice section ID at the top of the page 5. Copy this and save it for later. You can also see your Billing Account ID at the bottom of the page. ## Next steps
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/overview.md
Previously updated : 01/04/2022 Last updated : 01/18/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall has the following known issues:
| Error encountered when creating more than 2000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource Manager limit). | This is a current limitation. | |Unable to see Network Rule Name in Azure Firewall Logs|Azure Firewall network rule log data does not show the Rule name for network traffic.|A feature is being investigated to support this.| |XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.|
-| Firewall logs (Resource specific tables - Preview) | Resource specific log queries are in preview mode and aren't currently supported. | A fix is being investigated. |
+| Firewall logs (Resource specific tables - Preview) | Resource specific log queries are in preview mode and aren't currently supported. | A fix is being investigated.|
+|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
## Next steps
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-caching.md
Front Door caches assets until the asset's time-to-live (TTL) expires. Whenever
The best practice to make sure your users always obtain the latest copy of your assets is to version your assets for each update and publish them as new URLs. Front Door will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve new updated assets. The reason could be because of updates to your web application, or to quickly update assets that contain incorrect information. + Select the assets you want to purge from the edge nodes. To clear all assets, select **Purge all**. Otherwise, in **Path**, enter the path of each asset you want to purge. These formats are supported in the lists of paths to purge:
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-custom-domain-https.md
You can use your own certificate to enable the HTTPS feature. This process is do
#### Register Azure Front Door
-Register the service principal for Azure Front Door as an app in your Azure Active Directory via PowerShell.
+Register the service principal for Azure Front Door as an app in your Azure Active Directory using Azure PowerShell or Azure CLI.
> [!NOTE] > This action requires Global Administrator permissions, and needs to be performed only **once** per tenant.
+##### Azure PowerShell
+ 1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) in PowerShell on your local machine. 2. In PowerShell, run the following command:
- `New-AzADServicePrincipal -ApplicationId "ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037" -Role Contributor`
+ ```azurepowershell-interactive
+ New-AzADServicePrincipal -ApplicationId "ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037" -Role Contributor
+ ```
+
+##### Azure CLI
+
+1. If needed, install [Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+
+2. In Azure CLI, run the following command:
+
+ ```azurecli-interactive
+ az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 --role Contributor
+ ```
#### Grant Azure Front Door access to your key vault
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/definition-structure.md
The following properties are used with **field count**:
**count.where** condition expression. A numeric [condition](../concepts/definition-structure.md#conditions) should be used.
-**Field count** expressions can enumerate the same field array up to three times in a single
-**policyRule** definition.
- For more details on how to work with array properties in Azure Policy, including detailed explanation on how the **field count** expression is evaluated, see [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
The following properties are used with **value count**:
`count.where` condition expression. A numeric [condition](../concepts/definition-structure.md#conditions) should be used.
-The following limits are enforced:
-- Up to 10 **value count** expressions can be used in a single **policyRule** definition.-- Each **value count** expression can perform up to 100 iterations. This number includes the number
- of iterations performed by any parent **value count** expressions.
- #### The current function The `current()` function is only available inside the `count.where` condition. It returns the value
resource name to start with the resource group name.
} ```
+### Policy rule limits
+
+#### Limits enforced during authoring
+
+Limits to the structure of policy rules are enforced during the authoring or assignment of a policy.
+Attempts to create or assign policy definitions that exceed these limits will fail.
+
+| Limit | Value | Additional details |
+|:|:|:|
+| Condition expressions in the **if** condition | 4096 | |
+| Condition expressions in the **then** block | 128 | Applies to the **existenceCondition** of **AuditIfNotExists** and **DeployIfNotExists** policies |
+| Policy functions per policy rule | 2048 | |
+| Policy function number of parameters | 128 | Example: `[function('parameter1', 'parameter2', ...)]` |
+| Nested policy functions depth | 64 | Example: `[function(nested1(nested2(...)))]` |
+| Policy functions expression string length | 81920 | Example: the length of `"[function(....)]"` |
+| **Field count** expressions per array | 5 | |
+| **Value count** expressions per policy rule | 10 | |
+| **Value count** expression iteration count | 100 | For nested **Value count** expressions, this also includes the iteration count of the parent expression |
+
+#### Limits enforced during evaluation
+
+The following limits apply to the size of objects that are processed by policy functions during policy evaluation. These limits can't always be enforced during authoring because they depend on the evaluated content. For example:
+
+```json
+{
+ "field": "name",
+ "equals": "[concat(field('stringPropertyA'), field('stringPropertyB'))]"
+}
+```
+
+The length of the string created by the `concat()` function depends on the value of properties in the evaluated resource.
+
+| Limit | Value | Example |
+|:---|:---|:---|
+| Length of string returned by a function | 131072 | `[concat(field('longString1'), field('longString2'))]`|
+| Depth of complex objects provided as a parameter to, or returned by a function | 128 | `[union(field('largeObject1'), field('largeObject2'))]` |
+| Number of nodes of complex objects provided as a parameter to, or returned by a function | 32768 | `[concat(field('largeArray1'), field('largeArray2'))]` |
+
+> [!WARNING]
+> Policies that exceed these limits during evaluation effectively become **deny** policies and can block incoming requests.
+> When writing policies with complex functions, be mindful of these limits and test your policies against resources that have the potential to exceed them.
+ ## Aliases You use property aliases to access specific properties for a resource type. Aliases enable you to
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/remediate-resources.md
To create a **remediation task**, follow these steps:
1. On the **New remediation task** page, optional remediation settings are shown: - **Failure Threshold percentage** - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 to 100. By default, the failure threshold is 100%.
- - **Resource Count** - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number of is 10,000 resources.
- - **Parallel Deployments** - Determines how many resources to remediate at the same time. The allowed values are 1 to 15 resources at a time. The default value is 10.
+ - **Resource Count** - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum is 50,000 resources.
+ - **Parallel Deployments** - Determines how many resources to remediate at the same time. The allowed values are 1 to 30 resources at a time. The default value is 10.
> [!NOTE] > These settings cannot be changed once the remediation task has started.
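For reference, these portal settings map to properties on the remediation resource itself. The following is a rough sketch of a request body; the property names and the 0-to-1 failure-threshold scale are assumptions based on the Policy Insights REST API and should be verified against the current API reference:

```json
{
  "properties": {
    "policyAssignmentId": "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/<assignment-name>",
    "resourceCount": 5000,
    "parallelDeployments": 30,
    "failureThreshold": { "percentage": 1.0 }
  }
}
```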
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/use-postman.md
Previously updated : 08/25/2021- Last updated : 01/18/2022+ # Access using Postman
You can also import and export Postman collections. For more information, see [t
## Create or update environment variables
-While you can use the full url in the request, it is recommended that you store the url and other data in variables and use them.
+While you can use the full URL in the request, it is recommended that you store the URL and other data in variables and use them.
To access the FHIR service, we'll need to create or update the following variables.
Open Postman, select the **workspace**, **collection**, and **environment** you
## Get capability statement
-Enter `{{fhirurl}}/metadata` in the `GET`request, and hit `Send`. You should see the capability statement of the FHIR service.
+Enter `{{fhirurl}}/metadata` in the `GET` request, and select `Send`. You should see the capability statement of the FHIR service.
[ ![Screenshot of capability statement parameters.](media/postman/postman-capability-statement.png) ](media/postman/postman-capability-statement.png#lightbox)
Create a new `POST` request:
- **client_secret**: `{{clientsecret}}` - **resource**: `{{fhirurl}}`
-3. Select the **Test** tab and enter in the text section: `pm.environment.set("bearerToken", pm.response.json().access_token);`
+3. Select the **Tests** tab and enter the following in the text section: `pm.environment.set("bearerToken", pm.response.json().access_token);`. To make the value available to the collection, use the `pm.collectionVariables.set` method instead. For more information on the `set` method and its scope level, see [Using variables in scripts](https://learning.postman.com/docs/sending-requests/variables/#defining-variables-in-scripts).
4. Select **Save** to save the settings.
-5. Hit **Send**. You should see a response with the Azure AD access token, which is saved to the variable `bearerToken` automatically. You can then use it in all FHIR service API requests.
+5. Select **Send**. You should see a response with the Azure AD access token, which is saved to the variable `bearerToken` automatically. You can then use it in all FHIR service API requests.
[ ![Screenshot of send button.](media/postman/postman-send-button.png) ](media/postman/postman-send-button.png#lightbox)
Select **Bearer Token** as authorization type. Enter `{{bearerToken}}` in the *
- **Accept**: `application/fhir+json` - **Prefer**: `respond-async`
-Hit **Send**. You should notice a `202 Accepted` response. Select the **Headers** tab of the response and make a note of the value in the **Content-Location**. You can use the value to query the export job status.
+Select **Send**. You should notice a `202 Accepted` response. Select the **Headers** tab of the response and make a note of the value in the **Content-Location**. You can use the value to query the export job status.
[ ![Screenshot of post to create a new patient 202 accepted response.](media/postman/postman-202-accepted-response.png) ](media/postman/postman-202-accepted-response.png#lightbox)
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
sudo apt-get remove iotedge
::: moniker range=">=iotedge-2020-11" ```bash
-sudo apt-get remove aziot-edge
+sudo apt-get remove --purge aziot-edge
``` ::: moniker-end
key-vault Certificate Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/certificate-scenarios.md
When you are importing the certificate, you need to ensure that the key is inclu
>Ensure that no other meta data is present in the certificate file and that the private key not showing as encrypted. ### Formats of Merge CSR we support
-AKV supports 2 PEM based formats. You can either merge a single PKCS#8 encoded certificate or a base64 encoded P7B (chain of certificates signed by CA)
+AKV supports 2 PEM based formats. You can either merge a single PKCS#8 encoded certificate or a base64 encoded P7B (chain of certificates signed by CA).
+If you need to convert the P7B file to a supported format, you can use [certutil -encode](https://docs.microsoft.com/windows-server/administration/windows-commands/certutil#-encode), as shown in the example below.
--BEGIN CERTIFICATE-- --END CERTIFICATE--
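For example, converting a P7B file to its Base64-encoded form with certutil might look like the following; the file names are placeholders:

```console
certutil -encode cert_chain.p7b cert_chain_base64.p7b
```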
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-parameterize-load-tests.md
If you're using Azure Load Testing in your CI/CD workflow, you can also use the
[ { "name": "appToken",
- "value": "${{ secrets.MY_SECRET }}",
+ "value": "${{ secrets.MY_SECRET }}"
} ] ```
If you're using Azure Load Testing in your CI/CD workflow, you can also use the
[ { "name": "appToken",
- "value": "$(mySecret)",
+ "value": "$(mySecret)"
} ] ```
The following YAML snippet shows a GitHub Actions example:
[ { "name": "webapp",
- "value": "myapplication.contoso.com",
+ "value": "myapplication.contoso.com"
} ] ```
The following YAML snippet shows an Azure Pipelines example:
[ { "name": "webapp",
- "value": "myapplication.contoso.com",
+ "value": "myapplication.contoso.com"
} ] ```
The values of the parameters aren't stored when they're passed from the CI/CD wo
- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md). -- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
machine-learning How To Compute Cluster Instance Os Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-compute-cluster-instance-os-upgrade.md
Title: Upgrade host OS for compute cluster and instance
description: Upgrade the host OS for compute cluster and compute instance from Ubuntu 16.04 LTS to 18.04 LTS. --++
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
Previously updated : 10/29/2021 Last updated : 01/18/2022
Access to a given Azure Machine Learning workspace via Private Link is done by c
**Azure China 21Vianet regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.cn``` - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.ml.azure.cn```-- ```<compute instance name>.<region the workspace was created in>.instances.ml.azure.cn```
+- ```<compute instance name>.<region the workspace was created in>.instances.azureml.cn```
- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.chinacloudapi.cn``` **Azure US Government regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us``` - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.ml.azure.us```-- ```<compute instance name>.<region the workspace was created in>.instances.ml.azure.us```
+- ```<compute instance name>.<region the workspace was created in>.instances.azureml.us```
- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.usgovcloudapi.net``` The Fully Qualified Domains resolve to the following Canonical Names (CNAMEs) called the workspace Private Link FQDNs:
The following FQDNs are for Azure China regions:
> [!NOTE] > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` at 63 characters or less.
-* `<instance-name>.<region>.instances.ml.azure.cn`
+* `<instance-name>.<region>.instances.azureml.cn`
* The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
The following FQDNs are for Azure US Government regions:
> [!NOTE] > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` at 63 characters or less.
-* `<instance-name>.<region>.instances.ml.azure.us`
+* `<instance-name>.<region>.instances.azureml.us`
> * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.) ### Find the IP addresses
The following steps describe how this topology works:
**Azure Public regions**: - ```api.azureml.ms``` - ```notebooks.azure.net```
- - ```instances.ml.azure.ms```
+ - ```instances.azureml.ms```
**Azure China regions**: - ```api.ml.azure.cn``` - ```notebooks.chinacloudapi.cn```
- - ```instances.ml.azure.cn```
+ - ```instances.azureml.cn```
**Azure US Government regions**: - ```api.ml.azure.us``` - ```notebooks.usgovcloudapi.net```
- - ```instances.ml.azure.us```
+ - ```instances.azureml.us```
> [!IMPORTANT] > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
The following steps describe how this topology works:
**Azure Public regions**: - ```api.azureml.ms``` - ```notebooks.azure.net```
- - ```instances.ml.azure.us```
+ - ```instances.azureml.ms```
**Azure China regions**: - ```api.ml.azure.cn``` - ```notebooks.chinacloudapi.cn```
- - ```instances.ml.azure.cn```
+ - ```instances.azureml.cn```
**Azure US Government regions**: - ```api.ml.azure.us``` - ```notebooks.usgovcloudapi.net```
- - ```instances.ml.azure.us```
+ - ```instances.azureml.us```
> [!IMPORTANT] > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
The following steps describe how this topology works:
**Azure Public regions**: - ```api.azureml.ms``` - ```notebooks.azure.net```
- - ```instances.ml.azure.us```
+ - ```instances.azureml.ms```
**Azure China regions**: - ```api.ml.azure.cn``` - ```notebooks.chinacloudapi.cn```
- - ```instances.ml.azure.cn```
+ - ```instances.azureml.cn```
**Azure US Government regions**: - ```api.ml.azure.us``` - ```notebooks.usgovcloudapi.net```
- - ```instances.ml.azure.us```
+ - ```instances.azureml.us```
> [!IMPORTANT] > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-automated-ml-forecast.md
# Customer intent: As a non-coding data scientist, I want to use automated machine learning to build a demand forecasting model.
-# Tutorial: Forecast demand with automated machine learning
+# Tutorial: Forecast demand with no-code automated machine learning in the Azure Machine Learning studio
Learn how to create a [time-series forecasting model](concept-automated-ml.md#time-series-forecasting) without writing a single line of code using automated machine learning in the Azure Machine Learning studio. This model will predict rental demand for a bike sharing service.
Before you configure your experiment, upload your data file to your workspace in
1. On the **Datastore and file selection** form, select the default datastore that was automatically set up during your workspace creation, **workspaceblobstore (Azure Blob Storage)**. This is the storage location where you'll upload your data file.
- 1. Select **Browse**.
+ 1. Select **Upload files** from the **Upload** drop-down.
1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv).
After you load and configure your data, set up your remote compute target and se
Field | Description | Value for tutorial -||
- Virtual&nbsp;machine&nbsp;priority |Select what priority your experiment should have| Dedicated
+ Virtual&nbsp;machine&nbsp;tier |Select what priority your experiment should have| Dedicated
Virtual&nbsp;machine&nbsp;type| Select the virtual machine type for your compute.|CPU (Central Processing Unit) Virtual&nbsp;machine&nbsp;size| Select the virtual machine size for your compute. A list of recommended sizes is provided based on your data and experiment type. |Standard_DS12_V2
Complete the setup for your automated ML experiment by specifying the machine le
1. Select **date** as your **Time column** and leave **Time series identifiers** blank.
-1. The **forecast horizon** is the length of time into the future you want to predict. Deselect Autodetect and type 14 in the field.
+1. The **Frequency** is how often your historic data is collected. Keep **Autodetect** selected.
+1. The **forecast horizon** is the length of time into the future you want to predict. Deselect **Autodetect** and type 14 in the field.
1. Select **View additional configuration settings** and populate the fields as follows. These settings are to better control the training job and specify settings for your forecast. Otherwise, defaults are applied based on experiment selection and data.
Complete the setup for your automated ML experiment by specifying the machine le
Blocked algorithms | Algorithms you want to exclude from the training job| Extreme Random Trees Additional forecasting settings| These settings help improve the accuracy of your model. <br><br> _**Forecast target lags:**_ how far back you want to construct the lags of the target variable <br> _**Target rolling window**_: specifies the size of the rolling window over which features, such as the *max, min* and *sum*, will be generated. | <br><br>Forecast&nbsp;target&nbsp;lags: None <br> Target&nbsp;rolling&nbsp;window&nbsp;size: None Exit criterion| If a criteria is met, the training job is stopped. |Training&nbsp;job&nbsp;time (hours): 3 <br> Metric&nbsp;score&nbsp;threshold: None
- Validation | Choose a cross-validation type and number of tests.|Validation type:<br>&nbsp;k-fold&nbsp;cross-validation <br> <br> Number of validations: 5
Concurrency| The maximum number of parallel iterations executed per iteration| Max&nbsp;concurrent&nbsp;iterations: 6 Select **Save**.
+1. Select **Next**.
+
+1. On the **[Optional] Validate and test** form:
+ 1. Select k-fold cross-validation as your **Validation type**.
+ 1. Select 5 as your **Number of cross validations**.
+ ## Run experiment To run your experiment, select **Finish**. The **Run details** screen opens with the **Run status** at the top next to the run number. This status updates as the experiment progresses. Notifications also appear in the top right corner of the studio, to inform you of the status of your experiment.
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-first-experiment-automated-ml.md
Before you configure your experiment, upload your data file to your workspace in
1. On the **Datastore and file selection** form, select the default datastore that was automatically set up during your workspace creation, **workspaceblobstore (Azure Blob Storage)**. This is where you'll upload your data file to make it available to your workspace.
- 1. Select **Browse**.
+ 1. Select **Upload files** from the **Upload** drop-down.
1. Choose the **bankmarketing_train.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv).
- 1. Give your dataset a unique name and provide an optional description.
- 1. Select **Next** on the bottom left, to upload it to the default container that was automatically set up during your workspace creation. When the upload is complete, the **Settings and preview** form is pre-populated based on the file type.
After you load and configure your data, you can set up your experiment. This set
Field | Description | Value for tutorial -||
- Virtual&nbsp;machine&nbsp;priority |Select what priority your experiment should have| Dedicated
+ Location | The region in which you'd like to run the compute | West US 2
+ Virtual&nbsp;machine&nbsp;tier |Select what priority your experiment should have| Dedicated
Virtual&nbsp;machine&nbsp;type| Select the virtual machine type for your compute.|CPU (Central Processing Unit) Virtual&nbsp;machine&nbsp;size| Select the virtual machine size for your compute. A list of recommended sizes is provided based on your data and experiment type. |Standard_DS12_V2
After you load and configure your data, you can set up your experiment. This set
Primary metric| Evaluation metric that the machine learning algorithm will be measured by.|AUC_weighted Explain best model| Automatically shows explainability on the best model created by automated ML.| Enable Blocked algorithms | Algorithms you want to exclude from the training job| None
+ Additional&nbsp;classification settings | These settings help improve the accuracy of your model |Positive class label: None
Exit criterion| If a criteria is met, the training job is stopped. |Training&nbsp;job&nbsp;time (hours): 1 <br> Metric&nbsp;score&nbsp;threshold: None
- Validation | Choose a cross-validation type and number of tests.|Validation type:<br>&nbsp;k-fold&nbsp;cross-validation <br> <br> Number of validations: 2
Concurrency| The maximum number of parallel iterations executed per iteration| Max&nbsp;concurrent&nbsp;iterations: 5 Select **Save**.
+ 1. Select **Next**.
+1. On the **[Optional] Validate and test** form:
+ 1. Select k-fold cross-validation as your **Validation type**.
+ 1. Select 2 as your **Number of cross validations**.
+ 1. Select **Finish** to run the experiment. The **Run Detail** screen opens with the **Run status** at the top as the experiment preparation begins. This status updates as the experiment progresses. Notifications also appear in the top right corner of the studio to inform you of the status of your experiment. >[!IMPORTANT]
mariadb Howto Manage Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/howto-manage-vnet-cli.md
Virtual Network (VNet) services endpoints and rules extend the private address s
- You need an [Azure Database for MariaDB server and database](quickstart-create-mariadb-server-database-using-azure-cli.md). -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.- > [!NOTE] > Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. ## Configure VNet service endpoints
-The [az network vnet](/cli/azure/network/vnet) commands are used to configure Virtual Networks.
-
-If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account#az_account_set) command. Substitute the **id** property from the **az login** output for your subscription into the subscription id placeholder.
--- The account must have the necessary permissions to create a virtual network and service endpoint.-
-Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
+The [az network vnet](/cli/azure/network/vnet) commands are used to configure Virtual Networks. Service endpoints can be configured on virtual networks independently by a user with write access to the virtual network.
To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles, by default and can be modified by creating custom roles.
VNets and Azure service resources can be in the same or different subscriptions.
> [!IMPORTANT] > It is highly recommended to read this article about service endpoint configurations and considerations before configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for MariaDB, PostgreSQL, and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL, Azure Database for MariaDB, and Azure Database for MySQL servers on the subnet.
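For example, based on the inline sample script this article previously embedded, enabling the endpoint while creating a subnet might look like the following sketch; the resource names are placeholders:

```azurecli
az network vnet subnet create \
  --resource-group myresourcegroup \
  --vnet-name myVNet \
  --name mySubnet \
  --address-prefixes 10.0.1.0/24 \
  --service-endpoints Microsoft.Sql
```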
-### Sample script
-
-This sample script is used to create an Azure Database for MariaDB server, create a VNet, VNet service endpoint and secure the server to the subnet with a VNet rule. In this sample script, change the admin username and password. Replace the SubscriptionID used in the `az account set --subscription` command with your own subscription identifier.
-
-```azurecli-interactive
-# To find the name of an Azure region in the CLI run this command: az account list-locations
-# Substitute <subscription id> with your identifier
-az account set --subscription <subscription id>
-
-# Create a resource group
-az group create \
name myresourcegroup \location westus-
-# Create a MariaDB server in the resource group
-# Name of a server maps to DNS name and is thus required to be globally unique in Azure.
-# Substitute the <server_admin_password> with your own value.
-az mariadb server create \
name mydemoserver \resource-group myresourcegroup \location westus \admin-user mylogin \admin-password <server_admin_password> \sku-name GP_Gen5_2-
-# Get available service endpoints for Azure region output is JSON
-# Use the command below to get the list of services supported for endpoints, for an Azure region, say "westus".
-az network vnet list-endpoint-services \
--l westus-
-# Add Azure SQL service endpoint to a subnet *mySubnet* while creating the virtual network *myVNet* output is JSON
-az network vnet create \
--g myresourcegroup \--n myVNet \address-prefixes 10.0.0.0/16 \--l westus-
-# Creates the service endpoint
-az network vnet subnet create \
--g myresourcegroup \--n mySubnet \vnet-name myVNet \address-prefix 10.0.1.0/24 \service-endpoints Microsoft.SQL-
-# View service endpoints configured on a subnet
-az network vnet subnet show \
--g myresourcegroup \--n mySubnet \vnet-name myVNet-
-# Create a VNet rule on the sever to secure it to the subnet Note: resource group (-g) parameter is where the database exists. VNet resource group if different should be specified using subnet id (URI) instead of subnet, VNet pair.
-az mariadb server vnet-rule create \
--n myRule \--g myresourcegroup \--s mydemoserver \vnet-name myVNet \subnet mySubnet
-```
-
-<!--
-In this sample script, change the highlighted lines to customize the admin username and password. Replace the SubscriptionID used in the `az account set --subscription` command with your own subscription identifier.
-[!code-azurecli-interactive[main](../../cli_scripts/mariadb/create-mysql-server-vnet/create-mysql-server.sh?highlight=5,20 "Create an Azure Database for MariaDB, VNet, VNet service endpoint, and VNet rule.")]
>
+## Sample script
++
+### Run the script
+ ## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name myresourcegroup
-```
+ ```azurecli
+ echo "Cleaning up resources by removing the resource group..."
+ az group delete --name $resourceGroup -y
-<!--
-[!code-azurecli-interactive[main](../../cli_scripts/mysql/create-mysql-server-vnet/delete-mysql.sh "Delete the resource group.")]
>
+ ```
-<!-- Link references, to text, Within this same GitHub repo. -->
+<!-- Link references, to text, Within this same GitHub repo. -->
[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mariadb Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/sample-scripts-azure-cli.md
ms.devlang: azurecli Previously updated : 09/17/2021 Last updated : 01/11/2022 Keywords: azure cli samples, azure cli code samples, azure cli script samples
-# Azure CLI samples for Azure Database for MariaDB
+# Azure CLI samples for Azure Database for MariaDB
+
+You can configure Azure Database for MariaDB by using the <a href="/cli/azure">Azure CLI</a>.
+++
+## Samples
+ The following table includes links to sample Azure CLI scripts for Azure Database for MariaDB.
-| Sample link | Description |
+| Sample link | Description |
|||
-|**Create a server**||
-| [Create a server and firewall rule](./scripts/sample-create-server-and-firewall-rule.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that creates a single Azure Database for MariaDB server and configures a server-level firewall rule. |
+|**Create a server with firewall rule**||
+| [Create a server and firewall rule](./scripts/sample-create-server-and-firewall-rule.md) | Azure CLI script that creates a single Azure Database for MariaDB server and configures a server-level firewall rule. |
+| [Create a server with vNet rules](./scripts/sample-create-server-with-vnet-rule.md) | Azure CLI script that creates an Azure Database for MariaDB server with a service endpoint on a virtual network and configures a vNet rule. |
|**Scale a server**||
-| [Scale a server](./scripts/sample-scale-server.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that scales a single Azure Database for MariaDB server up or down to allow for changing performance needs. |
+| [Scale a server](./scripts/sample-scale-server.md) | Azure CLI script that scales a single Azure Database for MariaDB server up or down to allow for changing performance needs. |
|**Change server configurations**||
-| [Change server configurations](./scripts/sample-change-server-configuration.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that change configurations of a single Azure Database for MariaDB server. |
+| [Change server configurations](./scripts/sample-change-server-configuration.md) | Azure CLI script that changes configurations of a single Azure Database for MariaDB server. |
|**Restore a server**||
-| [Restore a server](./scripts/sample-point-in-time-restore.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that restores a single Azure Database for MariaDB server to a previous point in time. |
+| [Restore a server](./scripts/sample-point-in-time-restore.md) | Azure CLI script that restores a single Azure Database for MariaDB server to a previous point in time. |
|**Manipulate with server logs**||
-| [Enable and download server logs](./scripts/sample-server-logs.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that enables and downloads server logs of a single Azure Database for MariaDB server. |
+| [Enable server logs](./scripts/sample-server-logs.md) | Azure CLI script that enables server logs of a single Azure Database for MariaDB server. |
|||
mariadb Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-change-server-configuration.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 01/11/2022 # List and update configurations of an Azure Database for MariaDB server using Azure CLI+ This sample CLI script lists all available configuration parameters as well as their allowable values for Azure Database for MariaDB server, and sets the *innodb_lock_wait_timeout* to a value that is other than the default one. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.- ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own.
-[!code-azurecli-interactive[main](../../../cli_scripts/mariadb/change-server-configurations/change-server-configurations.sh?highlight=15-16 "List and update configurations of Azure Database for MariaDB.")]
-## Script explanation
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure). - Try additional scripts: [Azure CLI samples for Azure Database for MariaDB](../sample-scripts-azure-cli.md) - For more information on server parameters, see [How To Configure Server Parameters in Azure Database for MariaDB](../howto-server-parameters.md).
mariadb Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-create-server-and-firewall-rule.md
ms.devlang: azurecli Previously updated : 11/28/2018 Last updated : 01/11/2022 # Create a MariaDB server and configure a firewall rule using the Azure CLI+ This sample CLI script creates an Azure Database for MariaDB server and configures a server-level firewall rule. Once the script runs successfully, the MariaDB server is accessible by all Azure services and the configured IP address. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.- ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own.
-[!code-azurecli-interactive[main](../../../cli_scripts/mariadb/create-mariadb-server-and-firewall-rule/create-mariadb-server-and-firewall-rule.sh?highlight=15-16 "Create an Azure Database for mariadb, and server-level firewall rule.")]
-## Script explanation
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure). - Try additional scripts: [Azure CLI samples for Azure Database for MariaDB](../sample-scripts-azure-cli.md)
mariadb Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-create-server-with-vnet-rule.md
+
+ Title: CLI script - Create server with vNet rule - Azure Database for MariaDB
+description: This sample CLI script creates an Azure Database for MariaDB server with a service endpoint on a virtual network and configures a vNet rule.
+++
+ms.devlang: azurecli
++ Last updated : 01/11/2022++
+# Create a MariaDB server and configure a vNet rule using the Azure CLI
+
+This sample CLI script creates an Azure Database for MariaDB server and configures a vNet rule.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the commands outlined in the following table:
+
+| **Command** | **Notes** |
+|||
+| [az group create](/cli/azure/group#az_group_create) | Creates a resource group in which all resources are stored. |
+| [az mariadb server create](/cli/azure/mariadb/server#az_mariadb_server_create) | Creates a MariaDB server that hosts the databases. |
+| [az network vnet list-endpoint-services](/cli/azure/network/vnet#az-network-vnet-list-endpoint-services) | List which services support VNET service tunneling in a given region. |
+| [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) | Creates a virtual network. |
+| [az network vnet subnet create](/cli/azure/network/vnet#az-network-vnet-subnet-create) | Create a subnet and associate an existing NSG and route table. |
+| [az network vnet subnet show](/cli/azure/network/vnet#az-network-vnet-subnet-show) | Shows details of a subnet. |
+| [az mariadb server vnet-rule create](/cli/azure/mariadb/server/vnet-rule#az-mariadb-server-vnet-rule-create) | Create a virtual network rule to allow access to a MariaDB server. |
+| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. |
+
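As a rough sketch of the rule-creation step the sample script performs, mirroring the inline script removed from the VNet how-to article above (resource names are placeholders):

```azurecli
az mariadb server vnet-rule create \
  --name myRule \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --vnet-name myVNet \
  --subnet mySubnet
```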
+## Next steps
+
+- Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure).
+- Try additional scripts: [Azure CLI samples for Azure Database for MariaDB](../sample-scripts-azure-cli.md).
mariadb Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-point-in-time-restore.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 01/11/2022 # Restore an Azure Database for MariaDB server using Azure CLI+ This sample CLI script restores a single Azure Database for MariaDB server to a previous point in time. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. - ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own. Replace the subscription ID used in the `az monitor` commands with your own subscription ID.
-[!code-azurecli-interactive[main](../../../cli_scripts/mariadb/backup-restore-pitr/backup-restore.sh?highlight=15-16 "Restore Azure Database for MariaDB.")]
-## Script explanation
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure). - Try additional scripts: [Azure CLI samples for Azure Database for MariaDB](../sample-scripts-azure-cli.md)
mariadb Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-scale-server.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 01/11/2022 # Monitor and scale an Azure Database for MariaDB server using Azure CLI+ This sample CLI script scales compute and storage for a single Azure Database for MariaDB server after querying the metrics. Compute can scale up or down. Storage can only scale up. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. - ## Sample script
-Update the script with your subscription ID.
-[!code-azurecli-interactive[main](../../../cli_scripts/mariadb/scale-mariadb-server/scale-mariadb-server.sh "Create and scale Azure Database for MariaDB.")]
-## Script explanation
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps+ - Learn more about [Azure Database for MariaDB compute and storage](../concepts-pricing-tiers.md) - Try additional scripts: [Azure CLI samples for Azure Database for MariaDB](../sample-scripts-azure-cli.md) - Learn more about the [Azure CLI](/cli/azure)
mariadb Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-server-logs.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 01/11/2022 # Enable and download server slow query logs of an Azure Database for MariaDB server using Azure CLI+ This sample CLI script enables and downloads the slow query logs of a single Azure Database for MariaDB server. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.- ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own. Replace the &lt;log_file_name&gt; in the `az monitor` commands with your own server log file name.
-[!code-azurecli-interactive[main](../../../cli_scripts/mariadb/server-logs/server-logs.sh?highlight=15-16 "Manipulate with server logs.")]
-## Script explanation
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure). - Try additional scripts: [Azure CLI samples for Azure Database for MariaDB](../sample-scripts-azure-cli.md)
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
Generally, SaaS offers are a good fit if your customers just want to subscribe t
Virtual Machine and Azure Application offers are a good fit if you want customers to deploy, manage, and run your packaged app or service (as a VM Image and/or other Azure services in the ARM template) in their own cloud infrastructure. [![Shows a flowchart for determining offer type and pricing plan.](media/commercial-marketplace-plans/offer-type-and-pricing-plan-flowchart.png)](media/commercial-marketplace-plans/offer-type-and-pricing-plan-flowchart.png#lightbox)
-&nbsp;&nbsp;&nbsp;<sup>(1)</sup> Contact [Microsoft Office Hours](https://microsoftcloudpartner.eventbuilder.com/MarketplaceDeveloperOfficeHours) or [support](./support.md).<br>
+&nbsp;&nbsp;&nbsp;<sup>(1)</sup> Attend [Microsoft Office Hours](https://microsoftcloudpartner.eventbuilder.com/MarketplaceOverviewandQAforPartners) or contact [support](./support.md).<br>
&nbsp;&nbsp;&nbsp;<sup>(2)</sup> VM offer images can be included in the Azure App offer to increase pricing flexibility.<br> &nbsp;&nbsp;&nbsp;<sup>(3)</sup> Customer pays the infrastructure costs since Azure services are deployed on the customer tenant for VM and Azure App offers.
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/what-is-new.md
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | | |
-| Added a [Revenue Dashboard](revenue-dashboard.md) to Partner Center, including a revenue report, [sample queries](analytics-sample-queries.md#revenue-report-queries), and [FAQs](/analytics-faq#revenue) page. | 2021-12-08 |
+| Offers | Added a [Revenue Dashboard](revenue-dashboard.md) to Partner Center, including a revenue report, [sample queries](analytics-sample-queries.md#revenue-report-queries), and [FAQs](/analytics-faq#revenue) page. | 2021-12-08 |
| Offers | Container and container apps offers can now use the Microsoft [Standard Contract](standard-contract.md). | 2021-11-02 | | Offers | Private plans for [SaaS offers](plan-saas-offer.md) are now available on AppSource. | 2021-10-06 | | Offers | In [Set up an Azure Marketplace subscription for hosted test drives](test-drive-azure-subscription-setup.md), for **Set up for Dynamics 365 apps on Dataverse and Power Apps**, we added a new method to remove users from your Azure tenant. | 2021-10-01 |
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-server-parameters.md
The list of supported server parameters is constantly growing. Use the server pa
Refer to the following sections below to learn more about the limits of the several commonly updated server parameters. The limits are determined by the compute tier and size (vCores) of the server. > [!NOTE]
-> If you are looking to modify a server parameter which is non-modifiable but you would like to see as a modifiable for your environment, please open a [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0) item or vote if the feedback already exist which can help us prioritize.
+>* If you modify a static server parameter by using the portal, you'll be prompted to restart the server for the change to take effect. If you use automation scripts (tools such as ARM templates, Terraform, or the Azure CLI), your script should include a step to restart the service for the setting to take effect, even if you change the configuration as part of the create experience.
+>* If you want to modify a server parameter that is currently non-modifiable, but would like to see it become modifiable for your environment, please open a [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0) item, or vote for the feedback if it already exists, which can help us prioritize.
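For example, a minimal sketch of that pattern with the Azure CLI; the server and resource group names are placeholders, and `innodb_log_file_size` is just one example of a static parameter:

```azurecli
# Change a static server parameter, then restart the server so the new value takes effect.
az mysql flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name innodb_log_file_size \
  --value 268435456

az mysql flexible-server restart \
  --resource-group myresourcegroup \
  --name mydemoserver
```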
### log_bin_trust_function_creators
network-watcher View Relative Latencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/view-relative-latencies.md
# View relative latency to Azure regions from specific locations > [!WARNING]
-> This feature is currently in preview and still being tested for stability.
+> This feature is currently being deprecated.
In this tutorial, learn how to use the Azure [Network Watcher](network-watcher-monitoring-overview.md) service to help you decide what Azure region to deploy your application or service in, based on your user demographic. Additionally, you can use it to help evaluate service providers' connections to Azure.
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/register-spacecraft.md
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
| Direction | Select Uplink or Downlink | | Center Frequency | Enter the center frequency in Mhz | | Bandwidth | Enter the bandwidth in Mhz |
- | Polarization | Select RHCP, LHCP, Dual, or Linear Vertical |
+ | Polarization | Select RHCP, LHCP, or Linear Vertical |
:::image type="content" source="media/orbital-eos-register-links.png" alt-text="Spacecraft Links Resource Page" lightbox="media/orbital-eos-register-links.png":::
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-data-access-and-security-private-link.md
The following situations and outcomes are possible when you use Private Link in
## Deny public access for Azure Database for PostgreSQL Single server
-If you want to rely only on private endpoints for accessing their Azure Database for PostgreSQL Single server, you can disable setting all public endpoints([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
+If you want to rely only on private endpoints to access your Azure Database for PostgreSQL Single server, you can disable all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
When this setting is set to *YES* only connections via private endpoints are allowed to your Azure Database for PostgreSQL. When this setting is set to *NO* clients can connect to your Azure Database for PostgreSQL based on your firewall or VNet service endpoint setting. Additionally, once the value of the Private network access is set, customers cannot add and/or update existing 'Firewall rules' and 'VNet service endpoint rules'.
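A sketch of toggling this setting from the Azure CLI, assuming the `--public-network-access` parameter on `az postgres server update`; the server and resource group names are placeholders:

```azurecli
az postgres server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --public-network-access Disabled
```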
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/overview-postgres-choose-server-options.md
Last updated 12/01/2021
With Azure, your PostgreSQL Server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has multiple deployment options, each with multiple service tiers. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, and make backups, or if you want to delegate these operations to Azure. When making your decision, consider the following three options in PaaS or alternatively running on Azure VMs (IaaS)-- [Azure database for PostgreSQL Single Server](./overview-single-server.md)-- [Azure database for PostgreSQL Flexible Server](./flexible-server/overview.md)-- [Azure database for PostgreSQL Hyperscale (Citus)](hyperscale/index.yml)
+- [Azure Database for PostgreSQL Single Server](./overview-single-server.md)
+- [Azure Database for PostgreSQL Flexible Server](./flexible-server/overview.md)
+- [Azure Database for PostgreSQL Hyperscale (Citus)](hyperscale/index.yml)
**PostgreSQL on Azure VMs** option falls into the industry category of IaaS. With this service, you can run PostgreSQL Server inside a fully managed virtual machine on the Azure cloud platform. All recent versions and editions of PostgreSQL can be installed on an IaaS virtual machine. In the most significant difference from Azure Database for PostgreSQL, PostgreSQL on Azure VMs offers control over the database engine. However, this control comes at the cost of responsibility to manage the VMs and many database administration (DBA) tasks. These tasks include maintaining and patching database servers, database recovery, and high-availability design.
Additionally, configuring high availability to another data center requires mini
## Next steps - See Azure Database for [PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).-- Get started by creating your first server.
+- Get started by creating your first server.
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/supported-classifications.md
Azure Purview classifies data by [RegEx](https://wikipedia.org/wiki/Regular_expr
> [!Note] > Azure Purview can classify both structured (CSV, TSV, JSON, SQL Table etc.) as well as unstructured data (DOC, PDF, TXT etc.). However, there are certain classifications that are only applicable to structured data. Here is the list of classifications that Azure Purview doesn't apply on unstructured data - City Name, Country Name, Date Of Birth, Email, Ethnic Group, GeoLocation, Person Name, U.S. Phone Number, U.S. States, U.S. ZipCode
+> [!Note]
+> **Minimum match threshold**: The minimum percentage of data value matches in a column that the scanner must find for the classification to be applied. For system classifications, the minimum match threshold is set at 60% and can't be changed. For custom classifications, this value is configurable.
+ ## Bloom Filter based classifications ## City, Country, and Place
remote-rendering Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/reference/data-residency.md
Title: Data residency description: Describes data residency when using Azure Remote Rendering.--++ Last updated 02/04/2021
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-create-service-portal.md
Previously updated : 08/24/2021 Last updated : 01/17/2022 # Create an Azure Cognitive Search service in the portal
-[Azure Cognitive Search](search-what-is-azure-search.md) is an Azure resource used for adding a full text search experience to custom apps. You can integrate it easily with other Azure services that provide data or additional processing, with apps on network servers, or with software running on other cloud platforms.
+[**Azure Cognitive Search**](search-what-is-azure-search.md) is an Azure resource used for adding a full text search experience to custom apps.
You can create search service using the [Azure portal](https://portal.azure.com/), which is covered in this article. You can also use [Azure PowerShell](search-manage-powershell.md), [Azure CLI](/cli/azure/search), the [Management REST API](/rest/api/searchmanagement/), or an [Azure Resource Manager service template](https://azure.microsoft.com/resources/templates/azure-search-create/).
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-file-storage-integration.md
Title: Index data from Azure Files (preview)
+ Title: Azure Files indexing (preview)
description: Set up an Azure Files indexer to automate indexing of file shares in Azure Cognitive Search. - Previously updated : 11/02/2021+ Last updated : 01/17/2022 # Index data from Azure Files > [!IMPORTANT]
-> Azure Files is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Use a [preview REST API (2021-04-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support.
+> Azure Files indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to create the indexer data source.
-In this article, review the basic workflow for extracting content and metadata from Azure file shares and sending it to a search index in Azure Cognitive Search. The resulting index can be queried using full text search.
+In this article, learn the steps for extracting content and metadata from file shares in Azure Storage and sending the content to a search index in Azure Cognitive Search. The resulting index can be queried using full text search.
-> [!NOTE]
-> Already familiar with the workflow and composition? [How to configure a file indexer](#configure) is your next step.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing files in Azure Storage.
+
+## Prerequisites
-## Functionality
++ [Azure Files](https://azure.microsoft.com/services/storage/files/), Transaction Optimized tier.
-An indexer in Azure Cognitive Search is a crawler that extracts searchable data and metadata from a data source. The Azure Files indexer will connect to your Azure file share and index files. The indexer provides the following functionality:
++ An [SMB file share](../storage/files/files-smb-protocol.md) providing the source content. [NFS shares](../storage/files/files-nfs-protocol.md#support-for-azure-storage-features) are not supported.
-+ Index content from an Azure file share.
-+ The indexer will support incremental indexing meaning that it will identify which content in the Azure file share has changed and only index the updated content on future indexing runs. For example, if 5 PDFs are originally indexed by the indexer, then 1 is updated, then the indexer runs again, the indexer will only index the 1 PDF that was updated.
-+ Text and normalized images will be extracted by default from the files that are indexed. Optionally a skillset can be added to the pipeline for further content enrichment. More information on skillsets can be found in the article [Skillset concepts in Azure Cognitive Search](cognitive-search-working-with-skillsets.md).
++ Files should contain non-binary textual content for text-based indexing. This indexer also supports [AI enrichment](cognitive-search-concept-intro.md) if you have binary files. ## Supported document formats
-The Azure Cognitive Search Azure Files indexer can extract text from the following document formats:
+The Azure Files indexer can extract text from the following document formats:
[!INCLUDE [search-document-data-sources](../../includes/search-blob-data-sources.md)]
-## Required resources
-
-You need both Azure Cognitive Search and [Azure Files](https://azure.microsoft.com/services/storage/files/). Within Azure Files, you need a file share that provides source content.
+## Define the data source
-> [!NOTE]
-> To index a file share, it must support access through the [file data plane REST API](/rest/api/storageservices/file-service-rest-api). [NFS shares](../storage/files/files-nfs-protocol.md#support-for-azure-storage-features) do not support the file data plane REST API and cannot be used with Azure Cognitive Search indexers.[SMB shares](../storage/files/files-smb-protocol.md) support the file data plane REST API and can be used with Azure Cognitive Search indexers.
+A primary difference between a file share indexer and other indexers is the data source assignment. The data source definition specifies "type": `"azurefile"`, a content path, and how to connect.
-<a name="configure"></a>
+1. [Create or update a data source](/rest/api/searchservice/preview-api/create-or-update-data-source) to set its definition, using a preview API version 2020-06-30-Preview or 2021-04-30-Preview for "type": `"azurefile"`.
-## Configuring a file indexer
+ ```json
+ {
+ "name" : "my-file-datasource",
+ "type" : "azurefile",
+ "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
+ "container" : { "name" : "my-file-share", "query" : "<optional-directory-name>" }
+ }
+ ```
-Azure File indexers share many common configuration options with [Azure Blob indexers](search-howto-indexing-azure-blob-storage.md). For example, Azure File indexers support [producing multiple search documents from a single file](search-howto-index-one-to-many-blobs.md), [plain text files](search-howto-index-plaintext-blobs.md), [JSON files](search-howto-index-json-blobs.md), and [encrypted files](search-howto-index-encrypted-blobs.md). Many of the same [configuration options](search-howto-indexing-azure-blob-storage.md) also apply. Important differences are highlighted below.
+1. Set "type" to `"azurefile"` (required).
-## Data source definitions
+1. Set "credentials" to an Azure Storage connection string. The next section describes the supported formats.
-The primary difference between a file indexer and any other indexer is the data source definition that's assigned to the indexer. The data source definition specifies the data source type ("type": "azurefile"), and other properties for authentication and connection to the content to be indexed.
+1. Set "container" to the root file share, and use "query" to specify any subfolders.
-A file data source definition looks similar to the example below:
+A data source definition can also include additional properties for [soft deletion policies](#soft-delete-using-custom-metadata) and [field mappings](search-indexer-field-mappings.md) if field names and types are not the same.
-```http
-{
- "name" : "my-file-datasource",
- "type" : "azurefile",
- "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
- "container" : { "name" : "my-file", "query" : "<optional-directory-name>" }
-}
-```
+<a name="Credentials"></a>
-The `"credentials"` property can be a connection string, as shown in the above example, or one of the alternative approaches described in the next section. The `"container"` property provides the file share within Azure Storage, and `"query"` is used to specify a subfolder in the share. For more information about data source definitions, see [Create Data Source (REST)](/rest/api/searchservice/create-data-source).
+### Supported credentials and connection strings
-<a name="Credentials"></a>
+Indexers can connect to a file share using the following connections.
-## Credentials
+**Full access storage account connection string**:
+`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }`
-You can provide the credentials for the file share in one of these ways:
+You can get the connection string from the Storage account page in Azure portal by selecting **Access keys** in the left navigation pane. Make sure to select a full connection string and not just a key.
**Managed identity connection string**: `{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;" }`
-This connection string does not require an account key, but you must follow the instructions for [Setting up a connection to an Azure Storage account using a managed identity](search-howto-managed-identities-storage.md).
-
-**Full access storage account connection string**:
-`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }`
-
-You can get the connection string from the Azure portal by navigating to the storage account blade > Settings > Keys (for Classic storage accounts) or Security + networking > Access keys (for Azure Resource Manager storage accounts).
+This connection string requires [configuring your search service as a trusted service](search-howto-managed-identities-storage.md) under Azure Active Directory, and then granting **Reader and data access** rights to the search service in Azure Storage.
**Storage account shared access signature** (SAS) connection string: `{ "connectionString" : "BlobEndpoint=https://<your account>.file.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&sp=rl&sr=s;" }`
The SAS should have the list and read permissions on the file share. For more in
> [!NOTE] > If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
-## Indexing file metadata
-
-A common scenario that makes it easy to sort through files of any content type is to index both custom metadata and system properties for each file. In this way, information for all files is indexed regardless of document type, stored in an index in your search service. Using your new index, you can then proceed to sort, filter, and facet across all File storage content.
+## Add search fields to an index
-Standard file metadata properties can be extracted into the fields listed below. The file indexer automatically creates internal field mappings for these file metadata properties. You still have to add the fields you want to use the index definition, but you can omit creating field mappings in the indexer.
+In the [search index](search-what-is-an-index.md), add fields to accept the content and metadata of your Azure files.
-+ **metadata_storage_name** (`Edm.String`) - the file name. For example, if you have a file /my-share/my-folder/subfolder/resume.pdf, the value of this field is `resume.pdf`.
-+ **metadata_storage_path** (`Edm.String`) - the full URI of the file, including the storage account. For example, `https://myaccount.file.core.windows.net/my-share/my-folder/subfolder/resume.pdf`
-+ **metadata_storage_content_type** (`Edm.String`) - content type as specified by the code you used to upload the file. For example, `application/octet-stream`.
-+ **metadata_storage_last_modified** (`Edm.DateTimeOffset`) - last modified timestamp for the file. Azure Cognitive Search uses this timestamp to identify changed files, to avoid reindexing everything after the initial indexing.
-+ **metadata_storage_size** (`Edm.Int64`) - file size in bytes.
-+ **metadata_storage_content_md5** (`Edm.String`) - MD5 hash of the file content, if available.
-+ **metadata_storage_sas_token** (`Edm.String`) - A temporary SAS token that can be used by [custom skills](cognitive-search-custom-skill-interface.md) to get access to the file. This token shouldn't stored for later use as it might expire.
+1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store file content, metadata, and system properties:
-## Index by file type
-
-You can control which documents are indexed and which are skipped.
+ ```json
+ POST /indexes?api-version=2020-06-30
+ {
+ "name" : "my-search-index",
+ "fields": [
+ { "name": "ID", "type": "Edm.String", "key": true, "searchable": false },
+ { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false },
+ { "name": "metadata_storage_name", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_path", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_size", "type": "Edm.Int64", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_content_type", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true },
+ ]
+ }
+ ```
-### Include documents having specific file extensions
+1. Create a key field ("key": true) to uniquely identify each search document based on unique identifiers in the files. For this data source type, the indexer will automatically identify and encode a value for this field. No field mappings are necessary.
-You can index only the documents with the file name extensions you specify by using the `indexedFileNameExtensions` indexer configuration parameter. The value is a string containing a comma-separated list of file extensions (with a leading dot). For example, to index only the .PDF and .DOCX documents, do this:
+1. Add a "content" field to store extracted text from each file.
-```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30-Preview
-Content-Type: application/json
-api-key: [admin key]
+1. Add fields for standard metadata properties. In file indexing, the standard metadata properties are the same as blob metadata properties. The file indexer automatically creates internal field mappings for these properties that convert hyphenated property names to underscored property names. You still have to add the fields you want to use to the index definition, but you can omit creating field mappings in the indexer.
-{
- ... other parts of indexer definition
- "parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }
-}
-```
+ + **metadata_storage_name** (`Edm.String`) - the file name. For example, if you have a file /my-share/my-folder/subfolder/resume.pdf, the value of this field is `resume.pdf`.
+ + **metadata_storage_path** (`Edm.String`) - the full URI of the file, including the storage account. For example, `https://myaccount.file.core.windows.net/my-share/my-folder/subfolder/resume.pdf`
+ + **metadata_storage_content_type** (`Edm.String`) - content type as specified by the code you used to upload the file. For example, `application/octet-stream`.
+ + **metadata_storage_last_modified** (`Edm.DateTimeOffset`) - last modified timestamp for the file. Azure Cognitive Search uses this timestamp to identify changed files, to avoid reindexing everything after the initial indexing.
+ + **metadata_storage_size** (`Edm.Int64`) - file size in bytes.
+ + **metadata_storage_content_md5** (`Edm.String`) - MD5 hash of the file content, if available.
+ + **metadata_storage_sas_token** (`Edm.String`) - A temporary SAS token that can be used by [custom skills](cognitive-search-custom-skill-interface.md) to get access to the file. This token shouldn't be stored for later use as it might expire.
-### Exclude documents having specific file extensions
+## Configure the file indexer
-You can exclude documents with specific file name extensions from indexing by using the `excludedFileNameExtensions` configuration parameter. The value is a string containing a comma-separated list of file extensions (with a leading dot). For example, to index all content except those with the .PNG and .JPEG extensions, do this:
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
-```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30-Preview
-Content-Type: application/json
-api-key: [admin key]
+ ```http
+ POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+ {
+ "name" : "my-file-indexer,
+ "dataSourceName" : "my-file-datasource",
+ "targetIndexName" : "my-search-index",
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "configuration:" {
+ "indexedFileNameExtensions" : ".pdf,.docx",
+ "excludedFileNameExtensions" : ".png,.jpeg"
+ }
+ },
+ "schedule" : { },
+ "fieldMappings" : [ ]
+ }
+ ```
-{
- ... other parts of indexer definition
- "parameters" : { "configuration" : { "excludedFileNameExtensions" : ".png,.jpeg" } }
-}
-```
+1. In the optional "configuration" section, provide any inclusion or exclusion criteria. If left unspecified, all files in the file share are retrieved.
-If both `indexedFileNameExtensions` and `excludedFileNameExtensions` parameters are present, Azure Cognitive Search first looks at `indexedFileNameExtensions`, then at `excludedFileNameExtensions`. This means that if the same file extension is present in both lists, it will be excluded from indexing.
+ If both `indexedFileNameExtensions` and `excludedFileNameExtensions` parameters are present, Azure Cognitive Search first looks at `indexedFileNameExtensions`, then at `excludedFileNameExtensions`. If the same file extension is present in both lists, it will be excluded from indexing.
-<a name="deleted-files"></a>
+1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
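Once the indexer exists, you can trigger it on demand and inspect its execution history. The following is a minimal sketch using the standard run and status operations; the indexer name assumes the definition shown above.

```http
POST https://[service name].search.windows.net/indexers/my-file-indexer/run?api-version=2020-06-30
api-key: [admin key]

GET https://[service name].search.windows.net/indexers/my-file-indexer/status?api-version=2020-06-30
api-key: [admin key]
```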
-## Detecting deleted files
+## Change and deletion detection
-After an initial search index is created, you might want subsequent indexer jobs to only pick up new and changed documents. For search content that originates from Azure File Storage, change detection occurs automatically when you use a schedule to trigger indexing. By default, the service reindexes only the changed files, as determined by the file's `LastModified` timestamp. In contrast with other data sources supported by search indexers, files always have a timestamp, which eliminates the need to set up a change detection policy manually.
+After an initial search index is created, you might want subsequent indexer jobs to pick up only new and changed documents. Fortunately, content in Azure Storage is timestamped, which gives indexers sufficient information for determining what's new and changed automatically. For search content that originates from Azure File Storage, the indexer keeps track of the file's `LastModified` timestamp and reindexes only new and changed files.
-Although change detection is a given, deletion detection is not. If you want to detect deleted files, make sure to use a "soft delete" approach. If you delete the files outright, corresponding documents will not be removed from the search index.
+Although change detection is a given, deletion detection is not. If you want to detect deleted files, make sure to use a "soft delete" approach. If you delete the files outright in a file share, corresponding search documents will not be removed from the search index.
## Soft delete using custom metadata This method uses a file's metadata to determine whether a search document should be removed from the index. This method requires two separate actions, deleting the search document from the index, followed by file deletion in Azure Storage.
-There are steps to follow in both File storage and Cognitive Search, but there are no other feature dependencies. This capability is supported in generally available APIs.
+There are steps to follow in both File storage and Cognitive Search, but there are no other feature dependencies.
-1. Add a custom metadata key-value pair to the file to indicate to Azure Cognitive Search that it is logically deleted.
+1. Add a custom metadata key-value pair to the file in Azure storage to indicate to Azure Cognitive Search that it is logically deleted.
1. Configure a soft deletion column detection policy on the data source. For example, the following policy considers a file to be deleted if it has a metadata property `IsDeleted` with the value `true`:
There are steps to follow in both File storage and Cognitive Search, but there a
} ```
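For reference, a data source carrying this kind of soft-delete policy generally takes the following shape. The `IsDeleted` column name and `"true"` marker value are conventions you choose, and the connection string is a placeholder.

```json
{
    "name" : "my-file-datasource",
    "type" : "azurefile",
    "credentials" : { "connectionString" : "<your storage connection string>" },
    "container" : { "name" : "my-file-share" },
    "dataDeletionDetectionPolicy" : {
        "@odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
        "softDeleteColumnName" : "IsDeleted",
        "softDeleteMarkerValue" : "true"
    }
}
```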
-1. Once the indexer has processed the blob and deleted the document from the index, you can delete the blob in Azure Blob Storage.
+1. Once the indexer has processed the file and deleted the document from the search index, you can delete the file in Azure Storage.
### Reindexing undeleted files (using custom metadata)
-After an indexer processes a deleted file and removes the corresponding search document from the index, it won't revisit that file if you restore it later if the blob's `LastModified` timestamp is older than the last indexer run.
+After an indexer processes a deleted file and removes the corresponding search document from the index, it won't revisit that file if you restore it later if the file's `LastModified` timestamp is older than the last indexer run.
If you would like to reindex that document, change the file's soft-delete metadata value (for example, set `IsDeleted` back to `"false"`) and rerun the indexer.
search Search Howto Connecting Azure Sql Mi To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md
description: Enable public endpoint to allow connections to SQL Managed Instances from an indexer on Azure Cognitive Search. --++ Last updated 06/26/2021
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-azure-data-lake-storage.md
Title: Index data from Azure Data Lake Storage Gen2
+ Title: Azure Data Lake Storage Gen2 indexer
description: Set up an Azure Data Lake Storage Gen2 indexer to automate indexing of content and metadata for full text search in Azure Cognitive Search.
- Previously updated : 10/01/2021
+ Last updated : 01/17/2022

# Index data from Azure Data Lake Storage Gen2
-This article shows you how to configure an Azure Data Lake Storage Gen2 indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from Azure Data Lake Storage Gen2.
+This article shows you how to configure an Azure Data Lake Storage (ADLS) Gen2 indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from ADLS Gen2.
-Azure Data Lake Storage Gen2 is available through Azure Storage. When setting up an Azure storage account, you have the option to enable [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md). This allows the collection of content in an account to be organized into a hierarchy of directories and nested subdirectories. By enabling hierarchical namespace, you enable [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).
+ADLS Gen2 is available through Azure Storage. When setting up an Azure Storage account, you have the option of enabling [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md) that organizes files into a hierarchy of directories and nested subdirectories. By enabling hierarchical namespace, you enable ADLS Gen2.
Examples in this article use the portal and REST APIs. For examples in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
-## Supported access tiers
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that is specific to indexing from ADLS Gen2.
-Data Lake Storage Gen2 [access tiers](../storage/blobs/access-tiers-overview.md) include hot, cool, and archive. Only hot and cool can be accessed by indexers.
+## Prerequisites
-## Access control
++ [ADLS Gen2](../storage/blobs/data-lake-storage-introduction.md) with [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md) enabled.
+
++ [Access tiers](../storage/blobs/access-tiers-overview.md) for ADLS Gen2 include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
-Data Lake Storage Gen2 implements an [access control model](../storage/blobs/data-lake-storage-access-control.md) that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs). Access control lists are partially supported in Azure Cognitive Search scenarios:
++ Blob content cannot exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
-+ Support for access control is enabled on indexer access to content in Data Lake Storage Gen2. For a search service that has a system or user-assigned managed identity, you can define role assignments that determine indexer access to specific files and folders in Azure Storage.
+## Access control
-+ Support for document-level permissions on an index is not available. If your access controls vary the level of access on a per user basis, those permissions cannot be carried forward into a search index on your search service. All users have the same level of access to all searchable and retrievable content in the index.
+ADLS Gen2 implements an [access control model](../storage/blobs/data-lake-storage-access-control.md) that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs).
-If maintaining access control on each document in the index is important, it is up to the application developer to implement [security trimming](./search-security-trimming-for-azure-search.md).
+Azure Cognitive Search supports [Azure RBAC for indexer access](search-howto-managed-identities-storage.md) to your content in storage, but it does not support document-level permissions. In Azure Cognitive Search, all users have the same level of access to all searchable and retrievable content in the index. If document-level permissions are an application requirement, consider [security trimming](search-security-trimming-for-azure-search.md) as a workaround.
<a name="SupportedFormats"></a> ## Supported document formats
-The Azure Cognitive Search blob indexer can extract text from the following document formats:
+The ADLS Gen2 indexer can extract text from the following document formats:
[!INCLUDE [search-blob-data-sources](../../includes/search-blob-data-sources.md)]
-## Indexing through the Azure portal
-
-The Azure portal supports importing data from Azure Data Lake Storage Gen2. To import data from Data Lake Storage Gen2, navigate to your Azure Cognitive Search service page in the Azure portal, select **Import data**, select **Azure Data Lake Storage Gen2**, then continue to follow the Import data flow to create your data source, skillset, index, and indexer.
-
-## Indexing with the REST API
-
-The Data Lake Storage Gen2 indexer is supported by the REST API. Follow the instructions below to set up a data source, index, and indexer.
-
-### Step 1 - Create the data source
+## Define the data source
The data source definition specifies the data source type, as well as other properties for authentication and connection to the content to be indexed.
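For ADLS Gen2, a minimal data source definition might look like the following sketch. The names are placeholders; the distinguishing detail is the `adlsgen2` type.

```json
{
    "name" : "my-adlsgen2-datasource",
    "type" : "adlsgen2",
    "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
    "container" : { "name" : "my-filesystem", "query" : "<optional-directory-name>" }
}
```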
The `"credentials"` property can be a connection string, as shown in the above e
<a name="Credentials"></a>
-#### Credentials
+### Supported credentials and connection strings
You can provide the credentials for the container in one of these ways:
If both `indexedFileNameExtensions` and `excludedFileNameExtensions` parameters
### Add "skip" metadata the blob
-The indexer configuration parameters apply to all blobs in the container or folder. Sometimes, you want to control how *individual blobs* are indexed. You can do this by adding the following metadata properties and values to blobs in Blob storage. When the indexer encounters this properties, it will skip the blob or its content in the indexing run.
+The indexer configuration parameters apply to all blobs in the container or folder. Sometimes, you want to control how *individual blobs* are indexed. You can do this by adding the following metadata properties and values to blobs in Blob storage. When the indexer encounters this property, it will skip the blob or its content in the indexing run.
| Property name | Property value | Explanation | | - | -- | -- |
search Search Howto Index Csv Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-csv-blobs.md
Last updated 02/01/2021
-# How to index CSV blobs using delimitedText parsing mode and Blob indexers in Azure Cognitive Search
+# How to index CSV blobs and files using delimitedText parsing mode
-The Azure Cognitive Search [blob indexer](search-howto-indexing-azure-blob-storage.md) provides a `delimitedText` parsing mode for CSV files that treats each line in the CSV as a separate search document. For example, given the following comma-delimited text, `delimitedText` would result in two documents in the search index:
+**Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
+
+In Azure Cognitive Search, both blob indexers and file indexers support a `delimitedText` parsing mode for CSV files that treats each line in the CSV as a separate search document. For example, given the following comma-delimited text, `delimitedText` would result in two documents in the search index:
```text id, datePublished, tags
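A configuration sketch for this parsing mode, assuming the first line of the CSV contains the column headers:

```json
{
    "parameters" : {
        "configuration" : {
            "parsingMode" : "delimitedText",
            "firstLineContainsHeaders" : true,
            "delimitedTextDelimiter" : ","
        }
    }
}
```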
search Search Howto Index Encrypted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-encrypted-blobs.md
Last updated 11/19/2021
# How to index encrypted blobs using blob indexers and skillsets in Azure Cognitive Search
+**Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
+ This article shows you how to use [Azure Cognitive Search](search-what-is-azure-search.md) to index documents that have been previously encrypted within [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) using [Azure Key Vault](../key-vault/general/overview.md). Normally, an indexer cannot extract content from encrypted files because it doesn't have access to the encryption key. However, by leveraging the [DecryptBlobFile](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Utils/DecryptBlobFile) custom skill, followed by the [DocumentExtractionSkill](cognitive-search-skill-document-extraction.md), you can provide controlled access to the key to decrypt the files and then have content extracted from them. This unlocks the ability to index these documents without compromising the encryption status of your stored documents. Starting with previously encrypted whole documents (unstructured text) such as PDF, HTML, DOCX, and PPTX in Azure Blob Storage, this guide uses Postman and the Search REST APIs to perform the following tasks:
search Search Howto Index Json Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-json-blobs.md
Last updated 02/01/2021
-# How to index JSON blobs using a Blob indexer in Azure Cognitive Search
+# How to index JSON blobs and files in Azure Cognitive Search
-This article shows you how to [configure a blob indexer](search-howto-indexing-azure-blob-storage.md) for blobs that consist of JSON documents. JSON blobs in Azure Blob Storage commonly assume any of these forms:
+**Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
+
+This article shows you how to set JSON-specific properties for blobs or files that consist of JSON documents. JSON blobs in Azure Blob Storage or Azure File Storage commonly assume any of these forms:
+ A single JSON document + A JSON document containing an array of well-formed JSON elements
search Search Howto Index One To Many Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-one-to-many-blobs.md
Last updated 02/01/2021
-# Indexing blobs to produce multiple search documents
+# Indexing blobs and files to produce multiple search documents
-By default, a blob indexer will treat the contents of a blob as a single search document. If you want a more granular representation of the blob in a search index, you can set **parsingMode** values to create multiple search documents from one blob. The **parsingMode** values that result in many search documents include `delimitedText` (for [CSV](search-howto-index-csv-blobs.md)), and `jsonArray` or `jsonLines` (for [JSON](search-howto-index-json-blobs.md)).
+**Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
+
+By default, an indexer will treat the contents of a blob or file as a single search document. If you want a more granular representation in a search index, you can set **parsingMode** values to create multiple search documents from one blob or file. The **parsingMode** values that result in many search documents include `delimitedText` (for [CSV](search-howto-index-csv-blobs.md)), and `jsonArray` or `jsonLines` (for [JSON](search-howto-index-json-blobs.md)).
When you use any of these parsing modes, the new search documents that emerge must have unique document keys, and a problem arises in determining where that value comes from. The parent blob has at least one unique value in the form of `metadata_storage_path property`, but if it contributes that value to more than one search document, the key is no longer unique in the index.
search Search Howto Index Plaintext Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-plaintext-blobs.md
Last updated 02/01/2021
-# How to index plain text blobs in Azure Cognitive Search
+# How to index plain text blobs and files in Azure Cognitive Search
-When using a [blob indexer](search-howto-indexing-azure-blob-storage.md) to extract searchable blob text for full text search, you can assign a parsing mode to get better indexing outcomes. By default, the indexer parses blob content as a single chunk of text. However, if all blobs contain plain text in the same encoding, you can significantly improve indexing performance by using the `text` parsing mode.
+**Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
+
+When using an indexer to extract searchable blob text or file content for full text search, you can assign a parsing mode to get better indexing outcomes. By default, the indexer parses the content as a single chunk of text. However, if all blobs and files contain plain text in the same encoding, you can significantly improve indexing performance by using the `text` parsing mode.
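A minimal configuration sketch for the `text` parsing mode; the encoding value here is an assumption and should match how your files are actually encoded:

```json
{
    "parameters" : {
        "configuration" : {
            "parsingMode" : "text",
            "encoding" : "utf-8"
        }
    }
}
```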
Recommendations for use `text` parsing include:
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-blob-storage.md
Last updated 01/17/2022
-# Configure a Blob indexer to import data from Azure Blob Storage
+# Index data from Azure Blob Storage
In Azure Cognitive Search, blob [indexers](search-indexer-overview.md) are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-based processing.
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ [Access tiers](../storage/blobs/access-tiers-overview.md) for Blob storage include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
-+ Blob content cannot exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
++ Blob containers storing non-binary textual content for text-based indexing. This indexer also supports [AI enrichment](cognitive-search-concept-intro.md) if you have binary files. Note that blob content cannot exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.

<a name="SupportedFormats"></a>
The Azure Cognitive Search blob indexer can extract text from the following docu
## Define the data source
-A primary difference between a blob indexer and other indexers is the data source assignment. The data source definition specifies the type ("type": `"azureblob"`) and how to connect.
+A primary difference between a blob indexer and other indexers is the data source assignment. The data source definition specifies "type": `"azureblob"`, a content path, and how to connect.
1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
A primary difference between a blob indexer and other indexers is the data sourc
1. Set "type" to `"azureblob"` (required).
-1. Set "credentials" to the connection string, as shown in the above example, or one of the alternative approaches described in the next section.
+1. Set "credentials" to an Azure Storage connection string. The next section describes the supported formats.
-1. Set "container" to the blob container within Azure Storage. If the container uses folders to organize content, set "query" to specify a subfolder.
+1. Set "container" to the blob container, and use "query" to specify any subfolders.
+
+A data source definition can also include additional properties for [soft deletion policies](search-howto-index-changed-deleted-blobs.md) and [field mappings](search-indexer-field-mappings.md) if field names and types are not the same.
<a name="credentials"></a> ### Supported credentials and connection strings
-You can provide the credentials for the blob container in one of these ways:
+Indexers can connect to a blob container using the following connections.
| Managed identity connection string | ||
You can provide the credentials for the blob container in one of these ways:
| Full access storage account connection string | |--| |`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }` |
-| You can get the connection string from the Azure portal by navigating to the Storage Account > Settings > Keys (for Classic storage accounts) or Security + networking > Access keys (for Azure Resource Manager storage accounts). |
+| You can get the connection string from the Storage account page in Azure portal by selecting **Access keys** in the left navigation pane. Make sure to select a full connection string and not just a key. |
| Storage account shared access signature (SAS) connection string | |-|
You can provide the credentials for the blob container in one of these ways:
> [!NOTE] > If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
-## Define search fields for blob data
+## Add search fields to an index
-A [search index](search-what-is-an-index.md) specifies the fields in a search document, attributes, and other constructs that shape the search experience. All indexers require that you specify a search index definition as the destination.
+In a [search index](search-what-is-an-index.md), add fields to accept the content and metadata of your Azure blobs.
1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store blob content and metadata: ```http
- PUT /indexes?api-version=2020-06-30
+ POST /indexes?api-version=2020-06-30
{
- "name" : "my-target-index",
+ "name" : "my-search-index",
"fields": [ { "name": "metadata_storage_path", "type": "Edm.String", "key": true, "searchable": false },
- { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false }
+ { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false },
+ { "name": "metadata_storage_name", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_path", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_size", "type": "Edm.Int64", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_content_type", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
] } ```
-1. <a name="DocumentKeys"></a> Designate one string field as the document key that uniquely identifies each document. For blob content, the best candidates for a document key are metadata properties on the blob:
+1. Designate one string field as the document key that uniquely identifies each document. For blob content, the best candidates for a document key are metadata properties on the blob:
+ **`metadata_storage_path`** (default). Using the full path ensures uniqueness, but the path contains `/` characters that are [invalid in a document key](/rest/api/searchservice/naming-rules). Use the [base64Encode function](search-indexer-field-mappings.md#base64EncodeFunction) to encode characters (see the example in the next section). If using the portal to define the indexer, the encoding step is built in.
A [search index](search-what-is-an-index.md) specifies the fields in a search do
1. Add more fields for any blob metadata that you want in the index. The indexer can read custom metadata properties, [standard metadata](#indexing-blob-metadata) properties, and [content-specific metadata](search-blob-metadata-properties.md) properties.
-## Set field mappings
+## Configure the blob indexer
+
+Indexer configuration specifies the inputs, parameters, and properties that inform run time behaviors.
+
+Under "configuration", you can control which blobs are indexed, and which are skipped, by the blob's file type or by setting properties on the blob themselves, causing the indexer to skip over them.
+
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
+
+ ```http
+ POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+ {
+ "name" : "my-blob-indexer,
+ "dataSourceName" : "my-blob-datasource",
+ "targetIndexName" : "my-search-index",
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "configuration:" {
+ "indexedFileNameExtensions" : ".pdf,.docx",
+ "excludedFileNameExtensions" : ".png,.jpeg"
+ }
+ },
+ "schedule" : { },
+ "fieldMappings" : [ ]
+ }
+ ```
+
+1. In the optional "configuration" section, provide any inclusion or exclusion criteria. If left unspecified, all blobs in the container are retrieved.
+
+ If both `indexedFileNameExtensions` and `excludedFileNameExtensions` parameters are present, Azure Cognitive Search first looks at `indexedFileNameExtensions`, then at `excludedFileNameExtensions`. If the same file extension is present in both lists, it will be excluded from indexing.
+
+1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
+
+### Set field mappings
Field mappings are a section in the indexer definition that maps source fields to destination fields in the search index.
Reasons for [creating an explicit field mapping](search-indexer-field-mappings.m
The following example demonstrates "metadata_storage_name" as the document key. Assume the index has a key field named "key" and another field named "fileSize" for storing the document size. [Field mappings](search-indexer-field-mappings.md) in the indexer definition establish field associations, and "metadata_storage_name" has the [base64Encode field mapping function](search-indexer-field-mappings.md#base64EncodeFunction) to handle unsupported characters. ```http
-PUT /indexers/my-blob-indexer?api-version=2020-06-30
+POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
{
+ "name" : "my-blob-indexer",
"dataSourceName" : "my-blob-datasource ",
- "targetIndexName" : "my-target-index",
+ "targetIndexName" : "my-search-index",
"schedule" : { "interval" : "PT2H" }, "fieldMappings" : [ { "sourceFieldName" : "metadata_storage_name", "targetFieldName" : "key", "mappingFunction" : { "name" : "base64Encode" } },
PUT /indexers/blob-indexer?api-version=2020-06-30
<a name="PartsOfBlobToIndex"></a>
-## Set parameters
+### Set parameters
Blob indexers include parameters that optimize indexing for specific use cases, such as content types (JSON, CSV, PDF), or to specify which parts of the blob to index.
Lastly, any metadata properties specific to the document format of the blobs you
It's important to point out that you don't need to define fields for all of the above properties in your search index - just capture the properties you need for your application.
-## How blobs are indexed
-
-By default, most blobs are indexed as a single search document in the index, including blobs with structured content, such as JSON or CSV, which are indexed as a single chunk of text. However, for JSON or CSV documents that have an internal structure (delimiters), you can assign parsing modes to generate individual search documents for each line or element. For more information, see [Indexing JSON blobs](search-howto-index-json-blobs.md) and [Indexing CSV blobs](search-howto-index-csv-blobs.md).
-
-A compound or embedded document (such as a ZIP archive, a Word document with embedded Outlook email containing attachments, or a .MSG file with attachments) is also indexed as a single document. For example, all images extracted from the attachments of an .MSG file will be returned in the normalized_images field within the same search document.
-
-<a name="WhichBlobsAreIndexed"></a>
- ## How to control which blobs are indexed You can control which blobs are indexed, and which are skipped, by the blob's file type or by setting properties on the blob themselves, causing the indexer to skip over them.
-### Include specific file extensions
-
-Use "indexedFileNameExtensions" to provide a comma-separated list of file extensions to index (with a leading dot). For example, to index only the .PDF and .DOCX blobs, do this:
+Include specific file extensions by setting `"indexedFileNameExtensions"` to a comma-separated list of file extensions (with a leading dot). Exclude specific file extensions by setting `"excludedFileNameExtensions"` to the extensions that should be skipped. If the same extension is in both lists, it will be excluded from indexing.
```http PUT /indexers/[indexer name]?api-version=2020-06-30 { "parameters" : { "configuration" : {
- "indexedFileNameExtensions" : ".pdf, .docx"
+ "indexedFileNameExtensions" : ".pdf, .docx",
+ "excludedFileNameExtensions" : ".png, .jpeg"
} } } ```
-### Exclude specific file extensions
+## How blobs are indexed
-Use "excludedFileNameExtensions" to provide a comma-separated list of file extensions to skip (again, with a leading dot). For example, to index all blobs except those with the .PNG and .JPEG extensions, do this:
+By default, most blobs are indexed as a single search document in the index, including blobs with structured content, such as JSON or CSV, which are indexed as a single chunk of text. However, for JSON or CSV documents that have an internal structure (delimiters), you can assign parsing modes to generate individual search documents for each line or element:
-```http
-PUT /indexers/[indexer name]?api-version=2020-06-30
-{
- "parameters" : {
- "configuration" : {
- "excludedFileNameExtensions" : ".png, .jpeg"
- }
- }
-}
-```
++ [Indexing JSON blobs](search-howto-index-json-blobs.md)
++ [Indexing CSV blobs](search-howto-index-csv-blobs.md)
-If both "indexedFileNameExtensions" and "excludedFileNameExtensions" parameters are present, the indexer first looks at "indexedFileNameExtensions", then at "excludedFileNameExtensions". If the same file extension is in both lists, it will be excluded from indexing.
+A compound or embedded document (such as a ZIP archive, a Word document with embedded Outlook email containing attachments, or a .MSG file with attachments) is also indexed as a single document. For example, all images extracted from the attachments of an .MSG file will be returned in the normalized_images field within the same search document.
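Whether images are extracted into `normalized_images` depends on the indexer's image handling settings. A hedged configuration sketch that turns on image extraction is shown below; verify the parameter values against the Create Indexer reference for your API version.

```json
{
    "parameters" : {
        "configuration" : {
            "dataToExtract" : "contentAndMetadata",
            "imageAction" : "generateNormalizedImages"
        }
    }
}
```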
### Add "skip" metadata the blob
search Search Howto Indexing Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-tables.md
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ [Azure Table Storage](../storage/tables/table-storage-overview.md)
-+ Tables with entities containing non-binary data for text-based indexing
++ Tables with entities containing non-binary textual content for text-based indexing. This indexer also supports [AI enrichment](cognitive-search-concept-intro.md) if you have binary files.

## Define the data source
-A primary difference between a table indexer and other indexers is the data source assignment. The data source definition specifies the type ("type": `"azuretable"`) and how to connect.
+A primary difference between a table indexer and other indexers is the data source assignment. The data source definition specifies "type": `"azuretable"`, a content path, and how to connect.
1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
A primary difference between a table indexer and other indexers is the data sour
1. Set "type" to `"azuretable"` (required).
-1. Set "credentials" to the connection string. The following examples show commonly used connection strings for connections using shared access keys or a [system-managed identity](search-howto-managed-identities-storage.md). Additional examples are in the next section.
-
- + `"connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;"`
-
- + `"connectionString" : "ResourceId=/subscriptions/[your subscription ID]/[your resource ID]/providers/Microsoft.Storage/storageAccounts/[your storage account];"`
+1. Set "credentials" to an Azure Storage connection string. The next section describes the supported formats.
1. Set "container" to the name of the table. 1. Optionally, set "query" to a filter on PartitionKey. This is a best practice that improves performance. If "query" is specified any other way, the indexer will execute a full table scan, resulting in poor performance if the tables are large.
-> [!TIP]
-> The Import data wizard will build a data source for you, including a valid connection string for system-assigned and shared key credentials. If you have trouble setting up the connection programmatically, [use the wizard](search-get-started-portal.md) as a syntax check.
+A data source definition can also include additional properties for [soft deletion policies](#soft-delete-using-custom-metadata) and [field mappings](search-indexer-field-mappings.md) if field names and types are not the same.
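As a sketch, a table data source that scopes indexing to a single partition might look like the following; the table name and partition filter are placeholders.

```json
{
    "name" : "my-table-datasource",
    "type" : "azuretable",
    "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
    "container" : { "name" : "my-table", "query" : "PartitionKey eq '123'" }
}
```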
<a name="Credentials"></a>
-### Credentials for Table Storage
+### Supported credentials and connection strings
+
+Indexers can connect to a table using the following connections.
-You can provide the credentials for the connection in one of these ways:
+**Full access storage account connection string**:
+`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }`
+
+You can get the connection string from the Storage account page in Azure portal by selecting **Access keys** in the left navigation pane. Make sure to select a full connection string and not just a key.
+ **Managed identity connection string**: `ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;` This connection string does not require an account key, but you must follow the instructions for [Setting up a connection to an Azure Storage account using a managed identity](search-howto-managed-identities-storage.md).
-+ **Full access storage account connection string**: `DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>` You can get the connection string from the Azure portal by going to the **Storage account blade** > **Settings** > **Keys** (for classic storage accounts) or **Settings** > **Access keys** (for Azure Resource Manager storage accounts).
- + **Storage account shared access signature connection string**: `TableEndpoint=https://<your account>.table.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=t&sp=rl` The shared access signature should have the list and read permissions on containers (tables in this case) and objects (table rows). + **Table shared access signature**: `ContainerSharedAccessUri=https://<your storage account>.table.core.windows.net/<table name>?tn=<table name>&sv=2016-05-31&sig=<the signature>&se=<the validity end time>&sp=r` The shared access signature should have query (read) permissions on the table.
For more information on storage shared access signatures, see [Using shared acce
> [!NOTE] > If you use shared access signature credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration or the indexer will fail with a "Credentials provided in the connection string are invalid or have expired" message.
-## Define fields in a search index
+## Add search fields to an index
+
+In a [search index](search-what-is-an-index.md), add fields to accept the content and metadata of your table entities.
1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store content from entities: ```http
- POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
+ POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
{
- "name" : "my-target-index",
- "fields": [
- { "name": "key", "type": "Edm.String", "key": true, "searchable": false },
+ "name" : "my-search-index",
+ "fields": [
+ { "name": "ID", "type": "Edm.String", "key": true, "searchable": false },
{ "name": "SomeColumnInMyTable", "type": "Edm.String", "searchable": true }
- ]
+ ]
} ```
-1. Check for field correspondence between entity fields and search fields. If names and types don't match, [add field mappings](search-indexer-field-mappings.md) to the indexer definition to ensure the source-to-destination path is clear.
1. Create a key field, but do not define field mappings to alternative unique strings in the table. A table indexer will populate the key field with concatenated partition and row keys from the table. For example, if a row's PartitionKey is `PK1` and RowKey is `RK1`, then the `Key` field's value is `PK1RK1`. If the partition key is null, just the row key is used.
-## Set properties on the indexer
+1. Create additional fields that correspond to entity fields. Using the same names and compatible [data types](/rest/api/searchservice/supported-data-types) minimizes the need for [field mappings](search-indexer-field-mappings.md).
-[Create Indexer](/rest/api/searchservice/create-indexer) connects a data source with a target search index and provides a schedule to automate the data refresh.
+## Configure the table indexer
-An indexer definition for Table Storage uses the global properties for data source, index, [schedule](search-howto-schedule-indexers.md), mapping functions for base-64 encoding, and any field mappings.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
-```http
-POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
+ ```http
+ POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+ {
+ "name" : "table-indexer",
+ "dataSourceName" : "my-table-datasource",
+ "targetIndexName" : "my-search-index",
+ "schedule" : { "interval" : "PT2H" }
+ }
+ ```
-{
- "name" : "table-indexer",
- "dataSourceName" : "table-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" }
-}
-```
+1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
## Change and deletion detection
-When you set up a table indexer to run on a schedule, it reindexes only new or updated rows, as determined by a row's `Timestamp` value. When indexing out of Azure Table Storage, you donΓÇÖt have to specify a change detection policy. Incremental indexing is enabled for you automatically.
+After an initial search index is created, you might want subsequent indexer jobs to pick up only new and changed documents. Fortunately, content in Azure Storage is timestamped, which gives indexers sufficient information for determining what's new and changed automatically. For search content that originates from Azure Table Storage, the indexer keeps track of the entity's `Timestamp` value and reindexes only new and changed content.
+
+Although change detection is a given, deletion detection is not. If you want to detect deleted entities, make sure to use a "soft delete" approach. If you delete the entities outright in a table, corresponding search documents will not be removed from the search index.
+
+## Soft delete using custom metadata
-To indicate that certain documents must be removed from the index, you can use a soft delete strategy. Instead of deleting a row, add a property to indicate that it's deleted, and set up a soft deletion detection policy on the data source. For example, the following policy considers that a row is deleted if the row has a property `IsDeleted` with the value `"true"`:
+To indicate that certain documents must be removed from the search index, you can use a soft delete strategy. Instead of deleting an entity, add a property to indicate that it's deleted, and set up a soft deletion detection policy on the data source. For example, the following policy considers that an entity is deleted if it has an `IsDeleted` property set to `"true"`:
```http PUT https://[service name].search.windows.net/datasources?api-version=2020-06-30
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/dns-normalization-schema.md
The most important fields in a DNS event are:
- [DnsQuery](#query), which reports the domain name for which the query was issued. -- The [SrcIpAddr](#srcipaddr) (aliased to [IpAddr](#ipaddr)), which represents the IP address from which the request was generated.
+- The [SrcIpAddr](#srcipaddr) (aliased to [IpAddr](#ipaddr)), which represents the IP address from which the request was generated. DNS servers typically provide the SrcIpAddr field, but DNS clients sometimes do not provide this field and only provide the [SrcHostname](#srchostname) field.
- [EventResultDetails](#eventresultdetails), which reports as to whether the request was successful and if not, why.
sentinel Ingestion Delay https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/ingestion-delay.md
The event is generated within the first look-back period, but isn't ingested in
## How to handle delay
+> [!NOTE]
+>
+> You can either solve the issue using the process described below, or implement Microsoft Sentinel's near-real-time detection (NRT) rules. For more information, see [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md).
+>
+ To solve the issue, you need to know the delay for your data type. For this example, you already know the delay is two minutes. For your own data, you can understand delay using the Kusto `ingestion_time()` function, and calculating the difference between **TimeGenerated** and the ingestion time. For more information, see [Calculate ingestion delay](#calculate-ingestion-delay).
For more information, see:
- [Customize alert details in Azure Sentinel](customize-alert-details.md) - [Manage template versions for your scheduled analytics rules in Azure Sentinel](manage-analytics-rule-templates.md) - [Use the health monitoring workbook](monitor-data-connector-health.md)-- [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md)
+- [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md)
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-about-schemas.md
The following fields are defined by ASIM for all schemas:
| <a name="eventtype"></a>**EventType** | Mandatory | Enumerated | Describes the operation reported by the record. Each schema documents the list of values valid for this field. | | **EventSubType** | Optional | Enumerated | Describes a subdivision of the operation reported in the [EventType](#eventtype) field. Each schema documents the list of values valid for this field. | | <a name="eventresult"></a>**EventResult** | Mandatory | Enumerated | One of the following values: **Success**, **Partial**, **Failure**, **NA** (Not Applicable).<br> <br>The value might be provided in the source record by using different terms, which should be normalized to these values. Alternatively, the source might provide only the [EventResultDetails](#eventresultdetails) field, which should be analyzed to derive the EventResult value.<br><br>Example: `Success`|
-| <a name="eventresultdetails"></a>**EventResultDetails** | Mandatory | Alias | Reason or details for the result reported in the [EventResult](#eventresult) field. Each schema documents the list of values valid for this field.<br><br>Example: `NXDOMAIN`|
+| <a name="eventresultdetails"></a>**EventResultDetails** | Mandatory | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Each schema documents the list of values valid for this field.<br><br>Example: `NXDOMAIN`|
| **EventOriginalUid** | Optional | String | A unique ID of the original record, if provided by the source.<br><br>Example: `69f37748-ddcd-4331-bf0f-b137f1ea83b`| | **EventOriginalType** | Optional | String | The original event type or ID, if provided by the source. For example, this field will be used to store the original Windows event ID.<br><br>Example: `4624`| | <a name="eventoriginalresultdetails"></a>**EventOriginalResultDetails** | Optional | String | The original result details provided by the source. This value is used to derive [EventResultDetails](#eventresultdetails), which should have only one of the values documented for each schema. |
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
In this section, you'll add code to retrieve messages from the queue.
string body = args.Message.Body.ToString(); Console.WriteLine($"Received: {body}");
- // complete the message. messages is deleted from the queue.
+ // complete the message. message is deleted from the queue.
await args.CompleteMessageAsync(args.Message); }
spring-cloud How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-config-server.md
All configurable properties used to set up private Git repository with basic aut
> [!NOTE] > Many `Git` repository servers support the use of tokens rather than passwords for HTTP Basic Authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Cloud.
-> Github has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for Github. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
+> GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
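For example, here's an illustrative sketch of supplying a personal access token as the password when you set the default repository with the Azure CLI. The service, resource group, repository, and user names are placeholders.

```azurecli
# Configure the Config Server Git repository with a personal access token as the password.
az spring-cloud config-server git set \
  --name my-spring-cloud-service \
  --resource-group my-resource-group \
  --uri https://github.com/my-org/my-config-repo \
  --label main \
  --username my-github-user \
  --password <personal-access-token>
```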
### Other Git repositories
Now that your configuration files are saved in a repository, you need to connect
> [!CAUTION] > Some Git repository servers use a *personal-token* or an *access-token*, such as a password, for **Basic Authentication**. You can use that kind of token as a password in Azure Spring Cloud, because it will never expire. But for other Git repository servers, such as Bitbucket and Azure DevOps Server, the *access-token* expires in one or two hours. This means that the option isn't viable when you use those repository servers with Azure Spring Cloud.
- > GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for Github. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
+ > GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
* **SSH**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **SSH**, and then enter your **Private key**. Optionally, specify your **Host key** and **Host key algorithm**. Be sure to include your public key in your Config Server repository. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
static-web-apps Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/authentication-authorization.md
To configure Static Web Apps to use an API function as the role assignment funct
After defining the `rolesSource` property in your app's configuration, add an [API function](apis.md) in your static web app at the path you specified. You can use a managed function app or a bring your own function app.
-Each time a user successfully authenticates with an identity provider, the specified function is called. The function is passed a JSON object in the request body that contains the user's information from the provider. For some identity providers, the user information also includes an `accessToken` that the function can use to make API calls using the user's identity.
+Each time a user successfully authenticates with an identity provider, the specified function is called via the POST method. The function is passed a JSON object in the request body that contains the user's information from the provider. For some identity providers, the user information also includes an `accessToken` that the function can use to make API calls using the user's identity.
This is an example payload from Azure Active Directory:
static-web-apps Local Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/local-development.md
The following chart shows how requests are handled locally.
- **Responses** from all services are returned to the browser as if they were all a single application.
+This article details the steps for running a Node.js-based application, but the process is the same for any language or environment. Once you start the UI and the Azure Functions API apps independently, start the Static Web Apps CLI and point it to the running apps by using the following command:
+
+```console
+swa start http://localhost:<DEV-SERVER-PORT-NUMBER> --api-location http://localhost:7071
+```
+ ## Prerequisites - **Existing Azure Static Web Apps site**: If you don't have one, begin with the [vanilla-api](https://github.com/staticwebdev/vanilla-api/generate?return_to=/staticwebdev/vanilla-api/generate) starter app.
For more information on different debugging scenarios, with guidance on how to c
### Sample debugging configuration
-Visual Studio Code uses a file to enable debugging sessions in the editor. If Visual Studio Code doesn't generate a *launch.json* file for you, you can place the the following configuration in *.vscode/launch.json*.
+Visual Studio Code uses a file to enable debugging sessions in the editor. If Visual Studio Code doesn't generate a *launch.json* file for you, you can place the following configuration in *.vscode/launch.json*.
```json {
storage Network File System Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-known-issues.md
This article describes limitations and known issues of Network File System (NFS)
- GRS, GZRS, and RA-GRS redundancy options aren't supported when you create an NFS 3.0 storage account.
+- NFS 3.0 and SSH File Transfer Protocol (SFTP) can't be enabled on the same storage account.
+ ## NFS 3.0 features The following NFS 3.0 features aren't yet supported.
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
SFTP support in Azure Blob Storage currently limits its cryptographic algorithm
- Symbolic links are not supported. -- PowerShell and Azure CLI and not supported. You can leverage Portal and ARM templates for Public Preview.
+- PowerShell and Azure CLI are not supported. You can use the Azure portal and ARM templates during the public preview.
- `ssh-keyscan` is not supported.
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
If you enabled password authentication, then the Azure generated password appears in a dialog box after the local user has been added. > [!IMPORTANT]
- > You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it.
+ > You can't retrieve this password later, so make sure you copy the password, and then store it in a place where you can find it.
+ >
+ > If you do lose your password, you can generate a new password.
If you chose to generate a new key pair, then you'll be prompted to download the private key of that key pair after the local user has been added.
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-planning.md
We'll use an example to illustrate how to estimate the amount of free space woul
1. NTFS allocates a cluster size for each of the tiered files. 1 million files * 4 KiB cluster size = 4,000,000 KiB (4 GiB) > [!Note] > The space occupied by tiered files is allocated by NTFS. Therefore, it will not show up in any UI.
-3. Sync metadata occupies a cluster size per item. (1 million files + 100,000 directories) * 4 KiB cluster size = 4,400,000 KiB (4.4 GiB)
+3. Sync metadata occupies a cluster size per item. (1 million files + 100,000 directories) * 4 KB cluster size = 4,400,000 KiB (4.4 GiB)
4. Azure File Sync heatstore occupies 1.1 KiB per file. 1 million files * 1.1 KiB = 1,100,000 KiB (1.1 GiB) 5. Volume free space policy is 20%. 1000 GiB * 0.2 = 200 GiB
These increases in both the number of recalls and the amount of data being recal
* [Consider firewall and proxy settings](file-sync-firewall-and-proxy.md) * [Deploy Azure Files](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) * [Deploy Azure File Sync](file-sync-deployment-guide.md)
-* [Monitor Azure File Sync](file-sync-monitoring.md)
+* [Monitor Azure File Sync](file-sync-monitoring.md)
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Use the [Synapse workspace deployment](https://marketplace.visualstudio.com/item
:::image type="content" source="media/create-release-artifacts-deployment.png" lightbox="media/create-release-artifacts-deployment.png" alt-text="Screenshot that shows setting up the Synapse deployment task for the workspace.":::
+1. The deployment of managed private endpoints is only supported in version 2.x. Make sure you select the right version and select the **Deploy managed private endpoints in template** checkbox.
+
+   :::image type="content" source="media/deploy-private-endpoints.png" alt-text="Screenshot that shows selecting version 2.x to deploy private endpoints with the Synapse deployment task.":::
+
+1. To manage triggers, you can use the trigger toggle to stop the triggers before deployment, and you can add a task to restart the triggers after the deployment task, as shown in the example after this step.
+
+ :::image type="content" source="media/toggle-trigger.png" alt-text="Screenshot that shows managing triggers before and after deployment.":::
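For example, here's a minimal sketch of stopping and restarting a trigger with the Azure CLI around the deployment. The workspace and trigger names are placeholders.

```azurecli
# Stop the trigger before the Synapse workspace deployment task runs.
az synapse trigger stop --workspace-name my-synapse-workspace --name DailyLoadTrigger

# ...deploy the workspace artifacts...

# Restart the trigger after the deployment task completes.
az synapse trigger start --workspace-name my-synapse-workspace --name DailyLoadTrigger
```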
+ > [!IMPORTANT] > In CI/CD scenarios, the integration runtime type in different environments must be the same. For example, if you have a self-hosted integration runtime in the development environment, the same integration runtime must be self-hosted in other environments, such as in test and production. Similarly, if you're sharing integration runtimes across multiple stages, the integration runtimes must be linked and self-hosted in all environments, such as in development, test, and production.
synapse-analytics Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/metadata/database.md
Last updated 10/05/2021--++
synapse-analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/metadata/overview.md
Last updated 10/05/2021--++
synapse-analytics Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/metadata/table.md
Last updated 10/13/2021--++
synapse-analytics Browse Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/browse-partners.md
Last updated 07/14/2021--++
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/business-intelligence.md
Last updated 07/09/2021--++
synapse-analytics Compatibility Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/compatibility-issues.md
Title: Compatibility issues with third-party applications and Azure Synapse Analytics description: Describes known issues that third-party applications may find with Azure Synapse -+ Last updated 11/18/2020 -+
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/data-integration.md
Title: Data integration partners description: Lists of third-party partners with data integration solutions that support Azure Synapse Analytics. -+ Last updated 03/27/2019-+
synapse-analytics Data Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/data-management.md
Title: Data management partners description: Lists of third-party data management partners with solutions that support Azure Synapse Analytics.-+ Last updated 04/17/2018-+
synapse-analytics Machine Learning Ai https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/machine-learning-ai.md
Title: Machine learning and AI partners description: Lists of third-party machine learning and artificial intelligence partners with solutions that support Azure Synapse Analytics.-+ Last updated 06/22/2020-+
synapse-analytics System Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/system-integration.md
Title: System integration partners description: List of industry system integrators building customer solutions with Azure Synapse Analytics -+ Last updated 11/24/2020 -+
synapse-analytics Sql Data Warehouse Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md
Previously updated : 01/22/2021 Last updated : 01/18/2022 -+ # What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?
Dedicated SQL pool (formerly SQL DW) represents a collection of analytic resourc
Once your dedicated SQL pool is created, you can import big data with simple [PolyBase](/sql/relational-databases/polybase/polybase-guide?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics. As you integrate and analyze the data, dedicated SQL pool (formerly SQL DW) will become the single version of truth your business can count on for faster and more robust insights. > [!NOTE]
->Explore the [Azure Synapse Analytics documentation](../overview-what-is.md).
+> Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW), see [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](workspace-connected-create.md). Explore the [Azure Synapse Analytics documentation](../overview-what-is.md) and [Get Started with Azure Synapse](../get-started.md).
> ## Key component of a big data solution
The analysis results can go to worldwide reporting databases or applications. Bu
- Quickly [create a dedicated SQL pool](create-data-warehouse-portal.md) - [Load sample data](./load-data-from-azure-blob-storage-using-copy.md). - Explore [Videos](https://azure.microsoft.com/documentation/videos/index/?services=sql-data-warehouse)
+- [Get Started with Azure Synapse](../get-started.md)
Or look at some of these other Azure Synapse resources.
synapse-analytics Workspace Connected Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/workspace-connected-create.md
Title: Enabling Synapse workspace features
description: This document describes how a user can enable the Synapse workspace features on an existing dedicated SQL pool (formerly SQL DW). -+
-# Enabling Synapse workspace features for a dedicated SQL pool (formerly SQL DW)
+# Enable Synapse workspace features for a dedicated SQL pool (formerly SQL DW)
-All SQL data warehouse users can now access and use an existing dedicated SQL pool (formerly SQL DW) instance via the Synapse Studio and Workspace. Users can use the Synapse Studio and Workspace without impacting automation, connections, or tooling. This article explains how an existing Azure Synapse Analytics user can enable the Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW). The user can expand their existing Analytics solution by taking advantage of the new feature-rich capabilities now available via the Synapse workspace and Studio.
+All SQL data warehouse users can now access and use an existing dedicated SQL pool (formerly SQL DW) instance via the Synapse Studio and Workspace. Users can use the Synapse Studio and Workspace without impacting automation, connections, or tooling. This article explains how an existing Azure Synapse Analytics user can enable the Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW). The user can expand their existing Analytics solution by taking advantage of the new feature-rich capabilities now available via the Synapse workspace and Studio.
+
+Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. This article is a guide to enable workspace features for an existing dedicated SQL pool (formerly SQL DW).
## Prerequisites Before you enable the Synapse workspace features on your data warehouse, you must ensure that you have the following
Before you enable the Synapse workspace features on your data warehouse, you mus
Sign in to the [Azure portal](https://portal.azure.com/).
-## Enabling Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW)
+## Enable Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW)
The Synapse workspace features can be enabled on any existing dedicated SQL pool (formerly SQL DW) in a supported region. This capability is only available via the Azure portal.
Follow these steps to create a Synapse workspace for your existing data warehous
The following steps must be completed to ensure that your existing dedicated SQL pool (formerly SQL DW) instances can be accessed via the Synapse Studio. 1. In the Synapse workspace overview page, select **Connected server**. The **Connected server** takes you to the connected SQL Logical server that hosts your data warehouses. In the essential menu, select **Connected server**. 2. Open **Firewalls and virtual networks** and ensure that your client IP or a predetermined IP range has access to the logical server.
-3. Open **Active Directory admin** and ensure that an AAD admin has been set on the logical server.
+3. Open **Active Directory admin** and ensure that an Azure AD admin has been set on the logical server.
4. Select one of the dedicated SQL pool (formerly SQL DW) instances hosted on the logical server. On the overview page, select **Launch Synapse Studio**, or go to [Synapse Studio](https://web.azuresynapse.net) and sign in to your workspace. 5. Open the **Data hub** and expand the dedicated SQL pool in the Object explorer to ensure that you have access and can query your data warehouse.
synapse-analytics Develop Storage Files Spark Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-storage-files-spark-tables.md
Last updated 10/05/2021--++
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-external-tables.md
Title: Use external tables with Synapse SQL description: Reading or writing data files with external tables in Synapse SQL -+ Last updated 07/23/2021-+
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/custom-script-linux.md
Title: Run Custom Script Extension on Linux VMs in Azure
-description: Automate Linux VM configuration tasks by using the Custom Script Extension v2
+description: Automate Linux VM configuration tasks by using the Custom Script Extension Version 2.
Last updated 04/25/2018
# Use the Azure Custom Script Extension Version 2 with Linux virtual machines
-The Custom Script Extension Version 2 downloads and runs scripts on Azure virtual machines. This extension is useful for post-deployment configuration, software installation, or any other configuration/management task. You can download scripts from Azure Storage or another accessible internet location, or you can provide them to the extension runtime.
-The Custom Script Extension integrates with Azure Resource Manager templates. You can also run it by using Azure CLI, PowerShell, or the Azure Virtual Machines REST API.
+The Custom Script Extension Version 2 downloads and runs scripts on Azure virtual machines (VMs). This extension is useful for post-deployment configuration, software installation, or any other configuration or management task. You can download scripts from Azure Storage or another accessible internet location, or you can provide them to the extension runtime.
-This article details how to use the Custom Script Extension from Azure CLI, and how to run the extension by using an Azure Resource Manager template. This article also provides troubleshooting steps for Linux systems.
+The Custom Script Extension integrates with Azure Resource Manager templates. You can also run it by using the Azure CLI, PowerShell, or the Azure Virtual Machines REST API.
+This article details how to use the Custom Script Extension from the Azure CLI, and how to run the extension by using an Azure Resource Manager template. This article also provides troubleshooting steps for Linux systems.
There are two Linux Custom Script Extensions:
-* Version 1 - Microsoft.OSTCExtensions.CustomScriptForLinux
-* Version 2 - Microsoft.Azure.Extensions.CustomScript
+* Version 1: Microsoft.OSTCExtensions.CustomScriptForLinux
+* Version 2: Microsoft.Azure.Extensions.CustomScript
-Please switch new and existing deployments to use the new version 2 instead. The new version is intended to be a drop-in replacement. Therefore, the migration is as easy as changing the name and version, you do not need to change your extension configuration.
+Please switch new and existing deployments to use Version 2. The new version is a drop-in replacement. The migration is as easy as changing the name and version. You don't need to change your extension configuration.
+## Prerequisites
-### Operating System
+### Operating system
-The Custom Script Extension for Linux will run on the extension supported extension OS's, for more information, see this [article](../linux/endorsed-distros.md).
+The Custom Script Extension for Linux will run on supported operating systems. For more information, see [Endorsed Linux distributions on Azure](../linux/endorsed-distros.md).
-### Script Location
+### Script location
-You can use the extension to use your Azure Blob storage credentials, to access Azure Blob storage. Alternatively, the script location can be any where, as long as the VM can route to that end point, such as GitHub, internal file server etc.
+You can set the extension to use your Azure Blob Storage credentials so that it can access Azure Blob Storage. The script location can be anywhere, as long as the VM can route to that endpoint (for example, GitHub or an internal file server).
-### Internet Connectivity
-If you need to download a script externally such as GitHub or Azure Storage, then additional firewall/Network Security Group ports need to be opened. For example if your script is located in Azure Storage, you can allow access using Azure NSG Service Tags for [Storage](../../virtual-network/network-security-groups-overview.md#service-tags).
+### Internet connectivity
-If your script is on a local server, then you may still need additional firewall/Network Security Group ports need to be opened.
+If you need to download a script externally, such as from GitHub or Azure Storage, then you need to open additional firewall or network security group (NSG) ports. For example, if your script is located in Azure Storage, you can allow access by using Azure NSG [service tags for Storage](../../virtual-network/network-security-groups-overview.md#service-tags).
-### Tips and Tricks
-* The highest failure rate for this extension is due to syntax errors in the script, test the script runs without error, and also put in additional logging into the script to make it easier to find where it failed.
-* Write scripts that are idempotent, so if they get run again more than once accidentally, it will not cause system changes.
-* Ensure the scripts do not require user input when they run.
-* There is 90 mins allowed for the script to run, anything longer will result in a failed provision of the extension.
-* Do not put reboots inside the script, this will cause issues with other extensions that are being installed, and post reboot, the extension will not continue after the restart.
-* It is not recommended to run a script that will cause a stop or update of the VM Agent. This might leave the extension in a Transitioning state and lead to a timeout.
-* If you have a script that will cause a reboot, then install applications and run scripts etc. You should schedule the reboot using a Cron job, or using tools such as DSC, or Chef, Puppet extensions.
-* The extension will only run a script once, if you want to run a script on every boot, then you can use [cloud-init image](../linux/using-cloud-init.md) and use a [Scripts Per Boot](https://cloudinit.readthedocs.io/en/latest/topics/modules.html#scripts-per-boot) module. Alternatively, you can use the script to create a SystemD service unit.
-* You can only have one version of an extension applied to the VM. In order to run a second custom script, you can update the existing extension with new configuration. Alternatively, you can remove the custom script extension and reapply it again with the updated script.
-* If you want to schedule when a script will run, you should use the extension to create a Cron job.
-* When the script is running, you will only see a 'transitioning' extension status from the Azure portal or CLI. If you want more frequent status updates of a running script, you will need to create your own solution.
-* Custom Script extension does not natively support proxy servers, however you can use a file transfer tool that supports proxy servers within your script, such as *Curl*.
-* Be aware of non default directory locations that your scripts or commands may rely on, have logic to handle this.
+If your script is on a local server, you might still need to open additional firewall or NSG ports.
+### Tips and tricks
-## Extension schema
+* The highest failure rate for this extension is due to syntax errors in the script. Test that the script runs without errors. Put additional logging into the script to make it easier to find failures.
+* Write scripts that are idempotent, so running them more than once accidentally won't cause system changes.
+* Ensure that the scripts don't require user input when they run.
+* The script is allowed 90 minutes to run. Anything longer will result in a failed provision of the extension.
+* Don't put reboots inside the script. This action will cause problems with other extensions that are being installed, and the extension won't continue after the reboot.
+* If you have a script that will cause a reboot before installing applications and running scripts, schedule the reboot by using a Cron job or by using tools such as DSC, Chef, or Puppet extensions.
+* Don't run a script that will cause a stop or update of the VM agent. It might leave the extension in a transitioning state and lead to a timeout.
+* The extension will run a script only once. If you want to run a script on every startup, you can use a [cloud-init image](../linux/using-cloud-init.md) and use a [Scripts Per Boot](https://cloudinit.readthedocs.io/en/latest/topics/modules.html#scripts-per-boot) module. Alternatively, you can use the script to create a [systemd](https://systemd.io/) service unit.
+* You can have only one version of an extension applied to the VM. To run a second custom script, you can update the existing extension with a new configuration. Alternatively, you can remove the custom script extension and reapply it with the updated script.
+* If you want to schedule when a script will run, use the extension to create a Cron job.
+* When the script is running, you'll only see a "transitioning" extension status from the Azure portal or CLI. If you want more frequent status updates for a running script, you'll need to create your own solution.
+* The Custom Script Extension doesn't natively support proxy servers. However, you can use a file transfer tool that supports proxy servers within your script, such as *Curl*.
+* Be aware of non-default directory locations that your scripts or commands might rely on. Have logic to handle this situation.
-The Custom Script Extension configuration specifies things like script location and the command to be run. You can store this configuration in configuration files, specify it on the command line, or specify it in an Azure Resource Manager template.
+## Extension schema
-You can store sensitive data in a protected configuration, which is encrypted and only decrypted inside the virtual machine. The protected configuration is useful when the execution command includes secrets such as a password.
+The Custom Script Extension configuration specifies things like script location and the command to be run. You can store this information in configuration files, specify it on the command line, or specify it in an Azure Resource Manager template.
-These items should be treated as sensitive data and specified in the extensions protected setting configuration. Azure VM extension protected setting data is encrypted, and only decrypted on the target virtual machine.
+You can store sensitive data in a protected configuration, which is encrypted and only decrypted on the target virtual machine. The protected configuration is useful when the execution command includes secrets such as a password. Here's an example:
```json {
These items should be treated as sensitive data and specified in the extensions
``` >[!NOTE]
-> managedIdentity property **must not** be used in conjunction with storageAccountName or storageAccountKey properties
+> The `managedIdentity` property *must not* be used in conjunction with the `storageAccountName` or `storageAccountKey` property.
### Property values
-| Name | Value / Example | Data Type |
+| Name | Value or example | Data type |
| - | - | - |
-| apiVersion | 2019-03-01 | date |
-| publisher | Microsoft.Azure.Extensions | string |
-| type | CustomScript | string |
-| typeHandlerVersion | 2.1 | int |
-| fileUris (e.g) | `https://github.com/MyProject/Archive/MyPythonScript.py` | array |
-| commandToExecute (e.g) | python MyPythonScript.py \<my-param1> | string |
-| script | IyEvYmluL3NoCmVjaG8gIlVwZGF0aW5nIHBhY2thZ2VzIC4uLiIKYXB0IHVwZGF0ZQphcHQgdXBncmFkZSAteQo= | string |
-| skipDos2Unix (e.g) | false | boolean |
-| timestamp (e.g) | 123456789 | 32-bit integer |
-| storageAccountName (e.g) | examplestorageacct | string |
-| storageAccountKey (e.g) | TmJK/1N3AbAZ3q/+hOXoi/l73zOqsaxXDhqa9Y83/v5UpXQp2DQIBuv2Tifp60cE/OaHsJZmQZ7teQfczQj8hg== | string |
-| managedIdentity (e.g) | { } or { "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" } or { "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" } | json object |
+| `apiVersion` | `2019-03-01` | date |
+| `publisher` | `Microsoft.Azure.Extensions` | string |
+| `type` | `CustomScript` | string |
+| `typeHandlerVersion` | `2.1` | int |
+| `fileUris` | `https://github.com/MyProject/Archive/MyPythonScript.py` | array |
+| `commandToExecute` | `python MyPythonScript.py \<my-param1>` | string |
+| `script` | `IyEvYmluL3NoCmVjaG8gIlVwZGF0aW5nIHBhY2thZ2VzIC4uLiIKYXB0IHVwZGF0ZQphcHQgdXBncmFkZSAteQo=` | string |
+| `skipDos2Unix` | `false` | Boolean |
+| `timestamp` | `123456789` | 32-bit integer |
+| `storageAccountName` | `examplestorageacct` | string |
+| `storageAccountKey` | `TmJK/1N3AbAZ3q/+hOXoi/l73zOqsaxXDhqa9Y83/v5UpXQp2DQIBuv2Tifp60cE/OaHsJZmQZ7teQfczQj8hg==` | string |
+| `managedIdentity` | `{ }` or `{ "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" }` or `{ "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" }` | JSON object |
### Property value details
-* `apiVersion`: The most up to date apiVersion can be found using [Resource Explorer](https://resources.azure.com/) or from Azure CLI using the following command `az provider list -o json`
-* `skipDos2Unix`: (optional, boolean) skip dos2unix conversion of script-based file URLs or script.
-* `timestamp` (optional, 32-bit integer) use this field only to trigger a re-run of the
- script by changing value of this field. Any integer value is acceptable; it must only be different than the previous value.
-* `commandToExecute`: (**required** if script not set, string) the entry point script to execute. Use
- this field instead if your command contains secrets such as passwords.
-* `script`: (**required** if commandToExecute not set, string)a base64 encoded (and optionally gzip'ed) script executed by /bin/sh.
-* `fileUris`: (optional, string array) the URLs for file(s) to be downloaded.
-* `storageAccountName`: (optional, string) the name of storage account. If you
- specify storage credentials, all `fileUris` must be URLs for Azure Blobs.
-* `storageAccountKey`: (optional, string) the access key of storage account
-* `managedIdentity`: (optional, json object) the [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for downloading file(s)
- * `clientId`: (optional, string) the client ID of the managed identity
- * `objectId`: (optional, string) the object ID of the managed identity
--
-The following values can be set in either public or protected settings, the extension will reject any configuration where the values below are set in both public and protected settings.
+
+| Property | Optional or required | Details |
+| - | - | - |
+| `apiVersion` | Not applicable | You can find the most up-to-date API version by using [Resource Explorer](https://resources.azure.com/) or by using the command `az provider list -o json` in the Azure CLI. |
+| `fileUris` | Optional | URLs for files to be downloaded. |
+| `commandToExecute` | Required if `script` isn't set | The entry point script to run. Use this property instead of `script` if your command contains secrets such as passwords. |
+| `script` | Required if `commandToExecute` isn't set | A Base64-encoded (and optionally gzip'ed) script run by `/bin/sh`. |
+| `skipDos2Unix` | Optional | Set this value to `false` if you want to skip dos2unix conversion of script-based file URLs or scripts. |
+| `timestamp` | Optional | Change this value only to trigger a rerun of the script. Any integer value is acceptable, as long as it's different from the previous value. |
+| `storageAccountName` | Optional | The name of the storage account. If you specify storage credentials, all `fileUris` values must be URLs for Azure blobs. |
+| `storageAccountKey` | Optional | The access key of the storage account. |
+| `managedIdentity` | Optional | The [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for downloading files:<br><br>`clientId` (optional, string): The client ID of the managed identity.<br><br>`objectId` (optional, string): The object ID of the managed identity.|
+
+You can set the following values in either public or protected settings. The extension will reject any configuration where these values are set in both public and protected settings.
+ * `commandToExecute` * `script` * `fileUris`
-Using public settings maybe useful for debugging, but it is strongly recommended that you use protected settings.
+Using public settings might be useful for debugging, but we strongly recommend that you use protected settings.
-Public settings are sent in clear text to the VM where the script will be executed. Protected settings are encrypted using a key known only to the Azure and the VM. The settings are saved to the VM as they were sent, i.e. if the settings were encrypted they are saved encrypted on the VM. The certificate used to decrypt the encrypted values is stored on the VM, and used to decrypt settings (if necessary) at runtime.
+Public settings are sent in clear text to the VM where the script will be run. Protected settings are encrypted through a key known only to Azure and the VM. The settings are saved to the VM as they were sent. That is, if the settings were encrypted, they're saved encrypted on the VM. The certificate that's used to decrypt the encrypted values is stored on the VM. The certificate is also used to decrypt settings (if necessary) at runtime.
#### Property: skipDos2Unix
-The default value is false, which means dos2unix conversion **is** executed.
+The default value is `false`, which means dos2unix conversion *is* executed.
-The previous version of CustomScript, Microsoft.OSTCExtensions.CustomScriptForLinux, would automatically convert DOS files to UNIX files by translating `\r\n` to `\n`. This translation still exists, and is on by default. This conversion is applied to all files downloaded from fileUris or the script setting based on any of the following criteria.
+The previous version of the Custom Script Extension, Microsoft.OSTCExtensions.CustomScriptForLinux, would automatically convert DOS files to UNIX files by translating `\r\n` to `\n`. This translation still exists and is on by default. This conversion is applied to all files downloaded from `fileUris` or the script setting based on either of the following criteria:
-* If the extension is one of `.sh`, `.txt`, `.py`, or `.pl` it will be converted. The script setting will always match this criteria because it is assumed to be a script executed with /bin/sh, and is saved as script.sh on the VM.
-* If the file starts with `#!`.
+* The extension is .sh, .txt, .py, or .pl. The script setting will always match this criterion because it's assumed to be a script run with `/bin/sh`. The script setting is saved as *script.sh* on the VM.
+* The file starts with `#!`.
-The dos2unix conversion can be skipped by setting the skipDos2Unix to true.
+You can skip the dos2unix conversion by setting `skipDos2Unix` to `true`:
```json {
The dos2unix conversion can be skipped by setting the skipDos2Unix to true.
} ```
-#### Property: script
+#### Property: script
+
+The Custom Script Extension supports execution of a user-defined script. The `script` setting combines `commandToExecute` and `fileUris` into a single setting. Instead of having to set up a file for download from Azure Storage or a GitHub gist, you can simply encode the script as a
+setting. You can use `script` to replace `commandToExecute` and `fileUris`.
-CustomScript supports execution of a user-defined script. The script settings to combine commandToExecute and fileUris into a single setting. Instead of the having to setup a file for download from Azure storage or GitHub gist, you can simply encode the script as a
-setting. Script can be used to replaced commandToExecute and fileUris.
+Here are some requirements:
-The script **must** be base64 encoded. The script can **optionally** be gzip'ed. The script setting can be used in public or protected settings. The maximum size of the script parameter's data is 256 KB. If the script exceeds this size it will not be executed.
+- The script *must* be Base64 encoded.
+- The script can *optionally* be gzip'ed.
+- You can use the script setting in public or protected settings.
+- The maximum size of the script parameter's data is 256 KB. If the script exceeds this size, it won't be run.
-For example, given the following script saved to the file /script.sh/.
+For example, the following script is saved to the file */script.sh*:
```sh #!/bin/sh
apt update
apt upgrade -y ```
-The correct CustomScript script setting would be constructed by taking the output of the following command.
+You would construct the correct Custom Script Extension script setting by taking the output of the following command:
```sh cat script.sh | base64 -w0
cat script.sh | base64 -w0
} ```
-The script can optionally be gzip'ed to further reduce size (in most cases). (CustomScript auto-detects the use of gzip compression.)
+In most cases, the script can optionally be gzip'ed to further reduce size. The Custom Script Extension automatically detects the use of gzip compression.
```sh cat script | gzip -9 | base64 -w 0 ```
-CustomScript uses the following algorithm to execute a script.
+The Custom Script Extension uses the following algorithm to run a script:
- 1. assert the length of the script's value does not exceed 256 KB.
- 1. base64 decode the script's value
- 1. _attempt_ to gunzip the base64 decoded value
- 1. write the decoded (and optionally decompressed) value to disk (/var/lib/waagent/custom-script/#/script.sh)
- 1. execute the script using _/bin/sh -c /var/lib/waagent/custom-script/#/script.sh.
+ 1. Assert that the length of the script's value does not exceed 256 KB.
+ 1. Base64 decode the script's value.
+ 1. _Try_ to gunzip the Base64-decoded value.
+ 1. Write the decoded (and optionally decompressed) value to disk (*/var/lib/waagent/custom-script/#/script.sh*).
 + 1. Run the script by using `/bin/sh -c /var/lib/waagent/custom-script/#/script.sh`.
+
+#### Property: managedIdentity
-#### Property: managedIdentity
> [!NOTE]
-> This property **must** be specified in protected settings only.
+> This property *must* be specified in protected settings only.
-CustomScript (version 2.1 onwards) supports [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for downloading file(s) from URLs provided in the "fileUris" setting. It allows CustomScript to access Azure Storage private blobs or containers without the user having to pass secrets like SAS tokens or storage account keys.
+The Custom Script Extension (version 2.1 and later) supports [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for downloading files from URLs provided in the `fileUris` setting. It allows the Custom Script Extension to access Azure Storage private blobs or containers without the user having to pass secrets like shared access signature (SAS) tokens or storage account keys.
-To use this feature, the user must add a [system-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) or [user-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-user-assigned-identity) identity to the VM or VMSS where CustomScript is expected to run, and [grant the managed identity access to the Azure Storage container or blob](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access).
+To use this feature, the user must add a [system-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) or [user-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-user-assigned-identity) identity to the VM or virtual machine scale set where the Custom Script Extension is expected to run. The user must then [grant the managed identity access to the Azure Storage container or blob](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access).
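For example, here's a minimal sketch of granting the identity read access to blobs by using the Azure CLI. The principal ID, subscription, resource group, and storage account are placeholders.

```azurecli
# Allow the VM's managed identity to read blobs in the storage account.
az role assignment create \
  --assignee <managed-identity-principal-id> \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```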
-To use the system-assigned identity on the target VM/VMSS, set "managedidentity" field to an empty json object.
+To use the system-assigned identity on the target VM or virtual machine scale set, set `managedIdentity` to an empty JSON object.
> Example: >
To use the system-assigned identity on the target VM/VMSS, set "managedidentity"
> } > ```
-To use the user-assigned identity on the target VM/VMSS, configure "managedidentity" field with the client ID or the object ID of the managed identity.
+To use the user-assigned identity on the target VM or virtual machine scale set, configure `managedIdentity` with the client ID or the object ID of the managed identity.
> Examples: >
To use the user-assigned identity on the target VM/VMSS, configure "managedident
> ``` > [!NOTE]
-> managedIdentity property **must not** be used in conjunction with storageAccountName or storageAccountKey properties
+> The `managedIdentity` property *must not* be used in conjunction with the `storageAccountName` or `storageAccountKey` property.
## Template deployment
-Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Custom Script Extension during an Azure Resource Manager template deployment. A sample template that includes the Custom Script Extension can be found here, [GitHub](https://github.com/Microsoft/dotnet-core-sample-templates/tree/master/dotnet-core-music-linux).
+You can deploy Azure VM extensions by using Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Custom Script Extension during the template's deployment. You can find a sample template that includes the Custom Script Extension on [GitHub](https://github.com/Microsoft/dotnet-core-sample-templates/tree/master/dotnet-core-music-linux).
```json
Azure VM extensions can be deployed with Azure Resource Manager templates. The J
>These property names are case-sensitive. To avoid deployment problems, use the names as shown here. ## Azure CLI
-When you're using Azure CLI to run the Custom Script Extension, create a configuration file or files. At a minimum, you must have 'commandToExecute'.
+
+When you're using the Azure CLI to run the Custom Script Extension, create a configuration file or files. At a minimum, you must have `commandToExecute`.
```azurecli az vm extension set \
az vm extension set \
--protected-settings ./script-config.json ```
-Optionally, you can specify the settings in the command as a JSON formatted string. This allows the configuration to be specified during execution and without a separate configuration file.
+Optionally, you can specify the settings in the command as a JSON-formatted string. This allows the configuration to be specified during execution and without a separate configuration file.
```azurecli az vm extension set \
az vm extension set \
--protected-settings '{"fileUris": ["https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-linux/scripts/config-music.sh"],"commandToExecute": "./config-music.sh"}' ```
-### Azure CLI examples
-
-#### Public configuration with script file
+### Example: Public configuration with script file
```json {
az vm extension set \
--settings ./script-config.json ```
-#### Public configuration with no script file
+### Example: Public configuration with no script file
```json {
az vm extension set \
--settings ./script-config.json ```
-#### Public and protected configuration files
+### Example: Public and protected configuration files
-You use a public configuration file to specify the script file URI. You use a protected configuration file to specify the command to be run.
+You use a public configuration file to specify the script file's URI. You use a protected configuration file to specify the command to be run.
Public configuration file:
az vm extension set \
## Virtual machine scale sets
-If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the shared access signature token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account shared access signature token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
+If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the SAS token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account's SAS token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
-We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension?view=azure-cli-latest), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the shared access signature token for accessing the script in your storage account for as long as you need.
+We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension?view=azure-cli-latest), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
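For example, here's a minimal sketch of deploying the extension to a scale set with the Azure CLI, keeping the script command in protected settings and using a managed identity instead of a SAS token. The resource names and the script URL are placeholders.

```azurecli
# Deploy the Custom Script Extension to a scale set; the managed identity downloads the script.
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --version 2.1 \
  --protected-settings '{"fileUris": ["https://<storage-account>.blob.core.windows.net/scripts/config.sh"], "commandToExecute": "./config.sh", "managedIdentity": {}}'
```

Because the managed identity handles authorization to the storage account, there's no SAS token that can expire and break later scaling operations.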
## Troubleshooting When the Custom Script Extension runs, the script is created or downloaded into a directory that's similar to the following example. The command output is also saved into this directory in `stdout` and `stderr` files.
When the Custom Script Extension runs, the script is created or downloaded into
/var/lib/waagent/custom-script/download/0/ ```
-To troubleshoot, first check the Linux Agent Log, ensure the extension ran, check:
+To troubleshoot, first check the Linux Agent Log and ensure that the extension ran:
```bash /var/log/waagent.log ```
-You should look for the extension execution, it will look something like:
+Look for the extension execution. It will look something like:
```output 2018/04/26 17:47:22.110231 INFO [Microsoft.Azure.Extensions.customScript-2.0.6] [Enable] current handler state is: notinstalled
You should look for the extension execution, it will look something like:
2018/04/26 17:47:24.516444 INFO Event: name=Microsoft.Azure.Extensions.customScript, op=Enable, message=Launch command succeeded: bin/custom-sc ```
-Some points to note:
-1. Enable is when the command starts running.
-2. Download relates to the downloading of the CustomScript extension package from Azure, not the script files specified in fileUris.
+In the preceding output:
+
+- `Enable` is when the command starts running.
+- `Download` relates to the downloading of the Custom Script Extension package from Azure, not the script files specified in `fileUris`.
The Azure Script Extension produces a log, which you can find here:
The Azure Script Extension produces a log, which you can find here:
/var/log/azure/custom-script/handler.log ```
-You should look for the individual execution, it will look something like:
+Look for the individual execution. It will look something like:
```output time=2018-04-26T17:47:23Z version=v2.0.6/git@1008306-clean operation=enable seq=0 event=start
time=2018-04-26T17:47:23Z version=v2.0.6/git@1008306-clean operation=enable seq=
``` Here you can see:
-* The Enable command starting is this log
-* The settings passed to the extension
-* The extension downloading file and the result of that.
+
+* The `enable` command that starts this log.
+* The settings passed to the extension.
+* The extension downloading the file and the result of that.
* The command being run and the result.
-You can also retrieve the execution state of the Custom Script Extension including the actual arguments passed as the `commandToExecute` by using Azure CLI:
+You can also retrieve the execution state of the Custom Script Extension, including the actual arguments passed as `commandToExecute`, by using the Azure CLI:
```azurecli az vm extension list -g myResourceGroup --vm-name myVM
The output looks like the following text:
``` ## Next steps
-To see the code, current issues and versions, see [custom-script-extension-linux repo](https://github.com/Azure/custom-script-extension-linux).
+To see the code, current issues, and versions, go to the [custom-script-extension-linux repo on GitHub](https://github.com/Azure/custom-script-extension-linux).
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/custom-script-windows.md
Title: Azure Custom Script Extension for Windows
-description: Automate Windows VM configuration tasks by using the Custom Script extension
+description: Automate Windows VM configuration tasks by using the Custom Script Extension.
# Custom Script Extension for Windows
-The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post deployment configuration, software installation, or any other configuration or management tasks. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension run time. The Custom Script Extension integrates with Azure Resource Manager templates, and can be run using the Azure CLI, PowerShell, Azure portal, or the Azure Virtual Machine REST API.
+The Custom Script Extension downloads and runs scripts on Azure virtual machines (VMs). This extension is useful for post-deployment configuration, software installation, or any other configuration or management task. You can download scripts from Azure Storage or GitHub, or provide them to the Azure portal at extension runtime.
-This document details how to use the Custom Script Extension using the Azure PowerShell module, Azure Resource Manager templates, and details troubleshooting steps on Windows systems.
+The Custom Script Extension integrates with Azure Resource Manager templates. You can also run it by using the Azure CLI, PowerShell, the Azure portal, or the Azure Virtual Machines REST API.
+
+This article details how to use the Custom Script Extension by using the Azure PowerShell module and Azure Resource Manager templates. It also provides troubleshooting steps for Windows systems.
## Prerequisites > [!NOTE]
-> Do not use Custom Script Extension to run Update-AzVM with the same VM as its parameter, since it will wait on itself.
-
-### Operating System
+> Don't use the Custom Script Extension to run `Update-AzVM` with the same VM as its parameter, because it will wait for itself.
-The Custom Script Extension for Windows will run on the extension supported extension OSs;
+### Operating system
-### Windows
+The Custom Script Extension for Windows will run on these supported operating systems:
* Windows Server 2008 R2 * Windows Server 2012
The Custom Script Extension for Windows will run on the extension supported exte
* Windows Server 2019 * Windows Server 2019 Core
-### Script Location
+### Script location
-You can configure the extension to use your Azure Blob storage credentials to access Azure Blob storage. The script location can be anywhere, as long as the VM can route to that end point, such as GitHub or an internal file server.
+You can set the extension to use your Azure Blob Storage credentials so that it can access Azure Blob Storage. The script location can be anywhere, as long as the VM can route to that endpoint (for example, GitHub or an internal file server).
-### Internet Connectivity
+### Internet connectivity
-If you need to download a script externally such as from GitHub or Azure Storage, then additional firewall and Network Security Group ports need to be opened. For example, if your script is located in Azure Storage, you can allow access using Azure NSG Service Tags for [Storage](../../virtual-network/network-security-groups-overview.md#service-tags).
+If you need to download a script externally, such as from GitHub or Azure Storage, then you need to open additional firewall or network security group (NSG) ports. For example, if your script is located in Azure Storage, you can allow access by using Azure NSG [service tags for Storage](../../virtual-network/network-security-groups-overview.md#service-tags).
-Note that CustomScript Extension does not have any way to bypass certificate validation. So if you're downloading from a secured location with eg. a self-signed certificate, you might end up with errors like *"The remote certificate is invalid according to the validation procedure"*. Please make sure the certificate is correctly installed in the *"Trusted Root Certification Authorities"* store on the Virtual Machine.
+The Custom Script Extension does not have any way to bypass certificate validation. So if you're downloading from a secured location with, for example, a self-signed certificate, you might get errors like "The remote certificate is invalid according to the validation procedure." Make sure that the certificate is correctly installed in the *Trusted Root Certification Authorities* store on the VM.
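For example, if the script is hosted on an internal server that presents a self-signed certificate, a minimal sketch like the following (the certificate path is hypothetical) installs that certificate on the VM before the extension downloads the script:

```powershell
# Hypothetical path: the exported public certificate of the internal file server.
# Placing it in the LocalMachine\Root store lets the download pass certificate validation.
Import-Certificate -FilePath "C:\certs\internal-file-server.cer" `
    -CertStoreLocation Cert:\LocalMachine\Root
```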
-If your script is on a local server, then you may still need additional firewall and Network Security Group ports need to be opened.
+If your script is on a local server, you might still need to open additional firewall or NSG ports.
-### Tips and Tricks
+### Tips and tricks
-* The highest failure rate for this extension is because of syntax errors in the script, test the script runs without error, and also put in additional logging into the script to make it easier to find where it failed.
-* Write scripts that are idempotent. This ensures that if they run again accidentally, it will not cause system changes.
-* Ensure the scripts don't require user input when they run.
-* There's 90 minutes allowed for the script to run, anything longer will result in a failed provision of the extension.
-* Don't put reboots inside the script, this action will cause issues with other extensions that are being installed. Post reboot, the extension won't continue after the restart.
-* If you have a script that will cause a reboot, then install applications and run scripts, you can schedule the reboot using a Windows Scheduled Task, or use tools such as DSC, Chef, or Puppet extensions.
-* It is not recommended to run a script that will cause a stop or update of the VM Agent. This can leave the extension in a Transitioning state, leading to a timeout.
-* The extension will only run a script once, if you want to run a script on every boot, then you need to use the extension to create a Windows Scheduled Task.
-* If you want to schedule when a script will run, you should use the extension to create a Windows Scheduled Task.
-* When the script is running, you will only see a 'transitioning' extension status from the Azure portal or CLI. If you want more frequent status updates of a running script, you'll need to create your own solution.
-* Custom Script extension does not natively support proxy servers, however you can use a file transfer tool that supports proxy servers within your script, such as *Invoke-WebRequest*
-* Be aware of non-default directory locations that your scripts or commands may rely on, have logic to handle this situation.
-* Custom Script Extension will run under the LocalSystem Account
-* If you plan to use the *storageAccountName* and *storageAccountKey* properties, these properties must be collocated in *protectedSettings*.
+* The highest failure rate for this extension is due to syntax errors in the script. Test that the script runs without errors. Put additional logging into the script to make it easier to find failures.
+* Write scripts that are idempotent, so running them more than once accidentally won't cause system changes.
+* Ensure that the scripts don't require user input when they run.
+* The script is allowed 90 minutes to run. Anything longer will result in failed provisioning of the extension.
+* Don't put reboots inside the script. This action will cause problems with other extensions that are being installed, and the extension won't continue after the reboot.
+* If you have a script that will cause a reboot before installing applications and running scripts, schedule the reboot by using a Windows Scheduled Task or by using tools such as DSC, Chef, or Puppet extensions.
+* Don't run a script that will cause a stop or update of the VM agent. It might leave the extension in a transitioning state and lead to a timeout.
+* The extension will run a script only once. If you want to run a script on every startup, use the extension to create a Windows Scheduled Task (see the sketch after this list).
+* If you want to schedule when a script will run, use the extension to create a Windows Scheduled Task.
+* When the script is running, you'll only see a "transitioning" extension status from the Azure portal or CLI. If you want more frequent status updates for a running script, you'll need to create your own solution.
+* The Custom Script Extension doesn't natively support proxy servers. However, you can use a file transfer tool that supports proxy servers within your script, such as *Invoke-WebRequest*.
+* Be aware of non-default directory locations that your scripts or commands might rely on. Have logic to handle this situation.
+* The Custom Script Extension runs under the LocalSystem account.
+* If you plan to use the `storageAccountName` and `storageAccountKey` properties, these properties must be collocated in `protectedSettings`.
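As mentioned in the tips above, the extension runs a script only once; to run something at every startup, your script can register a Windows Scheduled Task instead. Here's a minimal sketch, with a hypothetical task name and script path:

```powershell
# Register a scheduled task that reruns a local script at every startup,
# because the Custom Script Extension itself runs a script only once.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Unrestricted -File C:\scripts\on-boot.ps1"
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName "RunOnEveryBoot" -Action $action -Trigger $trigger `
    -User "SYSTEM" -RunLevel Highest
```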
## Extension schema

The Custom Script Extension configuration specifies things like script location and the command to be run. You can store this configuration in configuration files, specify it on the command line, or specify it in an Azure Resource Manager template.
-You can store sensitive data in a protected configuration, which is encrypted and only decrypted inside the virtual machine. The protected configuration is useful when the execution command includes secrets such as a password or a shared access signature (SAS) file reference, which should be protected.
+You can store sensitive data in a protected configuration, which is encrypted and only decrypted inside the virtual machine. The protected configuration is useful when the execution command includes secrets such as a password or a shared access signature (SAS) file reference. Here's an example:
-These items should be treated as sensitive data and specified in the extensions protected setting configuration. Azure VM extension protected setting data is encrypted, and only decrypted on the target virtual machine.
```json {
```

> [!NOTE]
-> managedIdentity property **must not** be used in conjunction with storageAccountName or storageAccountKey properties
+> The `managedIdentity` property *must not* be used in conjunction with the `storageAccountName` or `storageAccountKey` property.
-> [!NOTE]
-> Only one version of an extension can be installed on a VM at a point in time, specifying custom script twice in the same Resource Manager template for the same VM will fail.
+Only one version of an extension can be installed on a VM at a point in time. Specifying a custom script twice in the same Azure Resource Manager template for the same VM will fail.
-> [!NOTE]
-> We can use this schema inside the VirtualMachine resource or as a standalone resource. The name of the resource has to be in this format "virtualMachineName/extensionName", if this extension is used as a standalone resource in the ARM template.
+You can use this schema inside the VM resource or as a standalone resource. The name of the resource has to be in the format *virtualMachineName/extensionName*, if this extension is used as a standalone resource in the Azure Resource Manager template.
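Here's a minimal sketch of the standalone form. It reuses the example values from the property table that follows; the VM name `myVM` is a placeholder, and `autoUpgradeMinorVersion` is an optional, commonly used extension property:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "myVM/CustomScriptExtension",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', 'myVM')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.10",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-windows/scripts/configure-music-app.ps1"
      ],
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure-music-app.ps1"
    }
  }
}
```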
### Property values
-| Name | Value / Example | Data Type |
+| Name | Value or example | Data type |
| - | - | - |
-| apiVersion | 2015-06-15 | date |
-| publisher | Microsoft.Compute | string |
-| type | CustomScriptExtension | string |
-| typeHandlerVersion | 1.10 | int |
-| fileUris (e.g) | https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-windows/scripts/configure-music-app.ps1 | array |
-| timestamp (e.g) | 123456789 | 32-bit integer |
-| commandToExecute (e.g) | powershell -ExecutionPolicy Unrestricted -File configure-music-app.ps1 | string |
-| storageAccountName (e.g) | examplestorageacct | string |
-| storageAccountKey (e.g) | TmJK/1N3AbAZ3q/+hOXoi/l73zOqsaxXDhqa9Y83/v5UpXQp2DQIBuv2Tifp60cE/OaHsJZmQZ7teQfczQj8hg== | string |
-| managedIdentity (e.g) | { } or { "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" } or { "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" } | json object |
+| `apiVersion` | `2015-06-15` | date |
+| `publisher` | `Microsoft.Compute` | string |
+| `type` | `CustomScriptExtension` | string |
+| `typeHandlerVersion` | `1.10` | int |
+| `fileUris` | `https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-windows/scripts/configure-music-app.ps1` | array |
+| `timestamp` | `123456789` | 32-bit integer |
+| `commandToExecute` | `powershell -ExecutionPolicy Unrestricted -File configure-music-app.ps1` | string |
+| `storageAccountName` | `examplestorageacct` | string |
+| `storageAccountKey` | `TmJK/1N3AbAZ3q/+hOXoi/l73zOqsaxXDhqa9Y83/v5UpXQp2DQIBuv2Tifp60cE/OaHsJZmQZ7teQfczQj8hg==` | string |
+| `managedIdentity` | `{ }` or `{ "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" }` or `{ "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" }` | JSON object |
>[!NOTE]
>These property names are case-sensitive. To avoid deployment problems, use the names as shown here.
-#### Property value details
+### Property value details
-* `commandToExecute`: (**required**, string) the entry point script to execute. Use this field instead if your command contains secrets such as passwords, or your fileUris are sensitive.
-* `fileUris`: (optional, string array) the URLs for file(s) to be downloaded. If URLs are sensitive (such as URLs containing keys), this field should be specified in protectedSettings
-* `timestamp` (optional, 32-bit integer) use this field only to trigger a rerun of the
-script by changing value of this field. Any integer value is acceptable; it must only be different than the previous value.
-* `storageAccountName`: (optional, string) the name of storage account. If you specify storage credentials, all `fileUris` must be URLs for Azure Blobs.
-* `storageAccountKey`: (optional, string) the access key of storage account
-* `managedIdentity`: (optional, json object) the [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for downloading file(s)
- * `clientId`: (optional, string) the client ID of the managed identity
- * `objectId`: (optional, string) the object ID of the managed identity
+| Property | Optional or required | Details |
+| - | - | - |
+| `fileUris` | Optional | URLs for files to be downloaded. If URLs are sensitive (for example, they contain keys), this field should be specified in `protectedSettings`. |
+| `commandToExecute` | Required | The entry point script to run. Use this property if your command contains secrets such as passwords or if your file URIs are sensitive. |
+| `timestamp` | Optional | Change this value only to trigger a rerun of the script. Any integer value is acceptable, as long as it's different from the previous value. |
+| `storageAccountName` | Optional | The name of the storage account. If you specify storage credentials, all `fileUris` values must be URLs for Azure blobs. |
+| `storageAccountKey` | Optional | The access key of the storage account. |
+| `managedIdentity` | Optional | The [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for downloading files:<br><br>`clientId` (optional, string): The client ID of the managed identity.<br><br>`objectId` (optional, string): The object ID of the managed identity.|
-The following values can be set in either public or protected settings, the extension will reject any configuration where the values below are set in both public and protected settings.
+You can set the following values in either public or protected settings. The extension will reject any configuration where these values are set in both public and protected settings.
* `commandToExecute`
* `fileUris`
-Using public settings maybe useful for debugging, but it's recommended that you use protected settings.
+Using public settings might be useful for debugging, but we recommend that you use protected settings.
-Public settings are sent in clear text to the VM where the script will be executed. Protected settings are encrypted using a key known only to the Azure and the VM. The settings are saved to the VM as they were sent, that is, if the settings were encrypted they're saved encrypted on the VM. The certificate used to decrypt the encrypted values is stored on the VM, and used to decrypt settings (if necessary) at runtime.
+Public settings are sent in clear text to the VM where the script will be run. Protected settings are encrypted through a key known only to Azure and the VM. The settings are saved to the VM as they were sent. That is, if the settings were encrypted, they're saved encrypted on the VM. The certificate that's used to decrypt the encrypted values is stored on the VM. The certificate is also used to decrypt settings (if necessary) at runtime.
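As a rough sketch of that split (example values come from the property table; the container path and key placeholder are illustrative), a configuration that keeps the command and storage credentials protected might look like this:

```json
{
  "settings": {
    "fileUris": [
      "https://examplestorageacct.blob.core.windows.net/scripts/configure-music-app.ps1"
    ],
    "timestamp": 123456789
  },
  "protectedSettings": {
    "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure-music-app.ps1",
    "storageAccountName": "examplestorageacct",
    "storageAccountKey": "<storage-account-key>"
  }
}
```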
+
+#### Property: managedIdentity
-#### Property: managedIdentity
> [!NOTE]
-> This property **must** be specified in protected settings only.
+> This property *must* be specified in protected settings only.
-CustomScript (version 1.10 onwards) supports [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for downloading file(s) from URLs provided in the "fileUris" setting. It allows CustomScript to access Azure Storage private blobs or containers without the user having to pass secrets like SAS tokens or storage account keys.
+The Custom Script Extension (version 1.10 and later) supports [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for downloading files from URLs provided in the `fileUris` setting. It allows the Custom Script Extension to access Azure Storage private blobs or containers without the user having to pass secrets like SAS tokens or storage account keys.
-To use this feature, the user must add a [system-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) or [user-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-user-assigned-identity) identity to the VM or VMSS where CustomScript is expected to run, and [grant the managed identity access to the Azure Storage container or blob](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access).
+To use this feature, the user must add a [system-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) or [user-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-user-assigned-identity) identity to the VM or virtual machine scale set where the Custom Script Extension is expected to run. The user must then [grant the managed identity access to the Azure Storage container or blob](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access).
-To use the system-assigned identity on the target VM/VMSS, set "managedidentity" field to an empty json object.
+To use the system-assigned identity on the target VM or virtual machine scale set, set `managedidentity` to an empty JSON object.
> Example:
>
> }
> ```
-To use the user-assigned identity on the target VM/VMSS, configure "managedidentity" field with the client ID or the object ID of the managed identity.
+To use the user-assigned identity on the target VM or virtual machine scale set, configure `managedidentity` with the client ID or the object ID of the managed identity.
> Examples:
>
> ```

> [!NOTE]
-> managedIdentity property **must not** be used in conjunction with storageAccountName or storageAccountKey properties
+> The `managedIdentity` property *must not* be used in conjunction with the `storageAccountName` or `storageAccountKey` property.
## Template deployment
-Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema, which is detailed in the previous section can be used in an Azure Resource Manager template to run the Custom Script Extension during deployment. The following samples show how to use the Custom Script extension:
+You can deploy Azure VM extensions by using Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Custom Script Extension during the template's deployment. The following samples show how to use the Custom Script Extension:
* [Tutorial: Deploy virtual machine extensions with Azure Resource Manager templates](../../azure-resource-manager/templates/template-tutorial-deploy-vm-extensions.md)
-* [Deploy Two Tier Application on Windows and Azure SQL DB](https://github.com/Microsoft/dotnet-core-sample-templates/tree/master/dotnet-core-music-windows)
+* [Deploy Two Tier Application on Windows and Azure SQL Database](https://github.com/Microsoft/dotnet-core-sample-templates/tree/master/dotnet-core-music-windows)
## PowerShell deployment
-The `Set-AzVMCustomScriptExtension` command can be used to add the Custom Script extension to an existing virtual machine. For more information, see [Set-AzVMCustomScriptExtension](/powershell/module/az.compute/set-azvmcustomscriptextension).
+You can use the `Set-AzVMCustomScriptExtension` command to add the Custom Script Extension to an existing virtual machine. For more information, see [Set-AzVMCustomScriptExtension](/powershell/module/az.compute/set-azvmcustomscriptextension).
```powershell Set-AzVMCustomScriptExtension -ResourceGroupName <resourceGroupName> `
Set-AzVMCustomScriptExtension -ResourceGroupName <resourceGroupName> `
### Using multiple scripts
-In this example, you have three scripts that are used to build your server. The **commandToExecute** calls the first script, then you have options on how the others are called. For example, you can have a master script that controls the execution, with the right error handling, logging, and state management. The scripts are downloaded to the local machine for running. For example in `1_Add_Tools.ps1` you would call `2_Add_Features.ps1` by adding `.\2_Add_Features.ps1` to the script, and repeat this process for the other scripts you define in `$settings`.
+In this example, you're using three scripts to build your server. The `commandToExecute` property calls the first script. You then have options on how the others are called. For example, you can have a master script that controls the execution, with the right error handling, logging, and state management. The scripts are downloaded to the local machine for execution.
+
+For example, in *1_Add_Tools.ps1*, you would call *2_Add_Features.ps1* by adding `.\2_Add_Features.ps1` to the script. You would repeat this process for the other scripts that you define in `$settings`.
```powershell $fileUri = @("https://xxxxxxx.blob.core.windows.net/buildServer1/1_Add_Tools.ps1",
Set-AzVMExtension -ResourceGroupName <resourceGroupName> `
### Running scripts from a local share
-In this example, you may want to use a local SMB server for your script location. By doing this, you don't need to provide any other settings, except **commandToExecute**.
+In this example, you might want to use a local Server Message Block (SMB) server for your script location. You then don't need to provide any other settings, except `commandToExecute`.
```powershell $protectedSettings = @{"commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File \\filesvr\build\serverUpdate1.ps1"};
Set-AzVMExtension -ResourceGroupName <resourceGroupName> `
```
-### How to run custom script more than once with CLI
+### Running a custom script more than once by using the CLI
-The custom script extension handler will prevent re-executing a script if the *exact* same settings have been passed. This is to prevent accidental re-execution which might cause unexpected behaviors in case the script is not idempotent. You can confirm if the handler has blocked the re-execution by looking at the C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\<HandlerVersion>\CustomScriptHandler.log, and search for a warning like below:
+The Custom Script Extension handler will prevent rerunning a script if the *exact* same settings have been passed. This behavior prevents accidental rerunning, which might cause unexpected behaviors if the script isn't idempotent. You can confirm if the handler has blocked the rerunning by looking at *C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\<HandlerVersion>\CustomScriptHandler.log* and searching for a warning like this one:
```warning
Current sequence number, <SequenceNumber>, is not greater than the sequence number of the most recently executed configuration. Exiting...
```
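For example, here's one quick way (a sketch) to search every installed handler version's log for that warning on the VM:

```powershell
# Look for the sequence-number warning that indicates a rerun was blocked.
Get-ChildItem "C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension" `
    -Recurse -Filter "CustomScriptHandler.log" |
    Select-String -Pattern "is not greater than the sequence number"
```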
-If you want to run the custom script extension more than once, you can only do this action under these conditions:
+If you want to run the Custom Script Extension more than once, you can do that only under these conditions:
-* The extension **Name** parameter is the same as the previous deployment of the extension.
-* Update the configuration otherwise the command won't be re-executed. You can add in a dynamic property into the command, such as a timestamp. If the handler detects a change in the configuration settings, then it will consider it as an explicit desire to re-execute the script.
+* The extension's `Name` parameter is the same as the previous deployment of the extension.
+* You've updated the configuration. You can add a dynamic property to the command, such as a timestamp. If the handler detects a change in the configuration settings, it will consider that change as an explicit desire to rerun the script.
-Alternatively, you can set the [ForceUpdateTag](/dotnet/api/microsoft.azure.management.compute.models.virtualmachineextension.forceupdatetag) property to **true**.
+Alternatively, you can set the [ForceUpdateTag](/dotnet/api/microsoft.azure.management.compute.models.virtualmachineextension.forceupdatetag) property to `true`.
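Here's a minimal PowerShell sketch of the timestamp approach (the Azure CLI equivalent works the same way). The resource names are placeholders, and the file URI reuses the earlier multiple-scripts example:

```powershell
# Bumping the timestamp value changes the configuration, which the handler
# treats as an explicit request to rerun the script.
$settings = @{
    "fileUris"  = @("https://xxxxxxx.blob.core.windows.net/buildServer1/1_Add_Tools.ps1");
    "timestamp" = 123456790   # any integer that differs from the previous value
};
$protectedSettings = @{"commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File 1_Add_Tools.ps1"};

Set-AzVMExtension -ResourceGroupName <resourceGroupName> `
    -Location <location> `
    -VMName <vmName> `
    -Name "CustomScriptExtension" `
    -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" `
    -Settings $settings `
    -ProtectedSettings $protectedSettings
```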
### Using Invoke-WebRequest
-If you are using [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) in your script, you must specify the parameter `-UseBasicParsing` or else you will receive the following error when checking the detailed status:
+If you're using [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) in your script, you must specify the parameter `-UseBasicParsing`. If you don't specify the parameter, you'll get the following error when checking the detailed status:
```error
The response content cannot be parsed because the Internet Explorer engine is not available, or Internet Explorer's first-launch configuration is not complete. Specify the UseBasicParsing parameter and try again.
```
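For example (hypothetical URL), a download inside your script would look like this:

```powershell
# -UseBasicParsing avoids the dependency on the Internet Explorer engine when
# the script runs under the LocalSystem account on the VM.
Invoke-WebRequest -Uri "https://example.com/installer.msi" `
    -OutFile "$env:TEMP\installer.msi" -UseBasicParsing
```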
## Virtual machine scale sets
-If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the shared access signature token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account shared access signature token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
+If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the SAS token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account's SAS token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
-We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension?view=azure-cli-latest), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the shared access signature token for accessing the script in your storage account for as long as you need.
+We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension?view=azure-cli-latest), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
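Here's a minimal PowerShell sketch of that approach for an existing scale set. The resource names are placeholders, and `$settings`/`$protectedSettings` are hashtables like the ones used in the earlier VM examples:

```powershell
# Add the Custom Script Extension to the scale set model, then push the updated model.
$vmss = Get-AzVmss -ResourceGroupName <resourceGroupName> -VMScaleSetName <scaleSetName>

Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
    -Name "CustomScriptExtension" `
    -Publisher "Microsoft.Compute" `
    -Type "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" `
    -Setting $settings `
    -ProtectedSetting $protectedSettings

Update-AzVmss -ResourceGroupName <resourceGroupName> -VMScaleSetName <scaleSetName> `
    -VirtualMachineScaleSet $vmss
```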
## Classic VMs

[!INCLUDE [classic-vm-deprecation](../../../includes/classic-vm-deprecation.md)]
-To deploy the Custom Script Extension on classic VMs, you can use the Azure portal or the Classic Azure PowerShell cmdlets.
+To deploy the Custom Script Extension on classic VMs, you can use the Azure portal or the classic Azure PowerShell cmdlets.
### Azure portal
-Navigate to your Classic VM resource. Select **Extensions** under **Settings**.
-
-Click **+ Add** and in the list of resources choose **Custom Script Extension**.
-
-On the **Install extension** page, select the local PowerShell file, and fill out any arguments and click **Ok**.
+1. Go to your classic VM resource. Select **Extensions** under **Settings**.
+1. Select **+ Add**. In the list of resources, select **Custom Script Extension**.
+1. On the **Install extension** page, select the local PowerShell file. Fill out any arguments, and then select **Ok**.
### PowerShell
-Use the [Set-AzureVMCustomScriptExtension](/powershell/module/servicemanagement/azure.service/set-azurevmcustomscriptextension) cmdlet can be used to add the Custom Script extension to an existing virtual machine.
+You can use the [Set-AzureVMCustomScriptExtension](/powershell/module/servicemanagement/azure.service/set-azurevmcustomscriptextension) cmdlet to add the Custom Script Extension to an existing virtual machine:
```powershell # define your file URI
$vm | Update-AzureVM
## Troubleshoot and support
-### Troubleshoot
-
-Data about the state of extension deployments can be retrieved from the Azure portal, and by using the Azure PowerShell module. To see the deployment state of extensions for a given VM, run the following command:
+You can retrieve data about the state of extension deployments from the Azure portal and by using the Azure PowerShell module. To see the deployment state of extensions for a VM, run the following command:
```powershell
Get-AzVMExtension -ResourceGroupName <resourceGroupName> -VMName <vmName> -Name myExtensionName
```
-Extension output is logged to files found under the following folder on the target virtual machine.
+Extension output is logged to files found under the following folder on the target virtual machine:
```cmd
C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension
```
-The specified files are downloaded into the following folder on the target virtual machine.
+The specified files are downloaded into the following folder on the target virtual machine:
```cmd
C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.*\Downloads\<n>
```
-where `<n>` is a decimal integer, which may change between executions of the extension. The `1.*` value matches the actual, current `typeHandlerVersion` value of the extension. For example, the actual directory could be `C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2`.
+In the preceding path, `<n>` is a decimal integer that might change between executions of the extension. The `1.*` value matches the actual, current `typeHandlerVersion` value of the extension. For example, the actual directory could be `C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2`.
-When executing the `commandToExecute` command, the extension sets this directory (for example, `...\Downloads\2`) as the current working directory. This process enables the use of relative paths to locate the files downloaded via the `fileURIs` property. See the table below for examples.
+When you run the `commandToExecute` command, the extension sets this directory (for example, `...\Downloads\2`) as the current working directory. This process enables the use of relative paths to locate the files downloaded via the `fileURIs` property. Here are examples of downloaded files:
-Since the absolute download path may vary over time, it's better to opt for relative script/file paths in the `commandToExecute` string, whenever possible. For example:
+| URI in `fileUris` | Relative download location | Absolute download location <sup>1</sup> |
+| - | - |: |
+| `https://someAcct.blob.core.windows.net/aContainer/scripts/myscript.ps1` | `./scripts/myscript.ps1` |`C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2\scripts\myscript.ps1` |
+| `https://someAcct.blob.core.windows.net/aContainer/topLevel.ps1` | `./topLevel.ps1` | `C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2\topLevel.ps1` |
+
+<sup>1</sup> The absolute directory paths change over the lifetime of the VM, but not within a single execution of the Custom Script Extension.
+
+Because the absolute download path might vary over time, it's better to opt for relative script/file paths in the `commandToExecute` string, whenever possible. For example:
```json
"commandToExecute": "powershell.exe . . . -File \"./scripts/myscript.ps1\""
```
-Path information after the first URI segment is kept for files downloaded via the `fileUris` property list. As shown in the table below, downloaded files are mapped into download subdirectories to reflect the structure of the `fileUris` values.
-
-#### Examples of Downloaded Files
-
-| URI in fileUris | Relative downloaded location | Absolute downloaded location <sup>1</sup> |
-| - | - |: |
-| `https://someAcct.blob.core.windows.net/aContainer/scripts/myscript.ps1` | `./scripts/myscript.ps1` |`C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2\scripts\myscript.ps1` |
-| `https://someAcct.blob.core.windows.net/aContainer/topLevel.ps1` | `./topLevel.ps1` | `C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2\topLevel.ps1` |
+Path information after the first URI segment is kept for files downloaded via the `fileUris` property list. As shown in the earlier table, downloaded files are mapped into download subdirectories to reflect the structure of the `fileUris` values.
-<sup>1</sup> The absolute directory paths change over the lifetime of the VM, but not within a single execution of the CustomScript extension.
+## Support
-### Support
+If you need help with any part of this article, you can contact the Azure experts at [Azure Community Support](https://azure.microsoft.com/support/forums/).
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). You can also file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+You can also file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Get support**. For information about using Azure support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
virtual-machines Ssh Keys Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ssh-keys-azure-cli.md
## Generate new keys
-Start by preparing your environment for the Azure CLI:
-- 1. After you sign in, use the [az sshkey create](/cli/azure/sshkey#az_sshkey_create) command to create the new SSH key: ```azurecli